LaunchDarkly Outage History
Past incidents and downtime events
Complete history of LaunchDarkly outages, incidents, and service disruptions. Showing the 50 most recent incidents.
January 2026 (4 incidents)
Data ingestion delays
4 updates
This incident has been resolved and all data processing pipelines are fully caught up. No data was lost.
A fix has been implemented and our event processing pipelines for Observability and OpenTelemetry are fully caught up. We're continuing to monitor as our event processing pipeline catches up for Flag Status, Evaluations, and Contexts.
We have identified the issue and are continuing our work to resolve it.
All customers are experiencing data ingestion delays with the following:
- Observability sessions and errors
- OpenTelemetry logs, traces, and metrics
- Flag status
- Evaluations
- Contexts
We are investigating and will provide updates as they become available. No data loss is expected.
Elevated error rate when configuring Okta SCIM
4 updates
This incident has been resolved.
We believe the issue is resolved for all customers. We're continuing to monitor the situation.
Some customers using Okta SCIM are encountering errors when provisioning and managing LaunchDarkly members. We continue to work on a remediation and have engaged with Okta's support team.
Some customers are experiencing errors when configuring LaunchDarkly with Okta SCIM. We have identified the issue and are continuing our work to resolve it.
Unable to edit JSON flag variations
4 updates
This incident has been resolved.
A fix has been implemented for the issue that prevented editing of some JSON flag variations.
We've identified a front-end issue that prevents editing certain JSON flag variations and are working on a fix.
Customers may experience issues editing JSON flag variations. We are investigating the root cause and will provide updates shortly.
Guarded releases event ingestion delays
4 updates
This incident has been resolved.
Events are caught up.
A fix has been implemented and we are monitoring the results. We expect to catch up on all events within the next 15 minutes; no data loss is expected.
We are currently experiencing delays with guarded releases event ingestion. We are investigating and will provide updates as they become available.
December 2025 (5 incidents)
Delay in Observability product data ingest
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We have identified the cause of the ingest delay and are catching up on the backlogged messages. We expect to be caught up on all delayed sessions and errors in the next hour. Data loss is not expected.
Sessions and errors may be delayed by up to 3 hours. We are investigating the root cause.
Increase in SDK errors
8 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are observing a reduction in SDK errors. We are continuing to work on a fix.
The issue has been identified and a fix is being implemented.
We are continuing to investigate this issue.
We are also observing a small percentage of timeouts when modifying feature flags via our API or UI. We are continuing to investigate this error.
We are continuing to investigate this issue.
We are investigating an increase in SDK error rates affecting a small portion of requests, currently estimated at less than one percent. SDKs will automatically retry these errors, so the primary customer impact is expected to be longer SDK initialization times rather than request failures. We believe the issue is related to an ongoing incident affecting one of our vendors. Our team is actively working to mitigate the impact and will provide additional updates as more information becomes available.
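For reference, one way a customer might tolerate the longer SDK initialization times described above is to give the client a longer startup wait and rely on in-code defaults until it is ready. This is a minimal Python sketch, not guidance from the update itself; the SDK key, flag key, and context are placeholders, and the start_wait parameter is assumed to be available in the Python server-side SDK's LDClient constructor.

```python
# Minimal sketch: allow more time for SDK initialization during transient errors.
# "YOUR_SDK_KEY", "example-flag", and the context key are placeholders.
from ldclient.client import LDClient
from ldclient.config import Config
from ldclient import Context

# Wait up to 15 seconds for initialization instead of the default few seconds
# (start_wait is an assumption about the constructor; check your SDK version).
client = LDClient(Config("YOUR_SDK_KEY"), start_wait=15)

context = Context.builder("example-user-key").build()
if client.is_initialized():
    enabled = client.variation("example-flag", context, False)
else:
    # SDK not ready yet; evaluations fall back to the in-code default.
    enabled = False
```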
Event Processing Delays - Experiment Results Utilizing Attribute Filtering affected
2 updates
We have recovered from delays in experimentation results that are sliced by attributes. No data has been lost.
We are investigating an issue with delays in experimentation results that are sliced by attributes. No data has been lost.
Delays in publishing data export events
4 updates
All of the delayed data has been processed and this incident is resolved.
We are continuing to process the delayed data. The data is now updated through 2025-12-04, 01:00:00 UTC.
We are continuing to process the data and data is current as of 2025-12-03, 08:00:00 UTC. We'll continue to update as we process data.
Some customers who have configured Snowflake, BigQuery, or Redshift data export destinations may be experiencing delays in published events. There is no data loss. Exported data events are currently 32 hours behind. We are recovering steadily and will continue to send updates.
Elevated error rates in APAC region for server-side SDKs
1 update
Server-side SDKs in the APAC region experienced elevated error rates when attempting to make new connections to the streaming service from 8:05 AM PT to 8:11 AM PT. The issue is now resolved.
November 2025 (7 incidents)
Intermittent issues accessing flag details
3 updates
This incident has been resolved.
We are no longer seeing any errors, and the issue was contained to the euw1 region. We'll continue to monitor and update this as necessary.
We are currently investigating an issue intermittently preventing our flag details pages from loading.
Delayed Event Processing
4 updates
This incident has been resolved.
A fix has been implemented and our event processing pipeline is fully caught up. We're continuing to monitor.
We are continuing to work on a fix for this issue, and remain at an approximate 20 minute delay in flag event processing.
We have identified a delay in our event processing pipeline, and are working to mitigate the issue. Features that show flag usage metrics are affected, and data is approximately 20 minutes delayed right now.
Investigating elevated latency
2 updates
The issue with the AI Configs list page has been resolved. Impacted services have returned to normal operation.
We detected elevated latencies loading the flag list and AI configs list pages. The flag list’s performance has recovered, and we continue to investigate remediation on the AI configs list page.
Elevated error rates for a small number of customers
1 update
Between 7:37am and 8:19am PT, a small number of customers in the us-east-1 region encountered elevated error rates with Polling SDK and API requests. This was caused by a minor issue affecting a CDN POP, which has since been resolved.
Customers unable to edit custom rules on flags
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
AI Configs monitoring page tab failing to load
5 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
Delayed flag updates for small number of customers
1 update
A limited number of customers (primarily in EU regions) with Polling SDK connections experienced elevated latency and error rates between 9:05am and 9:58am PT, caused by a service incident in our CDN provider.
October 2025 (7 incidents)
Live Events not loading
3 updates
We've resolved an issue causing Live Events to not load.
We've identified an issue that was causing Live Events to not load (starting Oct 23 11:03am PT) and are resolving the issue.
We've received reports of Live Events not loading and are investigating.
Experiment results and metrics unavailable
3 updates
We've resolved an issue causing Experiment results to fail to load.
We've identified an issue affecting the display of Experiment results and are working on a fix.
We are investigating reports of experiment results and metrics failing to load.
Elevated latencies and delays
30 updates
This incident has been resolved. One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update IP allowlists in their firewalls or proxy servers in order for their services to continue establishing streaming connections from server-side SDKs to LaunchDarkly without disruption. Please refer to the documentation at https://docs.launchdarkly.com/home/advanced/public-ip-list for more information. Refer to https://app.launchdarkly.com/api/v2/public-ip-list for the complete list of public IPs. Customers who switched from streaming to polling mode as a workaround are clear to revert back to streaming mode.
One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update IP allowlists in their firewalls or proxy servers in order for their services to continue establishing streaming connections from server side SDKs to LaunchDarkly without disruption. Please refer to documentation at https://docs.launchdarkly.com/home/advanced/public-ip-list for more information. Refer to https://app.launchdarkly.com/api/v2/public-ip-list for complete list of public IPs. We will continue to actively monitor our services and provide updates if anything changes. We recommend that customers who switched from streaming to polling mode as a workaround remain in polling mode for now. We will continue to provide updates to this recommendation. We’ll provide another update within 60 minutes. The following stable IPs were added: - 52.22.11.124/32 - 98.90.74.184/32 - 44.214.199.141/32 - 54.158.1.193/32 - 52.20.244.244/32 - 3.222.86.128/32 - 3.209.231.150/32 - 98.87.97.132/32 - 54.243.249.198/32 - 52.205.29.16/32 - 52.200.155.176/32 - 72.44.54.239/32 - 44.193.41.212/32 - 44.193.145.213/32 - 3.230.174.47/32 - 34.193.141.46/32 - 54.145.215.104/32 - 54.83.149.69/32 - 54.167.133.6/32 - 98.86.214.67/32 - 3.210.111.117/32 - 44.198.65.246/32 - 3.223.193.186/32 - 54.164.149.203/32 - 52.202.164.129/32 - 54.211.161.195/32 - 52.44.175.163/32 - 54.87.94.27/32 - 34.196.162.28/32 - 3.229.200.95/32 - 34.206.243.165/32 - 44.198.216.81/32 - 98.85.64.100/32 - 34.193.205.73/32 - 54.82.179.12/32 - 35.169.61.114/32 - 3.225.212.129/32 - 44.214.230.241/32 - 44.197.94.28/32 - 54.225.42.164/32 - 3.232.151.250/32 - 98.88.212.98/32 - 44.206.106.7/32 - 44.219.171.95/32 - 54.81.117.83/32 - 3.212.29.247/32 - 52.207.48.173/32 - 52.21.24.75/32 - 44.209.163.213/32 - 3.212.26.71/32 - 3.232.245.239/32 - 44.214.85.107/32 - 54.85.9.44/32 - 3.212.63.158/32 - 44.214.25.250/32 - 34.225.52.183/32 - 54.144.244.40/32 - 13.216.151.182/32 - 34.205.184.16/32 - 54.243.39.147/32 - 52.21.118.82/32 - 44.208.247.20/32 - 44.209.6.233/32 - 98.85.24.70/32 - 52.206.193.249/32 - 52.203.145.124/32 - 34.207.21.226/32 - 52.6.144.34/32 - 3.221.55.92/32 - 54.160.1.221/32 - 54.236.171.5/32 - 3.210.143.243/32 - 18.204.254.23/32 - 34.224.206.32/32 - 54.152.40.39/32 - 52.201.30.87/32 - 98.86.87.228/32 - 52.70.143.213/32 - 34.199.166.40/32 - 54.225.71.167/32 - 100.26.67.253/32 - 13.219.10.149/32 - 52.203.44.182/32 - 3.215.17.57/32 - 3.217.93.49/32 - 3.215.154.205/32 - 3.224.166.159/32 - 44.205.194.1/32 - 54.162.82.157/32 - 54.175.84.251/32 - 54.211.58.167/32 - 52.22.199.197/32 - 35.169.162.188/32 - 44.205.162.192/32 - 54.224.162.1/32 - 50.16.48.228/32 - 52.203.187.144/32 - 52.22.34.71/32 - 52.44.226.138/32 - 35.169.87.104/32 - 50.17.142.209/32 - 34.226.53.28/32 - 50.16.209.122/32 - 54.173.173.176/32 - 54.197.143.76/32 - 52.45.14.195/32 - 54.84.144.50/32 - 52.205.140.231/32 - 52.1.64.188/32 - 23.22.17.50/32 - 44.213.219.16/32 - 54.211.63.220/32 - 34.236.195.69/32 - 100.29.106.41/32 - 107.20.48.118/32 - 107.22.84.205/32 - 107.23.47.163/32 - 174.129.120.2/32 - 174.129.25.155/32 - 18.204.101.179/32 - 18.207.77.1/32 - 18.214.59.159/32 - 3.208.63.99/32 - 3.209.142.240/32 - 3.210.8.83/32 - 3.211.0.174/32 - 3.211.171.106/32 - 3.211.40.100/32 - 3.211.78.169/32 - 3.212.153.172/32 - 3.212.215.241/32 - 3.212.69.145/32 - 3.215.132.92/32 - 3.215.85.74/32 - 3.217.156.217/32 - 3.217.33.194/32 - 3.222.172.85/32 - 3.225.49.136/32 - 3.226.201.70/32 - 3.232.113.99/32 - 3.81.156.201/32 - 3.94.227.253/32 - 34.192.228.56/32 - 
34.196.53.78/32 - 34.197.220.63/32 - 34.197.229.208/32 - 34.198.5.248/32 - 34.205.180.137/32 - 34.206.142.57/32 - 34.225.210.63/32 - 34.225.44.159/32 - 34.232.120.176/32 - 34.235.101.237/32 - 34.237.149.109/32 - 34.237.7.234/32 - 35.153.62.144/32 - 35.171.42.112/32 - 35.172.28.29/32 - 35.175.51.91/32 - 44.193.160.19/32 - 44.193.176.64/32 - 44.193.192.114/32 - 44.195.178.165/32 - 44.205.130.196/32 - 44.205.142.202/32 - 44.205.242.41/32 - 44.207.32.19/32 - 44.208.215.105/32 - 44.210.2.163/32 - 44.221.72.252/32 - 44.223.189.67/32 - 50.16.53.115/32 - 52.0.20.18/32 - 52.1.126.54/32 - 52.20.44.107/32 - 52.200.10.183/32 - 52.201.19.0/32 - 52.202.18.147/32 - 52.205.199.141/32 - 52.205.74.149/32 - 52.206.123.108/32 - 52.21.16.31/32 - 52.22.120.141/32 - 52.22.75.64/32 - 52.23.189.51/32 - 52.3.131.52/32 - 52.3.164.32/32 - 52.3.203.3/32 - 52.4.17.19/32 - 52.55.197.16/32 - 52.6.134.5/32 - 52.7.81.224/32 - 54.147.67.241/32 - 54.156.155.61/32 - 54.158.114.255/32 - 54.158.201.166/32 - 54.167.202.203/32 - 54.235.4.229/32 - 54.243.165.178/32 - 54.243.220.97/32 - 54.243.227.67/32 - 54.243.238.143/32 - 54.243.34.157/32 - 54.243.54.147/32 - 54.243.58.248/32 - 54.243.79.193/32 - 54.80.39.21/32 - 54.81.213.212/32 - 54.84.21.101/32 - 54.84.245.230/32 - 98.82.52.30/32 - 98.82.55.107/32
One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update the IP allowlists in their firewalls or proxy servers to ensure that their services can continue establishing streaming connections from server-side SDKs to LaunchDarkly without disruption. Approximately 88% of traffic to stream.launchdarkly.com will continue to be routed to existing stable IPs. We are working with AWS to provide a list of additional stable IPs and will post another update as soon as they become available. We will continue to actively monitor our services and provide updates if anything changes. We recommend that customers who switched from streaming to polling mode as a workaround remain in polling mode for now. We will continue to provide updates to this recommendation. We’ll provide another update within 60 minutes.
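For customers automating allowlist updates, a minimal Python sketch of fetching the public IP list referenced in the updates above. Only the endpoint URL comes from those updates; the exact response shape should be checked against the linked documentation page before building automation around it.

```python
# Fetch LaunchDarkly's public IP list for manual review before updating
# firewall or proxy allowlists. The URL is taken from the updates above.
import json
import urllib.request

PUBLIC_IP_LIST_URL = "https://app.launchdarkly.com/api/v2/public-ip-list"

with urllib.request.urlopen(PUBLIC_IP_LIST_URL) as resp:
    data = json.load(resp)

# Pretty-print the JSON response; automation would parse the documented fields.
print(json.dumps(data, indent=2, sort_keys=True))
```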
Server-side streaming is healthy. The load balancer upgrade, along with the addition of another load balancer, has restored our service to healthy levels. We will continue to actively monitor our services and provide updates if anything changes. We recommend that customers who switched from streaming to polling mode as a workaround remain in polling mode for now. We will continue to provide updates to this recommendation. We’ll provide another update within 60 minutes.
We're seeing signs of recovery; reported error rates for server-side SDKs are dropping significantly. The initial load balancer unit has been upgraded and has begun handling traffic successfully. The additional load balancer is online and is beginning to handle traffic. Customers may still experience delayed flag updates. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage. An additional load balancer has been brought online and is being configured to receive traffic. When we confirm that this is successful, we'll bring the other additional load balancer units online to handle the increased volume in traffic and restore service to our customers. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage. We are in the process of deploying additional load balancer units that are about to go online. We expect them to successfully handle the increased volume in traffic and restore service to our customers. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage. We're still working on creating additional load balancer units to distribute and handle the increased volume in traffic. AWS is providing active support to LaunchDarkly as we work to restore service to our customers. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage and the reported error rates for server-side SDKs are reducing. We've added an additional load balancer unit to distribute the traffic which is helping. Based on the volume of traffic, we're going to add five additional load balancer units to give our service enough capacity to handle it. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage and the error rates for server-side SDKs are still high. We've escalated the recovery process with our AWS technical support team to accelerate the redeployment of our ALB for SDK connections to restore service. They are updating our ALB load balancer capacity units (LCUs) to accommodate increased levels of inbound traffic to our platform. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage and the error rates for server-side SDKs are still high. We're working with our AWS technical support team to accelerate the redeployment of our ALB for SDK connections to restore service. As a temporary workaround, we recommend switching server-side SDK configs from streaming to polling. Customers connecting their server-side SDKs directly to LD's streaming capabilities can reconfigure their SDKs to use polling to mitigate.
Node:
- Set LDOptions.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-nodejs-server-side-code-sample
- https://launchdarkly.github.io/js-core/packages/sdk/server-node/docs/interfaces/LDOptions.html#stream
Python:
- Set Config.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-python-code-sample
- https://launchdarkly-python-sdk.readthedocs.io/en/latest/api-main.html#ldclient.config.Config.stream
Java:
- Use Components.pollingDataSource() instead of the default Components.streamingDataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-java-code-sample
- https://launchdarkly.github.io/java-core/lib/sdk/server/com/launchdarkly/sdk/server/LDConfig.Builder.html#dataSource-com.launchdarkly.sdk.server.subsystems.ComponentConfigurer-
.NET:
- Create a builder with PollingDataSource(), change its properties with the methods of this class, and pass it to DataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-net-server-side-code-sample
- https://launchdarkly.github.io/dotnet-server-sdk/pkgs/sdk/server/api/LaunchDarkly.Sdk.Server.Integrations.PollingDataSourceBuilder.html
Enterprise customers connecting their server-side SDKs to a Relay Proxy cluster can reconfigure their Relay Proxy to be in Offline Mode to mitigate: https://launchdarkly.com/docs/sdk/relay-proxy/offline
We'll provide another update within 60 minutes.
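As an illustration of the Python workaround above (setting Config.stream to false), a minimal sketch follows. The SDK key, flag key, and context are placeholders, not values from this incident.

```python
# Minimal sketch: run the server-side Python SDK in polling mode instead of
# streaming by setting Config.stream to False, per the workaround above.
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY", stream=False))  # polling mode
client = ldclient.get()

context = Context.builder("example-user-key").build()
enabled = client.variation("example-flag", context, False)
```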
Server-side streaming API is still experiencing a Partial outage in our main US region and we're continuing our efforts to restore service. We're redirecting traffic to an EU region to help distribute the load to healthy servers while we work to restore our primary region. Customers connecting their server-side SDKs directly to LD’s streaming capabilities can reconfigure their SDKs to use polling to mitigate.
Node:
- Set LDOptions.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-nodejs-server-side-code-sample
- https://launchdarkly.github.io/js-core/packages/sdk/server-node/docs/interfaces/LDOptions.html#stream
Python:
- Set Config.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-python-code-sample
- https://launchdarkly-python-sdk.readthedocs.io/en/latest/api-main.html#ldclient.config.Config.stream
Java:
- Use Components.pollingDataSource() instead of the default Components.streamingDataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-java-code-sample
- https://launchdarkly.github.io/java-core/lib/sdk/server/com/launchdarkly/sdk/server/LDConfig.Builder.html#dataSource-com.launchdarkly.sdk.server.subsystems.ComponentConfigurer-
.NET:
- Create a builder with PollingDataSource(), change its properties with the methods of this class, and pass it to DataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-net-server-side-code-sample
- https://launchdarkly.github.io/dotnet-server-sdk/pkgs/sdk/server/api/LaunchDarkly.Sdk.Server.Integrations.PollingDataSourceBuilder.html
Enterprise customers connecting their server-side SDKs to a Relay Proxy cluster can reconfigure their Relay Proxy to be in Offline Mode to mitigate: https://launchdarkly.com/docs/sdk/relay-proxy/offline
We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage and the error rates for server-side SDKs are still high. We're redeploying our ALB for SDK connections to restore service. As a temporary workaround, we recommend switching server-side SDK configs from streaming to polling. Error rates for client-side streaming SDKs are low, but flag updates are still delayed. All other service components are fully recovered and we've updated their status to Operational. We will provide our next update within 60 minutes.
We're redeploying parts of our service to address the high error rates for client and server side SDK connections that we continue to see. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 60 minutes.
Server-side streaming connections continue to be impacted by this incident. The event ingestion pipeline is fully functional again. This means that the following product areas are functional for all customers, while data sent between Sunday Oct 19 11:45pm PT and Monday Oct 20 2:45pm PT may be unrecoverable:
- AI Configs Insights
- Contexts
- Data Export
- Error Monitoring
- Event Explorer
- Experimentation
- Flag Insights
- Guarded rollouts
- Live Events
Additionally, Observability functionality has recovered as mentioned in our previous update. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 30 minutes.
The LaunchDarkly web application is fully recovered for customer traffic. Flag Delivery traffic has been scaled back up to 100% and connection error rates are decreasing but non-zero. Active streaming connections should receive flag updates once successfully connected. If disconnected, these connections will automatically retry in accordance with our SDK behavior until being able to connect successfully. We've currently enabled 7.5% of traffic for the event ingestion pipeline and will continue to enable it progressively. As of 1:40pm PT Observability data is successfully flowing again and we are catching up on data backlog. Observability data between 1:50am PT and 1:40pm PT is unrecoverable due to an outage in the ingest pipeline. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 60 minutes.
We've hit our target of healthy, stable nodes available for the LaunchDarkly web application and are increasing traffic from 10% to 20%. We'll continue to monitor as we scale the web application back up. Recovering the Flag Delivery service for all customers is our top priority. We're working on stabilizing the Flag Delivery Network. We are beginning to progressively enable the event ingestion pipeline for the LaunchDarkly service. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 60 minutes.
The impacted AWS region continues to recover and make resources available which we are using to improve the availability of the LaunchDarkly platform. As we continue to recover and scale up, so do our customers. This increase in traffic is slowing our ability to reduce the impact of the outage. For customers who are using the LaunchDarkly SDKs, we do not recommend making changes to your SDK configuration at this time as doing so will impact our ability to continue service during our recovery. For Flag Delivery, server-side streaming is back online and no longer impacted by the incident for most customers. Customers using big segments or payload filtering are still impacted. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. The event ingestion pipeline will remain disabled to limit the traffic volume within LaunchDarkly's services during our recovery. We will provide our next update within 60 minutes.
We've made significant progress on our recovery from this incident. Our engineers are continuing to bring the LaunchDarkly web application into a healthy state and have more than tripled the number of healthy nodes to serve our customers. The status of many service components has been upgraded from Major Outage to Partial Outage. The following components are still experiencing a Major Outage:
- Experiment Results Processing
- Global Metrics
- Feature Management Context Processing
- Feature Management Data Export
- Feature Management Flag Usage Metric
The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. The event ingestion pipeline will remain disabled to limit the traffic volume within LaunchDarkly's services during our recovery. We will provide our next update within 30 minutes.
We continue to work towards recovering from this incident. We're actively working towards restoring the LaunchDarkly service into a healthy state. We now have 58% of the LaunchDarkly web application in a healthy state. The EU and FedRAMP LaunchDarkly instances are not impacted by this incident. While working towards a resolution for our customers, we disabled the event ingestion pipeline to limit the traffic volume within LaunchDarkly's services. This means that the following product areas have unrecoverable data loss:
- AI Configs Insights
- Contexts
- Data Export
- Error Monitoring
- Event Explorer
- Experimentation
- Flag Insights
- Guarded rollouts
- Live Events
- Observability
While recovering, there is continued impact to customers using our SDKs to connect to our Flag Delivery network. Our engineers are continuing to recover our service in our main region. We will provide our next update within 30 minutes.
While we continue to resolve the ongoing impact, we want to clarify the ongoing impact to our Flag Delivery Network and SDKs:
- Customers using client-side or server-side SDKs should continue to see the last known flag values if a local cache exists, or fall back to in-code values.
- Customers using our Relay Proxy should continue to see last known flag values if a local cache exists.
- Customers using our Edge SDKs should continue to see last known flag values.
Additionally, our event ingestion pipeline is dropping events that power product features such as flag insights, experimentation, observability, and context indexing.
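As an illustration of the in-code fallback behavior described above, a minimal Python sketch follows. The flag key, context key, and default value are placeholders; this assumes a current Python server-side SDK that uses contexts.

```python
# Minimal sketch: variation() serves the last known value when one is cached
# and otherwise returns the in-code default passed as the third argument.
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))
client = ldclient.get()

context = Context.builder("example-user-key").build()
# If the SDK cannot reach LaunchDarkly and has no cached value, False is used.
show_feature = client.variation("example-flag", context, False)
```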
We're continuing to work on resolving the immediate impact from this incident. We're actively working on recovering within our AWS us-east-1 region while also working on options to move traffic to a healthier region.
We are continuing to work on a fix for this issue.
We are aware that our web app and API are experiencing high error rates due to scaling issues in the AWS us-east-1 region.
We are still experiencing delays in flag updates and the event ingestion pipeline, affecting experimentation, data export, flag status metrics, and others. Additionally, we are experiencing an elevated error rate on the client-side SDK streaming API in the us-east-1 region due to scaling issues in that AWS region.
We are still experiencing delays in flag updates and the event ingestion pipeline, affecting experimentation, data export, flag status metrics, and others. Additionally, observability data (session replays, errors, logs, and traces) has also been impacted starting ~1:50am PT.
We are seeing initial recovery for the following services:
- Flag updates
- SDK requests for environments using Big Segments
We are monitoring for the recovery of the rest of the services.
We are continuing to work on the issue. Additional impacted services:
- Delayed flag updates to SDKs
- Dropped SDK events impacting Experimentation, Data Export
We have identified an issue with elevated error rates and event pipelines. Currently impacted services are:
- SDK and Relay Proxy requests for environments using Big Segments in us-east-1 region
- Guarded rollouts
- Scheduled flag changes
- Experimentation
- Data export
- Flag usage metrics
- Emails and notifications
- Integrations webhooks
We are investigating elevated latencies and delays in multiple services including scheduled flag changes, flag updates and events processing. We will post updates as they are available.
Delays in event data
3 updates
The issue with delays in event data has been resolved. Event data is up to date and impacted services have returned to normal operation.
Customers are experiencing up to 21 minute delays with product features using event data. We have identified the issue and are continuing our work to resolve it. Data loss is not expected. Customers may begin seeing recovery of affected services at this time.
All customers are experiencing up to 21 minute delays with product features using event data. We are investigating and will provide updates as they become available. Data loss is not expected.
Delayed flag updates for small number of customers
2 updates
The issue has been resolved. Flag updates have returned to normal operation.
A small number of customers experienced delayed flag updates made between 15:24 and 15:34 PT. The issue has been mitigated and we will continue monitoring.
Errors generating new client libraries
2 updates
Users are now able to generate new client libraries.
We're aware of intermittent difficulties generating new client libraries. We're investigating.
Delay in event processing
4 updates
This incident has been resolved.
We've implemented a fix and are monitoring the results. Impact to Data Export was limited to our streaming data export product.
We've mitigated the impact on processing events for all features outside of Data Export. We're continuing to investigate.
We are currently investigating an issue recording events; some flag, metric, and experimentation events will not appear in the UI.
September 2025 (8 incidents)
Self-serve legacy customers are unable to check out or modify plan
3 updates
The issue with legacy self-serve check out has been resolved.
The issue with legacy self-serve plans has been identified and a fix has been implemented. We are continuing to monitor the performance of impacted services. We will continue to update this page until it is resolved.
Customers on legacy plans (such as Starter and Professional) are unable to check out or modify their plan. We have identified a fix and will provide an update as soon as the fix is ready. Please contact Support if you need to make an immediate change to your plan.
Increased error rate on flag status API
1 update
From 11:38 am PT to 11:46 am PT, we experienced an elevated error rate on the flag evaluation and flag status APIs, which are used by the flag list, flag targeting, and feature monitoring endpoints.
Goals Endpoint Initialization Failures Impacting Experiments
3 updates
The issue with the Goals endpoint has been resolved. We will continue monitoring to ensure normal operation.
A fix has been implemented for the Goals endpoint. We are actively monitoring the system to ensure experiments are functioning as expected.
We’ve identified an issue where the Goals endpoint is failing to initialize in some instances. This is currently impacting experiments. Our team is actively working on a fix to the cause.
Customers are unable to edit flag JSON
2 updates
The issue where the JSON failed to load when clicking “Edit JSON” has been fixed. Functionality is now fully restored.
The JSON fails to load when clicking “Edit JSON.” We’ve identified the issue and are deploying a fix.
Customers on Foundation plans are unable to invite new members
6 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
All customers on Foundation plans are experiencing issues inviting new members via the LaunchDarkly UI. As a workaround, these customers can navigate to https://app.launchdarkly.com/projects/default/onboarding to invite members using the onboarding page. We are investigating a fix and will provide updates as they become available.
Trial Customers Blocked from Foundational Plan Upgrade
3 updates
The issue preventing trial customers from upgrading to the Foundational plan has been resolved. All functionality is now operating as expected.
We have identified the issue preventing trial customers from upgrading to the Foundational plan. A fix is being implemented.
We are investigating an issue where trial customers are unable to upgrade to the Foundational plan.
Delayed flag updates for small number of customers
1 update
A small number of customers experienced delayed flag updates made between 16:22 and 16:49 PT. The issue has been resolved, and all services are now operating normally.
Guarded releases and Experiments are experiencing elevated errors
6 updates
We have turned on Views for Early Access users and confirmed there are no issues with Guarded Releases or Experiments. This issue is fully resolved.
We are continuing to monitor for any further issues.
Only customers who had Early Access to the new Views feature were affected by this incident. We've temporarily turned off Views while implementing a long term fix.
We have identified the issue, implemented a fix, and are monitoring the results.
We are continuing to investigate this issue.
We are currently investigating this issue.
August 2025 (8 incidents)
Elevated TLS negotiation error rate for server side SDKs in streaming mode
1 update
Between 10:12 AM and 10:52 AM PT, a small subset of customers using LaunchDarkly server-side SDKs in streaming mode on older TLS versions may have experienced TLS negotiation errors when initializing SDKs. The issue has been fully resolved, and all services are now operating normally.
CORS errors for a small subset of customers
1 update
A small number of customers using the LaunchDarkly JavaScript client-side SDK experienced CORS errors when initializing the SDK from secondary domains, in certain situations where a single browser session requested the same LaunchDarkly environment from two or more domains. The impact started on Aug 22 and was mitigated on Aug 26. This issue has now been resolved.
Onboarding/Quickstart Not Visible
4 updates
This incident has been resolved.
A fix has been deployed and we will continue to monitor for a period of time.
The issue has been identified and the team is working on a rollback to restore the onboarding and Quickstart experience.
We are currently investigating an issue where onboarding and Quickstart functionality in the application are not visible to customers.
Flag Evaluation Latency
3 updates
We have addressed the source of the latency on flag evaluations and all affected endpoints.
We are continuing to investigate the intermittent latency and have engaged our backend partner to facilitate the investigation.
We are aware of some customers experiencing brief periods of latency in flag evaluations and flag list endpoints. We are continuing to investigate and will provide an update shortly.
EU West region customers may experience failures in initial streaming requests
2 updates
The streaming failures in the EU West region have been resolved. Impacted services have returned to normal operation.
5% of customers are experiencing failures in initial streaming requests in the EU West region. SDKs will retry. We are investigating and will provide updates as they become available.
Elevated TLS negotiation error rate for SDKs in streaming mode
4 updates
The issue with TLS handshake errors from SDKs to our streaming service has been resolved.
We are no longer experiencing elevated error rates. We are continuing to monitor the performance of our streaming service.
We have identified the issue causing the TLS handshake errors from SDKs for a small number of customers. We have reverted the change and expect intermittent elevated error rates while the revert is rolled out.
We are investigating reports of TLS handshake errors from SDKs to our streaming service for a small number of customers. Polling SDKs are not affected.
Elevated error rates in SDK streaming connections
1 update
From 8:23 PM UTC until 8:37 PM UTC we observed elevated error rates for streaming connections in SDKs across all regions. These errors self-resolved as clients successfully retried, and there is no ongoing impact.
Account Usage Charts for Server Side SDKs Degraded
4 updates
This incident has been resolved.
We have implemented a fix for the Account Settings Usage Charts for server side SDKs under-reporting the connection counts and we are monitoring the results. Customers should start seeing new connection counts show up in the charts. We are unable to backfill the missing under-reported data from July 26 to Aug 5.
The issue with account usage charts for server side SDKs connection metrics has been identified and the fix is being implemented.
We are investigating the issue with the Account Settings Usage Charts for server side SDKs under-reporting the connection counts for most customers since July 26.
July 2025 (9 incidents)
Observability [EAP] Logs and Traces Ingest Failures
1 update
Between 18:42-19:52 UTC (11:42-12:52 PDT), Logs and Traces were not ingested due to an issue in the hosted OpenTelemetry collector. Logs and Traces sent during this time will not appear in the Observability product. We apologize for the inconvenience and have taken measures to prevent such an incident in the future.
Session Data Not Loading in App
1 update
During the period from 12:15PM PT to 2:42PM PT, customers monitoring user sessions via the SDK may have experienced issues loading data. At 2:42PM PT, a fix was implemented to address this issue.
Blank Pages in the Web App
1 update
A guarded rollout serving less than 1% of LaunchDarkly users rendered blank pages in the web application between 12:34pm PT and 12:59pm PT.
Event Processing Delays
2 updates
The intermittent delays have been resolved.
Event processing is currently delayed. Some features may show stale data for a period of time until the issue is resolved.
Elevated error rates in usage and flag endpoints
4 updates
This incident has been resolved.
We have addressed the backend performance issues and are no longer observing elevated error rates on the impacted endpoints.
We are continuing to work on a fix for this issue.
We have identified an issue causing elevated error rates on a small subset of requests on the LaunchDarkly platform. Some users may experience 5xx errors when calling the following endpoints:
- Usage data endpoints
- GET flag(s) endpoint
We are currently working to mitigate the underlying root cause.
Elevated Error Rates in Event Ingestion
5 updates
This incident has been resolved.
All event ingestion errors have been resolved and downstream services processing event data have been restored to normal operations.
Engineers have scaled the event service, resolving the majority of the errors. Users should see a reduction in errors on events.launchdarkly.com. Some downstream services in the event data warehouse are still not being updated. We will move this incident to monitoring once we observe the remaining services recover.
We are continuing to investigate this issue and implementing steps to address the error rates. Users may experience errors when calling the events.launchdarkly.com endpoint.
We are currently observing elevated errors in event data ingestion related to experiments, data export, and other areas of the LaunchDarkly platform. Users may experience stale data when using these features.
Observability [EAP] Pages Outage
2 updates
From 10:49:39 AM PDT to 1:11 PM PDT, observability pages were not loading correctly. This incident has been resolved, and there was no loss of customer observability data.
Customers cannot load the observability data, but customer observability data is still being ingested.
Partial Event Processing Interruption
1 update
Between 1:20 AM and 1:40 AM PT on July 10, 2025, we experienced an issue that affected the processing of some analytical events. These events power several features, including flag evaluation graphs, experimentation results, and data exports. Flag evaluation functionality remained fully operational throughout the incident. All systems are now functioning normally, and we are actively monitoring performance while continuing to investigate the root cause to prevent recurrence. We apologize for any inconvenience this may have caused and appreciate your patience.
Docker Image for Relay Proxy (latest) incorrectly pointing to the v9 alpha release.
2 updates
This incident has been resolved.
The latest Relay Proxy Docker image incorrectly pointed to the v9 alpha release between 2025 Jun 30 12:11 PM and 8:52 PM PDT. This has been identified, and we have replaced the latest Relay Proxy Docker image with the latest working v8 version. If you pulled the latest Relay Proxy image during the incident window, please pull it again, and that should resolve the issue. Please do not use the v9 alpha release without reaching out to LaunchDarkly. The SDKs are not affected.
June 2025 (2 incidents)
CUPED-Adjusted Iterations Delayed
2 updates
All missing data has been successfully backfilled.
Users of CUPED-adjusted iterations in experimentation may be missing data from June 11th 4:30AM PST to June 13th 10AM PST. There is no underlying data loss and our engineers are working to backfill this data as quickly as possible. We will provide another update once the missing data has been fully restored.
Delays for Customers Using the Cloudflare Edge SDK
5 updates
Our partner has confirmed full restoration of their platform. We began observing full recovery starting at 2:07PM PT.
Our partner is reporting partial recovery on their platform. We are still observing intermittent delays in the Edge SDK and warehouse-native data export. We'll continue to monitor and update when full platform recovery has been completed by our partner.
We have been notified our vendor handling data warehouse export functionality is also impacted. LaunchDarkly customers using warehouse-native data export may experience data delays until the partner issue is mitigated.
We are continuing to monitor the status with our partners. Edge SDK customers continue to experience intermittent delays.
LaunchDarkly customers leveraging the Cloudflare Edge SDK may experience intermittent delays in processing flag changes. This is due to broad platform-wide outages at Cloudflare. We're continuing to investigate the status with this partner. All other Edge SDKs should be functioning normally. We'll post an update when available from Cloudflare.