Bitbucket Outage History
Past incidents and downtime events
Complete history of Bitbucket outages, incidents, and service disruptions. Showing 50 most recent incidents.
January 2026 (3 incidents)
Disrupted Bitbucket availability in eu-west-1
2 updates
On January 28, 2026, affected Bitbucket Cloud users in eu-west-1 may have experienced some service disruption. The issue has now been resolved, and the service is operating normally for all affected customers.
We received reports of a partial service disruption affecting Bitbucket in eu-west-1 for some customers. We have identified the cause of the issue, our teams have applied mitigations, and we are seeing signs of recovery. We'll continue to monitor closely to confirm stability.
Unable to reach Bitbucket site
5 updates
### Summary

On Jan 7, 2026, between 15:28 UTC and 17:04 UTC, Atlassian customers using Bitbucket Cloud could not load the dashboard landing page. Users also faced degraded performance and intermittent failures navigating other parts of the application or using public REST APIs. The event was caused by unexpected load on a public API, which triggered long-running queries on a database and resulted in failed web and API requests. The incident was detected within three minutes by automated monitoring systems and mitigated by introducing stricter limits on the API for certain traffic and taking manual actions on the impacted database, restoring Bitbucket to a healthy state.

### **IMPACT**

Occurring on Bitbucket Cloud on Jan 7, 2026, between 15:28 UTC and 17:04 UTC, the incident caused degraded performance and intermittent failures for a subset of customers interacting with the Bitbucket web application and public APIs. Git operations over SSH and HTTPS were not impacted.

### **ROOT CAUSE**

The event was caused by unexpected load on a public API. The request volume during this period resulted in high resource utilization on our central database’s read replicas, impacting the performance and reliability of our website and APIs.

### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

We know outages reduce your productivity. Although we have several testing and prevention processes, this issue went undetected because a specific request pattern on a public API was not tested against the traffic volume seen during the incident. We prioritized the following actions to prevent repeating this type of incident:

* Improve rate limiting and caching capabilities at multiple points in our networking and application layers.
* Apply stricter rate limits for specific public APIs to protect infrastructure health and shared application resources.
* Optimize the performance of specific queries and codepaths on these APIs to handle high request loads.

We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.

Thanks,
Atlassian Customer Support
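The remediation above mentions applying stricter rate limits to specific public APIs. Purely as an illustrative sketch (not Atlassian's actual implementation; the endpoint handler, client identifiers, and limits below are invented), a per-client token-bucket limiter is one common way to cap bursty traffic on a single endpoint:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Simple token bucket: refills at `rate` tokens/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per (client, endpoint) pair; the limits here are made up for illustration.
buckets = defaultdict(lambda: TokenBucket(rate=5, capacity=20))

def handle_api_request(client_id: str, endpoint: str):
    if not buckets[(client_id, endpoint)].allow():
        return 429, {"error": "rate limit exceeded, retry later"}
    return 200, {"data": "..."}  # normal request processing would go here
```

In practice, limiters like this usually sit at multiple layers (edge, application) and are keyed per account or per token, which matches the "multiple points in our networking and application layers" item above.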
On Wednesday, January 7, 2026, Bitbucket Cloud experienced a disruption, and services were unavailable to affected users. The issue has now been resolved, and the service is operating normally for all affected customers.
The issue has now been resolved, and the service is operating normally for all affected customers. We will continue to monitor closely to confirm stability.
We are actively investigating reports of a service disruption affecting Bitbucket Cloud. We'll share updates here within the next hour or as more information is available.
We are actively investigating reports of a partial service disruption affecting Bitbucket Cloud for some customers. We'll share updates here within the next hour or as more information is available.
Bitbucket workspace invitations failing for all users
2 updates
We have successfully mitigated the incident and the affected service is now fully operational. Our teams have verified that normal functionality has been restored. Thank you for your patience and understanding while we worked to resolve this issue.
Users of Bitbucket Cloud currently face difficulties when trying to invite new users to their workspaces. This issue is resulting in a '400 Client Error: Bad Request' message, which stems from a recent change in the invitation flow. The team is actively working on a resolution and is deploying a hotfix. We will provide an update within the next hour.
December 2025 (1 incident)
Outbound Email, Mobile Push Notifications, and Support Ticket Delivery Impacting All Cloud Products
3 updates
### Summary

On **December 27, 2025**, between **02:48 UTC and 05:20 UTC**, some Atlassian cloud customers experienced failures in sending and receiving emails and mobile notifications. Core Jira and Confluence functionality remained available. The issue was triggered when **TLS certificates used by Atlassian’s monitoring infrastructure expired**, causing parts of our metrics pipeline to stop accepting traffic. Services responsible for email and mobile notifications had a critical-path dependency on this monitoring path, leading to service disruptions. All impacted services were fully restored by **05:20 UTC**, around **2.5 hours** after customer impact began.

### IMPACT

During the impact window, customers experienced:

* **Outbound product email failures** (notifications and other product emails did not send).
* **Identity and account flow failures** where emails were required (e.g. sign-ups, password resets, one-time-password / step-up challenges).
* **Jira and Confluence mobile push notifications** not being delivered.
* **Customer site activations and some admin policy changes** failing and requiring later reprocessing.

### ROOT CAUSE

The incident was caused by:

1. **Expired TLS certificates** on domains used by our monitoring and metrics infrastructure, caused by a **misconfigured DNS authorization record** which prevented automatic renewal.
2. **Tight coupling of services to metrics publishing**, which caused them to fail when monitoring endpoints became unavailable, instead of degrading gracefully.

### REMEDIAL ACTIONS PLAN & NEXT STEPS

We recognize that outages like this have a direct impact on customers’ ability to receive important notifications, complete account tasks, and operate their sites. We are prioritizing the following actions to improve our existing testing, monitoring and certificate management processes:

* **Hardening monitoring and certificate infrastructure**
  * We are refining DNS and certificate configuration across our monitoring domains and strengthening proactive checks to detect and address failed renewals and certificate issues well before expiry.
  * We are also improving alerting on our monitoring and metrics pipeline.
* **Decoupling monitoring from critical customer flows**
  * We are updating services such as outbound email, identity, mobile push, provisioning, and admin policy changes so they no longer depend on metrics publishing to operate. If monitoring becomes unavailable, these services will continue to run and degrade gracefully by dropping or buffering metrics instead of failing customer operations.

We apologize to customers impacted during this incident. We are implementing the improvements above to help ensure that similar issues are avoided.

Thanks,
Atlassian Customer Support
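The "decoupling monitoring from critical customer flows" action describes making metric publishing best-effort so that a monitoring outage cannot block customer operations. As a hedged sketch only (the function names, queue size, and delivery call are invented, not Atlassian's services), the pattern looks roughly like this:

```python
import logging
import queue

log = logging.getLogger("metrics")

# Bounded buffer: if monitoring is down for a while, metrics are dropped rather
# than blocking the caller.
_metric_buffer: "queue.Queue[tuple[str, float]]" = queue.Queue(maxsize=10_000)

def emit_metric(name: str, value: float) -> None:
    """Best-effort metric publish; failures are logged and dropped."""
    try:
        _metric_buffer.put_nowait((name, value))
    except queue.Full:
        log.warning("metric buffer full, dropping %s", name)

def deliver(recipient: str, body: str) -> None:
    ...  # placeholder for the real email delivery call

def send_email(recipient: str, body: str) -> None:
    deliver(recipient, body)          # critical path: must not depend on metrics
    emit_metric("email.sent", 1.0)    # observability is best-effort only
```

The key design choice is that `emit_metric` can never raise into the caller, so the email path keeps working even if every monitoring endpoint is unreachable.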
We have successfully mitigated the incident and all affected services are now fully operational. Our teams have verified that normal functionality has been restored across all areas. Thank you for your patience and understanding while we worked to resolve this issue.
We have taken steps to mitigate the issue and are seeing recovery in the affected services. Our teams will continue to closely monitor the situation and are actively working to confirm that all services are fully restored. We will provide further updates as we make additional progress.
November 2025 (1 incident)
Bitbucket availability degraded
6 updates
## Summary

On November 11, 2025, between 16:25 and 19:13 UTC, Atlassian customers were unable to access Bitbucket Cloud services. Customers experienced a period of 1 hour and 16 minutes where performance was degraded and a period of 1 hour and 32 minutes where the Bitbucket Cloud website, APIs, and Git hosting were unavailable. The event was triggered by a code change that unintentionally impacted how we evaluate feature flags, impacting all customers. The incident was detected within 5 minutes by automated monitoring systems and mitigated by scaling multiple services and deploying a fix which put Atlassian systems into a known good state. The total time to full resolution was about 2 hours and 48 minutes.

### **IMPACT**

The overall impact was between November 11, 2025, 16:25 UTC and November 11, 2025, 19:13 UTC on Bitbucket Cloud. Between 16:25 UTC and 16:50 UTC, users saw degraded experiences with both Git services and pull request experiences within the Bitbucket Cloud site. Starting at 16:50 UTC, users were unable to access Bitbucket Cloud and associated services entirely.

### **ROOT CAUSE**

During a routine deployment, a code change had a negative impact on a component used for feature flag evaluation. To mitigate this issue the Bitbucket engineering team manually scaled up Git services. This inadvertently resulted in hitting a regional limit with our hosting provider, causing new Git service instances to fail. This ultimately led to degradation of multiple dependent services and an increased number of failed requests via Bitbucket Cloud’s website and public APIs.

### **ACTIONS TAKEN**

Our team immediately began investigating the issue and testing various mitigations, including scaling the impacted services, in an effort to reduce the effects of the change. However, these efforts were unsuccessful due to an unexpected scaling limit imposed by our underlying hosting platform. Attempts to roll back the code change were also unsuccessful, as the platform’s scaling limit prevented new infrastructure from being provisioned during the rollback process. In particular, any attempts to provision new infrastructure caused a high volume of calls to occur in a short period, leading to failures, retries, and a feedback loop that worsened the situation. To address this, the team scaled down certain services to reduce load on the platform, which allowed for the successful deployment of a fix and restoration of service. Once the fix was in place, healthy services were scaled back up to meet customer demand.

### **REMEDIATION AND NEXT STEPS**

We recognize the significant impact outages have on our customers’ productivity. Despite our robust testing and preventative measures, this particular issue related to feature flag evaluation was not detected in other environments and only became apparent under high load conditions that had not previously occurred. The incident has provided valuable information about our hosting platform’s scaling limits, and we are actively applying these learnings to enhance our resilience and response times. To help prevent similar incidents in the future, we have taken the following actions:

* Enhanced the resiliency of the affected feature gate component to prevent future changes from resulting in widespread service impact.
* Updated application logic to prevent services from hitting these platform scaling limits.
* Implemented additional safeguards to detect and handle platform-imposed limits proactively during deployment and rollback scenarios.
We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.

Thanks,
Atlassian Customer Support
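The root cause above centers on a component used for feature flag evaluation, and the first remediation item is making that component more resilient. As a rough sketch only (the flag client, flag names, and defaults below are hypothetical, not Bitbucket's actual component), the essential property is that an evaluation failure falls back to a known-safe default rather than failing the request:

```python
import logging

log = logging.getLogger("flags")

# Known-safe defaults used whenever the flag service cannot be evaluated.
SAFE_DEFAULTS = {
    "new-git-routing": False,
    "pull-request-redesign": False,
}

def evaluate_flag(client, flag_name: str, user_id: str) -> bool:
    """Evaluate a feature flag, degrading to a safe default on any failure."""
    try:
        return client.is_enabled(flag_name, user_id)  # hypothetical flag client API
    except Exception:
        log.exception("flag evaluation failed for %s; using safe default", flag_name)
        return SAFE_DEFAULTS.get(flag_name, False)
```

A fallback like this keeps a flag-service regression from cascading into request failures, at the cost of temporarily serving the default behaviour.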
On November 11th, 2025, Bitbucket Cloud experienced a disruption, and services were unavailable to affected users. The issue has now been resolved, and the service is operating normally for all affected customers. We are committed to transparency and will publish our public post mortem on our Statuspage once this investigation is complete. We expect to publish this within the next 30 days.
The issue has now been resolved, and the service is operating normally for all affected customers. We will continue to monitor closely to confirm stability.
Our teams are continuing to address a service disruption affecting Bitbucket Cloud. We are now seeing signs of recovery, however, affected users may experience intermittent performance degradation. We'll share additional updates here in 60 minutes, or sooner as more information is available.
We are continuing to investigate a service disruption affecting Bitbucket Cloud. We'll share updates in 60 minutes, or sooner as things progress.
We are actively investigating reports of performance degradation affecting Bitbucket Cloud and git services. We'll share updates here as more information is available.
October 2025 (1 incident)
Atlassian Cloud Services impacted
25 updates
### Postmortem publish date: Nov 19th, 2025

### Summary

All dates and times below are in UTC unless stated otherwise.

Customers utilizing Atlassian products experienced elevated error rates and degraded performance between Oct 20, 2025 06:48 and Oct 21, 2025 04:05. The service disruptions were triggered by an [AWS DynamoDB outage](https://aws.amazon.com/message/101925/#:~:text=1%3A50%20PM.-,DynamoDB,-Between%2011%3A48) and further affected by subsequent failures in [AWS EC2](https://aws.amazon.com/message/101925/#:~:text=service%20disruption%20event.-,Amazon%20EC2,-Between%2011%3A48) and [AWS Network Load Balancer](https://aws.amazon.com/message/101925/#:~:text=service%20disruption%20event.-,Amazon%20EC2,-Between%2011%3A48) within the us-east-1 region.

The incident started at Oct 20, 2025 06:48 and was detected within six minutes by our automated monitoring systems. Our teams worked to restore all core services by Oct 21, 2025 04:05. Final cleanup of backlogged processes and minor issues was completed on Oct 22, 2025.

We recognize the critical role our products play in your daily operations, and we offer our sincere apologies for any impact this incident had on your teams. We are taking immediate steps to enhance the reliability and performance of our services, so that you continue to receive the standard of service you have come to trust.

### IMPACT

Before examining product-level impacts, it's helpful to understand Atlassian's service topology and internal dependencies. Products such as Jira and Confluence are deployed across multiple AWS regions. The data for each tenant is stored and processed exclusively within its designated host region. This design is intentional and represents the desired operational state, as it limits the impact of any regional outage strictly to tenants in-region, in this case us-east-1.

While in-scope application data is pinned to the region selected by the customer, there are times when systems need to call other internal services that may be based in a different region. If a problem occurs in the main region where these services operate, systems are designed to automatically fail over to a backup region, usually within three minutes. However, if unexpected issues arise during this failover, it can take longer to restore services. In rare cases, this could affect customers in more than one region. It’s important to note that all in-scope application data for supported products is pinned according to a customer’s chosen region.

**Jira**

Between Oct 20, 2025 06:48 and Oct 20, 2025 20:00, customers with tenants hosted in the us-east-1 region experienced increased error rates when accessing core entities such as Issues, Boards, and Backlogs. This disruption was caused by AWS's inability to allocate AWS EC2 instances and elevated errors in AWS Network Load Balancer (NLB). During this window, users may also have observed intermittent timeouts, slow page loads, and failures when performing operations like creating or updating issues, loading board views, and executing workflow transitions.

Between Oct 20, 2025 08:36 and Oct 20, 2025 09:23, customers across all regions experienced elevated failure rates when attempting to load Jira pages. This disruption was caused by the regional frontend service entering an unhealthy state during this specific time interval. Normally, the frontend service connects to the primary AWS DynamoDB instance located in us-east-1 to retrieve the most recent configuration data necessary for proper operation. Additionally, the service is designed with a fallback mechanism that references static configuration data in the event that the primary database becomes inaccessible. Unfortunately, a latent bug existed in the local fallback path. When the frontend service nodes restarted, they were unable to load critical operational configuration data from primary or fallback sources, leading to the observed failures experienced by customers.

Between Oct 20, 2025 06:48 and Oct 21, 2025 06:30, customers experienced significant delays and missing Jira in-app notifications across all regions. The notification ingestion service, which is hosted exclusively in us-east-1, exhibited an increased failure rate when processing notification messages due to AWS EC2 and NLB issues. This resulted in notifications being delayed, and in some cases not delivered at all, to users worldwide.

**Jira Service Management (JSM)**

JSM was impacted similarly to Jira above, with the same timeframes and for the same reasons. Between Oct 20, 2025 08:36 and Oct 20, 2025 09:23, customers across all regions experienced significantly elevated failure rates when attempting to load JSM pages. This affected all JSM experiences including the Help Centre, Portal, Queues, Work Items, Operations, and Alerts.

**Confluence**

Between Oct 20, 2025 06:48 and Oct 21, 2025 02:45, customers using Confluence in the us-east-1 region experienced elevated failure rates when performing common operations such as editing pages or adding comments. The primary cause of this service degradation was the system's inability to auto-scale, due to AWS EC2 issues, to manage peak traffic load effectively. Though the AWS outage ended at Oct 20, 21:09, a subset of customers continued to experience failures as some Confluence web server nodes across multiple clusters remained in an unhealthy state. This was ultimately mitigated by recycling the affected nodes. To protect our systems while AWS recovered, we made a deliberate decision to enable node termination protection. This action successfully preserved our server capacity but, as a trade-off, it extended the time required for a full recovery once AWS services were restored.

**Automation**

Between Oct 20, 2025 06:55 and Oct 20, 2025 23:59, automation customers whose rules are processed in us-east-1 experienced delays of up to 23 hours in rule execution. During this window, some events triggering rule executions were processed out of order because they arrived later during backlog processing. This caused potential inconsistencies in workflow executions, as rules were run in the order events were received, not when the action causing the event occurred. Additionally, some rule actions failed because they depend on first-party and third-party systems, which were also affected by the AWS outage. Customers can see most of these failures in their audit logs; however, a few updates were not logged due to the nature of the outage. By Oct 21, 2025 5:30, the backlog of rule runs in us-east-1 was cleared. Although most of these delayed rules were successfully handled, there were some additional replays of events to ensure completeness. Our investigation confirmed that a few events may never have triggered their associated rules due to the outage.

Between Oct 20, 2025 06:55 and Oct 20, 2025 11:20, all non-us-east-1 regional automation services experienced delays of up to 4 hours in rule execution. This was caused by an upstream service that was unable to deliver events as expected. The delivery service encountered a failure due to a cross-region dependency call to a service hosted in the us-east-1 region. Because of this dependency issue, the delivery service was unable to successfully deliver events throughout this time frame, resulting in customer-defined rules not being executed in a timely manner.

**Bitbucket and Pipelines**

Between Oct 20, 2025 06:48 and Oct 20, 2025 09:33, Bitbucket experienced intermittent unavailability across core services. During this period, users faced increased error rates and latency when signing in, navigating repositories, and performing essential actions such as creating, updating, or approving pull requests. The primary cause was an AWS DynamoDB outage that impacted downstream services.

Between Oct 20, 2025 06:48 and Oct 20, 2025 22:46, numerous Bitbucket Pipeline steps failed to start, stalled mid-execution, or experienced significant queueing delays. Impact varied, with partial recoveries followed by degradation as downstream components re-synchronized. The primary cause was an AWS DynamoDB outage, compounded by instability in AWS EC2 instance availability and AWS Network Load Balancers. Furthermore, Bitbucket Pipelines continued to experience a low but persistent rate of step timeouts and scheduling errors due to AWS bare-metal capacity shortages in select availability zones. Atlassian coordinated with AWS to provision additional bare-metal hosts and addressed a significant backlog of pending pods, successfully restoring services by 01:30 on Oct 21, 2025.

**Trello**

Between Oct 20, 2025 06:48 and Oct 20, 2025 15:25, users of Trello experienced widespread service degradation and intermittent failures due to upstream AWS issues affecting multiple components, including AWS DynamoDB and subsequent AWS EC2 capacity constraints. During this period, customers reported elevated error rates when loading boards, opening cards, and adding comments or attachments.

**Login**

Between Oct 20, 2025 06:48 and Oct 20, 2025 09:30, a small subset of users experienced failures when attempting to initiate new login sessions using SAML tokens. This resulted in an inability for those users to access Atlassian products during that time period. However, users who already had valid active sessions were not affected by this issue and continued to have uninterrupted access. The issue impacted all regions globally because regional identity services relied on a write replica located in the us-east-1 region to synchronize profile data. When the primary region became unavailable, the failover to a secondary database in another region failed, which delayed recovery. This failover defect has since been addressed.

**Statuspage**

Between Oct 20, 2025 06:48 and Oct 20, 2025 09:30, Statuspage customers who were not already logged in to the management portal were unable to log in to create or update incident statuses. This impact was restricted only to users who were not already logged in at the time. The root cause was the same as described in the Login section above, and it was resolved by the same remediation steps.

### REMEDIAL ACTION PLAN & NEXT STEPS

We have completed the following critical actions designed to help prevent cross-region impact from similar issues:

* Resolved the code defect in the fallback option to ensure that Jira Frontend Services in other regions remain unaffected during a region-wide outage.
* Fixed the issue that prevented timely failover of the identity service which impacted new login sessions.
* Resolved the code defect so that delivery services in unaffected regions remain operational during region-wide outages.

Additionally, we are prioritizing the following improvement actions:

* Implement mitigation strategies to strengthen resilience against region-wide outages in the notification ingestion service.

Although disruptions to our cloud services are sometimes unavoidable during outages of the underlying cloud provider, we continuously evaluate and improve test coverage to strengthen the resilience of our cloud services against these issues. We recognize the critical importance of our products to your daily operations and overall productivity, and we extend our sincere apologies for any disruptions this incident may have caused your teams. If you were impacted and require additional details for internal post-incident reviews, please reach out to your Atlassian support representative with affected timeframes and tenant identifiers so we can correlate logs and provide guidance.

Thanks,
Atlassian Customer Support
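One contributing factor above was a latent bug in a static-configuration fallback path that was only exercised once the primary configuration store became unreachable. As an illustrative sketch under assumed names (the file path, schema keys, and remote client are inventions, not Atlassian's services), a fallback loader can validate the static path on every startup so a broken fallback is caught long before it is needed:

```python
import json
import logging
from pathlib import Path

log = logging.getLogger("config")

FALLBACK_FILE = Path("/etc/service/fallback-config.json")  # assumed location
REQUIRED_KEYS = {"routing_table", "feature_defaults"}       # assumed schema

def load_fallback() -> dict:
    """Load and validate the static fallback config; raise if it is unusable."""
    config = json.loads(FALLBACK_FILE.read_text())
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"fallback config missing keys: {missing}")
    return config

def load_config(remote_client) -> dict:
    # Validate the fallback up front, so a defect in this path surfaces at
    # startup rather than during a regional outage when it is needed most.
    fallback = load_fallback()
    try:
        return remote_client.fetch_config()  # hypothetical primary-store call
    except Exception:
        log.exception("primary config store unavailable; using static fallback")
        return fallback
```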
Our team is now able to see full recovery across the vast majority of Atlassian products. We are aware of some ongoing issues with specific components such as migrations and JSM virtual service agents, and our team is continuing to investigate with urgency. We apologise for the inconvenience that this incident has caused and we will provide further information when the Post Incident Investigation has been completed.
The issue that caused the Atlassian Support portal to display a message directing customers to our temporary support channel has now been resolved. The Atlassian Support portal is fully functional for any ongoing support issues. With regards to other Atlassian products, we continue to see recovery across all impacted products, and our teams are continuing to monitor as the recovery progresses. We will provide a further update on our recovery status within two hours.
We continue to see recovery progressing across all impacted products as backlogged items continue to be processed. The Atlassian Support portal is currently displaying a message directing customers to our temporary support channel. Please note that our support portal is currently fully functional for those attempting to raise requests. We are continuing to work on removing this message. We will provide a further update on our recovery status in two hours.
Our team is now seeing recovery across all impacted Atlassian products. We are continuing to monitor for individual products that may still be processing backlogged items now that services are restored. The Atlassian Support portal is currently still displaying a message directing customers to our temporary support channel. Please note that our support portal is currently fully functional for those attempting to raise requests. We are continuing to work on removing this message. We will provide a further update on our recovery status in one hour.
Our teams are continuing to monitor the recovery of systems across Atlassian products. This update is to inform that the Atlassian Support portal is fully operational at this time for customers that wish to contact support.
Monitoring - We've started seeing continued product experience improvement. While we still have a backlog of event processing, we are seeing improvements in systems' operational capabilities across all products. We estimate a significant improvement within the next few hours and will continue to monitor the health of AWS services and the effects on Atlassian customers. We appreciate your continued patience and remain committed to full resolution as we work through this situation. We will post our next update in two hours.
There have been no changes since our last update. We will provide our next update by 9:00 PM UTC or sooner as new information becomes available. We are currently aware of an ongoing incident impacting Atlassian Cloud services due to an outage with our public cloud provider, AWS. We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority and we are closely monitoring the health of AWS services. While we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation.
We are currently aware of an ongoing incident impacting Atlassian Cloud services due to an outage with our public cloud provider, AWS. We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority and we are closely monitoring the health of AWS services. While we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation. We will continue to provide updates every hour or sooner as new information becomes available.
Update - We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority. Our public cloud provider is actively working to mitigate this issue with urgency. While we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation. We will continue to provide updates every hour or sooner as new information becomes available.
Update - Thank you for your continued patience. We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority. Our public cloud provider is still actively working to mitigate this issue with urgency and while we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation. We will be providing hourly updates on this issue.
Update - Thank you for your continued patience. We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority. Our public cloud provider is still actively working to mitigate this issue with urgency and while we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation. We will be providing hourly updates on this issue.
Update - We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority. Our public cloud provider is actively working to mitigate this issue with urgency. While we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation. We will continue to provide updates every hour or sooner as new information becomes available.
Update - We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority. Our public cloud provider is actively working to mitigate this issue with urgency. While we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation. We will continue to provide updates every hour or sooner as new information becomes available.
We understand the impact this is having, and mitigating this issue is of the utmost importance. Our public cloud provider is actively working to mitigate this issue as a priority. We are seeing some operations partially succeeding. We appreciate your patience and will continue to provide updates every hour or sooner.
Our public cloud provider is working to mitigate this issue quickly. We are seeing some early positive indicators and are continuing to monitor. We appreciate your patience and will continue to provide updates every hour or sooner.
Our public cloud provider is working to mitigate this issue quickly. We are seeing some early positive indicators and are continuing to monitor. We appreciate your patience and will continue to provide updates every hour or sooner.
Our public cloud provider is working to mitigate this issue quickly. We are seeing some early positive indicators and are continuing to monitor. We appreciate your patience and will continue to provide updates every hour or sooner.
The Atlassian team is actively engaged and continues to work with our public cloud provider to mitigate this issue as soon as possible. We are starting to see partial operations succeed. We appreciate your patience. We will continue to share updates every hour, if not sooner.
We continue to work with our public cloud provider to mitigate the issue as soon as possible. We appreciate your patience. We will continue to share updates every hour, if not sooner.
We understand that our public cloud provider has identified the cause of the issue. We are starting to see some recovery and are working towards mitigation. We appreciate your patience. We will continue to share updates every hour, if not sooner.
We are experiencing an outage due to an issue with our public cloud provider. We are working closely with them to get this resolved or mitigated as quickly as possible. An ETA is not known at the moment. We will continue to share updates every hour, if not sooner.
We are experiencing an outage due to an issue with our public cloud provider. We are working closely with them to get this resolved or mitigated as quickly as possible. An ETA is not known at the moment. We will continue to share updates every hour, if not sooner.
Atlassian Cloud services are impacted and we are aware that our customers might not be able to create support tickets. Our teams are actively investigating. We will keep you informed of progress every hour.
We have noticed that Atlassian Cloud services are impacted and our teams are actively investigating. We will keep you informed of progress every hour.
September 2025 (1 incident)
Delays in running Bitbucket Pipelines
4 updates
Between 03:30 UTC and 03:50 UTC, we experienced delays in running pipelines for Atlassian Bitbucket. The issue has been resolved and the service is operating normally.
We have identified the root cause of the Pipelines delays and have mitigated the problem. We are now monitoring closely.
We continue to work on resolving the delayed Pipelines for Atlassian Bitbucket. We have identified the root cause and expect recovery shortly.
We are investigating cases of degraded performance for Atlassian Bitbucket Cloud Pipelines customers. We will provide more details within the next hour. We have mitigated the impact on self-hosted runners but cloud users are continuing to see delays on pipelines starting.
August 2025 (5 incidents)
Core-daily Pipeline is delayed for 25th Aug run
3 updates
Issue resolved; the pipeline has completed.
Core daily job run ID scheduled__2025-08-24T04:00:00+00:00 has been running for over 30 hours and all the downstream jobs have failed. The core daily Aug 25th run has not started yet.
We are currently investigating this issue.
Git operations are slow/timing out.
3 updates
This incident has been resolved.
A root cause has been identified and a mitigation has been implemented; we're monitoring the situation.
- Bitbucket Cloud is investigating an incident affecting Git clone reliability.
- The team is investigating the root cause and will update as soon as possible.
Bitbucket Cloud experiencing degraded performance and partial outage
3 updates
Bitbucket Cloud is fully recovered.
Website functionality has recovered and we are monitoring to ensure no further regressions
We are currently investigating the issue further.
Bitbucket cloud degradation
1 update
Today, between 13:55 and 14:35 UTC, we experienced degradation on Bitbucket Cloud, which impacted access to the website. This issue has been resolved, and services are now operating normally for all affected customers. We will monitor closely to ensure stability.
Some Bitbucket customers unable to push and clone
2 updates
Between 08:00 UTC and 09:00 UTC, we experienced degraded performance with push and clone operations for Atlassian Bitbucket. The issue has been resolved and the service is operating normally.
We are investigating reports of intermittent errors for some Atlassian Bitbucket Cloud customers. We will provide more details once we identify the root cause.
July 2025 (2 incidents)
Customers experiencing issues with Bitbucket builds
3 updates
We have received an update that services with our infrastructure provider are fully restored. This issue is now resolved.
The cause of Bitbucket pipeline issues has been connected to an infrastructure provider. We have received an update from their team that services should now be returning to normal. We are monitoring recovery at this time and will update further when we believe services are fully restored.
We are aware of some customers currently experiencing issues relating to build pipelines in Bitbucket. Our team is investigating with urgency and will provide an update as soon as possible.
Users experiencing errors attempting to load Bitbucket
5 updates
**Incident Narrative**

On July 1, 2025, at 02:36 UTC, a service disruption occurred leaving Atlassian customers unable to access Bitbucket. The primary issue was identified as an inconsistent network access control list (ACL) rule, which inadvertently blocked certain IP ranges necessary for file system access. This network misconfiguration caused a service outage until the rule was corrected at 04:35 UTC, restoring normal service operations.

**Root Cause**

The root cause of the incident was traced to a network ACL misconfiguration during a routine update. A race condition in the deployment software led to an inconsistent state. This left a gap in network traffic management, disrupting essential services and resulting in inaccessibility of Bitbucket functionalities.

**Mitigations**

To restore service, the team repaired the inconsistent network ACL configuration and recreated the missing network ACL rule, re-establishing connections to the affected file systems.

**Actions**

In response to the incident, we have implemented additional mechanisms to help prevent inconsistent network configuration and better ensure the expected network ACL rules remain intact, including during deployment.
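The actions above describe mechanisms to ensure the expected network ACL rules remain intact through deployments. As a generic, hedged sketch (the rule shape is simplified, the baseline values are illustrative, and fetching live rules from the cloud provider is deliberately left out), a post-deploy drift check can compare the rules that actually exist against a declared baseline:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AclRule:
    cidr: str       # e.g. "10.0.0.0/16"
    port: int       # e.g. 2049 for NFS-style file system access
    action: str     # "allow" or "deny"

# Baseline the deployment is expected to leave in place (values are made up).
EXPECTED_RULES = {
    AclRule("10.0.0.0/16", 2049, "allow"),
    AclRule("10.1.0.0/16", 2049, "allow"),
}

def check_acl_drift(actual_rules: set[AclRule]) -> list[str]:
    """Return a list of problems; an empty list means the ACL matches the baseline."""
    return [f"missing expected rule: {rule}" for rule in EXPECTED_RULES - actual_rules]

# In a deploy pipeline this would run after every ACL change and fail the
# deployment (or page on-call) if any expected rule has gone missing.
```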
Between 02:36 UTC and 04:45 UTC on 1 July 2025, we experienced a network configuration error affecting Atlassian Bitbucket, resulting in the product being unavailable for most users during this time. The issue has been resolved and the service is operating normally.
Our team has identified a network configuration error that was causing the issues relating to Bitbucket being unavailable for some users. This error has now been rectified. We will continue to monitor services closely. Additional updates will continue to be shared here.
We are continuing to investigate issues that are being experienced by users attempting to load Bitbucket. We apologize for any inconvenience this has caused and we will provide further updates as soon as possible.
We are aware of some users experiencing issues when attempting to load Bitbucket. Our team is investigating with urgency and will provide an update as soon as possible.
June 2025 (4 incidents)
Bitbucket is slow in responding to customers
4 updates
Between 06:35 and 09:27 UTC, we experienced degraded performance for Atlassian Bitbucket. The issue has been resolved and the service is operating normally.
We have identified the root cause of the degraded performance and have mitigated the problem. We are now monitoring closely.
We continue to work on resolving the degraded performance and intermittent failures (due to slow loading times) for Atlassian Bitbucket. We have identified the root cause and taken mitigations to handle the increased load and expect recovery shortly.
We are investigating cases of degraded performance for Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
Intermittent issues affecting Bitbucket cloud
5 updates
### **SUMMARY**

On June 13, 2025, between 08:58 UTC and 09:34 UTC, a subset of Bitbucket website and API users experienced an error page when attempting to load the website or make API requests. This was caused by high resource usage on a core Bitbucket service’s database, leading to an increase in query response times. A recurrence of the issue impacted website reliability again between 12:24 UTC and 14:02 UTC. The incident was detected within 6 minutes by monitoring and was fully mitigated by scaling database capacity, running database maintenance operations, and removing a faulty read replica, which put Atlassian systems into a known good state. The total time to resolution was about 36 minutes for the first occurrence and one hour and 38 minutes for the second occurrence.

### **IMPACT**

The total window during which we saw the impact was June 13, 2025, between 08:58 UTC and 14:02 UTC. The incident caused service disruption to customers trying to access the [Bitbucket.org](http://bitbucket.org/) website and APIs. Customers may have seen a “something went wrong” error page when trying to access the website or APIs. Git operations over SSH and HTTPS were not impacted.

### **ROOT CAUSE**

The issue originated from a problem with query plan execution time on read replica instances. This led to increased query latency, spikes in CPU usage, and ultimately resulted in longer request times. As a result, the influx of incoming requests to Bitbucket led to a saturation of connections. This resulted in Bitbucket web requests returning slowly or showing an error page.

### **REMEDIAL ACTION PLAN & NEXT STEPS**

We fully understand that outages impact your productivity. While we have a number of testing and preventative processes in place, this specific issue wasn’t identified with existing coverage. We are prioritizing the following improvement actions designed to avoid repeating this type of incident:

* Continue to improve database resiliency and read replica routing for Bitbucket Cloud’s dependencies
* Improve database monitoring and failover speed to reduce time to recovery
* Run database maintenance tasks and improve the process to discover and upgrade all databases to well-known versions

We apologize to customers whose services were impacted during this incident; we are taking steps to improve the platform’s performance and availability.

Thanks,
Atlassian Customer Support
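The first remediation item above is improving read replica routing. As a simplified sketch under assumed names (this is not Bitbucket's data layer; the health-check method on each connection is a placeholder), the basic pattern is to send writes to the primary and spread reads across replicas that currently pass a health check, excluding a faulty replica automatically:

```python
import random

class ReplicaRouter:
    """Route writes to the primary and reads to healthy read replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = list(replicas)

    def connection_for(self, is_write: bool):
        if is_write:
            return self.primary
        # Only consider replicas that currently report healthy (assumed check).
        healthy = [r for r in self.replicas if r.is_healthy()]
        # Fall back to the primary if every replica is unhealthy or removed.
        return random.choice(healthy) if healthy else self.primary
```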
Between 08:15 and 14:10 UTC on June 14th, 2025, some Bitbucket Cloud customers experienced an issue loading pages and executing API requests. We have deployed a fix to mitigate the issue and have verified that the services have recovered. The conditions that cause the issue have been addressed and we’re actively working on a permanent fix. Services are now operating normally. Once we complete our internal incident review process, we will publish a more detailed postmortem of what went wrong, along with steps we're taking to prevent this from happening again in the future.
Our team continues to actively investigate and monitor the intermittent issues with Bitbucket. We are working to ensure full stability and will provide further updates as more information becomes available.
Our team is focused on investigating and resolving the intermittent issues with Bitbucket, and we’ll keep you posted with further updates.
We’re experiencing a recurrence of the intermittent issue affecting Bitbucket cloud. Our team is working diligently to resolve this issue, and we’ll keep you posted with further updates.
Intermittent issues affecting Bitbucket
4 updates
On June 13, Bitbucket Cloud experienced intermittent availability. The issue has now been resolved, and the service is operating normally for all affected users.
The issues causing intermittence in Bitbucket have been mitigated, and services are now operating normally for all affected users. We will monitor it closely to ensure stability.
Our team is still investigating the intermittent issues with Bitbucket, and we’ll keep you posted with further updates.
We are currently experiencing intermittent issues with Bitbucket. Our team is working diligently to resolve this issue, and we'll keep you posted with further updates.
Customers may experience delays receiving emails
2 updates
Between 2025-06-04 14:11 UTC to 20:18 UTC, we experienced delays in delivering emails for Confluence, Jira Work Management, Jira Service Management, Jira, Trello, Atlassian Bitbucket, Guard, Jira Align, Jira Product Discovery, Atlas, Compass. The issue has been resolved and the service is operating normally.
We were experiencing cases of degraded performance for outgoing emails from Confluence, Jira Work Management, Jira Service Management, Jira, Trello, Atlassian Bitbucket, Guard, Jira Align, Jira Product Discovery, Atlas and Compass Cloud customers. The system is recovering and mail is being processed normally as of 16:45 UTC. We will continue to monitor system performance and will provide more details within the next hour.
May 2025 (4 incidents)
Bitbucket - Steps are queued and delayed from starting
4 updates
On May 30th, there was an issue where Bitbucket pipeline steps were queued and delayed from starting. This problem has now been resolved, and the service is operating normally for all customers.
The Bitbucket issues, where steps were queued and delayed, have been mitigated. Services are now functioning normally for all affected customers. We will monitor it closely to ensure stability.
The issue has been identified and a fix is being implemented.
We are investigating an issue affecting Bitbucket, where steps are queued and delayed from starting. Our team is working diligently to resolve this issue and restore services as quickly as possible. We'll keep you posted with further updates.
Users receiving errors when attempting to load Bitbucket
3 updates
Our team has mitigated the issue that caused error messages when loading Bitbucket. Bitbucket should now function correctly for all previously impacted users.
Our team has mitigated the issue that caused error messages when loading Bitbucket. We are continuing to monitor at this time to ensure performance has been restored, and resolve the root cause.
We are aware of some users experiencing issues loading Bitbucket and may be receiving errors such as '500 internal server error'. Our team is investigating this issue with urgency and will provide an update as soon as possible.
Bitbucket has degraded performance
11 updates
### Summary

On May 8, 2025, at 3:26 PM UTC, Bitbucket Cloud experienced website and API latency due to an overloaded primary database. The event was caused by a backfill job running from an internal Atlassian service, which triggered an excessive call volume of expensive queries and pressure on database resources. As a result, the primary database automatically failed over, and Bitbucket services recovered in 15 minutes. Our real-time monitoring detected the incident immediately, and the high-intensity backfill job was stopped. However, following the failover, a backlog of retries from downstream services continued to impact overall database performance. Customers may have seen intermittent errors or website latency during this time. During this period following the failover, the engineering team implemented several strategies to further shed database load, successfully alleviating pressure on resources and improving performance. On May 9th at 11:19 AM UTC, Bitbucket Cloud systems were fully operational.

### **Impact**

The overall impact occurred between May 8th, 2025, at 3:26 PM and May 9th at 11:19 AM UTC on Bitbucket Cloud. The incident resulted in increased latency and intermittent failures across Bitbucket Cloud services, including the website, API, and Bitbucket Pipelines.

### **Root cause**

The issue was caused by an internal high-scale backfill job that triggered excessive load on certain API endpoints, which eventually impacted the database through resource-intensive queries and operations. This led to additional load from retries by dependent services, increasing the total recovery time.

### **Remedial action plan and next steps**

We know that outages impact your productivity. While we have several testing and preventative processes in place, this specific issue wasn’t identified during our testing, as it was related to a specific high-scale backfill job run by an internal component, which triggered highly resource-intensive database queries. To prevent this type of incident from recurring, we are prioritizing the following improvement actions:

* Improve database request routing so that more reads go to read replicas instead of the write-primary database.
* Adjust rate limits for internal API endpoints with resource-intensive database operations.
* Optimize database queries so that they can run more efficiently.
* Tune retry policies from downstream services.

We apologize to customers whose services were interrupted by this incident, and we are taking immediate steps to improve the platform’s reliability.

Thanks,
Atlassian Customer Support
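One remediation item above is tuning retry policies from downstream services, since a backlog of retries after the failover prolonged the recovery. As a hedged example of the general technique (the callable, attempt count, and delays are placeholders, not Atlassian's settings), capped exponential backoff with full jitter spreads retries out instead of piling synchronized load onto a recovering database:

```python
import random
import time

def call_with_backoff(operation, max_attempts: int = 5,
                      base_delay: float = 0.2, max_delay: float = 10.0):
    """Retry `operation` with capped exponential backoff and full jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise
            # Full jitter: sleep a random amount up to the capped exponential delay,
            # so many clients retrying at once do not synchronize into a retry storm.
            delay = min(max_delay, base_delay * (2 ** (attempt - 1)))
            time.sleep(random.uniform(0, delay))
```

Giving up after a bounded number of attempts (rather than retrying indefinitely) is what lets a saturated dependency shed load and recover.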
This incident has been resolved.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Some pipeline builds are not triggering
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are investigating an issue with pipeline builds not triggering that is impacting some Bitbucket customers. We will provide more details within the next hour.
March 2025 (4 incidents)
Bitbucket has degraded performance
5 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are still investigating reports of performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
We are still investigating reports of intermittent errors and performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
We are investigating reports of intermittent errors and performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
Bitbucket pipelines showing error when running a build.
2 updates
The issue where users running builds would receive an error message should now be resolved. This issue was impacting only those using self-hosted runners. Builds should still have run as expected despite this error message.
We are aware of an issue where users are receiving an error message when running a build. Although we believe builds are still running successfully our team is investigating with urgency and an update will be provided when available.
Bitbucket has degraded performance
4 updates
This incident has been resolved.
We are continuing to monitor for any further issues.
We have mitigated the problem and currently monitoring the results.
We are investigating reports of intermittent errors and performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
Bitbucket website and git operations down for some customers
4 updates
Between 11/Mar/25 23:02 UTC and 11/Mar/25 23:59 UTC, some Bitbucket Cloud customers experienced issues accessing our website, API, and git services. We have mitigated the problem and will take steps to avoid this issue in the future.
We are monitoring an identified fix and seeing signs of recovery.
We are continuing to investigate this issue.
Some customers are experiencing issues with our web and git operations. We are investigating.
February 2025 (2 incidents)
Degraded website performance for some customers.
5 updates
After monitoring, this incident is now resolved. If you continue to see issues, please ensure you are not using a stale / old tab and are loading a fresh version of Bitbucket in your browser.
We have rolled out a fix for this issue and are seeing recovery for Bitbucket users. Please refresh the page and it should fix the issue. We are currently monitoring the fix and further investigating root cause.
We've identified an issue with some customers' DNS configuration not resolving Atlassian domains correctly. If you are experiencing errors accessing Bitbucket, we advise allowlisting the domains listed on the following page in your DNS configuration: https://support.atlassian.com/organization-administration/docs/ip-addresses-and-domains-for-atlassian-cloud-products/ We are still investigating the issue with vendors and will update this status page as we learn more.
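Since the workaround above asks customers to verify that Atlassian domains resolve through their DNS configuration, a small check along these lines can confirm resolution from an affected network (the two domains below are examples only; the linked support page has the full list to allowlist):

```python
import socket

# Example domains only; see the Atlassian support page linked above for the full list.
DOMAINS = ["bitbucket.org", "atlassian.com"]

for domain in DOMAINS:
    try:
        addresses = {info[4][0] for info in socket.getaddrinfo(domain, 443)}
        print(f"{domain}: resolves to {sorted(addresses)}")
    except socket.gaierror as err:
        print(f"{domain}: DNS resolution FAILED ({err})")
```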
We've identified an issue with some customers' DNS configuration that can block some Bitbucket assets. We are still investigating and will update this status page with more information shortly.
We are currently investigating an issue impacting some customers who are seeing intermittent errors accessing the Bitbucket.org website.
Bitbucket has degraded performance
3 updates
### SUMMARY

On February 11, 2025, between 15:41 and 16:26 UTC, Atlassian customers using Bitbucket Cloud experienced workspace access errors (HTTP 404) when attempting to access the website, API, and Git over HTTPS/SSH. The event was triggered by a failure in our feature flagging service, which inadvertently blocked some users from core services. The incident was detected within eight minutes by automated monitoring and was resolved 45 minutes later once a change to a feature flag configuration had been fully deployed.

### **IMPACT**

A subset of Bitbucket Cloud users were unable to access their workspace. When trying to access their Bitbucket Cloud repository through the website, API, or CLI, these users would have seen a 404 error message.

### **ROOT CAUSE**

An upstream failure in a feature flagging service resulted in Bitbucket’s application logic not working correctly. This resulted in access errors for a subset of customers.

### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

We know that outages impact your productivity. While we have a number of testing and preventative processes in place, this specific failure scenario wasn’t identified during testing. We are prioritizing the following improvement actions designed to avoid repeating this type of incident:

* Fixing the root cause of the bug in our feature flag service
* Improving Bitbucket’s fallback mechanisms and handling of errors in feature flags
* Improving test coverage and war-gaming failures with core dependencies

We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.

Thanks,
Atlassian Customer Support
Between 11/Feb/25 3:41 PM UTC and 11/Feb/25 4:26 PM UTC, some Bitbucket Cloud customers experienced degraded performance and 400 errors. We have mitigated the problem and will take steps to avoid this issue in the future. We will publish a public PIR for this incident on this status page once it becomes available. The issue has been resolved and all services are operating normally.
We are investigating reports of intermittent errors and performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
January 2025 (1 incident)
Bitbucket Cloud web, api, and Pipelines service outage
9 updates
### Summary

On January 21, 2025, between 14:02 and 17:49 UTC, Atlassian customers using Bitbucket Cloud were unable to use the website, API, or Pipelines. The event was triggered by write contention in a high-traffic database table. The incident was detected within eight minutes. We then worked to both throttle traffic and improve query performance, which allowed services to recover. The total time to resolution was about three hours and 47 minutes.

### **IMPACT**

The overall impact was between 14:02 and 17:49 UTC, affecting Bitbucket Cloud. This impacted customers globally, and they were unable to use the website, APIs, or Pipelines services. Git hosting (SSH) was unaffected.

### **ROOT CAUSE**

The issue was caused by an increase in API traffic triggering write contention on a high-traffic table, resulting in increased CPU usage and degraded database performance. This ultimately impacted the availability of core services (web, API, and Pipelines).

### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

We know that outages impact your productivity. While we have several testing and preventative processes in place, this specific issue wasn’t identified because the code path being triggered does not commonly experience this type of traffic. We are prioritizing the following improvement actions to avoid repeating this type of incident:

* Running additional maintenance on core database tables
* Adding throttling on write-heavy operations

To improve service resilience and recovery time for our environments, we will implement additional preventative measures such as:

* Improving database observability to isolate failures
* Continuing to shard data to better distribute traffic load

We apologize to customers whose services were impacted by this incident and are taking immediate steps to improve the platform’s performance and availability.

Thanks,
Atlassian Customer Support
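One of the remediation items above is adding throttling on write-heavy operations to limit contention on a hot table. As a rough sketch with invented names (not the actual Bitbucket change), a bounded semaphore is a simple way to cap how many of those writes run concurrently, shedding the excess with a retryable error instead of queueing it on the database:

```python
import threading

# Cap concurrent writes to the hot table; the limit here is illustrative only.
_write_slots = threading.BoundedSemaphore(value=32)

class WriteThrottled(Exception):
    """Raised when the write path is saturated; callers should retry later."""

def record_event(write_fn, *args, **kwargs):
    # Non-blocking acquire: if every slot is busy, fail fast rather than piling
    # more lock contention onto the database table.
    if not _write_slots.acquire(blocking=False):
        raise WriteThrottled("too many concurrent writes, retry later")
    try:
        return write_fn(*args, **kwargs)
    finally:
        _write_slots.release()
```

Callers would typically pair a throttle like this with jittered retries so the shed load returns gradually rather than all at once.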
Earlier we experienced database contention on high traffic tables, which resulted in website, API, and Pipelines outages. All Bitbucket services are now operational. A full post mortem will be published.
All Git, Web, API and Pipelines services are now operational. We are continuing to monitor database and Pipelines reliability.
We have identified the root cause of the database issue that impacted the Bitbucket website and Git operations; this has now been mitigated. We are experiencing Pipelines degradation that we are working to resolve.
We have identified the root cause of the database issue and have mitigated the problem. We are now monitoring closely.
We are investigating an issue with a saturated Bitbucket database that is impacting all Bitbucket operations. We will provide more details within the next 30 minutes.
We are investigating an issue with a saturated Bitbucket database that is impacting all Bitbucket operations. We will provide more details within the next hour.
We are still investigating an issue with Bitbucket Web and Git operations that is impacting Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
We are investigating an issue with Bitbucket Web and Git operations that is impacting Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
December 2024(2 incidents)
Increased error rate in Bitbucket Cloud APIs
3 updates
This incident has been resolved.
A solution has been implemented, and we're monitoring the fix to confirm resolution.
Bitbucket Cloud support has observed a small increase in errors across all Bitbucket Cloud APIs. We are investigating the issue and looking into the root cause.
Issues with attachments, including viewing previews, downloading and uploading
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the result.
We are continuing to work on a fix for this issue.
We have identified the issue and are working on a fix.
November 2024(1 incident)
Unable to invite new users due to missing recaptcha token
3 updates
Between 08:00 UTC and 11:54 UTC, we experienced problems inviting new users for Cloud customers on admin.atlassian.com. The issue has been resolved and the service is operating normally.
We continue to work on resolving the invitation workflow in admin.atlassian.com. We have identified the root cause and made changes in the environment to mitigate the issue.
We are investigating reports of intermittent errors for some Atlassian customers when they are trying to invite users using their admin panels (admin.atlassian.com). We will provide more details once we identify the root cause.
September 2024(4 incidents)
Users are experiencing reCaptcha errors while signing up
3 updates
This issue has been resolved.
We have identified the root cause and the issue appears to be resolved.
Users attempting to sign up are encountering reCaptcha errors that are preventing a successful signup.
Unable to connect to Bitbucket Cloud via SSH
2 updates
### **SUMMARY** On September 10, 2024, between 6:34 PM UTC and 7:19 PM UTC, some Atlassian customers experienced an issue preventing users from connecting to Bitbucket Cloud via SSH. The issue arose from a change in how we determined IP allow lists, which inadvertently blocked access for customers with these controls enabled. The incident was promptly identified through our monitoring systems, and our teams initiated response protocols to mitigate the issue. ### **IMPACT** The incident only affected customers who had IP allow listing enabled on their Bitbucket Cloud accounts. These customers experienced difficulties connecting via SSH due to the unintended blocking caused by a change in the IP allow list computation. The service interruption lasted approximately 45 minutes, during which time affected users were unable to access their repositories through SSH. ### **ROOT CAUSE** A change to IP allow list evaluation was incompatible with a new Bitbucket Cloud networking configuration. This inadvertently blocked SSH access for customers with specific allow list restrictions enabled. ### **REMEDIAL ACTIONS PLAN & NEXT STEPS** To restore the SSH service, the team quickly rolled back the release responsible for the IP allow list issue. We know that outages impact your productivity. We are prioritizing the following improvement actions to avoid repeating this type of incident: * Improve monitoring coverage of IP allow listing; * Add additional tests and deployment validation checks for changes to IP allow list configurations. We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability. Atlassian Customer Support
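For illustration only, the sketch below shows a straightforward way to evaluate a client address against configured CIDR allow list entries using Python's standard `ipaddress` module. The function name and example networks are hypothetical; the PIR does not describe Atlassian's actual evaluation code.

```python
# Minimal sketch (hypothetical): check a client address against configured
# CIDR allow list entries using only the standard library.
import ipaddress

def ip_allowed(client_ip, allowed_cidrs):
    """Return True if client_ip falls inside any allowed CIDR block."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(cidr, strict=False)
               for cidr in allowed_cidrs)

# The address handed to this check must be the real client address; if a
# networking change means, for example, a proxy address is evaluated instead,
# legitimate clients can be blocked, which is the general class of failure
# described above.
assert ip_allowed("203.0.113.10", ["203.0.113.0/24"])
assert not ip_allowed("198.51.100.7", ["203.0.113.0/24"])
```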
We experienced connectivity issues with Bitbucket Cloud via SSH, which only affected customers using IP allow listing. The service was unavailable for approximately 40 minutes. However, the issue was identified and resolved, and service was restored around 19:23 UTC.
Bitbucket Cloud website performance degradation
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently experiencing performance degradation with Bitbucket Cloud. Users may encounter slower than expected response times when accessing the Bitbucket Cloud website.
Bitbucket Webhooks and Pipelines on push and pull request not being triggered
5 updates
This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating an issue related to Connect webhooks not being delivered. This has a downstream impact on Bitbucket Pipelines: pipelines triggered on push are not starting. Manually triggered and scheduled pipelines are still working.
August 2024(1 incident)
Bitbucket website is slow to load
2 updates
Between 2024-08-07 5:20am UTC and 2024-08-07 5:40am UTC, we experienced degraded performance with the Bitbucket website. The issue has been resolved and the service is operating normally.
We are currently experiencing an issue where Bitbucket website is slow to load. Our engineering team is actively investigating the root cause and working to resolve the issue as quickly as possible.
July 2024(5 incidents)
Bitbucket Pipelines Failing to Start
3 updates
The issue has been resolved and the service is operating normally.
We have identified the cause and have mitigated the problem. We are now monitoring this closely.
We are currently experiencing an issue where Bitbucket Pipelines are failing to start. Our engineering team is actively investigating the root cause and working to resolve the issue as quickly as possible.
Pipelines failing to start.
5 updates
Between 2024-07-24 3:30am UTC and 2024-07-24 6:40am UTC, we experienced degraded performance with the API and Pipelines for Atlassian Bitbucket. The issue has been resolved and the service is operating normally.
Both Bitbucket API and Pipelines services have recovered. We'll continue to monitor and provide another update in 20 minutes.
A fix has been implemented and API performance has begun to recover. Some Pipelines are still delayed but we are seeing recovery. We'll continue to monitor and provide an update within the hour.
We've identified an issue causing increased API error rate and delayed Pipelines. We are working on implementing a fix to restore service and will post a follow up within one hour.
We are currently investigating an issue affecting pipeline creation due to an increased API error rate. We will provide a follow-up soon.
Bitbucket Cloud services degraded
3 updates
This incident has been resolved.
The impact to most Bitbucket Cloud operations has been resolved. We are continuing to monitor the impact to Bitbucket Pipelines.
We are currently investigating an issue that is impacting Bitbucket Cloud.
Some users may experience delays in receiving email notifications
2 updates
Between 12:00am on 9th July and 08:00am on 10th July, we experienced email deliverability issues for some recipient domains for Confluence, Jira Work Management, Jira Service Management, Jira, Trello, Atlassian Bitbucket, and Jira Product Discovery. The issue has been resolved and future emails will flow normally.
We continue to work on resolving the email notification issues for Confluence, Jira Work Management, Jira Service Management, Jira, Trello, Atlassian Bitbucket, and Jira Product Discovery. We have identified the root cause.
Some products are hard down
3 updates
Between 03-07-2024 20:08 UTC and 03-07-2024 20:31 UTC, we experienced downtime for Atlassian Bitbucket. The issue has been resolved and the service is operating normally.
We have mitigated the problem and continue looking into the root cause. The outage lasted from 8:08pm UTC to 8:31pm UTC on 03/07. We are now monitoring closely.
We are investigating an issue that is impacting Atlassian, Atlassian Partners, Atlassian Support, Confluence, Jira Work Management, Jira Service Management, Jira, Opsgenie, Atlassian Developer, Atlassian (deprecated), Trello, Atlassian Bitbucket, Guard, Jira Align, Jira Product Discovery, Atlas, Atlassian Analytics, and Rovo Cloud customers. We will provide more details within the next hour.
June 2024(5 incidents)
Intermittent error accessing content
3 updates
Between 2024-06-20 22:04 UTC and 2024-06-20 22:28 UTC, we experienced an intermittent issue preventing some Atlassian Cloud customers from accessing services. The issue has been resolved and the service is operating normally.
We have identified the root cause of the intermittent errors and have mitigated the problem. We are now monitoring closely.
We are investigating an intermittent issue with accessing Atlassian Cloud services that is impacting some Atlassian Cloud customers. We will provide more details once we identify the root cause.
Bitbucket Pipelines degraded experience
5 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
The team continues to investigate an issue impacting Bitbucket Pipelines, but believes the impact is limited to self-hosted runners.
We are currently investigating an issue impacting Bitbucket Pipelines, including self-hosted and cloud runners.
Partially Degraded Experience Running Pipelines
4 updates
This incident has been resolved and Pipelines are back to running normally.
The backlog of Pipelines has been successfully processed and we are running normally again. The team is continuing to monitor the situation.
As a result of this incident, we are now processing a backlog of Pipelines, which is causing slowness. The team is working on mitigating this to process the remaining Pipelines from the initial incident.
An issue was identified with the ability to parse YAML, impacting some customers' ability to start or complete Pipelines. It has since been resolved and the team is monitoring for further issues.
Degraded Performance of Bitbucket Website and Pipelines
3 updates
This incident has been resolved.
A fix has been applied and performance restored. The team is monitoring to ensure no further recurrence.
We are currently investigating an issue impacting our database that is slowing most functionality across Bitbucket and Pipelines.
Error responses across multiple Cloud products
3 updates
### Summary On June 3rd, between 09:43pm and 10:58pm UTC, Atlassian customers using multiple products were unable to access their services. The event was triggered by a change to the infrastructure API Gateway, which is responsible for routing traffic to the correct application backends. The incident was detected by the automated monitoring system within five minutes and mitigated by correcting a faulty release feature flag, which put Atlassian systems into a known good state. The first communications were published on the Statuspage at 11:11pm UTC. The total time to resolution was about 75 minutes. ### **IMPACT** The overall impact was between 09:43pm and 10:17pm UTC, with the system initially in a degraded state, followed by a total outage between 10:17pm and 10:58pm UTC. _The incident caused service disruption to customers in all regions and affected the following products:_ * Jira Software * Jira Service Management * Jira Work Management * Jira Product Discovery * Jira Align * Confluence * Trello * Bitbucket * Opsgenie * Compass ### **ROOT CAUSE** A policy used in the infrastructure API Gateway was being updated in production via a feature flag. The combination of an erroneous value entered in a feature flag and a bug in the code resulted in the API Gateway not processing any traffic. This created a total outage, where all users started receiving 5XX errors for most Atlassian products. Once the problem was identified and the feature flag updated to the correct values, all services started seeing recovery immediately. ### **REMEDIAL ACTIONS PLAN & NEXT STEPS** We know that outages impact your productivity. While we have several testing and preventative processes in place, this specific issue wasn’t identified because the change did not go through our regular release process and instead was incorrectly applied through a feature flag. We are prioritizing the following improvement actions to avoid repeating this type of incident: * Prevent high-risk feature flags from being used in production * Improve testing of policy changes * Enforce longer soak times for policy changes * Require feature flags to go through progressive rollouts to minimize broad impact * Review infrastructure feature flags to ensure they all have appropriate defaults * Improve our processes and internal tooling to provide faster communications to our customers We apologize to customers whose services were affected by this incident and are taking immediate steps to address the above gaps. Thanks, Atlassian Customer Support
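To illustrate the "appropriate defaults" remediation above, here is a minimal Python sketch that validates a policy value delivered via a feature flag and falls back to a last known-good policy when the value is malformed. The policy fields, allowed values, and function names are invented for the example and are not Atlassian's gateway code.

```python
# Minimal sketch (hypothetical): validate a policy delivered via a feature
# flag and fall back to the last known-good policy rather than applying a
# malformed value to the gateway. Field names and values are invented.
KNOWN_GOOD_POLICY = {"route_mode": "standard", "shadow_traffic_percent": 0}
ALLOWED_ROUTE_MODES = {"standard", "canary"}

def validate_policy(candidate):
    """Raise ValueError if the candidate policy is malformed."""
    if candidate.get("route_mode") not in ALLOWED_ROUTE_MODES:
        raise ValueError(f"unknown route_mode: {candidate.get('route_mode')!r}")
    pct = candidate.get("shadow_traffic_percent")
    if not isinstance(pct, (int, float)) or not 0 <= pct <= 100:
        raise ValueError(f"shadow_traffic_percent out of range: {pct!r}")
    return candidate

def apply_flag_update(candidate):
    """Return the policy to apply, preferring the candidate if it validates."""
    try:
        return validate_policy(candidate)
    except ValueError:
        # An erroneous flag value should never take the gateway down.
        return KNOWN_GOOD_POLICY

# Example: a bad value falls back instead of breaking routing.
assert apply_flag_update({"route_mode": "oops"}) == KNOWN_GOOD_POLICY
```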
Between 22:18 UTC and 22:56 UTC, we experienced errors for multiple Cloud products. The issue has been resolved and the service is operating normally.
We are investigating an issue with error responses for some Cloud customers across multiple products. We have identified the root cause and expect recovery shortly.
May 2024(2 incidents)
Git LFS operations aren't working.
4 updates
This incident has been resolved.
A fix has been implemented and deployed. Operations over SSH should be working as expected. We will continue to monitor this situation.
We are continuing to investigate the issue. As a workaround, we recommend users attempt to use HTTP, as the issue appears to be impacting SSH.
The Bitbucket Cloud team is investigating an issue with Git LFS operations. We're working on identifying the root cause and will provide an update soon.
Delay in starting pipelines
3 updates
The incident affecting pipelines has been resolved.
We have identified a bottleneck in a service and scaled up the underlying infrastructure. We are monitoring the backlog as it clears.
We've observed a delay in pipelines starting once triggered. We're isolating the root cause and will implement a fix as soon as possible.
April 2024(1 incident)
Bitbucket Pipelines have a delay in triggering builds
6 updates
This incident has been resolved.
The issue is resolved and pipeline triggers are working as expected. We are monitoring to make sure they are fully functional.
The issue remains unresolved; we are working on a new approach to fix it.
A rollback is in-progress, resolution expected in approximately 30 minutes. A workaround option is to manually trigger a build in the Bitbucket Cloud UI for the repositories that have not triggered automatically.
We are continuing to work on a fix for this issue.
Bitbucket Cloud pipeline triggers are not working as expected. A root cause has been identified, and a recent change is being rolled back. The impact will be a delay in pipelines starting until this is resolved; we will provide a follow-up shortly.