
Is Bitbucket Down?

Real-time status monitoring

All Systems Operational
Uptime (30d): N/A
Response Time: 251ms
Incidents (7d): 1
Last Checked: 10:35:10 PM

Embed Bitbucket Status Badge

Show live Bitbucket status in your README, documentation, or website

Bitbucket Status
Markdown
[![Bitbucket Status](https://apistatuscheck.com/api/badge/bitbucket)](https://apistatuscheck.com/api/bitbucket)
HTML
<a href="https://apistatuscheck.com/api/bitbucket"><img src="https://apistatuscheck.com/api/badge/bitbucket" alt="Bitbucket Status" /></a>

Response Time (24h)

Min: 203ms · Max: 292ms · Avg: 248ms (chart buckets: <500ms, 500-2000ms, >2000ms)

Recent Incidents

Minor · Resolved

Disrupted Bitbucket availability in eu-west-1

Jan 28, 04:49 PM — Resolved Jan 28, 08:00 PM

On January 28, 2026, Bitbucket Cloud users in eu-west-1 may have experienced service disruption. The issue has been resolved, and the service is operating normally for all affected customers.

Critical · Postmortem

Unable to reach bitbucket site

Jan 7, 04:23 PM — Resolved Jan 7, 06:53 PM

### Summary

On Jan 7, 2026, between 15:28 UTC and 17:04 UTC, Atlassian customers using Bitbucket Cloud could not load the dashboard landing page. Users also faced degraded performance and intermittent failures navigating other parts of the application or using public REST APIs. The event was caused by unexpected load on a public API, which triggered long-running queries on a database and resulted in failed web and API requests. The incident was detected within three minutes by automated monitoring systems and mitigated by introducing stricter limits on the API for certain traffic while taking manual actions on the impacted database, restoring Bitbucket to a healthy state.

### **IMPACT**

On Jan 7, 2026, between 15:28 UTC and 17:04 UTC, the incident caused degraded performance and intermittent failures for a subset of customers interacting with the Bitbucket Cloud web application and public APIs. Git operations over SSH and HTTPS were not impacted.

### **ROOT CAUSE**

The event was caused by unexpected load on a public API. The request volume during this period resulted in high resource utilization on our central database's read replicas, impacting website and API performance and reliability.

### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

We know outages reduce your productivity. Although we have several testing and prevention processes, this issue went undetected because a specific request pattern on a public API was not tested against the traffic volume seen during the incident. We prioritized the following actions to prevent repeating this type of incident:

* Improve rate limiting and caching capabilities at multiple points in our networking and application layers.
* Apply stricter rate limits for specific public APIs to protect infrastructure health and shared application resources.
* Optimize the performance of specific queries and codepaths on these APIs to handle high request loads.

We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform's performance and availability.

Thanks, Atlassian Customer Support
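The remediation plan above mentions stricter rate limits on specific public APIs. As a rough illustration only (a generic sketch, not Atlassian's implementation), a token-bucket limiter is one common way to enforce such per-client limits; the capacity and refill values below are arbitrary placeholder assumptions.

```typescript
// Generic token-bucket rate limiter sketch: reject excess requests before
// they can trigger long-running database queries.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,        // maximum burst size
    private readonly refillPerSecond: number  // sustained request rate
  ) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  /** Returns true if the request may proceed, false if it should be rejected (e.g. HTTP 429). */
  tryAcquire(): boolean {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Placeholder numbers: allow bursts of 20 requests, sustained 5 requests/second per client.
const limiter = new TokenBucket(20, 5);
if (!limiter.tryAcquire()) {
  console.warn("Rate limit exceeded; respond with 429 instead of querying the database.");
}
```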

Minor · Resolved

Bitbucket workspace invitations failing for all users

Jan 7, 06:05 AM — Resolved Jan 7, 06:40 AM

We have successfully mitigated the incident and the affected service is now fully operational. Our teams have verified that normal functionality has been restored. Thank you for your patience and understanding while we worked to resolve this issue.

Critical · Postmortem

Outbound Email, Mobile Push Notifications, and Support Ticket Delivery Impacting All Cloud Products

Dec 27, 04:42 AM — Resolved Dec 27, 05:20 AM

### Summary

On **December 27, 2025**, between **02:48 UTC and 05:20 UTC**, some Atlassian cloud customers experienced failures in sending and receiving emails and mobile notifications. Core Jira and Confluence functionality remained available. The issue was triggered when **TLS certificates used by Atlassian's monitoring infrastructure expired**, causing parts of our metrics pipeline to stop accepting traffic. Services responsible for email and mobile notifications had a critical-path dependency on this monitoring pipeline, leading to service disruptions. All impacted services were fully restored by **05:20 UTC**, around **2.5 hours** after customer impact began.

### IMPACT

During the impact window, customers experienced:

* **Outbound product email failures** (notifications and other product emails did not send).
* **Identity and account flow failures** where emails were required (e.g. sign-ups, password resets, one-time-password / step-up challenges).
* **Jira and Confluence mobile push notifications** not being delivered.
* **Customer site activations and some admin policy changes** failing and requiring later reprocessing.

### ROOT CAUSE

The incident was caused by:

1. **Expired TLS certificates** on domains used by our monitoring and metrics infrastructure, caused by a **misconfigured DNS authorization record** which prevented automatic renewal.
2. **Tight coupling of services to metrics publishing**, which caused them to fail when monitoring endpoints became unavailable instead of degrading gracefully.

### REMEDIAL ACTIONS PLAN & NEXT STEPS

We recognize that outages like this have a direct impact on customers' ability to receive important notifications, complete account tasks, and operate their sites. We are prioritizing the following actions to improve our existing testing, monitoring, and certificate management processes:

* **Hardening monitoring and certificate infrastructure**
  * We are refining DNS and certificate configuration across our monitoring domains and strengthening proactive checks to detect and address failed renewals and certificate issues well before expiry.
  * We are also improving alerting on our monitoring and metrics pipeline.
* **Decoupling monitoring from critical customer flows**
  * We are updating services such as outbound email, identity, mobile push, provisioning, and admin policy changes so they no longer depend on metrics publishing to operate. If monitoring becomes unavailable, these services will continue to run and degrade gracefully by dropping or buffering metrics instead of failing customer operations.

We apologize to customers impacted during this incident. We are implementing the improvements above to help ensure that similar issues are avoided.

Thanks, Atlassian Customer Support
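The "degrade gracefully" remediation described above (dropping or buffering metrics instead of failing customer operations) can be illustrated with a small sketch. The `publishMetric` helper and the metrics endpoint below are hypothetical stand-ins, not Atlassian APIs; the point is only that the customer-facing operation catches and logs metrics failures rather than propagating them.

```typescript
// Hypothetical best-effort metrics publisher. In the incident, calls like this
// failed once the monitoring endpoints' TLS certificates expired.
async function publishMetric(name: string, value: number): Promise<void> {
  await fetch("https://metrics.example.internal/publish", {
    method: "POST",
    body: JSON.stringify({ name, value, at: Date.now() }),
  });
}

// Customer-facing operation: metrics failures are dropped, not fatal.
async function sendNotificationEmail(to: string, subject: string): Promise<void> {
  console.log(`queued email to ${to}: ${subject}`); // stand-in for the real delivery path

  try {
    await publishMetric("email.sent", 1);
  } catch (err) {
    // Best-effort telemetry: log and drop the sample so a monitoring outage
    // no longer blocks email delivery.
    console.warn("metrics publish failed, dropping sample", err);
  }
}
```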

Critical · Postmortem

Bitbucket availability degraded

Nov 11, 04:59 PM — Resolved Nov 11, 09:10 PM

## Summary

On November 11, 2025, between 16:25 and 19:13 UTC, Atlassian customers were unable to access Bitbucket Cloud services. Customers experienced a period of 1 hour and 16 minutes where performance was degraded and a period of 1 hour and 32 minutes where the Bitbucket Cloud website, APIs, and Git hosting were unavailable. The event was triggered by a code change that unintentionally impacted how we evaluate feature flags, affecting all customers. The incident was detected within 5 minutes by automated monitoring systems and mitigated by scaling multiple services and deploying a fix which put Atlassian systems into a known good state. The total time to full resolution was about 2 hours and 48 minutes.

### **IMPACT**

The overall impact was on Bitbucket Cloud between November 11, 2025, 16:25 UTC and 19:13 UTC. Between 16:25 UTC and 16:50 UTC, users saw degraded Git services and pull request experiences within the Bitbucket Cloud site. Starting at 16:50 UTC, users were unable to access Bitbucket Cloud and associated services entirely.

### **ROOT CAUSE**

During a routine deployment, a code change had a negative impact on a component used for feature flag evaluation. To mitigate this issue, the Bitbucket engineering team manually scaled up Git services. This inadvertently hit a regional limit with our hosting provider, causing new Git service instances to fail. This ultimately led to degradation of multiple dependent services and an increased number of failed requests via Bitbucket Cloud's website and public APIs.

### **ACTIONS TAKEN**

Our team immediately began investigating the issue and testing various mitigations, including scaling the impacted services, in an effort to reduce the effects of the change. However, these efforts were unsuccessful due to an unexpected scaling limit imposed by our underlying hosting platform. Attempts to roll back the code change were also unsuccessful, as the platform's scaling limit prevented new infrastructure from being provisioned during the rollback process. In particular, any attempt to provision new infrastructure caused a high volume of calls in a short period, leading to failures, retries, and a feedback loop that worsened the situation. To address this, the team scaled down certain services to reduce load on the platform, which allowed for the successful deployment of a fix and restoration of service. Once the fix was in place, healthy services were scaled back up to meet customer demand.

### **REMEDIATION AND NEXT STEPS**

We recognize the significant impact outages have on our customers' productivity. Despite our robust testing and preventative measures, this particular issue related to feature flag evaluation was not detected in other environments and only became apparent under high load conditions that had not previously occurred. The incident has provided valuable information about our hosting platform's scaling limits, and we are actively applying these learnings to enhance our resilience and response times. To help prevent similar incidents in the future, we have taken the following actions:

* Enhanced the resiliency of the affected feature gate component to prevent future changes from resulting in widespread service impact.
* Updated application logic to prevent services from hitting these platform scaling limits.
* Implemented additional safeguards to detect and handle platform-imposed limits proactively during deployment and rollback scenarios.

We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform's performance and availability.

Thanks, Atlassian Customer Support
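One generic pattern for the feature-flag hardening described above is to evaluate flags with a timeout and a known-safe default, so an unhealthy flag component degrades a single decision instead of the whole request. The sketch below is illustrative only; `FlagClient` and the 50 ms timeout are assumptions, not Bitbucket's actual feature-gate component.

```typescript
// Hypothetical feature-flag client interface.
interface FlagClient {
  isEnabled(flag: string): Promise<boolean>;
}

// Evaluate a flag, but never let a slow or failing flag service block the request:
// after `timeoutMs`, or on any error, fall back to a known-safe default.
async function evaluateWithFallback(
  flagClient: FlagClient,
  flag: string,
  safeDefault: boolean,
  timeoutMs = 50
): Promise<boolean> {
  const timeout = new Promise<boolean>((resolve) =>
    setTimeout(() => resolve(safeDefault), timeoutMs)
  );
  try {
    // Whichever resolves first wins; a hung flag service yields the default.
    return await Promise.race([flagClient.isEnabled(flag), timeout]);
  } catch {
    // Evaluation errors also degrade to the safe default.
    return safeDefault;
  }
}
```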

What is Bitbucket?

Git code hosting and CI/CD platform by Atlassian

Bitbucket Down? Try These Steps

  1. Check the official Bitbucket status page for announcements (a programmatic check is sketched after this list)
  2. Try refreshing your browser or clearing cache
  3. Check your internet connection
  4. Try accessing from a different network or VPN
  5. Check social media for reports from other users
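If you prefer to script step 1, the sketch below queries Bitbucket's official status page programmatically. It assumes the page is a standard Atlassian Statuspage instance exposing `/api/v2/status.json`; the endpoint and field names follow that common format and should be verified before relying on them.

```typescript
// Minimal status check against Bitbucket's official status page.
// Assumption: bitbucket.status.atlassian.com follows the standard Statuspage
// API shape ({ status: { indicator, description } }).
interface StatuspageStatus {
  status: {
    indicator: string;    // e.g. "none" | "minor" | "major" | "critical"
    description: string;  // e.g. "All Systems Operational"
  };
}

async function checkBitbucketStatus(): Promise<void> {
  const res = await fetch("https://bitbucket.status.atlassian.com/api/v2/status.json");
  if (!res.ok) {
    throw new Error(`Status page request failed: ${res.status}`);
  }
  const body = (await res.json()) as StatuspageStatus;
  console.log(`${body.status.description} (indicator: ${body.status.indicator})`);
}

checkBitbucketStatus().catch((err) => console.error(err));
```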

Get Bitbucket Outage Alerts

Be the first to know when Bitbucket goes down.