Confluence Outage History
Past incidents and downtime events
Complete history of Confluence outages, incidents, and service disruptions. Showing 50 most recent incidents.
February 2026 (1 incident)
Confluence, Jira Mobile and Forge users may experience authentication issues
3 updates
Our team has identified the root cause and the issue has now been resolved; the impacted services are operating normally. We will continue to monitor the impacted services.
Customers utilizing Confluence and Jira Mobile may experience disruptions in OAuth authentication flows. Forge installations and invocations might also be disrupted. Our team is actively investigating, and we will keep you informed of progress within the next 60 minutes or sooner.
Customers utilizing Confluence and Jira Mobile may experience disruptions in OAuth authentication flows. Forge installations and invocations might also be disrupted. Our team is actively investigating, and we will keep you informed of progress within the next 60 minutes or sooner.
January 2026 (1 incident)
Confluence site unreachable for some users
5 updates
### Summary
On Jan 08, 2026, between 14:54 UTC and 16:30 UTC, Atlassian customers using Confluence Cloud product(s) experienced degraded service with view/edit page experiences. The event was triggered by database overload due to an unexpectedly large burst of traffic. The database overload was the result of a configuration change, which led to a sudden spike in database connections and impacted a subset of customers in a single partition in the us-east region. The incident was detected within 1 minute by automated monitoring systems and mitigated by reducing the number of web server hosts connecting to the database layer, which put Atlassian systems into a known good state. The total time to resolution was about 1 hour and 26 minutes.

### IMPACT
The overall impact was between Jan 08, 2026, 14:54 UTC and 16:30 UTC on Confluence Cloud products. The incident caused service disruption to customers in a single partition in the us-east region, impacting the ability to view and edit pages. We observed partition-wide database saturation due to an unexpectedly large burst of traffic. Confluence Cloud components impacted during this window were: Login, View / Edit Page, Publish Page, Add Page, Comment.

### ROOT CAUSE
A change was introduced that caused cross-region traffic routing for customers instead of routing to the same region. Some caches processing customer data were stale, which heavily loaded the databases and caused them to restart. As a result, users of the product above could not log in, view, or edit pages, and received HTTP 500 and 504 errors.

### REMEDIAL ACTIONS PLAN & NEXT STEPS
We know that outages impact your productivity. While we have a number of testing and preventative processes in place, this specific issue wasn't identified due to a specific set of factors which resulted in this condition. We are prioritizing the following improvement actions to help avoid repeating this type of incident:

* Fix the routing issue that resulted in cross-regional traffic.
* Decrease the maximum number of connections per database instance, to allow sufficient memory capacity to handle a surge in connections.

Furthermore, we deploy our changes progressively (by cloud region) to avoid broad impact, but in this case the impact was larger than desired. To minimize the impact of breaking changes to our environments, we will implement additional preventative measures such as enabling automatic vertical scaling when database clusters are running low on memory.

We apologize to customers whose services were impacted during this incident; we are taking steps to help improve the platform's performance and availability.

Thanks,
Atlassian Customer Support
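One of the remedial actions above is to cap connections per database instance while also limiting how many web server hosts connect to the database layer. As a generic, hedged illustration of that sizing exercise (not Atlassian's stack; the host count, limits, and DSN below are assumptions), a Python service using SQLAlchemy could budget its pool so that hosts × pool size stays safely under the database's connection limit:

```python
# Illustrative only: bounding per-host DB connections so a fleet of web servers
# cannot saturate the database. Numbers and DSN are hypothetical.
from sqlalchemy import create_engine

DB_MAX_CONNECTIONS = 500      # database-side limit (assumed)
WEB_HOSTS = 40                # web servers connecting to this database (assumed)
HEADROOM = 0.8                # keep 20% spare for admin/replication sessions

per_host_budget = int(DB_MAX_CONNECTIONS * HEADROOM / WEB_HOSTS)  # 10 here

engine = create_engine(
    "postgresql+psycopg2://app@db.internal/confluence",  # hypothetical DSN
    pool_size=per_host_budget,   # steady-state connections per host
    max_overflow=0,              # never burst past the budget
    pool_timeout=5,              # fail fast instead of piling up waiters
    pool_pre_ping=True,          # drop stale connections after a DB restart
)
```

During an incident, shrinking either the host count or the per-host budget reduces total connections in the same way as the mitigation described in the summary.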
On Thursday, January 8, 2026, affected Confluence Cloud users in the us-east-1 region may have experienced some service disruption. The issue has now been resolved, and the service is operating normally for all affected customers.
The issue has been resolved, and services are now operating normally for all affected customers. We'll continue to monitor closely to confirm stability.
The issue has been resolved, and services are now operating normally for all affected customers. We'll continue to monitor closely to confirm stability.
We are actively investigating reports of a partial service disruption affecting Confluence Cloud for some customers. We'll share updates here in an hour or as more information is available.
December 2025 (3 incidents)
Outbound Email, Mobile Push Notifications, and Support Ticket Delivery Impacting All Cloud Products
5 updates
### Summary
On **December 27, 2025**, between **02:48 UTC and 05:20 UTC**, some Atlassian cloud customers experienced failures in sending and receiving emails and mobile notifications. Core Jira and Confluence functionality remained available. The issue was triggered when **TLS certificates used by Atlassian's monitoring infrastructure expired**, causing parts of our metrics pipeline to stop accepting traffic. Services responsible for email and mobile notifications had a critical-path dependency on that monitoring path, leading to service disruptions. All impacted services were fully restored by **05:20 UTC**, around **2.5 hours** after customer impact began.

### IMPACT
During the impact window, customers experienced:

* **Outbound product email failures** (notifications and other product emails did not send).
* **Identity and account flow failures** where emails were required (e.g. sign-ups, password resets, one-time-password / step-up challenges).
* **Jira and Confluence mobile push notification failures.**
* **Customer site activations and some admin policy changes** failing and requiring later reprocessing.

### ROOT CAUSE
The incident was caused by:

1. **Expired TLS certificates** on domains used by our monitoring and metrics infrastructure, caused by a **misconfigured DNS authorization record** which prevented automatic renewal.
2. **Tight coupling of services to metrics publishing**, which caused them to fail when monitoring endpoints became unavailable, instead of degrading gracefully.

### REMEDIAL ACTIONS PLAN & NEXT STEPS
We recognize that outages like this have a direct impact on customers' ability to receive important notifications, complete account tasks, and operate their sites. We are prioritizing the following actions to improve our existing testing, monitoring and certificate management processes:

* **Hardening monitoring and certificate infrastructure**
  * We are refining DNS and certificate configuration across our monitoring domains and strengthening proactive checks to detect and address failed renewals and certificate issues well before expiry.
  * We are also improving alerting on our monitoring and metrics pipeline.
* **Decoupling monitoring from critical customer flows**
  * We are updating services such as outbound email, identity, mobile push, provisioning, and admin policy changes so they no longer depend on metrics publishing to operate. If monitoring becomes unavailable, these services will continue to run and degrade gracefully by dropping or buffering metrics instead of failing customer operations.

We apologize to customers impacted during this incident. We are implementing the improvements above to help ensure that similar issues are avoided.

Thanks,
Atlassian Customer Support
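The decoupling action above reflects a common pattern: metrics publishing should never sit on the critical path of a customer operation. A minimal Python sketch of the idea (illustrative only, not Atlassian's implementation; the endpoint and queue bounds are assumptions) buffers metrics in a bounded queue and silently drops them when the pipeline is unreachable, so callers never block or fail:

```python
# Illustrative only: decouple metric publishing from the request path.
import json
import queue
import threading
import urllib.request

_buffer: "queue.Queue[dict]" = queue.Queue(maxsize=10_000)  # bounded: overflow is dropped

def record_metric(name: str, value: float) -> None:
    """Never raises and never blocks the caller; drops the metric on overflow."""
    try:
        _buffer.put_nowait({"name": name, "value": value})
    except queue.Full:
        pass  # losing a metric is acceptable; failing the customer operation is not

def _publisher() -> None:
    while True:
        metric = _buffer.get()
        try:
            req = urllib.request.Request(
                "https://metrics.internal.example/ingest",  # hypothetical endpoint
                data=json.dumps(metric).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req, timeout=2)
        except Exception:
            # e.g. an expired TLS certificate upstream: drop the metric and move on,
            # never propagate the error back to the code that recorded it.
            pass

threading.Thread(target=_publisher, daemon=True).start()
```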
We have successfully mitigated the incident and all affected services are now fully operational. Our teams have verified that normal functionality has been restored across all areas. Thank you for your patience and understanding while we worked to resolve this issue.
We have taken steps to mitigate the issue and are seeing recovery in the affected services. Our teams will continue to closely monitor the situation and are actively working to confirm that all services are fully restored. We will provide further updates as we make additional progress.
We are actively investigating this issue and will share additional updates as soon as more information becomes available. We sincerely apologize for the inconvenience this has caused.
We have identified an issue with outbound email delivery and mobile push notifications that is impacting Atlassian Cloud customers across all products. Importantly, customer support tickets cannot be generated during this time. We apologise for any inconvenience this may cause. We are currently investigating and will provide more information as it becomes available.
Degraded performance of Admin Hub, Atlassian Analytics, Confluence Cloud, Focus, Jira, Jira Product Discovery, Jira Service Management, and Rovo
4 updates
On December 15th, 2025, Admin Hub, Atlassian Analytics, Confluence Cloud, Ecosystem, Focus, Jira, Jira Product Discovery, Jira Service Management, and Rovo users in the prod-us-east region may have experienced performance degradation and errors in the web and mobile apps. The issue has now been resolved, and the service is operating normally for all affected customers.
Our teams are continuing to implement mitigations. Error rates have decreased across all products and we are seeing improvements. We will continue to provide updates here in 60 minutes or as more information becomes available.
We have identified the cause of the issue, and our teams are diligently working on mitigations. We're currently seeing signs of improvements. We'll continue to share additional updates here in 60 minutes or as more information becomes available.
We are actively investigating reports of performance degradation affecting Admin Hub, Atlassian Analytics, Confluence Cloud, Focus, Jira, Jira Product Discovery, Jira Service Management, Jira Work Management, and Rovo. We'll share updates here as more information is available.
Errors installing Connect on Forge Apps in Confluence
3 updates
The rollback required to resolve this issue is now completed, and we have verified across the impacted products that App installation behaviour is now working as expected. We apologize for any inconvenience this may have caused.
Our team has identified a root cause for this issue, has commenced a rollback of the problematic change, and is verifying that this resolves the problem. There should not be any impact to currently installed apps as a result of this incident. However, we are aware that cloud migrations including Confluence apps may also currently be blocked by this issue. We currently expect the rollback and validation of the fix to be completed within 3 hours, and we will provide the latest status at that time.
We are aware that customers are currently experiencing installation failures when attempting to install Connect on Forge apps into Confluence. Our team is investigating with urgency and we will provide an update within the hour.
November 2025 (4 incidents)
Multiple Atlassian services experiencing degraded performance
5 updates
### Summary
On November 21, 2025, between 13:44 and 15:16 UTC, Trello customers were intermittently unable to view and update data on their boards. Customers also may have experienced issues authenticating with Atlassian products and creating new GitHub and Slack integrations. The event was triggered by a bug encountered in the software running our edge proxy fleet, which proxies customer traffic to Atlassian cloud services. The changes included the migration of our edge proxy fleet to hosts running an ARM CPU architecture, rather than the AMD64 CPU architecture they had previously been running, which impacted US East customers. The incident was detected within 1 minute by our automated monitoring systems and mitigated by scaling up the fleet size, which put Atlassian systems into a known good state. This was followed by a global migration of edge proxy fleet hosts back to AMD64 CPU architecture the following day.

### IMPACT
During the impact window, US East customers intermittently could not view or update data in Trello. The same underlying issue also impacted our Identity services and integrations with GitHub and Slack, meaning some customers had trouble signing in to Atlassian products or creating new integrations. At the incident's peak, the incident impacted up to:

* 52% of new Trello network connections.
* 9% of new GitHub and Slack integrations.
* 8% of new Identity network connections.

### ROOT CAUSE
The issue was caused by a change to CPU architecture from AMD64 to ARM on our edge proxy fleet. This led to a bug that caused these instances to stall under high load and refuse up to 52% of new connections. As a result, some customers of the products above could not make new connections to Atlassian services, and customers received CloudFront 504 gateway timeout error responses.

### REMEDIAL ACTIONS PLAN & NEXT STEPS
We know that outages impact your productivity. While we deploy our changes progressively by cloud region to avoid broad impact, on this occasion our pre-change load testing had not accurately reflected production loads. As part of our response to this incident, and to help prevent recurrence, we rolled back all edge proxy fleets from ARM to AMD64 CPU architecture globally. To minimise the impact of breaking changes to our environments, we plan to implement additional preventative measures such as:

* Adding improved load tests into edge proxy fleet deployment pipelines to catch load-related bugs before deployment to production.
* Adding alerts to our edge proxy fleet to catch rises in TCP connect times before customer impact.

We apologize to customers whose services were impacted during this incident; we are taking steps to help improve the platform's performance and availability.

Thanks,
Atlassian Customer Support
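The second preventative measure above is alerting on rises in TCP connect times at the edge. As a generic illustration of such a probe (not Atlassian's tooling; the host, port, and threshold are assumptions), connection-establishment latency can be measured directly and compared against a budget:

```python
# Illustrative only: measure TCP connect latency to an edge endpoint and flag a
# breach of a latency budget. Host, port, and threshold are hypothetical.
import socket
import time

def tcp_connect_ms(host: str, port: int, timeout: float = 3.0) -> float:
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; only the handshake latency matters here
    return (time.monotonic() - start) * 1000

if __name__ == "__main__":
    THRESHOLD_MS = 250.0
    latency = tcp_connect_ms("edge-proxy.internal.example", 443)
    if latency > THRESHOLD_MS:
        print(f"ALERT: TCP connect took {latency:.0f} ms (budget {THRESHOLD_MS:.0f} ms)")
    else:
        print(f"OK: TCP connect took {latency:.0f} ms")
```

Run periodically from several regions, a probe like this surfaces the "instances stall under high load" symptom before customers see refused connections.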
The Trello performance degradation has been resolved. Intermittent errors affecting some of our other products have also been resolved. The service is operating normally for all affected customers.
The Trello performance degradation affecting some customers has been resolved. A subset of customers may have experienced intermittent errors on some of our other products, but these should now be resolved as well. We'll continue to monitor closely to confirm stability.
Multiple Atlassian services are experiencing degraded performance. We are investigating and will provide an update within the hour.
Multiple Atlassian services are experiencing outages and we are investigating. We will keep you posted on progress within 60 minutes, if not sooner.
Elevated Errors in Confluence Whiteboards and Databases Due to Cloudflare Outage
6 updates
Cloudflare successfully deployed a fix for the issue that was impacting Confluence. We are no longer seeing Confluence system performance degradation, and Confluence systems have been fully restored. We will continue monitoring.
We are aware of a global Cloudflare issue that is currently impacting Confluence. Cloudflare is reporting that a fix has been implemented and services are restoring. Our engineering teams continue to monitor the restoration of services. It is possible that you may still encounter some errors or issues. We will update you when services are fully restored. We will continue to provide updates as we learn more.
We are aware of a global Cloudflare issue that is currently impacting Confluence. You may notice elevated server errors in Confluence Whiteboards and Databases. Cloudflare is actively working to restore its services, and we are closely monitoring its progress. We understand how disruptive this can be and truly appreciate your patience as we monitor the situation. We will continue to provide updates as we learn more.
We are aware of a global Cloudflare issue affecting Confluence Whiteboards and are currently looking into the issue.
We are aware of a global Cloudflare issue affecting Confluence Whiteboards and are currently looking into the issue.
We are aware of a global Cloudflare issue affecting Confluence Whiteboards and are currently looking into the issue.
Confluence presenting users with lost connection errors when attempting to edit content
3 updates
Our team has been able to confirm that customers should no longer be experiencing the error while attempting to edit using Confluence. We apologize for any inconvenience this may have caused. This issue is now resolved.
Our team has reverted a recent change that is suspected to have caused these errors, and we have seen instances of these error messages decrease. We are continuing to monitor and will provide an update when this issue is confirmed to be resolved.
We are aware of some customers being impacted by an error in Confluence while attempting to edit content stating 'We've lost our connection to you. Your changes will be saved when we reconnect. Trying to reconnect...'. Our team is investigating with urgency and will provide another update within an hour.
Degraded performance and intermittent errors
3 updates
On November 3, 2025, some Confluence, Jira Service Management, and Jira Cloud users may have experienced performance degradation and errors in the Global Automation UI page. Automation executions were not impacted by this incident. The issue has now been resolved, and the service is operating normally for all affected customers.
We are continuing to investigate the cause of degraded performance and intermittent errors impacting Automations and the Automation UI for some Confluence, Jira Service Management, and Jira Cloud customers. We will provide more details in one hour, if not sooner.
We are investigating reports of degraded performance and intermittent errors for some Confluence, Jira Service Management, and Jira Cloud customers. We will provide more details in one hour, if not sooner.
October 2025 (5 incidents)
Degraded performance of Atlassian cloud sites for some customers
5 updates
This incident has been resolved.
Working with our infrastructure provider, our team has been able to identify specific CDN hardware changes that were rolling out this week, and are expected to be correlated to the performance issues some customers have seen loading pages in Jira and Confluence in specific countries, including Australia, Japan, Germany, Spain, Brazil and the USA. Our provider has indicated that this rollout has now been halted indefinitely and the rollout reverted, to restore expected levels of service across all impacted countries. Our network error logging from browser data shows that this mitigation was successful, and we've now received multiple confirmations from previously impacted customers following this change that services have restored. We sincerely apologize for any inconvenience this has caused. If you are continuing to experience any issues please reach out to Atlassian Support.
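The "network error logging from browser data" referenced above is the kind of signal provided by the W3C Network Error Logging (NEL) mechanism, where a site asks browsers to report failed or slow requests to a collector via response headers. As a hedged, generic example of opting a service into NEL (the collector URL and policy values are assumptions, and this says nothing about Atlassian's actual configuration), a Flask app could emit the headers like this:

```python
# Illustrative only: opt a site into W3C Network Error Logging (NEL) so browsers
# report failed requests to a collector. All values are hypothetical.
import json
from flask import Flask

app = Flask(__name__)

REPORT_TO = json.dumps({
    "group": "network-errors",
    "max_age": 86400,
    "endpoints": [{"url": "https://reports.example.com/nel"}],  # hypothetical collector
})
NEL = json.dumps({"report_to": "network-errors", "max_age": 86400, "failure_fraction": 0.05})

@app.after_request
def add_nel_headers(response):
    # Supporting browsers queue reports about DNS, TCP, TLS, and HTTP failures
    # and deliver them out of band, which makes CDN-level failures visible even
    # when the page itself never loaded.
    response.headers["Report-To"] = REPORT_TO
    response.headers["NEL"] = NEL
    return response

@app.route("/")
def index():
    return "ok"
```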
Considering the complexity and intermittent nature of this issue, the Atlassian team believes that further investigation is required, and we will reach out to some of the impacted customers for specific information. Based on the above, we will provide a further update within the next 24 hours, if not sooner.
Considering the complexity and intermittent nature of this issue, the Atlassian team believes that further investigation is required, and we will reach out to some of the impacted customers for specific information. Based on the above, we will provide a further update within the next 24 hours, if not sooner.
We have reports of some users in Australia seeing degraded performance of Atlassian cloud sites. Our team is investigating this and will keep you informed within the next 60 minutes or sooner.
Interruption to Confluence in APSE region
7 updates
This incident has been resolved.
We are now seeing a full recovery of the Confluence application experience in the affected AP Southeast region.
We have addressed an issue causing increased database load and are seeing recovery of the user experience. We will continue to monitor before resolving this incident.
We are seeing reduced error volumes now, which indicates improved Confluence availability. We are continuing to monitor and to investigate and validate root causes.
We have addressed one potential cause of load in the affected region. We are monitoring for results and continuing to pursue other leads.
We have an active line of investigation and will update with results within 30 minutes.
Multiple customers in the APSE region are experiencing failures accessing the Confluence application. We are currently investigating.
Delay in execution of Automation rules
3 updates
Our teams have identified and fixed the root cause of the issue that caused delays in executing Automation rules. This was caused by scaling issues following on from the recent AWS outage. This issue is marked as resolved.
The Atlassian team has identified the cause of the issue that led to delays in the execution of Automation rules. This is a follow-on effect of the recent AWS outage. We will keep you informed of progress every hour, if not sooner.
We understand that our customers are experiencing delays in the execution of Automation rules. While our team is investigating further, this looks like a follow-on effect of the recent AWS outage. We continue to investigate and will keep you informed of progress every hour, if not sooner.
Atlassian Cloud Services impacted
24 updates
### Postmortem publish date: Nov 19th, 2025

### Summary
All dates and times below are in UTC unless stated otherwise.

Customers utilizing Atlassian products experienced elevated error rates and degraded performance between Oct 20, 2025 06:48 and Oct 21, 2025 04:05. The service disruptions were triggered by an [AWS DynamoDB outage](https://aws.amazon.com/message/101925/#:~:text=1%3A50%20PM.-,DynamoDB,-Between%2011%3A48) and further affected by subsequent failures in [AWS EC2](https://aws.amazon.com/message/101925/#:~:text=service%20disruption%20event.-,Amazon%20EC2,-Between%2011%3A48) and [AWS Network Load Balancer](https://aws.amazon.com/message/101925/#:~:text=service%20disruption%20event.-,Amazon%20EC2,-Between%2011%3A48) within the us-east-1 region. The incident started on Oct 20, 2025 at 06:48 and was detected within six minutes by our automated monitoring systems. Our teams worked to restore all core services by Oct 21, 2025 04:05. Final cleanup of backlogged processes and minor issues was completed on Oct 22, 2025.

We recognize the critical role our products play in your daily operations, and we offer our sincere apologies for any impact this incident had on your teams. We are taking immediate steps to enhance the reliability and performance of our services, so that you continue to receive the standard of service you have come to trust.

### IMPACT
Before examining product-level impacts, it's helpful to understand Atlassian's service topology and internal dependencies. Products such as Jira and Confluence are deployed across multiple AWS regions. The data for each tenant is stored and processed exclusively within its designated host region. This design is intentional and represents the desired operational state, as it limits the impact of any regional outage strictly to tenants in-region, in this case us-east-1. While in-scope application data is pinned to the region selected by the customer, there are times when systems need to call other internal services that may be based in a different region. If a problem occurs in the main region where these services operate, systems are designed to automatically fail over to a backup region, usually within three minutes. However, if unexpected issues arise during this failover, it can take longer to restore services. In rare cases, this could affect customers in more than one region. It's important to note that all in-scope application data for supported products is pinned according to a customer's chosen region.

**Jira**

Between Oct 20, 2025 06:48 and Oct 20, 2025 20:00, customers with tenants hosted in the us-east-1 region experienced increased error rates when accessing core entities such as Issues, Boards, and Backlogs. This disruption was caused by AWS's inability to allocate AWS EC2 instances and elevated errors in AWS Network Load Balancer (NLB). During this window, users may also have observed intermittent timeouts, slow page loads, and failures when performing operations like creating or updating issues, loading board views, and executing workflow transitions.

Between Oct 20, 2025 08:36 and Oct 20, 2025 09:23, customers across all regions experienced elevated failure rates when attempting to load Jira pages. This disruption was caused by the regional frontend service entering an unhealthy state during this specific time interval. Normally, the frontend service connects to the primary AWS DynamoDB instance located in us-east-1 to retrieve the most recent configuration data necessary for proper operation. Additionally, the service is designed with a fallback mechanism that references static configuration data in the event that the primary database becomes inaccessible. Unfortunately, a latent bug existed in the local fallback path. When the frontend service nodes restarted, they were unable to load critical operational configuration data from primary or fallback sources, leading to the observed failures experienced by customers.

Between Oct 20, 2025 06:48 and Oct 21, 2025 06:30, customers experienced significant delays and missing Jira in-app notifications across all regions. The notification ingestion service, which is hosted exclusively in us-east-1, exhibited an increased failure rate when processing notification messages due to AWS EC2 and NLB issues. This issue resulted in notifications being delayed - and in some cases, not delivered at all - to users worldwide.

**Jira Service Management (JSM)**

JSM was impacted similarly to Jira above, with the same timeframes and for the same reasons. Between Oct 20, 2025 08:36 and Oct 20, 2025 09:23, customers across all regions experienced significantly elevated failure rates when attempting to load JSM pages. This affected all JSM experiences including the Help Centre, Portal, Queues, Work Items, Operations, and Alerts.

**Confluence**

Between Oct 20, 2025 06:48 and Oct 21, 2025 02:45, customers using Confluence in the us-east-1 region experienced elevated failure rates when performing common operations such as editing pages or adding comments. The primary cause of this service degradation was the system's inability to auto-scale, due to AWS EC2 issues, to manage peak traffic load effectively. Though the AWS outage ended at Oct 20, 21:09, a subset of customers continued to experience failures as some Confluence web server nodes across multiple clusters remained in an unhealthy state. This was ultimately mitigated by recycling the affected nodes. To protect our systems while AWS recovered, we made a deliberate decision to enable node termination protection. This action successfully preserved our server capacity but, as a trade-off, it extended the time required for a full recovery once AWS services were restored.

**Automation**

Between Oct 20, 2025 06:55 and Oct 20, 2025 23:59, automation customers whose rules are processed in us-east-1 experienced delays of up to 23 hours in rule execution. During this window, some events triggering rule executions were processed out of order because they arrived later during backlog processing. This caused potential inconsistencies in workflow executions, as rules were run in the order events were received, not when the action causing the event occurred. Additionally, some rule actions failed because they depend on first-party and third-party systems, which were also affected by the AWS outage. Customers can see most of these failures in their audit logs; however, a few updates were not logged due to the nature of the outage. By Oct 21, 2025 05:30, the backlog of rule runs in us-east-1 was cleared. Although most of these delayed rules were successfully handled, there were some additional replays of events to ensure completeness. Our investigation confirmed that a few events may never have triggered their associated rules due to the outage.

Between Oct 20, 2025 06:55 and Oct 20, 2025 11:20, all non-us-east-1 regional automation services experienced delays of up to 4 hours in rule execution. This was caused by an upstream service that was unable to deliver events as expected. The delivery service encountered a failure due to a cross-region dependency call to a service hosted in the us-east-1 region. Because of this dependency issue, the delivery service was unable to successfully deliver events throughout this time frame, resulting in customer-defined rules not being executed in a timely manner.

**Bitbucket and Pipelines**

Between Oct 20, 2025 06:48 and Oct 20, 2025 09:33, Bitbucket experienced intermittent unavailability across core services. During this period, users faced increased error rates and latency when signing in, navigating repositories, and performing essential actions such as creating, updating, or approving pull requests. The primary cause was an AWS DynamoDB outage that impacted downstream services.

Between Oct 20, 2025 06:48 and Oct 20, 2025 22:46, numerous Bitbucket Pipeline steps failed to start, stalled mid-execution, or experienced significant queueing delays. Impact varied, with partial recoveries followed by degradation as downstream components re-synchronized. The primary cause was an AWS DynamoDB outage, compounded by instability in AWS EC2 instance availability and AWS Network Load Balancers. Furthermore, Bitbucket Pipelines continued to experience a low but persistent rate of step timeouts and scheduling errors due to AWS bare-metal capacity shortages in select availability zones. Atlassian coordinated with AWS to provision additional bare-metal hosts and addressed a significant backlog of pending pods, successfully restoring services by 01:30 on Oct 21, 2025.

**Trello**

Between Oct 20, 2025 06:48 and Oct 20, 2025 15:25, users of Trello experienced widespread service degradation and intermittent failures due to upstream AWS issues affecting multiple components, including AWS DynamoDB and subsequent AWS EC2 capacity constraints. During this period, customers reported elevated error rates when loading boards, opening cards, and adding comments or attachments.

**Login**

Between Oct 20, 2025 06:48 and Oct 20, 2025 09:30, a small subset of users experienced failures when attempting to initiate new login sessions using SAML tokens. This resulted in an inability for those users to access Atlassian products during that time period. However, users who already had valid active sessions were not affected by this issue and continued to have uninterrupted access. The issue impacted all regions globally because regional identity services relied on a write replica located in the us-east-1 region to synchronize profile data. When the primary region became unavailable, the failover to a secondary database in another region failed, which delayed recovery. This failover defect has since been addressed.

**Statuspage**

Between Oct 20, 2025 06:48 and Oct 20, 2025 09:30, Statuspage customers who were not already logged in to the management portal were unable to log in to create or update incident statuses. This impact was restricted only to users who were not already logged in at the time. The root cause was the same as described in the Login section above, and it was resolved by the same remediation steps.

### REMEDIAL ACTION PLAN & NEXT STEPS
We have completed the following critical actions designed to help prevent cross-region impact from similar issues:

* Resolved the code defect in the fallback option to ensure that Jira Frontend Services in other regions remain unaffected during a region-wide outage.
* Fixed the issue that prevented timely failover of the identity service which impacted new login sessions.
* Resolved the code defect so that delivery services in unaffected regions remain operational during region-wide outages.

Additionally, we are prioritizing the following improvement actions:

* Implement mitigation strategies to strengthen resilience against region-wide outages in the notification ingestion service.

Although disruptions to our cloud services are sometimes unavoidable during outages of the underlying cloud provider, we continuously evaluate and improve test coverage to strengthen resilience of our cloud services against these issues. We recognize the critical importance of our products to your daily operations and overall productivity, and we extend our sincere apologies for any disruptions this incident may have caused your teams. If you were impacted and require additional details for internal post-incident reviews, please reach out to your Atlassian support representative with affected timeframes and tenant identifiers so we can correlate logs and provide guidance.

Thanks,
Atlassian Customer Support
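Several of the completed actions above concern defects in fallback paths, such as the Jira frontend's static-configuration fallback that failed when nodes restarted. As a generic, hedged sketch of the pattern (the fetch function, file path, and keys are assumptions, not Atlassian internals), the key properties are that the fallback ships with the deploy artifact and that the fallback path is exercised routinely rather than only during a region-wide outage:

```python
# Illustrative only: load dynamic configuration with a local static fallback.
# The remote call and file path are hypothetical.
import json
import logging
import pathlib

STATIC_FALLBACK = pathlib.Path(__file__).parent / "config.fallback.json"  # baked into the deploy artifact

def fetch_remote_config() -> dict:
    """Placeholder for a call to the primary configuration store (e.g. a regional database)."""
    raise NotImplementedError

def load_config() -> dict:
    try:
        return fetch_remote_config()
    except Exception:
        logging.exception("primary config store unavailable; using static fallback")
        # The fallback file must exist and parse in every build. A latent bug in
        # this branch only surfaces during an outage, so it should be covered in
        # CI and exercised on routine restarts, not discovered under fire.
        with STATIC_FALLBACK.open() as fh:
            return json.load(fh)
```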
Our team is now able to see full recovery across the vast majority of Atlassian products. We are aware of some ongoing issues with specific components such as migrations and JSM virtual service agents, and our team is continuing to investigate with urgency. We apologise for the inconvenience that this incident has caused and we will provide further information when the Post Incident Investigation has been completed.
The issue relating to the Atlassian Support portal displaying a message directing customers to our temporary support channel has now been resolved. The Atlassian Support portal is fully functional for any ongoing support issues. With regard to other Atlassian products, we continue to see recovery across all impacted products, and our teams are continuing to monitor as recovery progresses. We will provide a further update on our recovery status within two hours.
We continue to see recovery progressing across all impacted products as backlogged items continue to be processed. The Atlassian Support portal is currently displaying a message directing customers to our temporary support channel. Please note that our support portal is fully functional for those attempting to raise requests. We are continuing to look into this alert in order to remove the message. We will provide a further update on our recovery status in two hours.
Our team is now seeing recovery across all impacted Atlassian products. We are continuing to monitor for individual products that may still be processing backlogged items now that services are restored. The Atlassian Support portal is currently still displaying a message directing customers to our temporary support channel. Please note that our support portal is fully functional for those attempting to raise requests. We are continuing to look into this alert in order to remove the message. We will provide a further update on our recovery status in one hour.
Our teams are continuing to monitor the recovery of systems across Atlassian products. This update is to inform you that the Atlassian Support portal is fully operational at this time for customers who wish to contact support.
Monitoring - We've started seeing continued product experience improvement. While we still have a backlog of event processing, we are seeing improvements in system operational capabilities across all products. We estimate significant improvement within the next few hours and will continue to monitor the health of AWS services and the effects on Atlassian customers. We appreciate your continued patience and remain committed to full resolution as we work through this situation. We will post our next update in two hours.
There have been no changes since our last update. We will provide our next update by 9:00 PM UTC, or sooner as new information becomes available. We are currently aware of an ongoing incident impacting Atlassian Cloud services due to an outage with our public cloud provider, AWS. We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority and we are closely monitoring the health of AWS services. While we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation.
We are currently aware of an ongoing incident impacting Atlassian Cloud services due to an outage with our public cloud provider, AWS. We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority and we are closely monitoring the health of AWS services. While we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation. We will continue to provide updates every hour or sooner as new information becomes available.
Update - We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority. Our public cloud provider is actively working to mitigate this issue with urgency. While we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation. We will continue to provide updates every hour or sooner as new information becomes available.
Update - Thank you for your continued patience. We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority. Our public cloud provider is still actively working to mitigate this issue with urgency and while we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation. We will be providing hourly updates on this issue.
Update - We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority. Our public cloud provider is actively working to mitigate this issue with urgency. While we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation. We will continue to provide updates every hour or sooner as new information becomes available.
Update - We understand the impact this issue is having on your operations and want to assure you that resolving this matter is our highest priority. Our public cloud provider is actively working to mitigate this issue with urgency. While we do not have a definitive ETA at this time, we remain committed to full resolution and deeply appreciate your patience as we work through this situation. We will continue to provide updates every hour or sooner as new information becomes available.
We understand the impact this is having on you, and mitigating or fixing this issue is of utmost importance. Our public cloud provider is actively working to mitigate this issue as a priority. We have been seeing partial operational success. We appreciate your patience and will continue to provide updates every hour or sooner.
Our public cloud provider is working to mitigate this issue quickly. We are seeing some early positive indicators and are continuing to monitor. We appreciate your patience and will continue to provide updates every hour or sooner.
Our public cloud provider is working to mitigate this issue quickly. We are seeing some early positive indicators and are continuing to monitor. We appreciate your patience and will continue to provide updates every hour or sooner.
Our public cloud provider is working to mitigate this issue quickly. We are seeing some early positive indicators and are continuing to monitor. We appreciate your patience and will continue to provide updates every hour or sooner.
The Atlassian team is actively engaged and continues to work with our public cloud provider to mitigate this issue as quickly as possible. We are starting to see partial operations succeed. We appreciate your patience and will continue to share updates every hour, if not sooner.
We continue to work with our public cloud provider towards mitigating the issue as quickly as possible. We appreciate your patience and will continue to share updates every hour, if not sooner.
We understand that our public cloud provider has identified the cause of the issue. We are starting to see some recovery, and they are working towards mitigation. We appreciate your patience and will continue to share updates every hour, if not sooner.
We are experiencing an outage due to an issue with our public cloud provider. We are working closely with them to get this resolved or mitigated as quickly as possible. We do not have an ETA at the moment. We will continue to share updates every hour, if not sooner.
We are experiencing an outage due to an issue with our public cloud provider. We are working closely with them to get this resolved or mitigated as quickly as possible. We do not have an ETA at the moment. We will continue to share updates every hour, if not sooner.
Atlassian Cloud services are impacted, and we are aware that our customers might not be able to create support tickets. Our teams are actively investigating. We will keep you informed of progress every hour.
We have noticed that Atlassian Cloud services are impacted, and our teams are actively investigating. We will keep you informed of progress every hour.
Unable to perform UI Operations
4 updates
Between 14:00 UTC and 16:00 UTC, we experienced automation rule functionality degradation for Confluence, Jira Work Management, Jira Service Management, and Jira.

Impact: Some customers using Jira, Confluence, and Jira Service Management were not able to create or update Automation rules, and some Automation rules which triggered webhooks might have been throttled and require re-running. Approximately 50 percent of Automation API calls were affected in the us-east-1 region, with customers receiving 429 status codes.

Current Status: The incident has been mitigated by increasing the API Gateway rate limit service quota and disabling the internal service that was causing high traffic volume.

Next Steps: We are conducting a root cause analysis of the internal service. A post-incident review will be conducted.
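The 429 responses noted above are standard rate-limiting signals. A client re-running throttled automation work through an HTTP API would typically retry with backoff, preferring the server's Retry-After header when present. A minimal, hedged Python sketch (the URL is a placeholder, not a documented Atlassian endpoint):

```python
# Illustrative only: retry an API call on HTTP 429, honouring Retry-After when
# the server sends it as a number of seconds.
import time
import requests

def get_with_backoff(url: str, max_attempts: int = 5) -> requests.Response:
    delay = 1.0
    for _attempt in range(max_attempts):
        resp = requests.get(url, timeout=10)
        if resp.status_code != 429:
            return resp
        retry_after = resp.headers.get("Retry-After", "")
        # Prefer the server's hint (seconds); otherwise back off exponentially.
        wait = float(retry_after) if retry_after.isdigit() else delay
        time.sleep(wait)
        delay = min(delay * 2, 60)
    return resp  # hand the final 429 back to the caller

# Example (placeholder URL, not a real endpoint):
# r = get_with_backoff("https://example.invalid/api/automation/rules")
```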
We have recovered from an issue which caused a degraded Automation experience.

Impact: Some customers using Jira, Confluence, and Jira Service Management were not able to create or update Automation rules, and some Automation rules which triggered webhooks might have been throttled and require re-running.

Current Status: The incident has been mitigated, and we continue to investigate the cause of this incident.

Next Steps: We are continuing to monitor the service and will provide further updates in the next two hours.
The Atlassian team is actively investigating the issue with UI-based customer interactions. Existing automation rules continue to run without any issue. Accessing automation rules and creating new automation rules could lead to intermittent failures. We will keep you informed of progress within the next 2 hours, if not sooner.
UI-based customer interactions are currently degraded, and our team is actively investigating. We will keep you informed of progress within the next 2 hours.
September 2025 (3 incidents)
Issues for some partners using Connect Javascript APIs
1 update
On September 29-30, 2025, for about 24 hours, Confluence Cloud experienced an incident due to the unintended deprecation of ACJS methods (https://developer.atlassian.com/cloud/confluence/about-the-connect-javascript-api/#connect-javascript-api) linked to retired V1 APIs (https://developer.atlassian.com/cloud/confluence/rest/v1/), resulting in 410 errors affecting partners reliant on these APIs. This issue is resolved; we will be working on a longer-term solution to prevent a similar incident from occurring again.
Degraded performance to multiple Atlassian experiences
6 updates
### SUMMARY
On September 22, 2025, between 04:38 and 04:48 UTC, Atlassian customers experienced connection errors preventing access to their [atlassian.net](http://atlassian.net/) sites. Some customers observed intermittent errors as services gradually recovered until 06:17 UTC. The event was triggered by a faulty configuration change to our Content Delivery Network (CDN). The change included an invalid hostname which prevented customers from successfully connecting to [atlassian.net](http://atlassian.net/) domains. The incident was detected within 1 minute by our monitoring systems and mitigated by rolling back the change, which put Atlassian systems into a known good state. The acute impact was resolved in 10 minutes and all lingering errors were resolved in 1 hour 38 minutes.

### IMPACT
The acute impact occurred on September 22, 2025, between 04:38 UTC and 04:48 UTC to Confluence, Compass and Jira, including Jira Service Management. The incident caused service disruption to customers when they attempted to load those products in their browser or interact with APIs. Between 04:48 UTC and 06:17 UTC, some customers continued to observe intermittent errors as services gradually recovered.

### ROOT CAUSE
The issue was caused by a change to the hostname configuration of our Content Delivery Network (CDN). As a result, the products mentioned could not receive connections, and users received TLS handshake errors, followed by HTTP 503 and 403 errors. More specifically, a new CDN configuration contained a resource name which conflicted with the existing customer-serving CDN resource, and was able to be deployed, overwriting it. The root cause of the incident was the failure of our pre-deployment validations and tests to detect the bug.

### REMEDIAL ACTIONS PLAN & NEXT STEPS
We know that you rely on our products for your daily operations and productivity, and we sincerely apologize for any impact this disruption has had on you, your team or your organisation. We are prioritizing the following improvement actions to help avoid repeating this type of incident:

* Improved pre-deployment change controls and testing.
* Improved validation of target configurations before deployment, to ensure customer-serving CDN resources cannot have their hostnames changed.
* Sharding of our CDN configuration to enable progressive changes.

We sincerely appreciate your understanding and patience as we improve our processes to provide a better customer experience.

Thank you,
Atlassian Customer Support
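The second improvement action above amounts to a pre-deployment conflict check: refuse any candidate CDN configuration that would overwrite or re-point an existing customer-serving resource. A generic, hedged sketch (the data shapes and names are assumptions unrelated to Atlassian's CDN tooling):

```python
# Illustrative only: a pre-deployment check that refuses a candidate CDN config
# which collides with, or re-points, an existing customer-serving resource.
def validate_cdn_change(current: dict, candidate: dict, allowed_updates=frozenset()) -> list:
    """Both maps are resource-name -> {'hostname': ...}; returns a list of violations."""
    violations = []
    for name, cfg in candidate.items():
        existing = current.get(name)
        if existing is None:
            continue  # a genuinely new resource cannot overwrite anything
        if name not in allowed_updates:
            violations.append(f"'{name}' conflicts with an existing customer-serving resource")
        elif cfg.get("hostname") != existing.get("hostname"):
            violations.append(
                f"'{name}' would change hostname {existing.get('hostname')!r} -> {cfg.get('hostname')!r}"
            )
    return violations

if __name__ == "__main__":
    current = {"prod-sites": {"hostname": "atlassian.net"}}          # existing resource (hypothetical)
    candidate = {"prod-sites": {"hostname": "staging.example.net"}}  # would silently overwrite it
    for problem in validate_cdn_change(current, candidate):
        print("BLOCKED:", problem)
```

A deployment pipeline would refuse to proceed whenever the returned list is non-empty, which is the class of guard the postmortem describes adding.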
A configuration change to our CDN for the atlassian.net domain resulted in customer traffic temporarily being dropped. This was immediately identified by our monitoring systems and rolled back. A public PIR with full details will be issued as a follow-up.
Our team has investigated and enabled configuration changes which should restore services across all impacted products starting at 06:17 UTC. We are monitoring for recovery across all products and services at this time and will provide a further update when available.
Our team is performing an investigation into all Atlassian products and services to understand the full impact of the prior certificate changes that caused this incident. While many services are restored we are continuing to monitor for additional impacted service issues that customers may be experiencing.
Our team is continuing to monitor Jira and Confluence products following the earlier issues relating to an erroneous certificate change. We are assessing all impacted products to ensure that services are fully restored, and we will provide a further update within the hour.
Our team is aware that access to Jira and Confluence cloud products was degraded between 04:39 and 04:52 UTC due to an erroneous certificate. The certificate has now been corrected and functionality has been restored. We are continuing to monitor the situation and will provide a further update when available.
Degraded performance in Confluence and Jira for Microsoft Edge users
6 updates
An update has been released by Microsoft to resolve this issue for users of the Edge browser. To verify that you are running the required version of Edge, navigate to edge://components/ in the address bar and click the 'Check for update' button for the 'Trust Protections List', confirming that you are on version 1.0.0.31 or higher, which contains the fix. All Edge browsers should otherwise receive this update within 10 hours without user intervention.
We have identified the root cause of the Microsoft Edge 140 tracking prevention issue and have mitigated the problem, and we continue to monitor closely. However, the issue will not be considered resolved until Microsoft issues an updated version of the Edge browser, scheduled for September 18th. This will be the last update until the issue is resolved.
We continue to work on resolving the Microsoft Edge 140 tracking prevention issue for Confluence, Jira Work Management, Jira Service Management, Jira, Jira Align, and Jira Product Discovery. We have identified the root cause and are working on solutions for each product. We will provide the next update within three hours.
Our team is aware that the incorrect classification of Atlassian sites as advertising has been resolved; however, these changes will take some time to propagate to users, since we are waiting for Microsoft to incorporate the changes into Edge. We are continuing to monitor the situation and will provide a further update when we are able to confirm that the issue is resolved.
Important Update on Attachment Issues with Edge 140 and Atlassian Products

We've identified an issue affecting images, media, and Whiteboards in Confluence and Jira for users who have upgraded to Microsoft Edge version 140.0.3485.54 with Strict tracking prevention enabled. These settings can block media (images and videos) content from loading, uploading, or opening. The issue is triggered by Edge's Strict tracking prevention incorrectly classifying Atlassian as advertising, impacting functionality.

Recommended Workarounds:
- Switch your tracking prevention settings from 'Strict' to the 'Balanced (Recommended)' mode in Edge.
- Alternatively, add an 'Exception' for the URL you use to access Atlassian products.
- Use a different browser, such as Chrome, Firefox, or Safari.

This issue only affects Microsoft Edge. Other browsers are not affected.

Thank you for your patience as we work through this issue. We'll provide updates as they become available.
We are investigating cases of degraded performance for Confluence and Jira Cloud customers using Microsoft Edge. We will provide more details as they emerge.
August 2025 (1 incident)
Product invitation emails not being correctly sent to users
2 updates
Between 2025-08-03T21:34 and 2025-08-05T03:43 UTC some users would not have received the emails for product invitations. The change that caused these emails to stop being sent has been rolled back. New invitation emails should now be sent correctly for all products. For users that were impacted by this issue, re-submitting their invitation will also re-send the email to them if required.
Our team is aware of issues with users not currently receiving product invitation emails as expected. We are investigating with urgency and will provide an update as soon as possible. Please note that invitations within the product are still successful, only the email notifications are currently impacted.
July 2025 (2 incidents)
Degraded experience adding and accessing media attachments
2 updates
We have resolved the issue impacting adding/accessing media attachments across products. We will continue to monitor the situation.
We are investigating an issue causing a degraded experience when adding attachments or accessing existing media content.
Intermittent failure in Forge app invocation
2 updates
Between 5:44 am UTC and 7:44 am UTC on 10th July, we experienced intermittent failures in some app functionality (smart links, scheduled triggers) for Confluence, Jira Work Management, Jira Service Management, Jira, and Compass. The issue has been resolved and the service is operating normally. All scheduled triggers have been replayed.
We are investigating reports of intermittent Forge invocation errors, mostly affecting scheduled triggers and smart links, for some Confluence, Jira Work Management, Jira Service Management, Jira, and Compass Cloud customers from 5:44 am UTC to 7:44 am UTC. We have put mitigations in place, and the errors have receded. We are monitoring the incident resolution. Customers may sometimes see errors in app functionality during this time, and scheduled triggers might not work. We have replayed the scheduled triggers.
June 2025 (5 incidents)
Issues affecting user syncing, Atlassian Administration
4 updates
Between 07:40 UTC and 10:31 UTC, we experienced issues affecting user syncing in Atlassian Administration. This affected Confluence, Jira Work Management, Jira Service Management, Jira, Trello, and Guard. The issue has been resolved and the service is operating normally.
We have identified the root cause of the issue and have mitigated the problem. We are now monitoring closely.
We continue to work on resolving the user syncing functionality for Confluence, Jira Work Management, Jira Service Management, Jira, Trello, and Guard. We have identified the root cause and expect recovery shortly.
We are investigating reports of errors loading Users and Groups pages in Atlassian Administration, and errors affecting user IDP syncing. We will provide more details once we identify the root cause.
Attachment Loading Issue in C2C and Sandbox Migrations
1 update
We have resolved an issue that affected attachment loading during C2C and Sandbox data copy migrations from 19-06-2025 18:00 UTC to 24-06-2025 08:49 UTC. If you encountered this issue, please delete the affected projects from the destination site and re-run the migration. For ongoing migrations, allow them to complete and then follow the same steps. For Sandbox FDC migrations, complete them as usual and re-run if needed. We appreciate your patience and cooperation.
Forge invocation errors impacting some instances in Singapore region
2 updates
Between 21:39 UTC on June 09, 2025 and 21:35 UTC on June 10, 2025, we experienced Forge invocation errors impacting a subset of Marketplace apps for some Confluence, Jira Service Management, Jira, and Atlassian Developer Cloud customers in the Singapore region. The issue has now been resolved and the service is operating normally. We are actively monitoring this capability.
We are investigating an issue impacting a subset of Marketplace apps for some Confluence, Jira Service Management, Jira, and Atlassian Developer Cloud customers in the Singapore region. We will provide more details within the next hour.
Search functionality degradation in Confluence
4 updates
The issues affecting the search functionality in Confluence have been resolved, and services now operate normally for all affected customers.
The issues affecting the search functionality in Confluence have been resolved, and services now operate normally for all affected customers. We will monitor it closely for the next 30 minutes to ensure stability.
We are continuing to address an issue affecting the search functionality in Confluence for some users in Europe. Our team is actively working to mitigate the impact and restore services promptly. We will keep you updated with further information.
We are investigating an issue affecting the search functionality in Confluence for some users in Europe. Our team is working diligently to resolve the degradation and restore services promptly. We will keep you updated with further information.
Customers may experience delays receiving emails
2 updates
Between 14:11 UTC and 20:18 UTC on 2025-06-04, we experienced delays in delivering emails for Confluence, Jira Work Management, Jira Service Management, Jira, Trello, Atlassian Bitbucket, Guard, Jira Align, Jira Product Discovery, Atlas, and Compass. The issue has been resolved and the service is operating normally.
We were experiencing cases of degraded performance for outgoing emails from Confluence, Jira Work Management, Jira Service Management, Jira, Trello, Atlassian Bitbucket, Guard, Jira Align, Jira Product Discovery, Atlas and Compass Cloud customers. The system is recovering and mail is being processed normally as of 16:45 UTC. We will continue to monitor system performance and will provide more details within the next hour.
May 2025 (4 incidents)
Administration is unreachable
3 updates
The issue with Administration accessibility has now been resolved, and the service is operating normally for all affected customers.
The issues causing access problems to Administration have been resolved, and services are now functioning normally for all affected customers. We will monitor it closely for the next 30 minutes to ensure stability.
We are aware of an issue where users are receiving a 403 error when attempting to reach Administration. Our team is looking into this issue with urgency and will provide an update as soon as possible.
Issues with authentication across multiple services
3 updates
Monitoring has indicated that users are no longer receiving the error messages caused by this issue and it should now be fully resolved.
A fix for the issue causing authentication errors across multiple apps has been rolled out. We will provide a further update when we can confirm the issue has been fully resolved.
We are aware of issues relating to authentication resulting in 503 gateway error messages. Our team is investigating with urgency and will provide an update when available.
Smart Answers in Search and AI Definitions outage for Atlassian Intelligence customers
3 updates
Between 2025/05/03 21:38 UTC and 2025/05/06 05:26 UTC, we experienced an outage in Smart Answers in search and AI definitions for all Confluence Cloud Atlassian Intelligence customers. Rovo experiences were not impacted. The issue has been resolved and the service is operating normally.
We have identified the root cause of the intermittent errors for Smart Answers and AI definitions for Confluence Cloud Atlassian Intelligence customers and are progressively rolling out a fix, which we expect to reach all customers over the next 2 hours. We are monitoring closely to confirm all errors have resolved.
We are investigating reports of intermittent errors for Atlassian Intelligence Confluence Cloud customers. We have identified the root cause and are preparing a hotfix.
Confluence, JIRA and JSM unavailability or degraded experience for some EU users
2 updates
Between 7:37 and 8:06 UTC, Confluence, Jira, and JSM were unavailable or experienced degraded performance for some users in the European region. The issue has now been resolved, and the services are operating normally for all impacted customers.
From 7:37 to 8:06 UTC, we experienced issues that made Confluence, JIRA, and JSM unavailable or degraded the experience of some users in the European region. The issue has been mitigated, and services operate normally for all impacted customers. We will continue to monitor it closely for stability.
April 2025(5 incidents)
Automation Rule execution is delayed
9 updates
Between 13:00 UTC and 23:00 UTC on April 23, 2025, we experienced automation rule execution delays for Confluence, Jira Work Management, Jira Service Management, Jira, and Jira Product Discovery. The issue has been resolved and the service is operating normally. Some high-volume customers who regularly experience rule throttling may see an extended period before their full backlogs clear; for those customers, we expect any extended rule throttling to be resolved by 07:00 UTC on April 24, 2025.
New executions are continuing to run without delay. Remaining delayed executions will complete in approximately one hour. We will continue to monitor the progress and provide an update within the next hour.
New executions are running without delay. Remaining delayed executions will complete in the next 2 hours. We will continue to monitor the progress and provide an update within the next hour.
A majority of new automation executions are running without delay. Remaining delayed automation executions should be completed within 3 hours. We will continue to monitor the progress and provide an update within the next hour.
We have applied significant mitigations and the majority of delayed automation executions should be completed within 2 hours. Some customers may still experience delays for some executions during the following 2 hours. Rule execution delays should be fully mitigated within 4 hours. We will continue to monitor the progress and provide an update within the next hour.
We are still investigating automation rule delays that are impacting some Confluence, Jira Work Management, Jira Service Management, Jira, and Jira Product Discovery Cloud customers. Previous mitigations have decreased the backlog, which continues to shrink; however, some customers may still be experiencing delays. The backlog of automations is being processed as we continue to apply mitigations, and we will provide more details within the next hour.
We are still investigating automation rule delays that are impacting some Confluence, Jira Work Management, Jira Service Management, Jira, and Jira Product Discovery Cloud customers. Previous mitigations have decreased the average delays across executions; however, some customers may still be experiencing delays. We are applying further mitigations and will provide more details within the next hour.
We are investigating automation rule execution delays that are impacting some Confluence, Jira Work Management, Jira Service Management, Jira, and Jira Product Discovery Cloud customers. We have applied a temporary mitigation and are continuing to investigate for root cause. We will provide more details within the next hour.
We are investigating cases of degraded performance for automation rules for Confluence, Jira Work Management, Jira Service Management, Jira, and Jira Product Discovery Cloud customers. We will provide more details within the next hour.
Confluence page load errors
3 updates
### Summary On Apr 15, 2025, between 14:29 and 14:55 UTC, some Atlassian customers in the EU central region using Confluence Cloud products encountered errors when viewing pages. The event was triggered by a temporary spike in errors from Confluence backend services due to a capacity issue. Our alerts detected the incident within 1 minute and the impact was mitigated by scaling up the backend service that was under load. This restored the Atlassian services to a fully operational state. The total time to resolution was approximately 26 minutes. ### **IMPACT** The impact was on Apr 15, 2025, between 14:29 and 14:55 UTC, to customers using Confluence Cloud. The incident caused service disruption to some EU central region customers, resulting in reduced functionality and limited access when loading Confluence pages, space overviews, and the home page. ### **ROOT CAUSE** The incident's root cause stemmed from one of Confluence's non-critical backend services not being fully scaled to accommodate an unusual spike in traffic. Although failures from this backend service should not critically affect page views, its failure was treated as severe, impacting the Confluence core experience in this incident. ### **REMEDIAL ACTION PLAN & NEXT STEPS** We fully understand that outages impact your productivity. We continuously evaluate and validate the capacity of our backend services that are critical to the Confluence user experience. However, the impact of this non-critical backend service on the Confluence page view functionality was not identified beforehand. We are prioritizing the following improvement actions designed to avoid repeating this type of incident: * Reviewing the peak capacity allocated for critical backend services and ensuring that adequate capacity is reserved to absorb traffic spikes. * Introducing fallback mechanisms for failures from non-critical backend services to improve Confluence service resiliency. We apologize to customers whose services were impacted by this incident. We are taking steps designed to improve the platform’s performance and availability. Thanks, Atlassian Customer Support
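To illustrate the fallback mechanism named in the remedial actions above, here is a minimal sketch, not Atlassian's implementation; the service name, timeout, and empty-list fallback are hypothetical. It shows how a page-render path can treat a non-critical dependency's timeout or error as a degraded result rather than a page-level failure:

```python
# Illustrative sketch only -- not Atlassian's code. The service name,
# timeout, and fallback value below are hypothetical.
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=4)

def fetch_related_content(page_id: str) -> list[str]:
    """Stand-in for a non-critical backend call that may be slow or fail."""
    raise RuntimeError("backend under-scaled for current traffic")

def call_with_fallback(call, fallback, timeout_s: float = 0.5):
    """Run `call`; on error or timeout, return `fallback` instead of raising."""
    future = _pool.submit(call)
    try:
        return future.result(timeout=timeout_s)
    except Exception:  # covers both backend errors and result() timeouts
        return fallback

def render_page(page_id: str) -> dict:
    # The page body is critical and must succeed; the related-content widget
    # is non-critical, so its failure degrades to an empty list instead of
    # surfacing as a page load error.
    return {
        "body": f"page content for {page_id}",
        "related": call_with_fallback(lambda: fetch_related_content(page_id), fallback=[]),
    }

if __name__ == "__main__":
    print(render_page("12345"))  # renders despite the simulated backend failure
```

The design choice is simply that a capacity issue in a non-critical service should bound the blast radius to its own widget, which is the resiliency gap this incident exposed.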
Between 14:29 UTC to 14:55 UTC, some users may have experienced page load errors for Confluence. The issue has been resolved and the service is operating normally. Once we complete our internal incident review process, we will publish a more detailed postmortem of what went wrong, along with steps we're taking designed to prevent this from happening again in the future.
Between 14:29 UTC to 14:55 UTC, some users may have experienced page load errors for Confluence. The issue has been resolved and the service is operating normally. Once we complete our internal incident review process, we will publish a more detailed postmortem of what went wrong, along with steps we're taking designed to prevent this from happening again in the future.
Confluence Page Load Errors
2 updates
### Summary On Apr 14, 2025, between 18:40 and 19:00 UTC, some Atlassian customers in the US East region using Confluence Cloud products encountered errors when viewing pages. The event was triggered by a temporary spike in errors from Confluence backend services due to a capacity issue. Our alerts detected the incident within 11 minutes and the impact was mitigated by scaling up the backend service that was under load. This restored the Atlassian services to a fully operational state. The total time to resolution was approximately 20 minutes. ### **IMPACT** The impact occurred on Apr 14, 2025, between 18:40 and 19:00 UTC, to customers using Confluence Cloud. The incident caused service disruption to some US East region customers, resulting in reduced functionality and limited access when loading Confluence pages, space overviews, and the home page. ### **ROOT CAUSE** The incident's root cause stemmed from one of Confluence's non-critical backend services not being fully scaled to accommodate an unusual spike in traffic. Although failures from this backend service should not critically affect page views, its failure was treated as severe, impacting the Confluence core experience in this incident. ### **REMEDIAL ACTION PLAN & NEXT STEPS** We fully understand that outages impact your productivity. We continuously evaluate and validate the capacity of our backend services that are critical to the Confluence user experience. However, the impact of this non-critical backend service on the Confluence page view functionality was not identified beforehand. We are prioritizing the following improvement actions designed to avoid repeating this type of incident: * Reviewing the peak capacity allocated for critical backend services and ensuring that adequate capacity is reserved to absorb traffic spikes. * Introducing fallback mechanisms for failures from non-critical backend services to improve Confluence service resiliency. We apologize to customers whose services were impacted by this incident. We are taking steps designed to improve the platform’s performance and availability. Thanks, Atlassian Customer Support
Between 18:40 UTC to 19:00 UTC, some users may have experienced page load errors for Confluence. The issue has been resolved and the service is operating normally. Once we complete our internal incident review process, we will publish a more detailed postmortem of what went wrong, along with steps we're taking designed to prevent this from happening again in the future.
Issues loading Administration across Atlassian Products
2 updates
Between 00:00 UTC and 01:05 UTC on 14 April, users attempting to load the Administration page for Atlassian products saw a blank page. The issue has been resolved and the service is operating normally.
We are aware of an issue where users attempting to load the Administration page are instead seeing a blank page. Our team is investigating with urgency and will provide further update as soon as possible.
Issues accessing Jira and Confluence in some regions
4 updates
We identified a temporary access issue affecting some customers using several Atlassian Cloud products. All affected products are now back online, and no further impacts have been observed. We apologize for any inconvenience this may have caused.
Our systems are stable, and no recurrences have been observed. We will continue to monitor closely and provide updates as they become available.
We have identified the root cause of the access issues and have mitigated the problem. We are now monitoring this closely and will resolve this incident within the next hour.
For customers currently experiencing issues accessing their Jira and Confluence sites: you can add /wiki/ to the end of your site name to continue accessing Confluence (for example, https://site.atlassian.net/wiki/). Our team is continuing to investigate an issue impacting Jira sites with urgency.
March 2025(10 incidents)
Find new apps menu missing from Confluence, impacting users' ability to open the Marketplace
2 updates
Our team has identified the cause of this issue and has implemented a fix. This should now be mitigated for all users.
We are aware of an issue currently impacting users' ability to open the Marketplace due to a missing menu item within Confluence. Our team is currently looking into this issue.
Performance degradation for Jira and Confluence users
2 updates
Between 14:36 UTC and 16:15 UTC, we experienced performance degradation for Confluence, Jira Work Management, Jira Service Management, and Jira. The issue has been resolved and the service is operating normally.
We are investigating cases of degraded performance for some Confluence, Jira Work Management, Jira Service Management, and Jira Cloud customers. We will provide more details within the next hour.
Some users experiencing a 'site maintenance' message when attempting to access Confluence and Jira on some APAC based sites.
4 updates
A fix has been implemented and we have not seen any recurrences. This incident has been resolved for Confluence.
A fix has been implemented and we are monitoring the results.
We are investigating a 'site maintenance' issue that is impacting some Confluence, Jira Work Management, Jira Service Management, and Jira Cloud customers. We will provide more details within the next hour.
We are aware of some users experiencing a site maintenance window appearing instead of their site for Confluence. Our team is investigating this with urgency.
Automation rule execution is delayed
6 updates
### Summary Jira, Confluence and JSM Automation rules triggered or scheduled to run between 10am and 5pm UTC on March 17, 2025, and between 1pm UTC on March 18 and 12:30am UTC on March 19 were delayed on average by 1.5 hours and up to 12 hours maximum. The incident was triggered by the deployment of a monitoring library upgrade, which slowed the execution of all rules. This reduced the throughput of rule processing which resulted in rules being backed up and delayed. The change manifested in poor rule performance only during periods of high traffic. This incident occurred over two time windows. For the first incident window, backed-up rule executions began to reduce 4 hours 15 minutes after the first alert, and all rules had caught up 7 hours after the first alert. For the second incident window, backed-up rule executions began to reduce 1 hour, 50 minutes after the first alert, and all rules had caught up 10 hours after the first alert. The root cause of both incidents was identified and a change to address it was deployed by 10am UTC on March 19, 2025. ### **IMPACT** The customer’s rules were delayed on average by 1.5 hours and up to 12 hours during both incident windows. A very small number of rules encountered the following error: “_The rule actor doesn't have permission to view the event that triggered this rule_.” This error occurred because of rate limiting implemented by an internal Atlassian service due to increased throughput resulting from our mitigation efforts. These rules failed to complete successfully. All other rules eventually ran successfully. ### **ROOT CAUSE** The issue was caused by a change introduced to an Atlassian monitoring library, which significantly degraded the Automation rule engine's performance. The performance degradation prevented Automation's system from keeping pace with processing throughput, causing a back-up of executions and subsequent customer rule delays. ### **REMEDIAL ACTIONS PLAN & NEXT STEPS** We know that outages impact your productivity. While we have a number of testing and preventative processes in place, this specific issue didn’t manifest itself until our systems were at peak load. The change to the Atlassian monitoring library that was the root cause of the incident has been fixed. We are prioritizing the following improvement actions that are designed to avoid a repeat of this type of incident: * Deploying the fixed monitoring library after thorough testing * Implementing additional monitoring and alerting to the area of our system affected with performance degradation * Introducing additional pre-deployment testing designed to identify performance degradations before they impact customers * Increasing rate limits of certain downstream Atlassian systems to reduce the likelihood of the rule failures that occurred in this incident * Increasing the processing capacity of parts of our system to reduce the impact of backed up rule executions We apologize to customers whose automation rules were impacted during this incident; we are taking immediate steps designed to improve the platform’s performance and availability. Thanks, Atlassian Customer Support
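As a back-of-the-envelope illustration of why the throughput regression described above shows up as hours of rule delay, the sketch below uses hypothetical arrival and processing rates (these are not Atlassian's figures): once rules arrive faster than the degraded engine can process them, the backlog grows linearly, and the catch-up time after the fix depends on how much spare throughput is available.

```python
# Back-of-the-envelope sketch with hypothetical rates (not Atlassian metrics):
# a throughput regression that falls below the arrival rate builds a backlog,
# and catch-up time depends on spare capacity once throughput is restored.

def backlog_after(hours: float, arrival_rate: float, throughput: float) -> float:
    """Rule executions queued after `hours` of arrivals outpacing processing."""
    return max(0.0, (arrival_rate - throughput) * hours)

def catch_up_hours(backlog: float, arrival_rate: float, throughput: float) -> float:
    """Hours to drain `backlog` once throughput again exceeds arrivals."""
    return backlog / (throughput - arrival_rate)

# Example: 10,000 rules/hour arriving while the degraded engine processes
# only 7,000/hour during 7 hours of peak traffic.
queued = backlog_after(hours=7, arrival_rate=10_000, throughput=7_000)   # 21,000 rules
drain = catch_up_hours(queued, arrival_rate=10_000, throughput=14_000)   # 5.25 hours
print(f"backlog: {queued:,.0f} rules; catch-up after the fix: {drain:.2f} h")
```

This is also why the impact was only visible at peak load: off-peak, even the degraded throughput stayed above the arrival rate, so no backlog formed.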
Between 13:00 UTC and 16:30 UTC, we experienced delays in automation processing across Confluence, Jira Service Management, and Jira, which was causing rules to appear stuck. The issue has been resolved and we expect new automations to continue processing without delay. Automation processes initiated before 16:30 UTC may take until 08:00 UTC on March 19th, 2025 to complete, although we anticipate processing will finish sooner. No action is required from users for these automations to be completed.
We have successfully identified and mitigated the issue affecting all automation customers, which was causing rules to appear stuck. New automations should now process as expected without delay. For automation processes initiated from 13:00 UTC to 16:30 UTC on March 18th, 2025, in Jira, Jira Service Management, and Confluence, execution will continue but may experience delays of up to one day. No action is required from users for these automations to be completed. We are closely monitoring the situation and will provide further updates as more information becomes available.
The delayed execution of automation rules is mitigated and recovery is in progress. We are now monitoring closely.
We are investigating cases of degraded performance regarding automation rules execution for Confluence, Jira Service Management, and Jira Cloud customers. We are applying mitigation to speed up rule execution. We will provide more details within the next hour.
We are investigating cases of degraded performance in automation rule execution for Confluence, Jira Service Management, and Jira Cloud customers. We will provide more details within the next hour.
Forge app installations failing at an increased rate
3 updates
Between 05:30 UTC and 10:00 UTC, we experienced increased rates of Forge app installation, upgrade, and uninstallation failures for Confluence and Jira. The issue has been resolved and the service is operating normally.
We have identified the root cause of the increased errors and have mitigated the problem. We are now monitoring closely.
We are investigating reports of intermittent errors for Confluence, Jira, and Atlassian Developer Cloud customers. We will provide more details once we identify the root cause.
Service Slowness in Multiple Products
4 updates
Between 09:43 UTC and 15:55 UTC, we experienced delays in automation processing across Confluence, Jira Service Management, and Jira, which was causing rules to appear stuck. The issue has been resolved and we expect new automations to continue processing without delay. For automation processes initiated before 15:55 UTC, we anticipate processing to be completed by 08:00 UTC on March 18th, 2025. No action is required from users for these automations to be completed.
We have successfully identified and mitigated the issue affecting all automation customers, which was causing rules to appear stuck. New automations should now process as expected without delay. For automation processes initiated before 15:55 UTC on March 17th, 2025, in Jira, Jira Service Management, and Confluence, execution will continue but may experience delays of up to one day. No action is required from users for these automations to be completed. We are closely monitoring the situation and will provide further updates as more information becomes available.
Following our previous update, we continue to experience delays in automation processing across Jira, Jira Service Management, and Confluence. The issue persists, affecting all automation customers and causing rules to appear stuck. Our team is actively working to resolve the situation and restore normal service levels as quickly as possible. We appreciate your patience and will provide further updates as more information becomes available.
We are currently experiencing delays in automation processing across Jira, Jira Service Management, and Confluence due to high traffic. While rules are firing, execution is delayed. Rules triggered in the last 2.5 hours will be processed, but this may take up to a day. New events should process normally following adjustments made at 12:13 PM UTC to prioritize them. We will provide an update in the next hour. Thank you for your patience.
Some users are experiencing errors while creating new Jira and Confluence sites
2 updates
On March 14th, 2025, between 09:00 and 19:14 UTC, some customers experienced issues creating new sites. The errors occurring for some users during new site creation should now be resolved.
We are investigating cases of errors for some users creating new sites. We will provide more details within the next hour.
Issues with search within Confluence and Compass
3 updates
Searches within Confluence and Compass should now be operating as normal. This issue is now resolved.
Issues causing error messages when users were attempting to search within Confluence and Compass should now be resolved. Our team is monitoring ongoing performance at this time.
We are aware that some users of Confluence and Compass are currently experiencing errors when attempting to search. Our team is investigating this issue with urgency and will provide an update when available.
Page Update delay for single user edits
2 updates
We recently identified and addressed an issue causing delays in document publishing. A change that led to delays of up to 6 seconds has been reverted, improving performance. With the issue now mitigated, we are resolving this incident while continuing to work internally on a permanent fix.
We are currently investigating this issue.
Issues with 403 user authentication errors across Atlassian products
1 update
We are aware of an issue that was impacting user authentication to Atlassian services between 06:10 and 06:35 UTC on Tuesday, 4 March. Users who were already logged in would not have been impacted by this issue. A deployment suspected of causing this issue was rolled back and the problem was subsequently resolved.
February 2025(6 incidents)
Confluence search not returning results for some customers
3 updates
On February 28, 2025, between 14:47 and 20:30 UTC, some customers experienced issues impacting search results. The issue has been resolved and search is now operating normally.
We are continuing to investigate this issue. We will provide more details once we identify the root cause.
We are investigating reports of intermittent errors for some Confluence Cloud customers. We will provide more details once we identify the root cause.
403 Errors experienced by iOS users when IP Allowlist and Mobile App policy with Allow access from any IP address are enabled
3 updates
We are pleased to inform you that a new version of our app is now available on the Apple App Store. Please update your app to the latest version to resolve the issue.
We are currently aware of an issue impacting users of iOS devices where pages will present a 403 error to the user if IP Allowlist and a Mobile App policy with Allow access from any IP address are configured. A fix has been submitted to the AppStore for review and is expected to become available before the end of this week to resolve this issue. Users can use a mobile device running the Android operating system as a workaround to this problem until the release is available. For more information on IP Allowlists please see https://support.atlassian.com/security-and-access-policies/docs/specify-ip-addresses-for-product-access/ For more information on Mobile App policies please see https://support.atlassian.com/security-and-access-policies/docs/mobile-policy-mam-security-controls-and-supported-apps/
We are currently aware of an issue impacting users of iOS devices where pages will present a 403 error to the user if IP Allowlist and App Trust Mobile Bypass are both enabled. A fix has been submitted to the AppStore for review and is expected to become available before the end of this week to resolve this issue. Users can use a mobile device running on Android operating system as a workaround to this problem until the release is available.
Error in returning some search results in Confluence
2 updates
On February 27, 2025, between 03:00 and 20:40 UTC, some customers experienced intermittent issues impacting Confluence search results. The issue has been resolved and search is operating normally.
We are investigating reports of intermittent errors for some Confluence Cloud customers. We will provide more details once we identify the root cause.
Error in Confluence search
3 updates
Between 7:59 UTC and 22:03 UTC, some customers experienced errors with anonymous access search for Confluence. The root cause was a faulty change in authorization rules that caused an error response. We have deployed a fix to mitigate the issue and have verified that the services have recovered. The conditions that caused the bug have been addressed and we are actively working on a permanent fix. The issue has been resolved and the service is operating normally.
We continue to work on resolving the issue with anonymous access search for Confluence. We have identified the root cause and expect recovery shortly.
We are investigating reports of intermittent errors for some Confluence Cloud customers. We will provide more details once we identify the root cause.
Degraded Performance in Confluence
1 update
On February 13, 2025, between 21:47 UTC and 22:15 UTC, some customers for Confluence Cloud in the Asia Pacific region experienced degraded performance and 500 errors. The issue was caused by a sudden surge in traffic and was resolved within 28 minutes.
Jira and Confluence have degraded performance - scoped to Brazil only
3 updates
### Summary On February 10, 2025, between 16:10 and 18:35 UTC, Atlassian customers in Brazil experienced intermittent failures accessing and using Jira and Confluence Cloud. This disruption was due to our Content Delivery Network (CDN) not having sufficient capacity in Brazil under a certain network configuration. Changes in the lead-up to the incident included enabling CDN for Jira and Confluence in Brazil between January 20th and February 6th to improve performance and security. Success-rate monitoring for the Jira and Confluence experience detected the incident within 50 minutes. The incident was mitigated by temporarily disabling CDN in Brazil, which put Atlassian systems into a known good state. The total time to resolution was two hours and 25 minutes. ### **IMPACT** The overall impact was on February 10, 2025, between 16:10 and 18:35 UTC to Jira and Confluence Cloud. The incident caused service disruption to Brazilian customers when they attempted to access or use any feature of those products. Customers observed a generic error page or “HTTP 500” error presented by our CDN provider for up to a third of all requests at peak. ### **ROOT CAUSE** Our CDN is configured to present Jira and Confluence Cloud from static IP ranges dedicated to Atlassian. This configuration limited the number of edge locations servicing customer requests, causing a high concentration of connections in the Brazilian region on one edge location in particular. When daily traffic volumes in Brazil peaked on February 10, connections through the edge location to our servers encountered a capacity limit and were rejected with an HTTP 500 error. ### **REMEDIAL ACTION PLAN & NEXT STEPS** We understand that outages impact your productivity. We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability. Along with our CDN provider, we have already taken the following actions to avoid repeating this type of incident: * Our CDN provider has completed the rollout of the static, dedicated IP feature to all their edge locations and enabled Jira and Confluence Cloud to be served from all of them, thus moving to the intended configuration that will provide full network capacity. * We are working with our CDN provider to perform more rigorous checks of capacity and configuration for Jira and Confluence Cloud. * Our CDN provider increased the server connection limit for Jira and Confluence Cloud in all locations, further increasing capacity. * Based on the observed failure patterns, we have created additional monitoring and alerting configurations, so we will be alerted immediately if the issue reoccurs. Additionally, we are prioritizing actions to improve the detection of failures that affect a specific region. Thanks, Atlassian Customer Support
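As a rough illustration of the failure mode described above, the sketch below uses hypothetical connection counts and per-edge limits (these are not the CDN provider's actual figures): serving a region from too few edge locations concentrates connections on each one, and once an edge's connection limit is reached the excess requests are rejected, which is consistent with up to a third of requests failing at peak.

```python
# Illustrative sketch with hypothetical numbers (not the CDN provider's
# actual limits): fewer edge locations concentrate connections, and requests
# beyond the per-edge connection limit are rejected (seen as HTTP 500).

def rejected_share(peak_connections: int, edge_locations: int, per_edge_limit: int) -> float:
    """Fraction of peak connections rejected once every edge hits its limit."""
    capacity = edge_locations * per_edge_limit
    return max(0.0, (peak_connections - capacity) / peak_connections)

peak = 150_000    # hypothetical concurrent connections from the region at daily peak
limit = 100_000   # hypothetical connections a single edge location will accept

# Restricted configuration: static dedicated IP ranges available on one edge only.
print(f"1 edge:  {rejected_share(peak, 1, limit):.0%} of requests rejected")   # 33%
# Intended configuration: the same traffic spread across many edge locations.
print(f"5 edges: {rejected_share(peak, 5, limit):.0%} of requests rejected")   # 0%
```

This is why the remediation focused on enabling the dedicated-IP feature on all edge locations and raising per-edge connection limits: both changes increase total regional capacity above the peak load.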
Between 17:04 UTC and 18:32 UTC on February 10, 2025, some customers of Jira Work Management, Jira Service Management, Jira, Jira Product Discovery Cloud, and Confluence experienced degraded performance and 500 errors. We found that cloud products in Brazil encountered an internal limit with our cloud service provider. We redistributed traffic to mitigate the problem and are working with our cloud provider to prevent it from recurring. The issue has been resolved and all services are operating normally.
We are investigating cases of degraded performance for Jira Work Management, Jira Service Management, Jira Cloud, and Confluence customers. We will provide more details within the next hour.