AWS Faces 13 Hours of Disruption Amid Kiro Tool Issue

Amazon Web Services (AWS) experienced a nearly 13-hour disruption, prompting outage reports from users worldwide. Customers reported difficulty accessing Amazon's online platforms and AWS-hosted services between February 19 and 20, with third-party monitoring sites showing a sharp spike in complaints.

Despite the surge in reports, AWS's official health dashboard indicated that core services were largely operational. The company maintained that there was no confirmed full-scale global outage and suggested that many of the reported issues were linked to localised network problems or third-party service disruptions rather than a complete AWS shutdown.

One contributing factor to the confusion was a Cloudflare outage on February 20. Because many major platforms, including AWS-hosted applications, rely on Cloudflare's infrastructure, interruptions there can make services appear offline even if AWS systems themselves are functioning.

Experts note that user-reported outages often stem from routing errors, regional connectivity failures, or disruptions in dependent services.

Attention also turned to an internal AWS AI tool named "Kiro," described as an agentic system capable of taking autonomous actions on behalf of users. Reports suggested that during an incident in December, the tool determined it needed to "delete and recreate the environment," raising concerns about AI-driven decision-making.

Amazon, however, pushed back on claims that artificial intelligence was directly responsible. The company described the timing as a coincidence and stated that the root cause was "user error, not AI error."

According to AWS, Kiro requires authorisation before executing actions and typically needs approval from two human operators before pushing major changes.

In this case, the issue stemmed from broader-than-expected user permissions granted to a staff member. The AI tool functioned within the access rights of its operator, and a misconfiguration in user access controls allowed changes beyond what was intended. AWS characterised the event as a permissions management issue rather than an AI autonomy failure.
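The failure AWS describes is a classic least-privilege problem: an operator's credentials were broader than intended, so a tool acting on their behalf inherited that scope. As a purely illustrative sketch (not AWS's actual configuration; the account ID, stack name, and statement IDs below are hypothetical), an IAM policy that confines an operator, and therefore any agent running under their credentials, to a single sandbox environment might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAgentOnlyInSandbox",
      "Effect": "Allow",
      "Action": [
        "cloudformation:UpdateStack",
        "cloudformation:DescribeStacks"
      ],
      "Resource": "arn:aws:cloudformation:us-east-1:123456789012:stack/sandbox-env/*"
    },
    {
      "Sid": "DenyEnvironmentDeletion",
      "Effect": "Deny",
      "Action": "cloudformation:DeleteStack",
      "Resource": "*"
    }
  ]
}
```

Because an explicit Deny in IAM always overrides any Allow, the second statement would block a "delete and recreate" action even if another attached policy granted over-broad permissions, which is the kind of guardrail the incident highlights.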

While the disruption was relatively limited in scale, critics argue that such incidents highlight the importance of strict access controls and oversight, especially as AI-powered development tools become more integrated into cloud infrastructure management.

AWS has reiterated that its systems remain secure and that safeguards are in place to prevent similar incidents.

Written by Maheswari
