The Microsoft Team Racing to Catch Bugs Before They Happen

As cybercriminals, state-backed hackers, and scammers continue to flood the zone with digital attacks and aggressive campaigns around the world, it’s no surprise that the maker of the ubiquitous Windows operating system is focused on security. Microsoft’s Patch Tuesday releases often contain fixes for critical vulnerabilities, including flaws that are actively exploited by attackers in the wild.

The company already has the requisite groups in place to identify weaknesses in its code (the “red team”) and develop fixes (the “blue team”). But recently, that format has evolved again to foster greater collaboration and interdisciplinary work, in the hopes of catching even more errors and flaws before things start to escalate. Known as Microsoft Offensive Research & Security Engineering, or Morse, the department combines the red team, the blue team, and the so-called green team, which focuses on finding the root causes of flaws the red team has uncovered and fixing them more systemically through changes to how things are done within the organization.

“People are convinced that you can’t move forward without investing in security,” said David Weston, Microsoft’s vice president of Enterprise and Operating System Security, who has been with the company for 10 years. “I’ve been in security for a long time. For most of my career, we were considered annoying. Now, if anything, leaders come up to me and say, ‘Dave, am I okay? Have we done everything we can?’ That has been an important change.”

Morse has been working to promote secure coding practices at Microsoft so that fewer bugs end up in the company’s software in the first place. OneFuzz, an open source fuzz-testing framework for Azure, lets Microsoft developers continually and automatically bombard their code with all sorts of unusual use cases to find flaws that would go unnoticed if the software were only used exactly as intended.
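OneFuzz itself is a full continuous-fuzzing service, but the core loop it automates is simple to illustrate. The sketch below is a deliberately naive stand-in, not Microsoft’s code: `parse_record` is a hypothetical parser with a planted crash bug, and `fuzz` throws random inputs at it until something breaks.

```python
import random
import string

def parse_record(data):
    # Hypothetical code under test: reads a one-character length prefix,
    # then returns that many characters of payload.
    length = ord(data[0])          # planted bug: crashes on empty input
    return data[1:1 + length]

def fuzz(target, trials=500, seed=0):
    """Feed random strings to `target` and collect unexpected crashes --
    the basic idea behind automated fuzzing frameworks like OneFuzz."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        candidate = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 8))
        )
        try:
            target(candidate)
        except ValueError:
            pass                    # graceful rejection of bad input is fine
        except Exception as exc:    # anything else is a bug worth reporting
            crashes.append((candidate, exc))
    return crashes

bugs = fuzz(parse_record)
print(f"found {len(bugs)} crashing inputs")
```

Here the fuzzer stumbles onto the empty string, which no one using the parser “as intended” would ever pass in, and surfaces the crash. Real fuzzers add coverage guidance, input mutation, and crash deduplication on top of this loop.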

The combined team has also been at the forefront of promoting the use of more secure programming languages (such as Rust) across the company. And it has advocated embedding security analysis tools directly into the actual compilers used in the company’s production workflow. That change has had a big impact, Weston says, because it means developers aren’t doing hypothetical analysis in a simulated environment, one step removed from real production, where some bugs can be overlooked.

The Morse team says the shift to proactive security has led to real progress. In one recent example, Morse members were vetting legacy software—an important part of the group’s work, since much of the Windows codebase was developed before these extensive security reviews existed. While investigating how Microsoft had implemented Transport Layer Security 1.3, the foundational cryptographic protocol used for secure communications across networks like the internet, Morse discovered a remotely exploitable bug that could have allowed attackers to gain access to targets’ devices.

As Mitch Adair, Microsoft’s chief security leader for Cloud Security, put it, “It would have been as bad as it gets. TLS is used to basically protect every single service product that Microsoft uses.”