How to Audit and Lock Down Your AI Training Data Exposure on GitHub

Source: DEV Community
So GitHub just updated their Copilot data usage policy. The short version: interaction data from all user tiers, including free users, will be used to train and improve their models, and it's opt-in by default. If you're reading this with a knot in your stomach, you're not alone. Let's skip the outrage cycle and focus on the actual problem: how do you figure out what data you're exposing, and how do you lock it down?

The Real Problem: You Don't Know What's Being Sent

Here's what caught most people off guard. It's not just your code that's being collected; it's your interaction data. That means prompts, suggestions you accepted or rejected, and the context surrounding those interactions. If you've been using Copilot to help debug a production issue at 2am, that context went somewhere.

The frustrating part isn't that data collection exists. It's that the defaults changed silently, and most developers won't notice until someone posts about it on Reddit.

Step 1: Check Your Current Settings
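Before touching any settings, it helps to know what an AI-assisted editor could even read from your open workspace. Here's a minimal local audit sketch: it walks a project directory and flags files matching common sensitive-file patterns. The pattern list is my own illustrative assumption, not an official or exhaustive inventory of what Copilot collects.

```python
import fnmatch
from pathlib import Path

# Illustrative patterns only -- extend for your own environment.
SENSITIVE_PATTERNS = ["*.env", "*.pem", "*.key", "id_rsa*", "*credentials*"]

def find_sensitive_files(root):
    """Flag workspace files an AI-assisted editor could read as context."""
    root = Path(root)
    hits = []
    for path in root.rglob("*"):
        if path.is_file() and any(
            fnmatch.fnmatch(path.name, pat) for pat in SENSITIVE_PATTERNS
        ):
            hits.append(path.relative_to(root))
    return sorted(hits)
```

Run it against a project root and review the hits; anything flagged is a candidate for your editor's exclusion settings or for moving out of the workspace entirely.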