Starlink, SpaceX’s satellite internet service, has recently updated its privacy policy to include the possibility of using its users’ personal data in the training of artificial intelligence models.
The decision has raised concern among customers and digital rights experts, as the collection and use of personal information for AI purposes becomes an increasingly common and large-scale practice in the technology industry.
The new policy, effective January 15, states that customer data can be used to “train our machine learning or artificial intelligence models” and even shared with third parties for their own AI development. All users are enrolled by default, although they can opt out manually.
The scope of the data collected by Elon Musk’s company goes beyond what is usual for an internet provider. According to its policy, the company may access contact information, performance metrics, billing data, and, in some cases, communications data such as audio, electronic, or visual information.
The policy also mentions the possibility of drawing “inferences” from the collected information, an ambiguous category that could range from usage patterns to personal preferences.
Starlink clarifies, on a separate page, that it will not share browsing history, individual habits, or location with AI models. Even so, records of visited sites and access times remain a valuable resource for training algorithms and pose potential risks to user privacy.
Speaking to CNET, William Budington, a digital rights specialist at the Electronic Frontier Foundation, warned of the dangers of this industry trend of feeding AI systems large volumes of personal data. One risk, as the scientific community has pointed out, is that generative models can be prompted into reproducing fragments of sensitive or private information from their training data.
Starlink’s move comes amid the expansion of artificial intelligence across the ecosystem of companies linked to Elon Musk, and coincides with SpaceX’s acquisition of xAI. Starlink currently has more than 9 million users worldwide, giving the company a substantial pool of data.
For those who do not want their personal data to be used in artificial intelligence training, Starlink offers a simple opt-out mechanism. The process can be done both from the mobile application and from the company’s website.
The user must log in, go to the account portal, and find the Privacy Preferences section. There, simply uncheck the option that allows data to be used to train AI models. Once this step is completed, the system will confirm that personal information will no longer be used for that purpose.
However, experts recommend additional steps to protect online privacy. Using a trusted virtual private network (VPN) encrypts traffic between the device and the VPN server, preventing Starlink, or any other provider, from accessing the content of communications. This protection is only effective if the VPN is installed on each device or directly on the router, ensuring coverage of the entire home network.
Despite official assurances about the handling of certain data, the trend of massively collecting information for artificial intelligence development poses new challenges for users and regulators. The Starlink case illustrates how the integration of AI into everyday services can affect privacy at scale, and underscores the importance of users informing themselves and using the tools available to protect their personal data.

