Signing up for an online, at-home test from Pearson VUE also means accepting their Facial Comparison Policy.1
Pearson VUE may use facial images collected during the testing process to develop, upgrade, and improve our online proctoring application in the event that automated facial comparison processes are determined to have resulted in a matching failure. In these cases, facial images will be reviewed manually to determine the cause and the results from such a review may be used to develop, upgrade, and improve our online proctoring application. - Pearson VUE Privacy Policy
To opt out, I had to start a chat session online (disabling functional cookies initially prevented the chat window from appearing) and provide my name, email, and phone number to a chatbot, which then redirected me to a person. It took about 15-20 minutes for the person to figure out the next steps for opting out of the policy (most of this was spent waiting while the chatbot thanked me for my patience). The next step turned out to be that opting out could not be done online; I would need to call an international phone number and talk to their accommodations team to opt out.
Opting out through the call with the accommodations team took a team member, a supervisor, and 41 minutes and 46 seconds. The team member who answered the phone initially claimed it wasn’t possible, denied that the opt-out language was on the Pearson VUE website at all, and questioned whether I was even on the right website.
I was on hold for about 30 minutes before I was connected with the supervisor, who was able to schedule the exam for me while declining the facial comparison policy. Because it was an international call, it cost $16.80 on my phone plan.
The confirmation email states that I was waived from “OnlineProctored’s Automation Tools”. This appears to be a subcontractor(?); the Privacy Policy explains that “The photographs are submitted via TLS encrypted HTTP protocol to a third party that conducts a comparison in real time without retaining either of the photographs or any data generated by or derived from the facial comparison process.”2
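To make the data flow described in that quote concrete, here is a minimal sketch of what a real-time, TLS-encrypted comparison request might look like from the client side. To be clear, this is speculative: the endpoint URL, field names, and response shape are all my own assumptions, not Pearson VUE’s or the third party’s actual API. Only the general pattern (two photographs submitted over HTTPS, a match result returned in real time, nothing retained server-side) comes from the policy.

```python
# Hypothetical illustration of the Privacy Policy's description: two
# photographs sent over TLS to a third-party service that compares them
# in real time. The URL, field names, and response schema are invented.
import requests

COMPARISON_ENDPOINT = "https://facial-comparison.example.com/v1/compare"  # hypothetical


def compare_faces(id_photo_path: str, webcam_photo_path: str) -> bool:
    """Submit two images over HTTPS and return whether they match."""
    with open(id_photo_path, "rb") as reference, open(webcam_photo_path, "rb") as candidate:
        response = requests.post(
            COMPARISON_ENDPOINT,
            files={"reference": reference, "candidate": candidate},
            timeout=10,  # "real time": fail fast rather than stall check-in
        )
    response.raise_for_status()
    result = response.json()  # assumed shape: {"score": 0.97, "match": true}
    return result["match"]
```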
When I did take the test, I was unable to verify whether this opt-out was honored. I did not speak to anyone at the time of the exam. My camera was enabled, software was installed on my computer, photos were taken of the room I was in and of my identification, and I was added to a virtual queue. I assume that a person spot-checked the photos of my environment, my ID, and my camera feed to verify me as the exam started.
In 2025, a class action lawsuit was settled for roughly $18 million on behalf of residents of Illinois who took a Pearson VUE test in Illinois and either scanned their palm or had facial comparison technology used on their test. The complaint said test takers were not provided with sufficient disclosures or opportunities to consent prior to a facial or palm scan. Pearson VUE denied any wrongdoing as part of the settlement (and no ruling was issued), and the company still clearly uses the facial comparison feature (perhaps with better legal terms to allow them to do it, since Illinois is called out in the Pearson VUE Privacy Policy).3
The Pearson VUE Privacy Policy further explains:
In some cases, we may handle so-called ‘special categories of Personal Data’ about you, which may be considered sensitive. This would be the case, for example, if you at your test sponsor’s request, (i) provide your race or ethnic origin; (ii) provide your biometric (palm vein) template where permitted by law; or (iii) provide medical or health information when requesting a testing session accommodation. Before we collect sensitive Personal Data we will mark the question(s) as ‘optional’ or require your consent. Such consent may be withdrawn at any time.
I would argue that the process to withdraw consent could be simplified.
Pearson VUE’s Responsible AI Statement could be improved by providing better opt-out processes and documentation, rather than asking for trust. For example, in the “Transparency and governance” section, Pearson VUE notes:
we contract independent, third-party reviews and audits across AI design, development, and operations.
Where are the results of these audits? It’s easy to find reports of facial recognition bias and mis-tuned systems, like the recent one involving the Essex police; I can’t imagine this company has solved the problem fully. Perhaps the best facial recognition system is one whose flaws are simply never shared externally.
I am curious to see whether and how laws developed around responsible usage of machine learning systems will mandate the release of such reports. Pearson’s efforts fall into process- and output-based (self-)regulation, as classified by the Harvard Law Review article Resetting Antidiscrimination Law in the Age of AI, which, in reviewing regulations related to testing the outputs of ML systems, found that:
The regulations differ in what response, if any, is required once testing reveals that a system has produced disparate results. Many regulations simply require that results be disclosed — at times to the public, but more often to an agency or state official. Some require disclosure of all impact assessment results, while others require public disclosure only if biased effects are found. (Section B, item 4, roughly at this part of the page)
The Commitment section does not discuss how the company will rectify or disclose any errors made.
“We use AI systems and processes that prohibit automated decisions that could jeopardize a candidate’s ability to take or complete a test.”
The “human-in-the-loop” (or “human-controlled”, as Pearson puts it) excuse, so common right now, feels like a cop-out, especially since we have now seen that people are often not suspicious or critical of the output of LLMs! We are conditioned to assume that the computer’s suggestions are right.4
How do online proctors react when a warning light blares red in the software for a candidate? How often do they overrule that indicator? How often is the ML system mis-comparing a candidate with their photo? How often are the candidates’ tests interrupted by online proctors when the AI system fails and alerts go off?
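We rarely get answers to these questions, but the mechanics are easy to imagine. Below is a hypothetical sketch (mine, not Pearson VUE’s actual system) of how a thresholded comparison score might raise that warning light and queue a candidate for human review. The threshold value and triage logic are invented; the point is that the human reviewer arrives already anchored to the model’s verdict.

```python
# A hypothetical triage step for a facial comparison score. The cutoff and
# logic are illustrative, not Pearson VUE's actual implementation.
from dataclasses import dataclass

MATCH_THRESHOLD = 0.85  # invented cutoff; tuning it trades false alarms
                        # against missed mismatches


@dataclass
class ComparisonResult:
    candidate_id: str
    score: float  # similarity between ID photo and webcam photo, 0.0-1.0


def triage(result: ComparisonResult, review_queue: list) -> str:
    """Auto-pass high scores; flag low scores for a human proctor."""
    if result.score >= MATCH_THRESHOLD:
        return "auto-pass"  # no human ever looks at these
    # The "human in the loop": a proctor sees the photos next to the
    # model's verdict, and automation bias makes overruling it unlikely.
    review_queue.append(result)
    return "flagged-for-review"
```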
You can read a few online comments (like here), some of which seem to partially defend the company’s policy around using automated facial comparison, or at least accept it with resigned exasperation; perhaps it is unavoidable.
So, my complaint is that companies use unfair terms & conditions and onerous processes to stifle people’s objections to how their data is used.5 I would like to see the company not only make the opt-out process clearer, but also deliver the transparency its Responsible AI Statement promises.
Debatable notes
I use “company” and “companies” when writing, but it is people within these organizations who build these products, who accept and promote these initiatives, and who profit from them. We should assign a higher cost to our data being used against us and others. We cannot divorce this from economic inequities; companies exploit our financial circumstances to get our information, only to sell it or use it to manipulate us. Is the loyalty program or the membership rate worth our data? I cannot be the first to write that privacy should not be a privilege.
The fight between companies and the government over who can access our data also makes headlines; stories about large U.S. technology companies are often published from the perspective that the companies are trying to protect you from government overreach, even as they invasively collect your data and use it themselves. It always feels like a PR effort when these cases come up.
The criteria we should use: is the company using machine learning on our sensitive data to help itself or to help us? Does its argument hold up to scrutiny? This is made worse by the increasing potential for harm from applying machine learning to personal characteristics, especially as tools become more accessible and datasets grow ever larger. People do not have enough defenses against companies that leverage machine learning on consumers’ sensitive attributes to influence them or other users.
Undoubtedly there are permissible uses of our data in products. The first line of every “How we use your data” section will lay this out: “to deliver this service to you…”. The section should end there, but it often doesn’t, which is the frustrating part.
Pearson VUE additionally notes that, for Illinois residents, biometric data may be retained for up to three years. It is just another database of personal information, stored too long, that could be hacked. And from what I understand from online discussion, reliable locally-based identity verification systems are a long way off.
There have been online posts about this since 2021. What does “this application” mean legally? E.g., can they only use the images to train the facial comparison algorithm? Can they use the data collected as part of this process to train a new algorithm that does something different in the same application? (As an example of data repurposing, “same application” aside: Pokemon Go data being used to train delivery robots.) ↩︎
There are many software companies that offer automated proctoring, like Proctortrack, whose homepage proudly declares it a “Leader in Innovation” for releasing the ‘Student Privacy and Data Expunge’ dashboard to promote transparency, which they claim is the ‘first and only’ of its kind. The Read More link leads to a 404 Not Found error as of 11/27/2025, but the page can be found on the Internet Archive. I am curious about the future of these companies and of online test credentials as AI-generated and deepfake video technology continues to improve. ↩︎
The public-sector equivalent of this is perhaps the recent TSA initiative to board planes using facial comparison technology in the name of safety. In airports, I have seen notices indicating that your image is deleted after 48 hours and used for no other purpose (paraphrasing, so this may not be accurate), but I would prefer clear language like that up front, rather than buried in the privacy policy. ↩︎
I am curious about how user interfaces communicate uncertainty in ML outputs. LLMs, for example, don’t have a confidence indicator; some chat applications will include a text warning that “outputs may be incorrect” at the start of a session. Perhaps this is only strange to me, but the overall UI conveys such a sense of professionalism that it is disconcerting to think the content itself could be wildly incorrect. And how seriously do we take these legal disclaimers when they appear everywhere? ↩︎
Of course, companies making things difficult is so common that it almost feels like the standard across all aspects of online life: hidden settings toggles, respecting only the minimum required by EU or California privacy laws, blocking VPN usage, defaulting invasive settings to “on”, requiring documents to be faxed or mailed in order to opt out, and so on. ↩︎