Face recognition has its benefits: it secures iPhones and, in the case of artificial intelligence company Kairos, it helps people with Alzheimer’s identify family members without feeling embarrassed and helps law enforcement identify criminals, founder and CEO Brian Brackeen said of his company’s work at a panel discussion at the Fairmont during South By Southwest.

But the technology’s benefits come with major ethical issues to address and improve on: privacy and racial bias, which were the focus of the panel “Face Recognition: Please Search Responsibly.”

The discussion was led by Clare Garvie, a privacy lawyer and associate of the Center on Privacy & Technology at Georgetown Law; Brackeen; and Arun Ross, the director of the Integrated Pattern Recognition and Biometrics Lab at Michigan State University. Each offered ideas on where the work of protecting privacy and undoing racial bias in face recognition should begin.

Face recognition algorithms extract facial features and reduce them to a numerical code, which can then be matched against photos in a database. Law enforcement agencies can use it to search against mug shots, for example.
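To make that concrete, here is a minimal sketch in Python of the matching step, assuming the numerical codes (embeddings) have already been extracted by a separate model; the 128-dimension vectors, the placeholder names, and the distance threshold are illustrative assumptions, not details of any system discussed at the panel.

```python
# Minimal sketch: match a probe face embedding against a database.
# Assumes a feature extractor has already reduced each face to a
# fixed-length vector; the sizes and threshold here are made up.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in database of identities: name -> precomputed embedding.
database = {
    "person_a": rng.normal(size=128),
    "person_b": rng.normal(size=128),
}

def match(probe, threshold=0.6):
    """Return the closest identity, or None if no database entry
    falls within the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, embedding in database.items():
        dist = np.linalg.norm(probe - embedding)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# A probe captured near person_a's stored embedding should match.
probe = database["person_a"] + rng.normal(scale=0.01, size=128)
print(match(probe))  # -> person_a
```

The threshold is the knob that trades false matches against missed matches, which is part of why accuracy claims depend so heavily on how a system is tuned and tested.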

But, according to the New York Times, when it comes to photo identification, some commercial software is 99 percent accurate if the person in the photo is a white man; for darker-skinned women, the error rate climbs to nearly 35 percent.

The AI behind face recognition relies on the data used to train it. If that data doesn’t reflect diversity, e.g. is composed mostly of white men with few black women, the software will identify people of color less accurately, the Times reported. Bias also arises when people craft the technology and choose which features to extract, Ross said.
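To see how that plays out, here is a toy sketch with synthetic data (no real faces; every number is an assumption chosen for illustration). The correct label depends on one feature for group A and a different feature for group B; a classifier trained on a sample that is 95 percent group A learns group A’s pattern and misclassifies much of group B.

```python
# Toy illustration of training-data skew, not real face data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, rule_dim):
    """Synthetic samples whose label depends on feature `rule_dim`."""
    X = rng.normal(size=(n, 2))
    y = (X[:, rule_dim] > 0).astype(int)
    return X, y

# Skewed training set: 9,500 samples of group A, 500 of group B.
Xa, ya = make_group(9500, rule_dim=0)  # group A: label follows feature 0
Xb, yb = make_group(500, rule_dim=1)   # group B: label follows feature 1
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on balanced test sets, one per group.
for name, dim in [("group A", 0), ("group B", 1)]:
    X_test, y_test = make_group(2000, rule_dim=dim)
    accuracy = (model.predict(X_test) == y_test).mean()
    print(f"{name} accuracy: {accuracy:.2f}")
```

On a typical run the model scores well above 90 percent on the overrepresented group and little better than chance on the underrepresented one, mirroring the kind of disparity the Times described.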

“The more we raise questions like Clare asked, it forces researchers to go back to the drawing board since it comes down to training,” Ross said after Garvie asked the other panelists whether racial bias in face recognition technology can be corrected.

Kairos, too, faced limitations in recognizing people of color in its early stages. “Our first algorithms didn’t even perform well on me compared to my team members,” said Brackeen, who is black. Alongside Kairos’ many features, which include gender detection, emotion detection, and face verification, the firm also touts a diversity recognition app that lets a user upload a photo and see their ethnic makeup in percentages.

At the panel discussion, Brackeen announced that his firm, at its own cost, will also open source face recognition technology that is accurate in identifying people of different races.

Face recognition’s versatility also invites privacy concerns, and the technology remains lightly regulated. London’s Metropolitan Police used it to scan crowds during last year’s Notting Hill Carnival. Earlier this week, the Times reported that Madison Square Garden, as a security measure, has also been using face recognition to identify people entering the arena, comparing their images to a photograph database.

In 2016, Georgetown Law’s Center on Privacy & Technology released a report that detailed American police departments’ use of face recognition in identifying criminals and suspects. “The Perpetual Line-Up,” co-authored by Garvie, highlighted a need to better understand just how privacy and civil liberties are affected, and it made recommendations, including that law enforcement agencies and legislators craft policies protecting citizens from misuse and abuse.

Face recognition is also used in retail, where companies may look for suspected shoplifters, and on social media. The fear is what else it may be used for, which is why entities using the technology should be clear about its use, Garvie said at the panel.