Gizmodo: How did you become a physicist interested in AI and its pitfalls?

Brian Nord: My Ph.D. is in cosmology, and when I moved to Fermilab in 2012, I moved into the subfield of strong gravitational lensing.
				 
				 
				
(Editor's note: Gravitational lenses are places in the night sky where light from distant objects has been bent by the gravitational field of heavy objects in the foreground, making the background objects appear warped and larger.)
				 
				 
				
I spent a few years doing strong lensing science in the traditional way, where we would visually search through terabytes of images, through thousands of candidates of these strong gravitational lenses, because they're so weird, and no one had figured out a more conventional algorithm to identify them.

Around 2015, I got kind of sad at the prospect of only finding these things with my eyes, so I started looking around and found deep learning.

Here we are a few years later - myself and a few other people popularized this idea of using deep learning - and now it's the standard way to find these objects. People are unlikely to go back to using methods that aren't deep learning to do galaxy recognition.

We got to this point where we saw that deep learning is the thing, and really quickly saw the potential impact of it across astronomy and the sciences.

It's hitting every science now. That is a testament to the promise and peril of this technology, with such a relatively simple tool. Once you have the pieces put together right, you can do a lot of different things easily, without necessarily thinking through the implications.
				 
				 
				
Gizmodo: So what is deep learning? Why is it good and why is it bad?

BN: Traditional mathematical models (like the F=ma of Newton's laws) are built by humans to describe patterns in data: we use our current understanding of nature, also known as intuition, to choose the pieces, the shape of these models. This means that they are often limited by what we know or can imagine about a dataset. These models are also typically smaller and are less generally applicable for many problems.
				 
				
On the other hand, artificial intelligence models can be very large, with many, many degrees of freedom, so they can be made very general and able to describe lots of different data sets. Also, very importantly, they are primarily sculpted by the data that they are exposed to - AI models are shaped by the data with which they are trained.

Humans decide what goes into the training set, which is then limited again by what we know or can imagine about that data. It's not a big jump to see that if you don't have the right training data, you can fall off the cliff really quickly.
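(Editor's note: As a rough, hypothetical illustration of the contrast Nord draws - a model whose form is chosen by human intuition versus a flexible model sculpted by whatever training data it sees - here is a minimal Python sketch. The high-degree polynomial standing in for a "large" data-driven model, the F = m*a example, and all of the numbers are invented assumptions for illustration, not taken from Nord's research.)

import numpy as np

rng = np.random.default_rng(0)

# Hand-built "traditional" model: its form, F = m * a, comes from physical intuition.
def newton_force(mass, acceleration):
    return mass * acceleration

# Flexible, data-driven stand-in: a high-degree polynomial whose shape is
# sculpted entirely by the training data it happens to be exposed to.
mass = 2.0
a_train = rng.uniform(0.0, 5.0, 200)                  # accelerations we happened to sample
f_train = mass * a_train + rng.normal(0.0, 0.1, 200)  # noisy measured forces
coeffs = np.polyfit(a_train, f_train, deg=7)          # many degrees of freedom

a_new = 12.0  # a regime the training data never covered
print(newton_force(mass, a_new))  # ~24.0: the intuition-built model extrapolates
print(np.polyval(coeffs, a_new))  # can be far off: the fitted model only "knows"
                                  # the region its training data covered

(The flexible fit is not wrong so much as limited: it describes the data it was trained on very well and says little that is trustworthy beyond it, which is the point about training sets.)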
				 
				
The promise and peril are highly related. In the case of AI, the promise is in the ability to describe data that humans don't yet know how to describe with our 'intuitive' models. But, perilously, the data sets used to train them incorporate our own biases.
				
					
When it comes to AI recognizing galaxies, we're risking biased measurements of the universe. When it comes to AI recognizing human faces, when our data sets are biased against Black and Brown faces, for example, we risk discrimination that prevents people from using services, that intensifies surveillance apparatus, that jeopardizes human freedoms.

It's critical that we weigh and address these consequences before we imperil people's lives with our research.
				 
				 
				
Gizmodo: When did the light bulb go off in your head that AI could be harmful?

BN: I gotta say that it was with the Machine Bias article from ProPublica in 2016, where they discuss recidivism and sentencing procedure in courts.
				 
				
At the time of that article, there was a closed-source algorithm used to make recommendations for sentencing, and judges were allowed to use it. There was no public oversight of this algorithm, which ProPublica found was biased against Black people; people could use algorithms like this willy-nilly without accountability.
				
				 
				
I realized that as a Black man, I had spent the last few years getting excited about neural networks, then saw quite clearly that these applications that could harm me were already out there, already being used, and were already starting to become embedded in our social structure through the criminal justice system.

Then I started paying attention more and more. I realized countries across the world were using surveillance technology, incorporating machine learning algorithms, for widespread oppressive uses.
				 
				 
				
Gizmodo: How did you react? What did you do?

BN: I didn't want to reinvent the wheel; I wanted to build a coalition. I started looking into groups like Fairness, Accountability and Transparency in Machine Learning, plus Black in AI, which is focused on building communities of Black researchers in the AI field, but which also has a unique awareness of the problem because we are the people who are affected.
				 
				
I started paying attention to the news and saw that Meredith Whittaker had started a think tank to combat these things, and Joy Buolamwini had helped found the Algorithmic Justice League. I brushed up on what computer scientists were doing and started to look at what physicists were doing, because that's my principal community.
				
				 
				
It became clear to folks like me and Savannah Thais that physicists needed to realize that they have a stake in this game. We get government funding, and we tend to take a fundamental approach to research. If we bring that approach to AI, then we have the potential to affect the foundations of how these algorithms work and impact a broader set of applications.

I asked myself and my colleagues what our responsibility was in developing these algorithms and in having some say in how they're being used down the line.
				 
				 
				
Gizmodo: How is it going so far?

BN: Currently, we're going to write a white paper for SNOWMASS, this high-energy physics event. The SNOWMASS process determines the vision that guides the community for about a decade.
				 
				
I started to identify individuals to work with, fellow physicists, and experts who care about the issues, and develop a set of arguments for why physicists from institutions, individuals, and funding agencies should care deeply about these algorithms they're building and implementing so quickly. It's a piece that's asking people to think about how much they are considering the ethical implications of what they're doing.
				 
				
We've already held a workshop at the University of Chicago where we've begun discussing these issues, and at Fermilab we've had some initial discussions. But we don't yet have the critical mass across the field to develop policy. We can't do it ourselves as physicists; we don't have backgrounds in social science or technology studies.

The right way to do this is to bring physicists from Fermilab and other institutions together with social scientists, ethicists, and science and technology studies folks and professionals, and build something from there. The key is going to be partnership with these other disciplines.
				 
				 
				
Gizmodo: Why haven't we reached that critical mass yet?

BN: I think we need to show people, as Angela Davis has said, that our struggle is also their struggle. That's why I'm talking about coalition building. The thing that affects us also affects them.
				 
				
One way to do this is to clearly lay out the potential harm beyond just race and ethnicity. Recently, there was this discussion of a paper that used neural networks to try to speed up the selection of candidates for Ph.D. programs. They trained the algorithm on historical data.

So let me be clear: they said, here's a neural network, here's data on applicants who were denied or accepted to universities. Those applicants were chosen by faculty and people with biases. It should be obvious to anyone developing that algorithm that you're going to bake in the biases in that context. I hope people will see these things as problems and help build our coalition.
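(Editor's note: The mechanism Nord describes can be seen in a toy example. The sketch below is hypothetical and is not the paper being discussed: it simulates an admissions history in which equally qualified applicants from one group were penalized, then trains an off-the-shelf classifier on those historical decisions. The group labels, the penalty size, and the choice of a scikit-learn logistic regression are all assumptions made for illustration.)

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Two groups of applicants with identical qualification ("score") distributions.
group = rng.integers(0, 2, n)     # hypothetical protected attribute: 0 or 1
score = rng.normal(0.0, 1.0, n)   # same merit distribution for both groups

# Simulated historical decisions: committees admitted largely on score,
# but applied an arbitrary penalty to group 1.
admitted = (score - 0.8 * group + rng.normal(0.0, 0.3, n)) > 0

# Train on those historical labels, with group (or any proxy for it) as a feature.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, admitted)

# Two candidates identical in every way except group membership:
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])  # the group-1 candidate gets a
                                              # markedly lower predicted chance

(The model has faithfully learned the pattern in its training data; the problem is that the pattern itself encodes the historical bias.)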
				 
				 
				
Gizmodo: What is your vision for a future of ethical AI?

BN: What if there were an agency or agencies for algorithmic accountability? I could see these existing at the local level, the national level, and the institutional level. We can't predict all of the future uses of technology, but we need to be asking questions at the beginning of the processes, not as an afterthought.

An agency would help ask these questions and still allow the science to get done, but without endangering people's lives.
				 
				
Alongside agencies, we need policies at various levels that make a clear decision about how safe the algorithms have to be before they are used on humans or other living things. If I had my druthers, these agencies and policies would be built by an incredibly diverse group of people.

We've seen instances where a homogeneous group developed an app or technology and didn't see the things that another group who wasn't there would have seen. We need people across the spectrum of experience to participate in designing policies for ethical AI.
				 
				 
				
Gizmodo: What are your biggest fears about all of this?

BN: My biggest fear is that people who already have access to technology resources will continue to use them to subjugate people who are already oppressed; Pratyusha Kalluri has also advanced this idea of power dynamics.
				
				 
				
That's what we're seeing across the globe. Sure, there are cities that are trying to ban facial recognition, but unless we have a broader coalition, unless we have more cities and institutions willing to take on this thing directly, we're not going to be able to keep this tool from exacerbating the white supremacy, racism, and misogyny that already exist inside structures today.

If we don't push policy that puts the lives of marginalized people first, then they're going to continue being oppressed, and it's going to accelerate.
				 
				 
				
Gizmodo: How has thinking about AI ethics affected your own research?

BN: I have to question whether I want to do AI work and how I'm going to do it; whether or not it's the right thing to do to build a certain algorithm. That's something I have to keep asking myself...

Before, it was like, how fast can I discover new things and build technology that can help the world learn something? Now there's a significant piece of nuance to that. Even the best things for humanity could be used in some of the worst ways. It's a fundamental rethinking of the order of operations when it comes to my research.
				
				
I don't think it's weird to think about safety first. We have OSHA and safety groups at institutions that write down lists of things you have to check off before you're allowed to take out a ladder, for example. Why are we not doing the same thing in AI?

A part of the answer is obvious: not all of us are people who experience the negative effects of these algorithms. But as one of the few Black people at the institutions I work in, I'm aware of it, I'm worried about it, and the scientific community needs to appreciate that my safety matters too, and that my safety concerns don't end when I walk out of work.
				 
				 
				
Gizmodo: Anything else?

BN: I'd like to re-emphasize that when you look at some of the research that has come out, like vetting candidates for graduate school, or when you look at the biases of the algorithms used in criminal justice, these are problems being repeated over and over again, with the same biases. It doesn't take a lot of investigation to see that bias enters these algorithms very quickly. The people developing them should really know better.

Maybe there need to be more educational requirements for algorithm developers to think about these issues before they have the opportunity to unleash them on the world.
				 
				
This conversation needs to be raised to the level where individuals and institutions consider these issues a priority. Once you're there, you need people to see that this is an opportunity for leadership. If we can get a grassroots community to help an institution take the lead on this, it incentivizes a lot of people to start to take action.

And finally, people who have expertise in these areas need to be allowed to speak their minds. We can't allow our institutions to quiet us so that we can't talk about the issues we're bringing up.
				
				 
				
The fact that I have experience as a Black man doing science in America, and the fact that I do AI - that should be appreciated by institutions. It gives them an opportunity to have a unique perspective and take a unique leadership position. I would be worried if individuals felt like they couldn't speak their minds.

If we can't get these issues out into the sunlight, how will we be able to build out of the darkness?