As the machine learning field grows, security needs to be built into the design, not just patched on after issues develop. So says Gary McGraw, the man whom many consider the father of software security.

According to McGraw, at the beginning of the computer revolution, computer security was often an afterthought – after all, holes in the system could be patched and firewalls could protect the broken thing from hackers. But, ultimately, McGraw and others were able to convince computer manufacturers that it makes a lot more sense to build security in.

Gary McGraw, the “father of software security.” IEEE spoke with Gary about the future of machine learning security.

McGraw is no stranger to the IEEE community. Throughout his career, he’s published articles in IEEE publications, served on the IEEE Computer Society Board of Governors, and produced the long-running Silver Bullet Security Podcast for IEEE Security & Privacy magazine.

Recently, we sat down with Gary to discuss his journey inside and outside the software security industry. In addition to discussing some of the hot topics of the day, such as machine learning security, we also talked about the origins of his career, his views on the future of software security, and more. Here’s what he had to say:

You’ve been a leader in the software security field since the mid-1990s. How has the field evolved since you began your career?

My career took a funny path. In 1981, my parents bought one of the first Apple IIs ever produced, directly from Steve Jobs. With that machine in the house, I almost immediately taught myself how to write code. I was 15 at the time.

Fast-forward to college. I was studying philosophy at the University of Virginia and came across Doug Hofstadter’s book, The Mind’s I. I used it to refactor a philosophy of mind class, because I believed Hofstadter’s approach was right, and after inviting him to come give a lecture on campus, I decided to get my PhD with him. I subsequently changed my studies from philosophy to computer science, and the rest, as they say, is history.

You’ve written that in the field of computer security, “a distinct absence of science is a problem.” What do you mean by this, and why do you think there is a lack of science?

First, let me stress that there are people doing science in computer security now, especially in the IEEE community. However, in commercial security, I believe there’s a distinct lack of science. There are many ideas and theories that aren’t backed by data, and a plethora of people who are convinced their way is the right way without real evidence. Faith-based security is folly.

I’ve spent most of my career gathering data and piling up facts, because I’m a trained scientist with a PhD and that’s what scientists do. On the commercial side, people need to understand the role that science plays in keeping thoughts organized, citing sources, and advancing the evolution of technology. People can’t pretend they invented everything by themselves, or that they can solve every problem by themselves in a magic way.

Your recent research has focused on machine learning threats, and you’ve written that until recently not much attention has been focused on this issue. Why do you think organizations haven’t focused on machine learning security until now?

Machine learning is pretty new in terms of the hype cycle. There’s been some progress made with security, but I was interested to learn how ML really works. I dug into the literature with three other guys to see what’s happened in the last 25 years and discovered that the answer to “what’s new” is really simple: our machines are way, way faster, and our data sets are much bigger.

We’re seeing progress with machine learning, but it’s not breakthroughs in terms of cognitive science. We’re just learning how we can make machines do more interesting things. We’re so psyched about all of the things we can make them do, yet nobody’s really thinking about the security risks of how we’re doing what we’re doing. If we really want to secure machine learning, we have to identify and then manage all of the risks.

What are some of the top security issues you’re following now? Are there any other projects you’re working on?

I’m still deeply involved in software security, especially commercially – I feel responsible for curating the field and eradicating nonsense whenever possible. I’m on a number of technical advisory boards, all of which have very important things to think about in terms of security.

After “retiring” in January, I created the Berryville Institute of Machine Learning (BIML) with three others. We recently published a paper that describes an architectural risk analysis to identify machine learning security risks. In our paper, we identify 78 risks associated with components commonly found in ML systems. We then identify the top ten ML security risks.
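
To make one of those risks concrete, consider data poisoning, one of the top ten risks BIML identifies: an attacker who can influence the training data can quietly degrade the resulting model. The sketch below is a minimal illustration of the idea, not code from the BIML paper; the synthetic dataset, the logistic regression model, and the 10% label-flip rate are arbitrary assumptions chosen only to show the effect.

```python
# Illustrative sketch of data poisoning (not BIML's code):
# flip a fraction of training labels and compare model accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data (an assumption for demonstration).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Simulated attacker: controls part of the data pipeline and flips
# the labels on 10% of the training examples.
rng = np.random.default_rng(0)
poisoned_idx = rng.choice(len(y_train), size=len(y_train) // 10, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poisoned_idx] = 1 - y_poisoned[poisoned_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

The point of the exercise is the one McGraw makes: the risk lives in the system’s architecture (who can touch the training data?), not in any single line of code, which is why BIML approaches it as an architectural risk analysis.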

Beyond that, my view is that the machine learning work we’re doing is going to become commercially viable, which goes to show you how bad I am at retirement.

If you’d like to learn more about Gary, read his published works, or listen to his podcast, you can do so at garymcgraw.com.

For more information on machine learning, visit the IEEE Xplore Digital Library.