Defining the AI Engineering Discipline 

Dr. Matt Gaston, director of the SEI’s AI Division

Dr. Matt Gaston of the Carnegie Mellon University Software Engineering Institute has been involved in applied AI, machine learning, and national security leadership for many years. The founding director of the SEI Emerging Technology Center, he was recently named the first director of the SEI’s AI Division. Among his positions prior to CMU, Dr. Gaston worked for 10 years at the National Security Agency. He received his PhD in computer science from the University of Maryland, Baltimore County, and an undergraduate degree in mathematics from the University of Notre Dame.

At the AI World Government event Oct. 18-19, Dr. Gaston will be speaking on AI Engineering, a discipline to guide the development of scalable, robust and human-centered AI systems. He recently took a few minutes to talk to AI Trends Editor John P. Desmond about the work. 

AI Trends: The DOD in 1984 first sponsored the Software Engineering Institute to bring engineering discipline to software development. This new AI Division is intended to study the engineering aspects of AI design and implementation. I wondered if the work of the division will have an applied AI focus.  

Matt Gaston: Yes. The reason we’ve established this new AI Division at the Software Engineering Institute is exactly based on this history. In 1984, the DOD created the Software Engineering Institute to help the department do software as well as software could be done. Now, with the major focus on AI and the rapid push to adopt and deploy these types of technologies because of the promise and power that they hold, constituent parts of the DOD are looking to the SEI to help them figure out how to do AI as well as AI can be done.

To your question about an applied focus, we think it’s critically important that if we’re going to be a leader in helping organizations understand how to build reliable, responsible, trustworthy AI systems, we must also be doing some of that ourselves. So there’s most certainly an applied focus. I would say even the focus on AI engineering is really about how to apply these types of technologies in a smart, reliable, and responsible way.

Will you still have the software development life cycle, with specific stages along the way? Does the SDLC work for the development of AI systems, or do we need something new?  

It’s somewhere in between. The traditional software development life cycle is most certainly relevant to what is needed in AI software and AI systems. But of course, adopting AI into systems challenges the traditional software development life cycle in some very important ways. First is the role of data. In modern AI systems, which are largely driven by modern machine learning, the behavior of the machine learning models that are produced is determined by the data those systems are trained on. The importance of data in deriving system behavior is new and introduces new challenges. These include how to manage the data, how to know you have an appropriate dataset, how to version control large datasets, and how to clean up a dataset and make it more robust. So the role of data is critically important.
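
To make the data-management point concrete, here is a minimal sketch, not SEI tooling, of one way to pin a dataset version by hashing its files; the directory path `data/train` is hypothetical, and the snippet only illustrates treating a dataset snapshot as a versioned artifact recorded alongside the model trained on it.

```python
# Minimal sketch (illustrative only): fingerprint a dataset directory so a
# specific version can be recorded alongside the model trained on it.
import hashlib
import json
from pathlib import Path

def fingerprint_dataset(root: str) -> dict:
    """Hash every file under `root` and return a manifest keyed by relative path."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    # A single hash over the manifest identifies the dataset version as a whole.
    version = hashlib.sha256(json.dumps(manifest, sort_keys=True).encode()).hexdigest()
    return {"version": version, "files": manifest}

if __name__ == "__main__":
    # Hypothetical path; point this at a real dataset directory to use it.
    print(json.dumps(fingerprint_dataset("data/train"), indent=2))
```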

A second big challenge is the role of uncertainty. Almost all the modern AI techniques have some notion of probability or probabilistic behavior inside them. So that has to become a first-class concept in thinking about the software system as a whole and how you manage the software development life cycle, and handle that uncertainty across the development path.  
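
As a small illustration of treating uncertainty as a first-class concept, the sketch below, an assumed pattern rather than any specific system, wraps a classifier’s probabilistic output so that low-confidence predictions are deferred rather than acted on; the threshold and probabilities are made up for the example.

```python
# Minimal sketch (illustrative only): treat uncertainty as a first-class output
# rather than always returning the top class.
import numpy as np

def predict_or_defer(probs: np.ndarray, threshold: float = 0.85):
    """Return the predicted class, or None to signal 'defer to a human'."""
    top = int(np.argmax(probs))
    confidence = float(probs[top])
    if confidence < threshold:
        return None, confidence   # route to a human reviewer or fallback path
    return top, confidence

print(predict_or_defer(np.array([0.55, 0.40, 0.05])))  # (None, 0.55) -> defer
print(predict_or_defer(np.array([0.95, 0.03, 0.02])))  # (0, 0.95)    -> act
```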

Lastly, we need to have some interesting conversations about how these systems interact with their human counterparts. These systems are almost always going to be working for or with humans, providing information, providing decision support to a human counterpart. And the way AI capabilities work, they introduce new questions about how to make the connections between the humans and the systems transparent, understandable, and usable in a variety of ways. So those are three examples of how AI and AI systems challenge and expand the traditional software development life cycle.

What would you say are the key characteristics of AI engineering? 

Based on lots of feedback and input that we’ve collected from our stakeholders, from government organizations but also industry collaborators and academic partners here at Carnegie Mellon University and elsewhere across the country and, in some cases, the world, we have currently identified three pillars of AI engineering. Those three pillars are scalable AI, robust and secure AI, and human-centered AI. I’ll say just another sentence or two about each of those.

Scalable AI is about how to scale AI technologies up to the size, speed, and complexity of the application space. For our Department of Defense stakeholders, it’s the mission space. But it’s not just scaling up; there’s also scaling out. How do you make the development and adoption of these AI technologies possible at the enterprise scale, again, in a responsible and reliable way? Also, for particular applications in the commercial world as well as the government and defense sector, how do you scale capabilities down? A lot of modern AI techniques require lots and lots of compute. In some cases we want them to work in a form factor that’s really, really small, and there are some interesting engineering challenges in doing so, and also some science that is needed to make that happen. So that’s scalable AI.
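
As one hedged illustration of scaling capabilities down, the sketch below applies PyTorch’s post-training dynamic quantization to a toy stand-in model so its linear layers store int8 weights; this is just one common technique for shrinking a model for small form factors, not a description of SEI work.

```python
# Minimal sketch (toy model, illustrative only): shrink a network for a small
# form factor by quantizing its linear layers to int8 after training.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)

# Replace Linear layers with dynamically quantized int8 equivalents.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized)  # Linear layers now appear as DynamicQuantizedLinear modules
```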

Robust and secure AI is about test and evaluation. How do we build AI systems and machine learning models that are provably, or at least testably, robust to various considerations—security but also uncertainty—and have appropriately calibrated confidence levels? So robust and secure AI is really about test and evaluation and knowing that these systems are going to behave the way we expect them to behave.
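
One concrete test-and-evaluation check that speaks to calibrated confidence levels is expected calibration error, which compares a model’s stated confidence to its actual accuracy; the sketch below is a minimal, illustrative implementation with made-up evaluation data, not an SEI tool.

```python
# Minimal sketch (illustrative only): expected calibration error (ECE),
# the average gap between confidence and accuracy, weighted by bin size.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Made-up held-out results: predicted confidences and whether each was correct.
# A perfectly calibrated model would give an ECE near zero.
print(expected_calibration_error([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1]))
```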

The third pillar is human-centered AI. That’s all about how these systems interact with their human counterparts, in what we might call human-machine interaction or teaming. That is, how humans can better understand and interpret both the function and the outputs of these systems. Then there is a whole collection of policy and ethics considerations that we include in that pillar of human-centered AI.

For the Department of Defense, what kind of AI work does the Carnegie Mellon University Software Engineering Institute get involved with?  

We have established the new AI Division at the Software Engineering Institute in response to a demand signal that we were hearing from senior leaders across the Department [of Defense] on the need for an AI engineering discipline. To establish that discipline, it is important to work on some applications, to build AI capabilities. We have ongoing work, and we have work that we plan to do in important mission applications.  

In command and control, for example, we see a huge opportunity to increase situational awareness, to provide the right information to decision makers. There are great opportunities in sensing and sensor processing. Also, logistics is an area where AI could have a huge impact. And I see some emerging application domains, such as the increasing implications of climate change for defense and national security. I am not aware of an enormous amount of work in that area; I see a huge possibility for applying AI technologies for good, in understanding those types of concerns.

How will AI engineering address cybersecurity?  

The Software Engineering Institute has a long history of work and contributions in the space of secure and safe software systems, as well as cybersecurity and cybersecurity engineering writ large. We want to build on all that experience and legacy of great work, and bring that type of thinking, knowledge, and experience to the new challenges that AI presents from a security perspective. In that regard, it’s pretty well known that modern machine learning systems can be manipulated in multiple different ways.

So how can a machine learning system be manipulated?  

I really like the taxonomy offered by John Beieler, who is the director of Science and Technology at the Office of the Director of National Intelligence. He boils it down to three categories. First, modern machine learning systems, for the most part deep learning systems, can be manipulated to learn the wrong thing at training time. Second, they can be manipulated to do the wrong thing: at inference time or decision time, inputs can be modified so that a modern machine learning system, a deep neural network, makes the wrong prediction. Then the third category is that they can reveal the wrong thing. It’s possible to probe a deep neural network, a machine learning model, out there in deployment to extract the information or the data that was used to train that model.

Lots of detail is behind this, with many different paths to wander down in each of these categories of manipulations, but learn the wrong thing, do the wrong thing and reveal the wrong thing are the three big categories of how they can be manipulated.  
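
To make the “do the wrong thing” category concrete, here is a minimal sketch of the well-known fast gradient sign method, which perturbs an input at inference time to push a model toward a wrong prediction; the model and input are toy stand-ins, not any system discussed in this interview.

```python
# Minimal sketch of an inference-time manipulation: the fast gradient sign
# method nudges the input in the direction that increases the model's loss.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(20, 3)                 # toy stand-in classifier
x = torch.randn(1, 20, requires_grad=True)     # toy input
true_label = torch.tensor([0])

loss = F.cross_entropy(model(x), true_label)
loss.backward()                                # gradients w.r.t. the input

epsilon = 0.25                                 # perturbation budget
x_adv = x + epsilon * x.grad.sign()            # small, targeted nudge

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```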

How will AI engineering consider the ethical use of AI? 

Ethics is a core consideration in the pillar that we call human-centered AI. It’s well known that the Department of Defense has adopted and published the ethical AI principles it is working toward. We want our work in AI engineering to be responsive to those ethical AI principles, and other organizations in the intelligence community have similar principles.

From an AI engineering perspective, we are interested in thinking about ethics upfront in the design of AI systems, and then building in engineering mechanisms that can help to measure and monitor the ethical concerns system developers and users might have about how these systems are used. We see a great opportunity to make ethics a core consideration in the engineering discipline associated with AI.  

Estimates vary as to the percentage, but what is the primary reason that so many AI projects do not make it into production? Why do so many projects fail?  

I will point to a reference here, a great dataset that is operated by the Partnership on AI, called the AI Incident Database. Just recently, the Center for Security and Emerging Technology at Georgetown [University] did an analysis of the incidents in that database. They identified three core reasons why AI projects fail.

The first is specification. That means the system behavior that was built was not aligned with the actual intent. In other words, the requirements or the needs statement for that system did not capture what was intended to be built. It’s well known that specification is hard in modern machine learning systems and in AI systems generally. [Ed. Note: Learn more at the Roboflow blog, “Google Researchers Say Underspecification is Ruining Your Model Performance.”]

The second big area is robustness. This means the system either was not or could not be tested in a way that would guarantee its appropriate behavior once deployed. This is a known challenge in AI systems. Major investments are being made in industry and in the government on test and evaluation. It’s hard. It’s really hard to test for the right things at system development time, pre-deployment. Environments change in the wild as well.  

One concept that we’re working on in this area is what we call “beyond accuracy.” All too often, especially when it comes to machine learning, only model accuracy is evaluated, that is, how well the machine learning model performed a specific task. Let’s call it a classification task. But that may not have been the mission application or the operational application of that model. Many good examples of this are out there.
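
As a minimal illustration of why accuracy alone can mislead, the made-up labels below give a model 80 percent accuracy while it completely misses the rare class that the operational application may actually care about; the data and metric choices are only assumptions for the example.

```python
# Illustrative only: high accuracy can hide a model that never detects the rare class.
from sklearn.metrics import accuracy_score, classification_report

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # class "1" is the rare, mission-relevant class
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # a model that always predicts the majority class

print("accuracy:", accuracy_score(y_true, y_pred))             # 0.8 looks acceptable
print(classification_report(y_true, y_pred, zero_division=0))  # recall for class 1 is 0.0
```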

The third key area where it’s been shown that AI systems fail is in assurance. That means that the appropriate mechanisms to monitor the system in operation were not there. There were no mechanisms to detect when the system might degrade in performance, or when things in the environment have changed such that the system behavior is no longer what is intended. So to recap, the three primary reasons, according to both CSET at Georgetown and the AI Incident Database, are specification, robustness, and assurance.
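
As a sketch of the kind of assurance mechanism described here, the snippet below compares a feature’s production distribution against its training baseline with a two-sample Kolmogorov-Smirnov test and flags possible drift; the data is synthetic and the alerting threshold is an illustrative assumption.

```python
# Illustrative drift monitor: flag when a production feature distribution
# drifts away from the training baseline.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # training baseline
production_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)  # shifted environment

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:                       # illustrative alerting threshold
    print(f"possible drift detected (KS={stat:.3f}, p={p_value:.2e}); trigger a review")
else:
    print("no significant drift detected")
```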

Among the top technology trends for 2021, Gartner analysts included AI engineering, saying it will bring together various disciplines and offer a clearer path to value. Do you see it that way? And if so, what disciplines will be brought together in the AI Division at CMU’s Software Engineering Institute?

I do see it that way. I think it’s exactly right. We’ve learned a lot in all of these other engineering disciplines and their development over decades, or even centuries in some cases. So I think there is a lot to bring together to support AI engineering. The field that comes most readily to mind is, obviously, software engineering; I am part of the Software Engineering Institute, so we will draw on ideas and lessons and practices from software engineering. Systems engineering, taking a broader context, is also critically important. Computer science itself as well; there are theoretical-level considerations when thinking about how to make these systems robust and reliable and understandable. And there are great opportunities in other traditional engineering disciplines, such as civil or industrial engineering. We will draw on ideas and inspiration there to make sure we’re asking the right questions and providing the right tools to make these systems reliable.

Another field that comes to mind is what I would call human-centered design. There is a lot of work out there in industry and in design schools on how to design systems around the needs of humans, with humans in consideration as part of the system. That is critically important for AI engineering.  

Also, our overall approach to AI engineering is very much community-based. With that, we’re trying to be very open and take an inclusive view of where we might learn key insights or leading practices on how to build, test, and deploy these systems. We’re very open to being surprised by insights that come from unexpected fields or disciplines. So it’s exactly right to think about AI engineering drawing from a wide variety of different fields.

Earlier this year, CMU’s Software Engineering Institute announced a National AI Engineering Initiative aimed at growing the AI engineering discipline and encouraging partners to join who would conduct research and fund operations. How’s that going?

To be honest, it started maybe a little slower than we had hoped, but I’d say it’s going very well. We have several key sponsors of that initiative at this point, even more key stakeholders, people who are advising and guiding what we’re doing in that regard, and a growing set of formal partners that are part of our community-based approach to establishing an AI engineering discipline.

Also, we have observed much more volume in the conversation about the importance of AI engineering. When we started our push into AI engineering two-and-a-half years ago, we heard a lot more talk about pursuing AI capabilities as fast as possible. That dialogue is still going on, but there’s much more consideration for, “Wait a minute. How do we do this in a smart and responsible and reliable way?” And we’re really excited that we just recently had a proposal accepted for a AAAI [Association for the Advancement of Artificial Intelligence] Spring Symposium [March 2022] on the topic of AI engineering, aligned with our three pillars. So we’re really looking forward to bringing a much broader, somewhat academic-leaning community together in March of 2022.

Good for you. What do you see as the role of certificate programs such as those offered by Coursera in the AI education landscape? 

Maybe I’m answering a more general question, but I think these are incredibly valuable resources, not just the certificate programs, but all the available learning and training programs out there.   

I do find we have an opportunity to significantly increase what I would call AI literacy, including how to build AI systems and knowing what the right questions are to ask in going about building an AI system. That’s largely at the individual level, but we also see an organizational readiness component.  

Part of the activities at the AI Division of the Software Engineering Institute is work in digital transformation. What I mean by digital transformation is helping organizations and the individuals within those organizations better prepare to take on AI capabilities, to incorporate AI capabilities into their workflow, into their mission flow, and know how to do that—again I use these same words over and over—in a smart, reliable and responsible way. We also see a great opportunity for workforce development activities, augmenting what’s publicly available through Coursera or other offerings with executive education and professional education experiences. We also have a great partnership between the Software Engineering Institute and the academic units at CMU, like the College of Engineering and the School of Computer Science.

On a different topic, what suggestions do you have for middle and high school students who might be interested in computers and technology, possibly AI?  

We are seeing that curriculums in K-12 education are starting to take on programs such as computational thinking or computer science, maybe even an introduction to AI course. Students should take advantage of those. Also, one thing that is exciting in AI is that there are so many open innovation challenges in AI, such as through Kaggle [an online community of data scientists and machine learning practitioners operated by Google]. 

So middle and high school students could get out there and participate in these challenges and get their hands dirty trying to build some of these systems. I think this is a really great opportunity regardless of whether the challenge is won or not; it’s a great experience to try to build some of these things. Then, for students who want to go beyond K-12, many colleges are starting to offer their courses online and in some cases for free. So that’s a way for students who are really interested and have gone as far as they can on their own to start to dig into some details of computer science and AI and related ideas.

What is your favorite application of AI today?  

It’s a hard question; I see so many great applications of AI, and frankly, so many great opportunities to apply AI for good on many different problems and in many different domains. But one area that I’m particularly excited about is work that I’ve seen in humanitarian aid and disaster relief. 

We’ve done work in this space, but I’ll just talk about it generally. Based on inexpensive, commercially available data from commercial satellites, we have seen a huge opportunity in recent years to analyze the planet. That includes satellite imagery to understand wildfires, including fire line prediction or planning. Or it could be automated building damage assessment after natural disasters.

We see a great confluence of data availability, computational power and AI capabilities. In a very real way, these types of applications can have a huge impact on reducing costs, lowering risks and ultimately saving lives.   

Learn more at the Software Engineering Institute. 
