CMTC's Shifting Gears

We succeed because you do.

Season 8 Episode 1 - Artificial Intelligence in Manufacturing

Posted by Rachel Miller

Episode Show Notes

Episode 1 features Dr. Jim Davis, Vice Provost of IT at UCLA’s Office of Advanced Research Computing (OARC). Jim explains what artificial intelligence (AI) is, the process for how it works, and its value in the world of manufacturing. In addition, Jim offers recommendations for how SMMs can get started with AI as well as ensure long-term success with their AI projects. 

Jim Davis is Vice Provost of IT at UCLA’s Office of Advanced Research Computing (OARC) with broad responsibilities for data and technology solutions in support of the university’s research mission. Of particular relevance to AI in manufacturing, Jim’s past experience includes working for Amoco Chemicals with responsibilities for digital data acquisition, digital controllers, and PLCs in a process operation. He was involved in AI during the 1980s and 90s with the development and implementation of knowledge-based and machine-learning AI systems across a wide range of industrial projects. In the late 1990s through the mid-2010s, he served as CIO at Ohio State University and UCLA and was involved with scaling the internet, high-performance computing, cyberinfrastructure, implementing enterprise resource management systems, and addressing cybersecurity. Jim co-founded the Smart Manufacturing Leadership Coalition (SMLC) and spearheaded UCLA’s leadership role in forming today’s national Manufacturing USA Institute, called the Clean Energy Smart Manufacturing Innovation Institute (CESMII), sponsored by the Department of Energy. Jim recently co-chaired the development of the report “Towards Resilient Manufacturing Ecosystems Through AI,” which was sponsored by NSF/NIST to address recommendations for AI in the National Strategy for Advanced Manufacturing.


00:00:00 - Introductions

00:01:12 - Overview of Smart Manufacturing and AI

00:04:35 - The role of trust in Smart Manufacturing

00:05:39 - Definition of AI

00:12:39 - Process for how AI works

00:18:11 - Discussion about the use of AI besides monitoring and assessing in manufacturing

00:23:32 - Statistics regarding the value of AI use

00:28:28 - Custom and off-the-shelf AI solutions

00:31:28 - The importance of understanding the entire process prior to implementing AI

00:34:15 - Discussion about contextualization

00:39:22 - What an SMM can do to ensure the long-term success of an AI project

00:43:35 - Final recommendations for an SMM to get started with AI


Gregg Profozich [00:00:00] In the world of manufacturing, change is the only constant. How are small and medium-sized manufacturers (SMMs) to keep up with new technologies, regulations, and other important shifts, let alone leverage them to become leaders in their industries? Shifting Gears, a podcast from CMTC, highlights leaders in the modern world of manufacturing, from SMMs to consultants to industry experts. Each quarter we go deep into topics pertinent to both operating a manufacturing firm and the industry as a whole. Join us to hear about the manufacturing sector's latest trends, groundbreaking technologies, and expert insights to help SMMs in California set themselves apart in this exciting modern world of innovation and change. I'm Gregg Profozich, director of advanced manufacturing technologies at CMTC. I'd like to welcome you. In this episode, I'm joined by Dr. Jim Davis, Vice Provost of IT at UCLA's Office of Advanced Research Computing. Jim explains what artificial intelligence is, the process for how it works, and its value in the world of manufacturing. In addition, Jim offers recommendations for how SMMs can get started with AI, as well as ensure the long-term success of their AI projects.

Jim Davis [00:01:10] Gregg, it’s good to be back. I never get tired of talking with you and CMTC. I’ll try and keep up with you. You have the radio voice. I’ll try and do as best I can.

Gregg Profozich [00:01:19] You’re far too kind. Thank you. Artificial intelligence—what an interesting topic. I think at this point in our lives and with the advances in technology, we all likely have firsthand experience with AI to a certain degree. Between search engines we use to browse the Internet, to voice assistants like Alexa or Siri, to newsfeeds and ad content that we are presented, AI seems to touch many parts of our lives. In manufacturing, all of these that I just mentioned may apply, but there seems to be a much bigger opportunity to apply AI. According to Capgemini research, about 51% of European manufacturers are implementing AI solutions, with 30% in Japan and about 28% in the US. Manufacturing data in a way, I think, seems to be a natural fit for AI. As manufacturing equipment has become increasingly digitized, more and more data is available, often in a format that’s ready to feed into AI engines. With the industry generating so much digitized data, this seems to be a time that’s increasingly ripe for looking at opportunities to apply AI to solving manufacturing problems and improving performance. Let’s get into it. There are a lot of terms that manufacturers hear about advanced technologies or smart manufacturing: Industry 4.0, big data, machine learning, artificial intelligence. We discussed smart manufacturing at length last time you were here, back in Shifting Gears season two. Just as a refresher, can you provide an overview of what smart manufacturing is, what AI is, and how AI fits with smart manufacturing?

Jim Davis [00:02:45] Gregg, I saw that question, and I was trying to think how to begin, because I’ve been looking at AI—those two letters—for over 40 years at this point. I have to say that I’m pretty excited about this resurgent interest in AI. It’s actually always been one that is very appropriate for manufacturing. It ran into some infrastructure problems a long time ago, but those infrastructure kinds of capabilities are now in place. The ability to take it forward now is really… It’s really significant potential right now, but there are some barriers. I guess I’m just starting off with a broad introduction. But to go back to the immediate question, smart manufacturing was a term that we started coining back in 2005, 2006. Where it came about was a long phrase we were using that said smart… What we were interested in at the time was smart, predictive, preventive, proactive, zero incident, zero emissions manufacturing. We were carrying that term for a while, and we said, “That’s getting too long.” So, we shortened all the parts that are in the middle, and we just started calling it smart manufacturing. That’s how the term literally got coined back in 2005, 2006. But if you want to get into some definitions, probably the simplest one is: all we’re really trying to do is make manufacturing operational data as useful as possible. That’s as simple as you can get. The definition I liked the best from my smart manufacturing perspective is: what we want to do is have the right data at the right time at the right place in the right format in the hands of the right people throughout the enterprise with trust. This is so people and/or machines can make the right decisions, take the right actions to benefit manufacturing. It’s simply how can we take advantage of data in the best possible way.

Gregg Profozich [00:04:35] Jim, I think that’s a great definition. I love what I call the litany of rights: the right information in the hands of the right people at the right time to make the right decision, all those things you just said. You added something in there, though—with trust—which I find really interesting. What is the key to that? Is it about data integrity? Is it about cybersecurity? What’s the trust part of it?

Jim Davis [00:04:56] It’s all of the above, and we can get into it in more detail, but the notion of trust really has to do with can I trust the data and where I’m getting it from. It’s not necessarily going to come from my own factory or my own operations. The other part of it is: can I trust that I am sharing or exchanging to the benefit of what I’m doing or the benefit of the industry? Can that be shared with trust? Then, of course, there is the cybersecurity that is front of mind for everyone right now. But I will always be saying now that security and opportunity are two sides of the same coin. Again, we can dive into that. But trust is the key to taking that forward, and risk is in the middle of that, as well. Risk and trust.

Gregg Profozich [00:05:39] Thank you for that, Jim. It makes a lot of sense. Now I think we have a pretty good definition of what smart is. Let’s talk about a definition for AI. What, actually, does artificial intelligence mean, for someone who has heard the term but really hasn’t done much research?

Jim Davis [00:05:53] Gregg, if I can, just to take some of this down, call it brass tacks or down where the rubber meets the road, when you’re thinking about data in manufacturing operations, data by themselves actually don’t do anything. They’re just there. So, one has to ask a lot of questions about how do you turn that into economic value, how do you turn that into environmental sustainability value, how do you turn that into market value, and so forth. Let me just say up front so I can refer to it throughout our discussion—when you really look at it closely, there’s only three ways to monetize the data. One is to use it to increase productivity. Productivity means how can I produce my product or material with the lowest amount of resources. You can think about that as your product over resource costs. The second is precision, which is how do I really drive my product quality, how do I ensure it every time I make that product, and how do I ensure that I make the right product with the right precision the first time, not the second time and the third time, going forward. Then the third one is… We use the term performance. It’s really going after efficiency, but I don’t really like the term efficiency because people always think about thermodynamic or pump efficiencies and these sorts of things. What we’re really talking about is how can you drive the best performance out of any of the assets that are used in whatever you’re manufacturing. That gets at better, faster, cheaper. There are some really good ways to use the data to get all three of them going at the same time in better ways. I will refer to productivity, precision, and performance. All of them are the drivers to get to economic value, market value, and environmental impacts, which are on the table, as well. I think it’s important to keep those in mind as we go forward. What is the definition of AI? 
The first thing I’ll say is you don’t really want to spend a lot of time defining it, because you’ll go down a rabbit hole trying to define it. If you look at the literature, there is no legal definition of AI. It’s all over the place. It depends on this and that. There are many different forms of AI. We’re all familiar with them: natural language processing, pattern recognition, feature recognition, qualitative modeling; there’s statistics. We’re used to facial recognition kinds of systems. We now have ChatGPT on the table that is solving a lot of problems just as a result of AI. We have AI in art. You name it; there are lots of different things. It’s really not so productive to try and define it, but one of the things we did do is define it in the following way. We brought a workshop together and just asked, “What’s the best way to think about the definition of AI from a manufacturing perspective?” I’ll just read this to you for the benefit of people who are listening. A useful definition as a practical one is: AI is about software systems that can recognize, simulate, predict, plan, and optimize situations, operating conditions, material properties for human and/or machine action. The key words there, if you think about it, are recognize, predict, plan, optimize. These all have these elements of cognition. We don’t need to get into details about that, but it’s got this human behavior flavor to them. But if you just stick with this definition, you get past all the mathematics, all the methods, and so forth, and aim yourself at the purpose for which AI is really useful from a monetization or economic value standpoint in manufacturing. I hope that’s useful, and I can reference that as we go along, as well.

Gregg Profozich [00:09:32] I think that makes great sense. Down to the brass tacks, as you said before: a software system can recognize patterns, put things in context, and recognize the information in the data; it can make predictions, as in because this has happened, this is likely to happen next with a probability; it can plan based on that, as in because of that, I know I can put some rules in place that take it to the next level; and then it can optimize across the whole thing. They seem like they’re layers or stairs. You’re ascending the staircase there. First is the recognition, then the prediction, then the planning, then the optimizing. Is there a structure that’s intentional in the way you structured that definition?

Jim Davis [00:10:09] Absolutely. You also hear me talk about it as we get into some of the questions I would expect you’re going to ask. Those also actually begin to define a maturity level that you will gain and build as you start building AI out. You want to start simple and then move through these to get to the overall way of taking action. That actually leads into one other point, which is the term automation is on the table all the time, and the term autonomy is on the table all the time. From my perspective, those are really just maturity levels with the use of data, and your trust, and the use of that data, and what you’re trying to do. If you think about recognize, predict, plan, and optimize, if you can get all this put together with a great deal of trust and confidence, you now have the maturity and the confidence to move into automation. If you can carry those even further and say, “I know what I’m doing with this so well that I can just have this machine make those decisions,” now you move into autonomy in ways that become useful. But you pretty much have to go through these different stages or steps of maturity with the data and what you’re trying to do to get to those kinds of more robust activities which still all involve AI as we’re defining it here.

Gregg Profozich [00:11:25] I know it’s not a direct manufacturing example, but as you’re talking, my brain is going to the idea of a self-driving car. A self-driving car first has to be able to recognize its environment and figure out what the key variables moving around are. The bird flying 20 feet above the road doesn’t matter; the dog that ran out in front of the car does. Recognition. Then predicting that, at my speed, the dog will be at the same point where I’m going to be. Now the car needs to do something. Then plan, which is I need to apply the brakes, apply the horn, whatever that is to correct for the situation. Then the optimizing piece can come later. As you’re talking about that maturity progression thing, automation and then autonomy, an autonomous driving car is the full fruition in a way of that model. Am I drawing the right conclusion?

Jim Davis [00:12:10] Yep, that’s right, except it looks like I left one out, and you brought attention to it. You have to recognize something, but then you need to assess it. You need to assess what you’re looking at or dealing with first, and then you can move into planning about it. The planning then takes into account how do I take an action. I predict, and then I plan how to take an action. There are really four major characteristics, or call them AI functions, that we’re trying to achieve as a set to move forward through that entire maturity cycle, if you will.

Gregg Profozich [00:12:39] Now we have a definition, a working manufacturing definition for AI, and understanding of that maturity progression. We don’t want to do our first implementation in planning; we want to do it in recognition. We want to walk that staircase in line if we’re a small manufacturer wanting to go forward. Let’s talk a little bit about it in as simple terms as we can. Describe just the basics of how AI works. Without getting into the nitty-gritty details, the engines are taking data. What’s the process for how AI works?

Jim Davis [00:13:06] Gregg, just to pick up on this discussion, I know we’re talking in just definitions, but I think it’s going to be helpful to have some examples and walk through them in that particular order of maturity, or you can call them functions, that we want to achieve with artificial intelligence. Oh, by the way, we really should put the term machine learning on the table, as well. From my perspective, machine learning is just a subset of AI that takes into account a whole variety of ways of building algorithms or putting together algorithms that use prior data to accurately identify the current state and then predict the future state based on the availability of that data. Machine learning to me, it’s really important. It’s a way of doing things. It’s a really important way of bringing the data into the picture. But it’s a subset of AI and the overall intention of those particular functions. Let’s get into some examples and pull them apart. What we’re seeing across the board in manufacturing is that most of the applications now are in what I call asset management. This tends to start with maintenance. The intent is to move as quickly into predictive maintenance as possible. We’re seeing a lot of attention, a lot of applications in this area. It’s pretty obvious. Just take something simple like a pump. If you can collect data about that pump—think about temperatures, think about power, think about flows, speed—but you put it in the context of the service that pump is being used for, and you collect that data, and you begin to see and get a picture of how that pump should be operating, we call that normal operation. If you now have that picture of what normal operation looks like, you can start pulling those variables in a continuous manner.
When you start seeing excursions, you can start asking, “What’s happening to my pump?” That puts you into this, “Well, it’s probably time to maintain something or change something.” If you could take that same data… Now you’re just monitoring. Now if you begin to see when you have done maintenance, what it looks like when it’s time to do the maintenance, or what maintenance needs to be done under what circumstances or combination of those variables, you can actually move pretty naturally into the predictive side of this going forward. That’s how they move together in succession. It’s almost easiest to start with monitoring. Then you have to have more data; you have to have more variability of that data; you have to have performed some maintenance and have the data around that to then move into the predictive state. You get into more and more valued uses of data that is growing as this goes along. But I think that’s a pretty good example, just to keep that in mind. Lots of people use machine tools out there. Lots of small companies use machine tools. A lot of large companies use machine tools. They can see the same things. You put speeds, you put feeds, you put what’s a service, what kind of cut, how much pressure, how much lubricant. There’s many different kinds of variables that are in play. Think about milling, and drilling, and stamping. You can play out the exact same approach that we just described for the pump. You start collecting that data, and you can begin to see how your machine’s running for your particular service, for the particular part that you are making. You can start building out in this direction of assessing, thinking about maintenance, then starting to look for excursions, and then thinking about how those excursions then tie into predictions of the maintenance going forward. We’re seeing a lot of that kind of activity.
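[Editor’s note: The “normal operation” baseline and excursion monitoring Jim describes can be sketched in a few lines of Python. Everything here—the sensor readings, the choice of a single temperature variable, and the three-sigma threshold—is an illustrative assumption for this sketch, not something specified in the episode.]

```python
# Sketch: learn a baseline from healthy pump readings, then flag excursions.
# Real systems would use many variables and richer models; this shows the idea.
from statistics import mean, stdev

def learn_baseline(readings):
    """readings: floats collected during normal operation (e.g., motor temperature)."""
    return mean(readings), stdev(readings)

def is_excursion(value, baseline, n_sigmas=3.0):
    """Flag a reading more than n_sigmas standard deviations from the baseline."""
    mu, sigma = baseline
    return abs(value - mu) > n_sigmas * sigma

# Usage: baseline from a stretch of healthy temperatures, then live checks.
normal_temps = [71.8, 72.1, 71.9, 72.4, 72.0, 71.7, 72.2, 72.3]
baseline = learn_baseline(normal_temps)
print(is_excursion(72.1, baseline))  # within normal operation -> False
print(is_excursion(75.0, baseline))  # well outside the baseline -> True
```

Moving from this monitoring step to prediction, in Jim’s framing, means also collecting what the variables looked like before past maintenance events and learning from those patterns.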

Gregg Profozich [00:16:44] That’s an example in asset maintenance. Are there other examples in manufacturing?

Jim Davis [00:16:50] The other thing that we hear a lot about—but it’s actually useful to put in this context of monitoring and assessment—there’s a lot of what I call rich sensing. If you take something like a pump or you take something like a machine tool, you run into chattering on a machine tool. There’s a lot of interest now in things like vibrational sensors. Now you’ve got a whole other kind of sensor that collects very rich information. You begin to get a picture of when this chattering occurs, or for a pump, when the vibration occurs with different services. That actually adds to the richness of the maintenance and assessment sensing that feeds predictive maintenance. There’s some really interesting work out there that’s quite useful in this sense. You can think about now moving into acoustic sensors. You can do this with the sound of the machines or the sound of the pump. Obviously, there’s the machine vision side, really just using cameras and then looking for distortions, looking for changes. This becomes very useful in some of the product quality kinds of applications that we can talk about in just a minute. But I just want to put rich sensing on the table as a part that builds onto some of the normal sensors or usual sensors if you think about legacy kinds of systems, legacy kinds of machine tools.
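[Editor’s note: One common way to screen vibration data for chatter is to measure signal power near a suspected chatter frequency. The sketch below uses the Goertzel algorithm, which does this with only the standard library. The 120 Hz chatter frequency, 1 kHz sample rate, and power threshold are all illustrative assumptions, not values from the episode.]

```python
# Sketch: flag machine-tool chatter by checking vibration power at one frequency.
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Signal power near target_hz in one block of vibration samples (Goertzel)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)      # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2  # |X[k]|^2

def chatter_detected(samples, sample_rate, chatter_hz=120.0, threshold=1000.0):
    return goertzel_power(samples, sample_rate, chatter_hz) > threshold

# Usage: a synthetic block with a strong 120 Hz component gets flagged.
rate, n = 1000, 500
quiet = [0.01 * math.sin(2 * math.pi * 40 * t / rate) for t in range(n)]
noisy = [math.sin(2 * math.pi * 120 * t / rate) for t in range(n)]
print(chatter_detected(quiet, rate))  # -> False
print(chatter_detected(noisy, rate))  # -> True
```

In practice the threshold and target frequency would be learned from runs where chatter was observed, which is exactly the monitoring-to-prediction progression described above.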

Gregg Profozich [00:18:11] Those are both in the monitoring and assessing space: asset management and the rich sensing that you can apply to asset management for not just the standard levels of asset monitoring—is the pump turning, is it turning at speed, et cetera—but some of the other characteristics that can go with that. What are some other examples in manufacturing of uses of AI besides the monitoring and assessing, or is it limited there right now?

Jim Davis [00:18:33] It’s hardly limited; that’s just where most of the attention is right now. I’ll come back to this point. But one of the reasons is just the sheer access to data. We’ll talk about that point in just a moment. But you can start notching things up. You can get into operations, and you can start thinking about prevention. One of the applications we worked on within CESMII, which stands for the Clean Energy Smart Manufacturing Innovation Institute… I’ll reference, actually, a number of projects with steel. The front end of the steel process is the casting process in which you mix the ingredients at very high temperatures, and then you basically run them through a casting process, which then casts this mixture of hot molten metal into a hot slab. Well, there’s a lot of things that can go wrong in that process. If it’s not mixed properly, you’ll get product that is not going to meet qualifications. You can plug the caster. All sorts of things can happen. One of the things that we spent quite a bit of time on with one of the steel manufacturers is collecting quite a bit of data. There’s a lot of different kinds of data that are used on the mixing process, all the same kinds of variables that you would think about to really start monitoring caster health. We call it caster health monitoring. But the idea here is to prevent these defects in the steel slab at the front end of the process and prevent these process operational problems with the casting process itself. This included building some new sensors that could operate in high-temperature environments that could add to this rich sensing mix. But the real point here is you can get into basically monitoring and assessment of operations for this function of prevention and uptime around operations. Another good example that I think gets at the richness—these get more expensive and more costly to implement—we’ve worked with the use of furnaces.
When they’re industrial scale, they are larger scale in terms of their geometrical dimensions. What we’ve been able to do is outfit those furnaces with infrared cameras around the furnace. We can use the rich imaging of infrared to actually understand the spatial temperature throughout the geographical space of that furnace, both vertically and then spatially, at every horizontal location so that we can look at how even the temperature is in the furnace. Of course, that’s a problem when you get into larger and larger scale problems. The rich sensing comes into play here. But what we were able to then do was map that spatial temperature distribution to a whole set of burners. There’s multiple burners on that furnace. We mapped them into the burner settings so that you could understand quite easily how to now change those burners to level out the temperature distribution throughout that furnace, which then allows you to run the overall temperature hotter because you are taking care of unevenness in this. But there was an example of rich sensing, image sensing, as well as then mapping into a control capability with the AI.

Gregg Profozich [00:21:49] It sounds like there’s a lot of things going on in the monitoring and assessing space, for sure. What are some other uses for AI within manufacturing if I’m a small, midsize manufacturer?

Jim Davis [00:21:59] Keep going up. I call it the KPI area. Now everyone wants to drive higher-level key performance indicators across their operation. The simplest ones are uptime of the operation or downtime of any critical units in that operation. The maintenance and the monitoring assessment usually begin there, but you could begin to think now about higher-level KPIs. We were working with a food industry application in which it’s very energy intensive. What they wanted to do was actually monitor the energy of each of their individual steps in an overall sequence of processes and start looking at and understanding how to optimize the energy use of each of these individual steps against their production and the products that they’re trying to make. Now you’re bringing in data of two different types. We’ve seen applications with very small companies, just tying in very simple sensor systems—for example, a weight sensor on some kind of feed or raw material that goes into their particular operation—and then tying that to their production data and begin looking at the feed of their particular raw material or the use of it against different products that they’re trying to make. Again, you can start bringing those kinds of data together into these higher-level KPIs. But those are some examples. It depends on what you’re producing, how many products, these sorts of things. But the data become very useful in these kinds of settings.
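[Editor’s note: The energy KPI Jim describes—tying each step’s energy meter to its production count—reduces to a simple energy-per-unit calculation. The step names and numbers below are made up purely for illustration.]

```python
# Sketch: rank process steps by energy intensity (kWh per unit produced).

def energy_per_unit(step_data):
    """step_data: {step: (kwh_consumed, units_produced)} -> {step: kWh per unit}."""
    return {step: kwh / units for step, (kwh, units) in step_data.items()}

def worst_step(step_data):
    """The step with the highest energy intensity: the first place to look."""
    kpis = energy_per_unit(step_data)
    return max(kpis, key=kpis.get)

# Usage: three steps in a hypothetical food-processing line.
line = {
    "blanching": (1200.0, 4000),
    "drying": (5200.0, 4000),
    "packing": (300.0, 4000),
}
print(energy_per_unit(line)["drying"])  # 5200 / 4000 = 1.3 kWh per unit
print(worst_step(line))                 # drying
```

The same pattern extends to the other KPIs mentioned—raw material use per product, downtime per critical unit—by swapping the numerator and denominator.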

Gregg Profozich [00:23:32] We’re talking about use cases. We really haven’t talked too much about benefits. Typically—I’m going to use the asset monitoring piece—can we expect what percent reduction, or what kind of a reduction, in downtime or unplanned downtime or those kinds of things? Do you have any statistics on some of the value from that perspective?

Jim Davis [00:23:51] We’ve done quite a bit. The first point to raise is that the monetization to get to these economic kinds of numbers depends. It depends on the industry. I was mentioning the example of the food industry. It’s energy intensive. They were able to monetize the use of that data by reducing energy. A discrete part manufacturer may be monetizing it with uptime of the operation or to increase productivity and production. It just depends. But if we put these all together, we’re seeing consistently across industries that the starting point economic value is right around 15% and goes up from there. In different industries, we see 15% to 20% economic value. Sometimes we’ll see 15% to 30% and even 40%, depending upon how these things have been implemented. That’s within the factory. That’s within a given operation going forward. We tend to conservatively use 15% within the factory operation. But what we also can go do is talk to OEMs in which they’re looking at their supply chains. When we start thinking about productivity, performance, and precision at an enterprise level, meaning beyond the factory and raw material to in-product throughout a supply chain, we have been quoted from a number of different industries to look at about 12% to 15% additional economic benefit just from the supply chain effects. One of the things I’ll make a point about here is that these economic benefits are huge, and they certainly accrue at the factory level, but there’s a huge benefit by thinking more broadly and getting beyond just the factory itself. That, I think, becomes very important for small and medium companies, which make up the huge base of US industries’ manufacturing supply chains for all practical purposes.

Gregg Profozich [00:25:51] Absolutely. When you say that 12% to 15% for supply chain effects, what might be an example of a supply chain effect where there’s an improvement based on using AI within the plant?

Jim Davis [00:26:00] Well, let me give you a very specific example. Let me not generalize. We were working with a glass manufacturer that used a particular raw material in manufacturing high precision glass. The manufacturer of the glass needed to understand the product quality of that raw material; one of the important parameters was particle size distribution. If they had an understanding of the range of the precision of that raw material, they could set up their control systems based on the quality of the material coming from the supplier and get that set up faster. That actually allowed them to make a higher precision product, a better product, in a shorter time. That’s an example of two companies coming together and getting a productive benefit. The information about what they need for the precision product was passed back to the supplier. They were driving the precision of the material, and the two are coming together. The end result was an overall higher precision, better spec’d product coming out of the manufacturer. If you think about that along an entire supply chain, those effects do add up, and that’s where you get to the 12% to 15%.

Gregg Profozich [00:27:11] I think that makes it clear. It’s really that collaboration of understanding. If I can monitor and optimize my internal production processes, I understand my inputs well enough to know that if I can have clarity on what the inputs are from one batch to another of raw material, I can make adjustments and optimize productivity, performance, quality. But I have to have that tie-in and know that information. I can’t be entering it at the moment when the first product is being run. I have to have it to be able to set it up and make the appropriate changeover so that I’m ready for the quality of the incoming material.

Jim Davis [00:27:43] Absolutely right. Gregg, this actually ties us back to the opening discussion. This is where the word trust comes in in a very specific way, and it comes in a very specific business way. The supplier needs to trust the glass manufacturer on what it’s buying from us, and they’re going to work with us together. The glass manufacturer needs to trust that data coming from the supplier. How do you exchange that information with trust and confidence in a way that that business relationship can come together with trust? We’re obviously getting into intellectual property, trade secrets, all sorts of things that become risks, perceived or real, that do need to be overcome. We can come back to that, but this is where trust comes in.

Gregg Profozich [00:28:28] Fair enough. Let’s switch gears just a little bit, talk in terms of implementations. We’ve discussed a number of specific use cases and benefits. If I’m an SMM, are there commercial off-the-shelf solutions that I can employ, or is AI only available as a custom solution for my problem? What does that landscape look like? How do I dip my toe in the water, or how do I jump into the pool?

Jim Davis [00:28:50] This is the place we perhaps should spend a few minutes. The landscape for off-the-shelf is in the category of the mathematical tools, the algorithm-building tools. Yes, there are many of those that are off-the-shelf, and that’s one of the great things about today’s world of AI. The tool sets and the computational underpinnings of those are tremendously better than they were in the past. But what I want to urge is to be very careful. While you have access to those kinds of tools, an AI solution is not an off-the-shelf solution, because, as we’ve been describing all along when we talk about recognize, and assess, and plan, and so forth, number one, you have to deal with this in terms of your objectives and the service into which that AI solution is going to go. That’s going to require that the data you’re going to use in those kinds of algorithms be assessed. You have to know what data you’re working with. You have to apply your own domain and service expertise to that data. That’s not going to be done with just a piece of software out there. The whole notion of deciding on what data to use, deciding if I have the right data, and so forth becomes really quite important. Then I don’t want to lose the point about the expertise, because I can generalize… AI, to me, is a combination of the data with expertise applied to it. It’s that combination with the expertise that actually sets up the solution in the best possible way. This is why I say—we were talking about it earlier—an AI solution is really a journey. It’s a journey with these algorithms to understand the data, understand what you’re trying to do, understand your operations, and then apply your expertise. Once you start doing that, you begin to see things about what you’re doing, and you build more insights and expertise, and then you take this forward. 
My advice is if anyone comes out there and says, “Just give me your data, and I’ll go build you a solution and bring it back to you,” don’t believe it. If someone comes in and says, “I can work with you. I want to work with your expertise and your data. We have this experience with these kinds of applications,” that’s the kind of thing that’s going to be very important. Now, there are several aspects of the data and so forth that we can get into, but that general principle, I think, is a really important one to appreciate. 

Gregg Profozich [00:31:28] There are engines out there; there are algorithms; there are solvers. Pour the data into the black box on this side, and you get the right answer out the other side. Those exist, in a sense. Knowing what data to apply to what engine is a level of expertise. I want to delve in a little bit more on that expertise word you’ve been using, because, if I’m correct, knowing what data to apply to what engine is an expertise. If I’m an SMM, I would probably have to hire that. But then understanding what data I can get, how reliable it is, and how I would use the outputs of that is another side of expertise. I have to know my manufacturing process in and out. I have to know what my equipment is like or my processes are like, and how they work, and how they’re integrated with other systems and platforms to understand that we’re not going to get spurious information coming out because we put data in that has things in it that don’t belong or that are inconsistent. That’s one important part of it. Right?

Jim Davis [00:32:24] That’s right. Just to take an example, if I go back to the pump, it’s a pretty straightforward device. There’s lots of pumps out there. But you have the pump for your service. You have to be very careful with the objective. You can’t just take all pump data and throw it into one of these algorithms and expect to get to predictive maintenance or even an assessment. What you end up having to do is you put this data in, and you begin to understand what normal operation is. You begin to understand in more precise terms what’s the range of normal operation for different products, for your production, for your service, and so forth. You get that defined. You understand how noisy your operations are, and you begin to understand the noise levels so that you understand what’s noise and what’s real. You have to get through that understanding of your data, which I call the contextualization of the data, to actually build a robust application just to assess a pump. Now, you can do it pretty quickly. The tools allow you to move through these very fast now, but you have to understand that you have to go through that. Then let’s say you want to do diagnostics or you want to get something more about what to do with the pump. I’m seeing some excursions. Now you need to have data that has variability in it that reflects those excursions. You now have had to experience those excursions, and you had to collect the data, and your data is larger. If you put all normal data in with just a little variability, you overwhelm the variability. So, you have to select the data a little more carefully to build towards a different objective. It’s this care, and feeding, and use of contextual data that’s really very critical and needs to work well.
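The pump example Jim walks through (establish what normal operation looks like, learn the noise level, then flag real excursions) can be sketched in a few lines of Python. The vibration readings, tag, and three-sigma threshold below are purely illustrative assumptions, not anything from the episode.

```python
import statistics

def baseline(normal_readings):
    """Characterize normal operation: the mean level and the noise band."""
    mean = statistics.fmean(normal_readings)
    stdev = statistics.stdev(normal_readings)
    return mean, stdev

def flag_excursions(readings, mean, stdev, k=3.0):
    """Return indices of readings outside the noise band (k standard deviations)."""
    return [i for i, x in enumerate(readings) if abs(x - mean) > k * stdev]

# Hypothetical vibration data (arbitrary units) for one pump in one service.
normal = [0.42, 0.40, 0.43, 0.41, 0.44, 0.42, 0.43, 0.41]
mean, stdev = baseline(normal)

new = [0.42, 0.43, 0.55, 0.41]  # the third reading is well outside the band
print(flag_excursions(new, mean, stdev))  # → [2]
```

As Jim notes, a real application would repeat this per product and per service, since one pump's normal range in one service is not another's, and diagnosing *why* an excursion happened would require data that actually contains those excursions.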

Gregg Profozich [00:34:15] We haven’t talked much about contextualization, but I suspect that’s one of the more important things. I have to put the data that’s coming out of the sensors in context. I have to understand what they’re really giving me, and then I have to understand what they really mean. The context is a foundational thing. No?

Jim Davis [00:34:32] That’s ultimately what I’m describing. It goes back to some of these definitions. I’ll pick on things a little bit. Take terms like a data lake; I’ll just pick on that one. That’s meaningless to me, because if you just have a data lake, it’s just data; it’s not going to be able to do anything. What you actually have to do is understand that… Obviously, you need to understand the tags and the units, but you also need to understand the service and the contextualization of the data. That’s the heavy lift, and that becomes very critical. One of the things the industry does is do that contextualization over and over again for a similar operation. There are many pumps out there, and that contextualization process gets done over and over again. Where it’s really valuable, there’s no reason to redo it all the time. It’s actually better to group it or categorize it by different kinds of services. One of the things that we’re working on pretty hard within CESMII—again, the Smart Manufacturing Institute—is how to work across equipment builders and across manufacturers to capture some of these kinds of configurations around the contextualization of the data where they’re used all the time. You can see where this is going to happen with machine tools, with pumps, very common operations like this. That will heavily streamline the contextualization going forward. It doesn’t eliminate it, but it will streamline it for people. I think that’s very important because there’s so much time and effort going into the contextualization.
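As a rough illustration of what contextualization adds on top of a raw data lake, the sketch below attaches a reusable "profile" of measurement names and units to bare tag/value pairs. The tag names, units, and profile structure are invented for this example; CESMII's actual Smart Manufacturing Profiles are considerably richer.

```python
# Hypothetical profile: reusable metadata that gives bare tags meaning.
PUMP_PROFILE = {
    "TI-101": {"measurement": "bearing temperature", "unit": "degC"},
    "PI-102": {"measurement": "discharge pressure", "unit": "bar"},
}

def contextualize(raw, profile):
    """Attach measurement name and unit to bare (tag, value) pairs."""
    out = []
    for tag, value in raw:
        meta = profile.get(tag)
        if meta is None:
            continue  # unknown tag: unusable until its context is defined
        out.append({"tag": tag, "value": value, **meta})
    return out

raw = [("TI-101", 71.5), ("PI-102", 4.2), ("XX-999", 0.0)]
for record in contextualize(raw, PUMP_PROFILE):
    print(record)
```

The point of Jim's argument is that the profile, once built for a common piece of equipment like a pump, can be shared and reused rather than reconstructed at every site.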

Gregg Profozich [00:36:11] No commercial off-the-shelf solutions. There are commercial off-the-shelf algorithms and components of an integrated solution, but it sounds like it’s going to take some consulting expertise or some programming expertise to be able to help assemble it. It sounds like a partnership, too, from what you were describing. I need somebody with the expertise to understand what tools to use from an AI infrastructure/backbone/technology/software perspective, but I also have to have my expertise and my people’s expertise if I’m an SMM to bring forth the right information, the right contextualization, the right process, inputs, parameters, KPIs, etcetera to make sure that we’re putting in the right data to get the right answer out or to get an actionable answer out. Correct?

Jim Davis [00:36:55] That’s correct. But we need to dig into a couple of things that you just said. The first is if you’re going to contextualize data, you have to be able to access it; you need to be able to collect it; you need to be able to ingest it; and you need to store it someplace. There’s a whole IT space underneath all of this. When you think about that, especially for small companies, they will not tend to have that infrastructure, or they may have parts of it and so forth. One of the numbers that we see over and over again is that probably 60% to 70% of the cost is just dealing with the IT to get the data in someplace where you can start working with it. From our institute standpoint, one of the things we think is important to do is to lower those IT costs. Now, obviously, the cloud brings a lot of capability that we can access. If a small company is on the network, there is an ability to use a cloud capability to do this. But the cloud capability can come up short very quickly. What’s the instrumentation? What’s the piece of equipment? All these different pieces of equipment have different protocols built in. You have to speak the language of that piece of equipment, then collect the data and put it into a common digital format. In the collection part of this, there’s a whole set of standards and protocols in play, and you have to deal with those. Once those are in play, you can ingest the data in a standard way. But if you go to different vendors, the data gets ingested in different ways with different protocols. It’s much better if we can ingest with an agreed-upon protocol. 
When you go out the gate right now, you’re going to run into all of these barriers. But there is a lot of work going on there, and there is existing capability that makes it that much easier at this point to use a cloud product that will manage all of the standards, get the first levels of contextualization in place, and give you some templates—we call them profiles within the institute—so you can select these and do all of this much more easily. But out the gate, you’re going to run into this.
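The ingestion problem Jim describes, where every vendor's equipment speaks its own protocol, is often handled with small adapters that translate each native payload into one agreed-upon record format. The vendor names and payload fields below are invented for illustration; real integrations sit on top of industrial protocols rather than plain dictionaries.

```python
# Each adapter translates one vendor's native payload into a common record:
# {"tag": ..., "value": ..., "timestamp": ...}

def from_vendor_a(payload):
    # Hypothetical vendor A field names.
    return {"tag": payload["tagName"], "value": payload["val"],
            "timestamp": payload["ts"]}

def from_vendor_b(payload):
    # Hypothetical vendor B field names.
    return {"tag": payload["id"], "value": payload["reading"],
            "timestamp": payload["time"]}

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def ingest(source, payload):
    """Ingest any source's data into one common digital format."""
    return ADAPTERS[source](payload)

a = ingest("vendor_a", {"tagName": "TI-101", "val": 71.5,
                        "ts": "2024-01-01T00:00:00Z"})
b = ingest("vendor_b", {"id": "TI-101", "reading": 71.6,
                        "time": "2024-01-01T00:00:10Z"})
print(a["tag"] == b["tag"])  # → True: the two records are now comparable
```

Once everything lands in the common format, the contextualization and AI layers only have to be built once, which is exactly the cost reduction Jim is arguing for.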

Gregg Profozich [00:39:22] Let’s jump a little bit further into that. We’re talking about it. We’re dancing around the edge. What are some of the pitfalls or potential pitfalls? What can an SMM do to ensure the long-term success of an AI project?

Jim Davis [00:39:38] Well, the first thing is to appreciate all of these pitfalls. The way that manufacturing is set up right now, it is basically designed to compartmentalize data. Ultimately, what we’re doing is trying to free up data. There’s a lot of work out there now putting together ways to do this, but just remember that the industry is set up to compartmentalize it, the cloud is set up to compartmentalize it, and those are in the process of being changed. You really need to understand that and go look for the kinds of consultants who will help you through it. I believe CESMII has capabilities. I’m biased, but I will pitch that. In the MEP program, I know CMTC has those capabilities and can speak to this. There are other groups and coalitions that can speak to this going forward. Just be aware of these aspects. I’ll give you a little bit more data. If you have two different systems and you haven’t addressed this, and you decide after the fact to interconnect that data—we run into this with larger manufacturers that may have multiple products on their factory floor, maybe an inventory system, and an operating system, and whatever—we find that if you now want to interconnect those systems because there’s a reason to build a KPI—you build an AI model and the KPI—it takes 250 person-hours to build that connection across those two products securely. You can start adding the costs up really fast. This is literally one of the barriers to really escalating these interconnections that we’re talking about going forward, especially if you start thinking about a factory floor, or supply chain, or even inter-company kinds of interconnections, let alone the business side of it. This work on how to collect, ingest, and contextualize data, especially off of common applications, is critical to removing some of the barriers to taking this forward. Then you’re set up with the data. Now you’re set up for the AI applications that we were talking about previously.

Gregg Profozich [00:41:58] Are those difficulties, those differences in protocol the result of just the way the technologies grew up, or a result of conscious decisions for proprietary approaches in software or hardware, or a combination of both?

Jim Davis [00:42:14] Well, it’s really a combination of both. I’m not being negative. It’s just the way the industry is. If you go back 40 years to the early digitization, just moving into digital data, at that particular time what we did was put data in and embed it into function. All the software programs have the data embedded, so you can’t get at it very easily. By nature, everything was compartmentalized. If you think about the Purdue model, which has served the industry very well, it’s all about how you organize data at the sensor level, then the control level, and then these higher management levels all the way up to the financial level. That’s been very useful for keeping all of that separated. But what we’re now talking about is opening the floodgates of that data, going through those different layers, and keeping the contextualization consistent. We have to change how we’re doing things. It takes some time to do that. I’m raising this because it’s important to be aware. It’s also important to realize that there are tools out there to move and get started. You don’t wait for these kinds of things, especially given the fact that this is a journey.

Gregg Profozich [00:43:35] Absolutely. We’re wrapping up our time together here. Any final recommendations you would have for how an SMM could get started with AI? Start small, start simple, start at a particular place?

Jim Davis [00:43:42] Throughout this podcast, I’ve emphasized the importance of accessing and understanding the data that will be needed in any AI and machine learning application. The question is available resources. First and foremost, I’m encouraging small and medium manufacturers not to do it themselves but to reach out and understand how to do this and what to think about. You’ll quickly learn there are questions about being networked, wireless and wired, how to think about data for different applications, and what your IT and data infrastructures need to look like or be able to do. There’s security. There are the kinds of sensors and the instrumentation. Then, what I’ve been stressing throughout this podcast most importantly, it’s how to connect, ingest, and contextualize the data. The resources that stand out for me start with CESMII, the Clean Energy Smart Manufacturing Innovation Institute, which I oversee. I know that one the best. We have innovation centers around the country, with one based here at UCLA, and there are several satellites here in California. We also run a small and medium manufacturer affinity group in which small and medium manufacturers can discuss and learn together. CESMII and a number of the Manufacturing Extension Partnership programs have also formed partnerships. Obviously, CMTC, which is sponsoring this podcast, is a key partner on smart manufacturing and AI in this area. The last resource that I want to mention is the Industrial Assessment Center network. These are sponsored by the Department of Energy to do energy assessments. We have similarly partnered with several IACs around the country. In this particular area, UC Irvine is the one that’s plugged into smart manufacturing and AI, but these exist around the country. The Department of Energy is further investing in these specifically for smart manufacturing.

Gregg Profozich [00:45:34] Jim, as we’re thinking through recommendations and listening to what you’re saying about recommendations for small and mid-sized manufacturers, what I hear you saying is number one, start small; number two, get help. Join a group to understand the state of the art and what’s going on out there, and learn from other people and their mistakes. The third one is figure out how you’re going to get the data. How are you going to collect it? How are you going to ingest it into a system? How are you going to contextualize it so it’s useful and usable? The last thing was training—training your people, workforce development, that aspect. Are there other elements in your recommendations list?

Jim Davis [00:46:04] If you pull some of that together, basically, the message is that do-it-yourself is very hard to do. There are many resources out there. It’s to your advantage to take advantage of those resources. If you do it yourself, you can end up with an infrastructure that stands all by itself. That’s the one thing that you want to avoid. The main message here is be careful doing it yourself; take advantage of what’s out there. Then that actually builds into a larger question: why should a small company care? Why do you care about this? I’m sure everyone is seeing this, but the way of the world is towards this digitalization; it’s towards this use of data. It’s changing how manufacturing is… It’s changing the expectations for manufacturing. It’s changing expectations for supply chains. If you’re not starting this journey, you will eventually be caught in a place where it’s going to affect your market share, your product, something. You have to begin getting used to becoming adept with using the network and the data. There’s a lot of pressure now on sustainability, environmental sustainability. Those strategies tend to be industry-wide. There is data about those going forward. I know it becomes sensitive, but it’s going to be the way of the world from a sustainability standpoint. Manufacturing is changing more and more towards electrification. It’s changing more and more towards alternative energy sources. That’s also playing into the sustainability. That’s changing the complexity. You need to use the data now to manage that better. Gregg, you already talked about the opportunities. We talked about those before. I wanted to end on another note. The base of manufacturing in the US is the small and medium companies. We talk about large companies and AI all the time. That gap is getting large right now. That’s a big mistake for both the small and medium companies and the large companies. We need to keep that gap closed. 
But it’s not just closing the gap; the small companies are where a lot of this expertise rests. It’s actually where a lot of the data is. The small companies play a big role in the data that we’re going to need to build these kinds of algorithms, build the AI out, and apply the expertise and the know-how in the best possible way. There is real potential for that expertise to play a larger role, not only for your internal benefit but also for other kinds of revenue opportunities and ways of thinking as these go forward into more of the supply chain ecosystem sorts of things. I have tried to create the message of be careful doing it yourself; go get some resources. Gregg, you summarized that very well. Let me just end with an example. We worked with a nutrition bar manufacturer. When I say “we,” it was one of the projects within CESMII. For under $2,000, we were able to put together a couple of pretty simple temperature measurements within some key parts of a fairly straightforward line operation in which these nutrition bars are manufactured. With a dashboard, they were able to begin to see a number of aspects of the refrigeration cycle, the cooling cycle, ways of raising the temperature, ways of tying that data to their production, and ways of tying the data to the production of different products, just with that temperature sensor going forward. They were able to increase their productivity in the range of 15% to 20%. It was a very small investment, but it required this journey, and understanding the data, and all the things that we had talked about before. Be optimistic. 

Gregg Profozich [00:49:58] Jim, it was great to have you back on Shifting Gears today. Thank you for joining me and for sharing your perspectives, insights, and expertise with me and with our listeners. And to our listeners, thank you for joining me for this conversation with Dr. Jim Davis and discussing artificial intelligence in manufacturing. Have a great day. Stay safe and healthy. Thank you for listening to Shifting Gears, a podcast from CMTC. If you enjoyed this episode, please share it with others and post it on your social media platforms. You can subscribe to our podcasts on Apple Podcasts, Spotify, or your preferred podcast directory. For more information on our topic, please visit the CMTC website. CMTC is a private nonprofit organization that provides technical assistance, workforce development, and consulting services to small and medium-sized manufacturers throughout the state of California. CMTC’s mission is to serve as a trusted adviser providing solutions that increase the productivity and competitiveness of California’s manufacturers. CMTC operates under a cooperative agreement for the state of California with the Hollings Manufacturing Extension Partnership (MEP) Program at the National Institute of Standards and Technology within the Department of Commerce. For more information about CMTC, please visit the CMTC website. For more information about the MEP National Network or to find your local MEP center, visit the MEP National Network website.


Topics: Advanced Manufacturing, Manufacturing Technology, Smart Manufacturing, Innovation & Growth, Information Technology, Robotics & Automation
