[00:00:06] Speaker A: Welcome to Apkudo's new podcast, a series about trends in the connected devices industry, where we're going to be talking to players up and down the supply chain. We're going to feature topics ranging from testing, automation, reverse logistics, and circularity: basically everything that happens during the lifecycle of a connected device. My name is Allison Mitchell, and I'm the VP of sustainability at Apkudo and the proud host of this new live podcast program covering these industry trends, customer journeys, sustainability topics, and more. If you have topics of interest or guests that you would like to hear from, please shoot me an email at allison dot mitchell at apkudo dot com. Just a logistics heads up: we're going to start by diving into the discussion with our guest speaker, then we'll answer questions during the Q&A at the end of the session. Today's guest is Chad Gotzman, president of Apkudo, and we're going straight to the top on this podcast by talking with the president of the company. Chad, you and I first met when you were visiting Mobile Resale's office last spring, and at that time it was a discovery journey about what Mobile Resale was and what we were up to. Little did we know at the time that we would end up together a little over a year later on a podcast. So I'm excited to have you join us.
[00:01:21] Speaker B: Yeah, thanks a lot, Allison. I'm excited to be here and excited to be talking about this topic. And I have to say, data actually is exciting nowadays, as AI has become just so ubiquitous. In fact, prior to this podcast, I was reading that the number of mentions of AI in public company earnings calls has increased sixfold since the release of ChatGPT in November of 2022.
Which is saying something, because AI was already rising in the public consciousness, but mentions have increased sixfold since two Novembers ago. And you can't have AI without data, so data is hotter than ever. I'm happy to be talking about it today.
[00:02:02] Speaker A: Well, I think the best way for us to kick off today's topic, talking about all things data, is with a customer example. And I think you've got a great story to share about the power of data. Am I right?
[00:02:12] Speaker B: I do. In fact, I think we have many great examples of how our customers are using data on the Apkudo platform. The example I'm going to provide is probably not as obvious, because most people who know us, know us for our robotics, but we are a full supply chain platform for the industry. And so the example I'm going to provide is around reverse logistics, and more specifically around loss prevention. This particular customer has thousands of retail locations where a consumer can come in and drop off a phone for trade-in, upgrade, a warranty claim, whatever the reason. Hundreds of thousands of phones, actually millions of phones, are being dropped off at these thousands of locations. And this customer was losing between $20 and $30 million worth of devices every year, primarily due to theft; they had a big problem there. And that $20 to $30 million doesn't even include the added labor cost to chase down these devices, to investigate the losses, to speak with the customers. It doesn't include the cost of customer dissatisfaction. So suffice it to say, it was a big problem. What would happen is a box would show up at a distribution center, and it was supposed to have, say, 20 phones in it. Instead, when the employees opened up the box, it had maybe 18 phones in it and two blocks of wood or a water bottle. I mean, crazy things would be packaged in these boxes. And the way we solved this problem was by using machine learning to analyze thousands of points of data, including phone model, the age of the device, the drop-off location, and many, many other factors. We built a predictive model that assigns a score to each drop-off location and to each shipping hub. From the drop-off location, the device went to a hub, and from the hub to a centralized facility, and we didn't know where along that path the loss was happening.
And so ultimately, by bringing all this data together, we were able to assign a score from zero to four for each one of those points on the map, so to speak. A zero would mean there's virtually no chance of loss; a four would mean there's an extremely high likelihood of loss. And our customer would continuously leverage this model on our platform, where it's displayed visually in the form of a heat map. So picture a map of the United States with literally thousands of dots representing each one of those drop-off points and each one of those hubs, and each dot is some shade of red. The deep red locations were the ones that scored a four, the highest likelihood of theft. They can zoom in and understand, at a very specific retail location, where the loss was likely to happen. And what that did was create what they would call a training opportunity for a regional manager to descend on that location and really investigate what was going on. And by showing that it was all being tracked, and showing the analytics and the metrics behind all of that, that theft went to virtually zero. So that's an example where, by leveraging data in a unique way and building out these predictive models, that data literally can be transformed into dollars. In this case, 20 to 30 million a year.
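The location scoring described here can be sketched in miniature. This is a hypothetical illustration, not Apkudo's actual model: the shipment records, location IDs, and loss-rate thresholds are all made up, and the real system blends thousands of signals (device model, age, route, and so on).

```python
from collections import defaultdict

# Hypothetical shipment records: (location_id, units_shipped, units_received)
shipments = [
    ("store-101", 20, 20),
    ("store-101", 18, 18),
    ("store-102", 20, 18),  # two devices missing
    ("store-102", 25, 21),  # four devices missing
    ("store-103", 30, 30),
]

def loss_scores(records, thresholds=(0.0, 0.02, 0.05, 0.10)):
    """Assign each location a 0-4 risk score from its historical loss rate.

    0 means virtually no observed loss; 4 means a very high loss rate.
    The thresholds here are illustrative only.
    """
    shipped = defaultdict(int)
    lost = defaultdict(int)
    for loc, sent, received in records:
        shipped[loc] += sent
        lost[loc] += max(0, sent - received)

    scores = {}
    for loc in shipped:
        rate = lost[loc] / shipped[loc]
        # Count how many thresholds the loss rate exceeds -> score 0..4
        scores[loc] = sum(rate > t for t in thresholds)
    return scores

print(loss_scores(shipments))
```

A heat map is then just a matter of mapping each location's 0-4 score to a shade of red on the map.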
[00:05:28] Speaker A: Wow, that is pretty powerful. That's not something most of our customers would expect to get from the type of data that we're collecting. That goes straight to the bottom line. So, Chad, you are talking to customers every day, and I'm sure throughout those discussions you are observing things about how customers are using data across the connected supply chain, and probably finding some surprising ways that that's happening. Can you share some of those observations that you come across daily?
[00:05:56] Speaker B: Sure. So how do customers use data across the connected device supply chain? First of all, I would say I'm very confident that every one of our customers is making some kind of data-informed decision every day. However, as a general statement, there's always room for improvement. And one major observation, just a blanket observation across the industry, is that much of the industry is still relying on what I would call either unstructured or semi-structured data that's not easily shared or leveraged across systems. It's data that lives in spreadsheets and in emails, even text messages. And it could be really vital data, like inventory, pricing, and logistics data. This approach to data management lacks the sophistication that is really required for this industry to operate efficiently. And so what happens here is not that we don't have the data; we have the data, but it's not shared. It sits in disparate systems, trapped in these silos. And that ultimately leads to lower confidence, lower accuracy and consistency, and a lot of manual effort with spreadsheets and keying in results for manual processing. That's what we see all over the place before we start engaging with a particular customer. The other problem we really see, and we see this all the time, is that the data is siloed: it sits within the individual process in which it was actually collected. Think about all of the trade-in data that might be captured at a point of sale; it's stuck in the POS system. The output of device processing is stuck in its own database, and those two aren't talking. And so we're not making the connection between, say, the online trade-in or the retail trade-in, the quality and condition of that device, the accuracy of that assessment at retail, and how all that compares once the device lands in the distribution center.
And so there isn't that common thread across these systems, which just leads to more unstructured data in the emails and spreadsheets I was just mentioning. It's all very disconnected. And these kinds of data problems are surfacing across all industries; these are challenges, I think, that everyone is solving for right now, which is why it's such a hot topic.
[00:08:13] Speaker A: Yeah, it sounds like that can be very disorienting, to have that sort of patchwork of data in a variety of places. So how is Apkudo helping our customers rethink their use of data?
[00:08:25] Speaker B: That's a great question. So we're helping them think about data very differently: how they can make better, smarter decisions by increasing their sophistication, I'll say, in how they use data and in the sources of the data they're using. The Apkudo platform expands, I think of it as expanding, the aperture of the pipe that is bringing in the data. So instead of just relying on data from these disjointed internal systems, or even worse, like I mentioned, those spreadsheets buried in emails (and these are frankly very large, sophisticated companies, yet these processes are unsophisticated, relying on spreadsheets, slow and error prone), what we're doing with the platform is bringing all of this together. Let's start with just the internal data, the data the company has, and putting that all in one place. So now we have the internal data in one place, and we can action it; we can build reports, analytics, and understanding around it. Then we leverage our robotics in particular to augment that data with highly accurate, very detailed device-specific data. So now we have all the internal information and data we need, and we have it at a serialized level: what is the condition of this particular device? Where did it come from? What is the cosmetic condition? What functional testing was done? What is the value of a repair to be done on this device, et cetera. So we have the device-specific data. Then lastly, very importantly, we add in all the external data, external sources like market price and customer demand. Does anyone want this device? Who wants it? In what country is this device in highest demand? Does this device need to be repaired? We learned that from the device-level data. Okay, but what is the cost of that repair? What is the turnaround time for this particular repair vendor?
Should we use this vendor or that vendor? These are all the external signals and external data that we combine. So think about that as where we're starting in terms of maturity of the data: we have a customer with fragmented internal data, we bring it all together in one place, we add to that the device-specific data, and then we add to that the external signals and external data. Then we have a complete picture of really everything: where this device came from and what we should do with it next. That's what we're constantly doing, answering the question: we have this specific device, what is the next best use of it? And when we work with our customers, we share this very distinct point of view and help them think about data through the lens of that type of framework.
[00:11:10] Speaker A: Yeah. So speaking of a framework, what you described sounds like a continuum. Can you explain that framework you alluded to there a little bit?
[00:11:19] Speaker B: Sure. Yeah, absolutely. I can explain the data maturity framework, as we would call it. There are a lot of data maturity models you can reference across industries, and I'll tell you, Allison, as a former consultant, there isn't a maturity model that I don't love. It's tough for me to resist them; maturity models are what we always used in the management consulting world. And I like them because they give customers a very easy way of identifying where they are now. It can be a self-assessment, too: where are you now? Where do you want to go? And then what's the roadmap to get there? What needs to be true in order for us to go up that scale of maturity? So the model I was referencing when I was giving my prior example is pretty straightforward. I think of stage one as sort of business as usual. The emphasis here is just on data capture, and it reinforces that while customers are certainly using data today, they lack any real sophistication. It's that earlier example I was giving, but it's a foundation upon which to grow. This is where we have spreadsheets and emails, and it's disjointed. But you can do an inventory; you can start understanding what the current state actually looks like. The next stage is what we call change makers: they're starting to integrate data, they're improving visibility, they're sharing that siloed data across different departments within the organization. Stage three are those starting to emerge as leaders. The first example I used, predicting the fraud and the theft, is a customer in stage three. It's really taking data and going from descriptive to predictive: not just describing what has happened, but predicting what could and will happen. And then stage four is about working with new standards.
This is aspirational, as every maturity model should be. It reflects those companies using data upstream and downstream and with partners in new ways to accelerate business impact. And that "with partners" at the end, I think, is a very important point. There is a lot of talk now, a lot of industry chatter, that the risks of sharing data with the ecosystem are lower than the rewards; the rewards are outweighing the risks of creating intercompany interoperability. And that's something very key to our industry, in my mind, to ensure that we are delivering the greatest impact as a whole and elevating the sophistication of the industry as a whole. That all happens in stage four, and so we want to get as many companies as possible into that stage four.
[00:14:03] Speaker A: Yeah, that's really helpful to see this visually portrayed. And I see circularity on here as one of the ways to think about data maturity over this continuum. And so with my background in sustainability, of course I'm going to steer the conversation there. So I'm just curious how you think this framework applies when we're thinking about using data to inform and advance decisions regarding sustainability and circularity. What kind of comes to mind when you think about the intersection of data and circularity?
[00:14:36] Speaker B: I think it's right in the middle. We are a circular industry platform; we call it that for a reason, and it's a core benefit that we are looking to deliver as an organization. To me, circularity is all about keeping a manufactured product in use for as long as possible. How do we do that, and how do we do it efficiently? The current asset owner, the person who has the device right now, needs to find the new owner. In other words, the supply needs to meet the demand. And the interesting thing we're seeing right now: it was IDC that came out with a report earlier this year that said there's a complete dislocation between supply and demand of devices, meaning there is more demand for used devices, in particular in Europe, but it's accelerating in the US and other parts of the world as well. There is a larger demand for used devices than there is supply. And your question, really, is how do we use data to connect those dots a little bit more so that supply and demand are met?
If you think back to stage four of the data maturity framework, this matching is happening systematically; it's happening without that human intervention. Imagine, on one side of the platform, you have a customer of ours like an insurance company, and this insurance company is able to use their data to predict the demand they have for a certain set of device SKUs, hundreds of SKUs, that they know they're going to need in order to fulfill insurance claims. They're in the business of insuring these devices: something happens, someone files an insurance claim, they now need a replacement device, and the insurer can predict with a high degree of accuracy which devices they're going to need. Their problem is sourcing those devices, and sourcing them in a way that is cost effective. They are spending far more than they'd like to in order to fulfill those claims. So one way we can work together in a stage four data maturity model is by exposing that demand for those devices into the Apkudo platform, which on the other end has all these trade-in programs and enterprise take-back programs, with all these devices coming into the platform.
And so by taking the demand signal from this insurance company, the SKUs they want to buy, the prices they're willing to pay, and the condition they're willing to pay for, and putting that into our pricing engine, a centralized pricing engine that feeds all these trade-in programs, we know that if someone walks into a retail store or does a trade-in online and presents a device that matches one of the SKUs the insurance company wants, that insurance company will pay slightly over market price to get that device now, because it will delight their customer who has an insurance claim, and they're going to fulfill that claim for less cost than the alternative. It delights the customer on the other end that's doing the trade-in, because they just got a really good value for their trade-in. The Apkudo platform systematically transports that device logistically from that point of sale, that point of trade-in, back to the insurance company. All of that is made possible by leveraging good, timely data across the system and by sharing it across enterprises in this case.
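As a rough sketch of how a demand signal could feed a trade-in pricing engine, consider the toy logic below. The SKUs, prices, and routing rule are all hypothetical, and the actual Apkudo pricing engine is far richer; the core idea shown is that a buyer's above-market bid both raises the consumer's trade-in offer and routes the device to that buyer.

```python
# Hypothetical demand book: SKU -> units a buyer still wants, and their bid
demand = {
    "iphone-13-128-grade-b": {"units": 500, "bid": 310.00},
    "pixel-7-128-grade-a":   {"units": 120, "bid": 280.00},
}

# Illustrative wholesale market prices per SKU
MARKET_PRICE = {
    "iphone-13-128-grade-b": 295.00,
    "pixel-7-128-grade-a":   270.00,
}

def trade_in_offer(sku, margin=0.15):
    """Return (offer_to_consumer, route) for a device presented at trade-in.

    If a buyer's bid exceeds market price, part of the premium is passed
    on to the consumer and the device is routed to that buyer; otherwise
    the device goes to the general resale channel at market price.
    """
    market = MARKET_PRICE.get(sku)
    if market is None:
        return None, "reject"
    entry = demand.get(sku)
    if entry and entry["units"] > 0 and entry["bid"] > market:
        entry["units"] -= 1  # reserve one unit of the buyer's demand
        return round(entry["bid"] * (1 - margin), 2), "insurer"
    return round(market * (1 - margin), 2), "resale"
```

Here the insurer's $310 bid against a $295 market price means the consumer sees a higher trade-in offer while the unit is earmarked for the claim, which is the win-win Chad describes.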
[00:18:18] Speaker A: And what you mentioned there in terms of the efficiency and the cost savings also supports sustainability goals, because it means fewer inputs into the process and less delay. So that's another benefit as well. On the enterprise take-back side, I know we've had several instances where we've utilized data to go back upstream to the customer and say: you are experiencing a large amount of damage that is replicated across all of your employees; something is going on here, let's investigate. And we found that the way these users were positioning their devices was causing the recurring damage. So we were able to communicate that to the customer and say, if you put a better case on these devices, that will prevent the damage, and therefore the early return of these devices before they should be returned. It helps extend the device's life. And we very easily showed that the investment in those improved, maybe more expensive, cases with better protection was recovered through the extension of the devices' lifespan.
[00:19:43] Speaker B: So there's value on the resale of those devices too? Yeah, I'm sure, correct.
[00:19:49] Speaker A: Okay, so sustainability and circularity are pretty hot topics, but AI is a pretty hot topic these days as well. There's application across industries; there's really no place that AI is not having an impact. Can you explain how Apkudo is using the capabilities of AI to get even more value out of the data that we are capturing?
[00:20:13] Speaker B: Sure. I'd love to talk about how we're using AI.
We've been using AI, frankly, way before it was cool.
And we have patents on some of this; it is very, very sophisticated. My favorite example of how Apkudo uses AI is in cosmetic grading. It's a problem that we solve for so many of our customers. Again, when we talk about sustainability and circularity, we all know that one of the biggest inhibitors to the growth of the secondary market is trust, and one of the biggest diluters of trust is cosmetic grading: not knowing what an A is versus an Excellent or a Good, having that consistency, and removing the subjectivity of a human who is staring at a phone and determining what its grade is. By using robotics, we are able to increase the level of sophistication dozens of times over in terms of the accuracy and the consistency of that grading.
Each of our customers has their own unique requirements for grading, and that could be a whole other episode. But let's just say for today that each customer has its own unique requirements for grading. As part of our mobilization process, when we're setting up our robotics, we use thousands of devices, typically around 3,000, that have known grades. They have already been graded; they are the control set, if you will, and we train the robotic grading module with the known grades. We go through this process, and it's a continuously reinforcing model that only improves with time. So for this customer, this is an A, this is a B, this is a C; we know that from the control set, we train the model, and it just keeps getting better and better. It sounds relatively simple, maybe, but that's because we made it simple; the underlying technology is very sophisticated. It's amazing to see the computer vision, what it's picking up, how consistent it is in knowing what an A is and what a B is, and how it can instantly, as devices come off a robotics line, grade each device for every type of marketplace. I mean, we have customers that have twelve different grades on a single device, depending on where it's going to be dispositioned, or if they don't know where it's going to be dispositioned, at least they have all those grades. An A on marketplace one is a B-minus on marketplace two, and so we have all of that. You just couldn't do that in any time-efficient, cost-efficient way without AI doing it for you.
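The train-on-a-known-control-set idea can be illustrated with a deliberately tiny stand-in. The features, grades, nearest-neighbor rule, and marketplace labels below are all hypothetical; the real system uses computer vision over thousands of graded devices, but the principle, learning a customer's grading boundaries from devices with known grades, is the same.

```python
import math

# Hypothetical control set: image-derived features (scratch count,
# deepest-scratch severity 0-1) with grades assigned by human experts.
control_set = [
    ((0, 0.0), "A"), ((1, 0.1), "A"),
    ((4, 0.3), "B"), ((6, 0.4), "B"),
    ((12, 0.8), "C"), ((15, 0.9), "C"),
]

def predict_grade(features, training):
    """Grade a device by its nearest labeled neighbor in feature space.

    A stand-in for the real vision pipeline, which learns from thousands
    of devices and far richer features.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(training, key=lambda ex: dist(features, ex[0]))[1]

# The same physical condition can map to different labels per marketplace
# (illustrative): an "A" on marketplace one may list as "B-" on another.
marketplace_labels = {"A": {"mp1": "A", "mp2": "B-"},
                      "B": {"mp1": "B", "mp2": "C+"},
                      "C": {"mp1": "C", "mp2": "D"}}

grade = predict_grade((5, 0.35), control_set)  # 5 scratches, moderate severity
print(grade, marketplace_labels[grade])
```

New labeled devices can simply be appended to the control set, which is a crude analogue of the continuously reinforcing model Chad mentions.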
[00:22:57] Speaker A: Essentially, what I understand about AI is that what you get out is very dependent upon the quality of what you put in. So I can imagine there could be some unintended consequences of relying on poor quality data. Can you talk about the potential risks and consequences of relying on poor quality data?
[00:23:21] Speaker B: Yeah, there are definitely many potential risks of relying on poor quality data, but I think you put it in the simplest terms. It's GIGO, garbage in, garbage out, and AI is only as good as the training data and the reinforcement model. I actually saw an article recently published by MIT. They referenced an AWS survey of data and technology executives, and they found that 93% of respondents agreed that data strategy is critical to getting value from generative AI, but less than half had actually taken any action thus far. And that's sort of startling: they know they need it, but they haven't done it yet. I think this stat is compelling for several reasons. One, it reinforces the importance of having the data strategy component that I shared in our data maturity framework: you have to know the current state and the strategy going forward, and you have to take the time to do that analysis. It also highlights how essential improving data quality is to closing the gap in data readiness for AI applications. If you don't have clean data, if you don't have data from the right sources, you can't feed it into an AI model. And it sheds light on the fact that industry leaders know data is important to AI's value, but they haven't taken the steps to improve the data yet. Data strategy is this basic yet essential step that no one can skip; otherwise, you're just going to fall behind. So if there's any takeaway here, it's get started. Start doing the inventory, understand where your current assets are and how far behind you are, or maybe you'll be surprised and you're further ahead than you think. But if anything, just get started; take those first steps.
[00:25:19] Speaker A: Yeah. It can seem really daunting to realize that there's a gap to close, but it may be simple to actually close that gap. So, not being intimidated by the gap, but instead embracing that knowledge-is-power and data-is-power mindset. Solutions like Apkudo can close that gap and take you further on that continuum, simply and easily. Maybe you're just unaware of how quick and easy it could be to do.
[00:25:52] Speaker B: Really well said. Totally agree.
[00:25:54] Speaker A: Yes. I know we're approaching our time here as our discussion is wrapping up, so I wanted to take a moment to encourage our audience to put some questions in the Q&A, if anything has come up for you, so we can answer them in a few minutes. But I want to circle back with Chad. This discussion has really shed light on how valuable data is and how it plays a significant role in the work we're doing for players in every part of the connected device supply chain throughout its lifecycle. How can listeners understand their data capabilities today, and what actions can they take to improve them?
[00:26:29] Speaker B: Yeah, so I would say, Allison, in the simplest terms, there are a few things any listener can do to get started. Like I said, the most important thing to me is taking inventory of your current data captured across departments, processes, and programs. Understand where it's coming from and what its quality is. Then you can assess where the gaps are in the data you need to make decisions. You are essentially plotting yourself, doing a self-assessment of where your organization is on that maturity matrix, or any maturity matrix; just have a point of view on that. At the same time, I would encourage our listeners to partner with their IT departments and leadership teams to understand the data policy and governance in use today. I think that's really important, because as AI has become more ubiquitous, so have data policies concerning the use of AI, both using it as a tool to get your job done and building it internally. Every organization is figuring this out at its own pace, so I encourage you to understand where your organization is. Then I would also take an honest look and rate your level of confidence in the accuracy and consistency of your data. As you take that inventory, one thing I neglected to mention earlier: is the data consistently going to show up, and is it going to be consistently good quality? Some of this is judgment, some of it is empirical evidence, but I would definitely encourage you to take an honest look at whether this is data you can rely on going forward and build into your models. Then finally, I would encourage people to come up with a prioritized list of data actions that your business is going to take. Make the plan. You're gathering the intelligence yourself; you're doing the investigation.
You're using a maturity matrix or some sort of tool to assess where you are. You have a plan for where you want to be. Start building out those steps and how you're going to get there.
[00:28:38] Speaker A: Yeah. What comes to mind when you're talking about this data maturity model and moving along that spectrum is that at the far end is the interoperability of the data. Given where we are positioned within the connected device lifecycle, we touch so many of those players, all of them, really. And where the real power comes in is when each of the players is collecting the right data, it's accurate, reliable, and trustworthy, and it's being shared appropriately within their own organization, but then also shared with those we touch across that connected device lifecycle and supply chain. There's real power in that connectedness of the data to move the entire industry forward. So I think data is one of those topics with many layers to what's important, but quality at the very beginning compounds and can improve the overall outcome of what all this data is doing and what it means. I really appreciate the way you've illustrated that continuum and that maturity model for data. I think it really helps people get a sense of where they are today and where they need to be. Hopefully our listeners, as they were looking at that model, were thinking about where they sit on that spectrum. Looks like we've got some questions from the audience here. Let's take a look. We've got one from Alex: what are some best practices for managing and organizing large volumes of supply chain data? That's a great question.
[00:30:20] Speaker B: Yeah, that is a great question. I would say you have to understand where that data is coming from. Do you have a high level of confidence in it? I keep mentioning the word confidence because of that underlying truth, garbage in, garbage out, in all these models. You have to know where the data is coming from and whether you are confident in its accuracy. And if so, prove it through testing, understanding, and back testing, potentially doing that manually. It's worth the effort, particularly when you're getting large volumes of data coming in from different sources. From there, I would encourage you to think about the magic that happens when you take these large amounts of supply chain data and start augmenting data from one source with data from multiple other sources, and what that enables from an algorithmic standpoint, with predictive models and things like that. The predictive model example I gave doesn't happen because you're getting data from one point; it's what Alex is asking about, getting data from many different points. You have data scientists sitting down and thinking about which attributes are most predictive or most useful in predicting loss, in this case. You have to test it out, and you have to take data from different points and different controls. But in order to do that, you have to have that inventory and the understanding of its quality, and then ask: where can the magic happen? Where can we combine it? Do we have to buy data to augment it? The real magic happens in combining it instead of taking that siloed approach. If I can say anything, it's that we see that siloed approach time and time and time again.
The sales team doesn't know what the product team knows, or whatever; across any industry, any company, it's siloed data that is inhibiting the growth from happening. So when you're looking at and evaluating all those data sets, I encourage you to do those things, but ultimately to ask: how does the combination create value? How do we draw a circle around that to create the most value?
[00:32:42] Speaker A: Yeah, that's really important, and I think you made that point very well today. I really appreciate that. Looks like we have another question here from Carly.
How can my company ensure the security and privacy of our data while still making it accessible for analysis? I bet you get that question quite a bit.
[00:33:01] Speaker B: Absolutely we do. Data security is a key tenet of a circular industry platform. You cannot be a circular industry platform without reliable, consistent, secure data. I would encourage you, again, to work with your IT departments, who will have a point of view. Every company is going to have a point of view, and it's going to be somewhere on a spectrum of what security really means to your particular organization. Go through that analysis, go through any of the audits that need to happen: the ISO standards, ISO 27001, all the ISO standards that are relevant to data security. Look at your partners and see if they have them, and if not, ask why. Work with the IT organization and ensure that the data feed coming in is secure. Once the data is in and within your own four walls, it's typically a whole lot more accessible from a security standpoint. But this isn't something you should do as a skunkworks project with your head down, particularly because intercompany interoperability and intercompany data sharing, which just a few years ago you would never have heard about, is happening more and more. As we talked about, the ability to share data securely and appropriately is increasing in value, and it's far outweighing the risks. So, in short, Carly, I would say: follow your company's data security principles and rules, get help, do the assessment with any of the third parties from which you're going to bring data in, and then have your own secure database to store all of that and run your analytics off of.
[00:34:57] Speaker A: Thank you so much, Chad, for this conversation today. It was a pleasure to talk with you and nerd out a little bit on data. Hopefully our participants enjoyed this as well, and I appreciate everybody for tuning into today's podcast session. I'm certain that Chad will be back for future sessions, because this is one of many topics that we could do a deep dive on. So just a reminder, folks: if you have feedback or ideas for future topics or guests, please shoot me an email. It's first name dot last name at apkudo dot com. This is Allison Mitchell. Thank you again. I really appreciate everybody being on today's session, and we'll see you at the next one. Thanks.