CAIR 78: High Impact AI RESULTS For Your Business, Now!

Released Friday, 13th January 2023

What are the main challenges that I need to solve as a company trying to apply AI to help my business? In this episode, I have a conversation with an organization that provides an AI platform to help you overcome them.

 


Grant
Hey, everybody, welcome to another episode of ClickAI Radio. This is Grant Larsen. And today I have someone that I bumped into not too long ago. I was at a conference and literally went to their booth and said, oh, I want to learn more about this technology; I've been tracking your team, your organization, so this is my chance to learn more about it. And so I got to meet Atalia Horenshtein. I hope I said that right. I'm so excited to have Atalia here with me today. So first of all, (a) welcome, Atalia, and (b) did I say your name right? And then (c) the first question out of the box: explain the meaning behind your name.


Atalia
Thank you so much for having me. And you actually pronounced the name correctly, so kudos to you. The meaning behind my name: first of all, thanks to my parents for choosing such a unique name. It's actually a Hebrew name; I'm originally from Israel, and in Hebrew we pronounce it Atalia. Atalia is actually the first queen in the Kingdom of Judah, so it's a name from the Bible. And it's actually very unique, even in Israel.


Grant
Really, the queen in the Bible? I'm going to have to go look that up. That's awesome. That's very cool. Do we bow in your presence, then? Do we do anything like that when we see you?


Atalia
No, no, not at all.


Grant
Excellent. Well, thank you. Thank you for taking the time here today. Now, I want to make sure I get this right: your title in your organization is Global Technology Product Advocacy Lead. I actually had to write that down, because I can only remember three things, and that's five words in a title: Global Technology Product Advocacy Lead for DataRobot. Did I get that right?


Atalia
Yeah, that's actually correct.


Grant
What do you do in that role?


Atalia
So at DataRobot, I started as a customer-facing data scientist, where I worked with customers in different industries, helping them solve complex AI and machine learning problems. Learning from that role and those use cases, I shifted a bit toward the advocacy side: how we tell the technical story of DataRobot, how we educate the market about what's possible, drawing on some of the use cases I implemented and some of the things I saw working, in collaboration with our marketing and sales teams, and with our customers as well.


Grant
Okay, got it. So that global part, I think, is critical, because I'm assuming that you go across multiple markets; you're not focused on any one. I've got to believe that gives you a broader, cross-industry view on AI and machine learning. Is that correct?


Atalia
Yeah, so I was very lucky to work with different industries in different geographical locations. And obviously, I see a lot of different trends and levels of maturity around AI: where they are in the journey, how they're adopting, what their process is, their technical knowledge, their technical stage. From the United States to Europe to the Middle East, it's really, really interesting, and I'm very happy that I have the opportunity to do so.


Grant
That's awesome. Now, what got you into this world? What got you into AI and machine learning?


Atalia
Actually, it's a really interesting path, because I actually started as a software engineer, not as a data scientist. As a software engineer, you obviously work with software development, system design, and so on, some of the things you see today in machine learning operations. Then, when I did my master's, I focused mostly on business intelligence and machine learning. I learned a lot, it was super interesting, and I took my software skills to a different level. And it's a funny story, because a professor of mine actually works for DataRobot, and he's one of the main reasons I'm here.


Grant
Oh, really? Oh, that's interesting. So was the professor already at DataRobot when you joined?


Atalia
Yep, he's still here. His name is Ted Kwartler. He's a world-class person, both on the personality side and in his technical knowledge. So I'm very fortunate to work with great people at the company.


Grant
That's fascinating. I have a similar journey myself, meaning I, too, came through the software engineering path and then sort of stumbled, through a whole range of things, into the whole data science and ML/AI space. A lot of people certainly do that. But it's a shift in the thinking, isn't it? The first time you come into ML thinking, you're thinking about your data in a much different way, and about algorithms and such; you're like, wait, okay, I'm solving this so much differently. But I thoroughly enjoyed that. All right, so I want to get to some of the things that are unique to DataRobot itself. I've worked with multiple AI and ML platforms, and as I mentioned in the intro, I had been watching DataRobot, watching your organization over time take on more and more capabilities. When you demonstrated the latest capabilities and gave me a sense of where things are going, it started me thinking: when you think about the AI and ML world and the problems in this space, what are the main challenges that we face? Ultimately, I want to get to what differentiates the way DataRobot does it in the market. But could you first start with some of the main challenges you see today, especially from your global perspective, in the AI space?


Atalia
Yeah. So I think the main challenge today around the machine learning lifecycle is how you move models into production, and how you make sure your models are still accountable and accurate with all the factors and the new reality that keeps coming up. Today, everyone can just build a scikit-learn model or a simple, you know, regression model. But when you work as an organization, and you have different infrastructures, different tools, different skill sets, different personas within your team, and the development side is siloed, and every model has completely different requirements, you get an inefficient lifecycle. So moving a model from the development phase to production is a process that usually takes a lot of time, and it can be super complex. Something I personally like about DataRobot, and this is something I learned from customers, because this is the biggest pain today, is having a platform that is interoperable and flexible: it supports models that were created in a variety of environments and languages, and it can also serve, manage, and monitor models that were deployed to different endpoints, not necessarily to DataRobot's production server.
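
To make that interoperability point concrete, here is a minimal sketch in Python (illustrative only, not DataRobot's actual API) of how a single management layer can wrap models built in different environments behind one common predict interface. The ManagedModel class and the registry are invented names for this example.

```python
# A minimal sketch (not DataRobot's API): wrap models from different
# frameworks behind one interface so they can be served and monitored
# uniformly, regardless of where they were created.
from dataclasses import dataclass
from typing import Callable
import numpy as np
from sklearn.linear_model import LogisticRegression

@dataclass
class ManagedModel:
    name: str
    predict_fn: Callable[[np.ndarray], np.ndarray]  # framework-specific predict

    def predict(self, X: np.ndarray) -> np.ndarray:
        return self.predict_fn(X)

# Example: register a scikit-learn model and a hand-rolled rule side by side.
rng = np.random.default_rng(0)
X_train = rng.random((100, 3))
y_train = (X_train[:, 0] > 0.5).astype(int)
sk_model = LogisticRegression().fit(X_train, y_train)

registry = {
    "sklearn_churn": ManagedModel("sklearn_churn", sk_model.predict),
    "rule_based": ManagedModel("rule_based", lambda X: (X[:, 0] > 0.5).astype(int)),
}
for m in registry.values():
    print(m.name, m.predict(X_train[:5]))  # same call, different backends
```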


Grant
So sorry, just to interrupt there. That was one of the things I was fascinated with when you showed me: this ability to bring in models produced from a wide range of platforms and tool sets, if you will, and still bring them into the management layer. I thought that was a critical characteristic of DataRobot itself. So when you do that work, when you bring those models in and you manage them, what is it that you're doing that makes that easier? (a) It's helpful for me to have one place to bring those together, but (b) what value does that give me as I try to update and refactor those moving forward?


Atalia
So it includes several aspects. First of all, as you mentioned, you have a single place where you can see all your models, regardless of where they were created and regardless of where they've been deployed: a single pane of glass where you can see everything and see at a glance which models are stale, which ultimately prevents risk to your business. Because it's not just about having visibility into those models; it's also the ability to manage, monitor, and govern them. What is the service health status of this model across all the endpoints? What's the accuracy of the model, and how has it changed over time? Maybe some features have drifted. When you see those aspects, it really helps you understand: maybe you need to retrain your existing model, maybe you need to swap your existing model for a challenger model, something that is performing better now with the new data that is coming in. Think about COVID; that can be a really great example. You trained models on a certain dataset, but then the whole reality changed, and all the predictions you made at that time are irrelevant today with the current data.
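
A minimal sketch of how feature drift like the COVID example can be detected: compare each feature's training distribution with recent production data, here using a two-sample Kolmogorov-Smirnov test from SciPy. The 0.05 threshold and the simulated shift are illustrative assumptions, not DataRobot settings.

```python
# Flag drifted features by comparing training vs. production distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_data = {
    "tenure": rng.normal(24, 6, 5000),
    "monthly_cost": rng.normal(60, 10, 5000),
}
# Simulate a post-COVID shift in customer behavior on one feature.
production_data = {
    "tenure": rng.normal(24, 6, 5000),
    "monthly_cost": rng.normal(80, 10, 5000),
}

for feature in training_data:
    stat, p_value = ks_2samp(training_data[feature], production_data[feature])
    drifted = p_value < 0.05  # illustrative significance threshold
    print(f"{feature}: KS statistic={stat:.3f}, drifted={drifted}")
```

A real platform would compute this (or a metric like Population Stability Index) on a schedule and surface the flagged features visually, which is the "single pane of glass" she describes.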


Grant
Okay, yeah. And this ability to see either the drift, like you said, or the staleness, that's such a critical capability. Is that a visual thing? Is it notification-based? How is it that you're being made aware of this?


Atalia
So it's definitely a visual thing: you have the ability to see at a specific point in time when something happened. But we're not expecting people to go in and constantly check the platform; you can automate the whole process with notifications. If, let's say, accuracy drifts under a specific threshold, you can get notified. And you can automate the whole process around retraining: what are the triggers for retraining? So we're really looking at the areas where we can automate the cycle to make the user's life easier. But also, as much as we say, yes, we have AI and we have automation, you still need the man or the woman in the middle to approve this process, to be aware of this process; there is still user intervention to some degree.
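
A minimal sketch of the threshold-based alerting and human-in-the-loop retraining flow she describes. The threshold value, the notify stand-in, and the check_deployment helper are illustrative assumptions, not DataRobot's mechanism.

```python
# Check live accuracy against a threshold, notify, and propose retraining,
# keeping a human approval step in the loop.
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.85  # illustrative; tune per use case

def notify(message: str) -> None:
    print(f"[ALERT] {message}")  # stand-in for email/Slack/webhook

def check_deployment(y_true, y_pred) -> bool:
    """Return True if retraining should be proposed for human approval."""
    acc = accuracy_score(y_true, y_pred)
    if acc < ACCURACY_THRESHOLD:
        notify(f"Accuracy dropped to {acc:.2f} (threshold {ACCURACY_THRESHOLD})")
        return True
    return False

# Recent labeled outcomes vs. the model's predictions (toy data).
if check_deployment([1, 0, 1, 1, 0, 1], [1, 0, 0, 0, 0, 1]):
    print("Retraining job queued, pending human approval.")
```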


Grant
Okay. All right. So let's take a scenario. One of the current challenges around AI is this data bias problem. So what I want to figure out is: let's say in my organization I've got this bias that comes in, and I'm not aware of it at the time I'm creating the model. There's no intentional harm here. However, as we get into production and rollout, it becomes obvious that the kinds of decisions or insights are leaning in a particular direction. What is it that helps me discover or find that out, and ultimately correct it?


Atalia
That's a good question. I think ethical AI is one of the rising topics in AI. Mathematically, it's hard to create a model that has no bias at all, but there are techniques for making sure the model is more fair toward your sensitive features. And there are actually capabilities in the platform that help you not just in production but also in development, where you can manage bias mitigation, tag those sensitive features, and see, already in your development process, what the bias and fairness look like for this specific project with the features you chose. Then you can see how the bias and fairness carry through to the production side, where new data is coming in. So you have the ability to catch it even before deployment, and that helps you understand how to deploy the model into production and what changes are required to keep the model fair.
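
A minimal sketch of a development-time fairness check of the kind she describes: compare the favorable-outcome rate across groups of a sensitive feature (demographic parity). The data, group labels, and the four-fifths ratio rule of thumb are illustrative, not DataRobot's implementation.

```python
# Compare favorable-outcome rates across groups of a sensitive feature.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],   # sensitive feature
    "approved": [1, 1, 0, 1, 0, 0, 0, 1],                # model's favorable predictions
})

rates = df.groupby("group")["approved"].mean()
parity_ratio = rates.min() / rates.max()  # 1.0 means perfectly even rates
print(rates)
print(f"parity ratio = {parity_ratio:.2f} "
      f"({'OK' if parity_ratio >= 0.8 else 'flag for review'})")
```

Running this check in development, and again on production scoring data, is one way to see whether fairness "carries through" after deployment, as she puts it.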


Grant
So it's interesting that there are capabilities in the platform that help you identify potential bias factors or features along the way and be mindful of them. Let's go to the far right side of this: let's say we've done that work, we have the model produced, it's deployed, it's in production. How do I give feedback? Is there a way to ask, (a) what are the results of this model, and (b) what is my end users' feedback on it as well? It's two different kinds of feedback: (a) is the model giving me good insights or guidance, and then (b) what about the end users themselves, and how do I get their feedback in there?


Atalia
Are you asking in general about the model, or specifically about the ethical side of it?


Grant
Either one, I'm fine with either side of it. Yeah.


Atalia
I'll stay on the ethical side of it first. We provide those visualization tools as part of our MLOps capabilities to track bias and fairness on those sensitive features, so you can be certain that decisions made with the model are aligned with the technique you applied during the development process. That's one thing. And obviously, if you see changes over time, you can retrain your model and try out different things as part of your development, with our bias and fairness techniques.

Then, overall, the insights you can get from a model divide into two phases. On the development side: in machine learning, we're always talking about the predictions, the predictions, but that's not really the main thing. Yes, we care about the predictions, but how do we translate those predictions into business actions? So having the ability to get explainable AI, this is something I see all the time, especially with business personas, decision-making people; for them, it's really difficult to interpret what an AI model is saying. So we provide graphs to present insights and explainability on a model. For example, what are the main factors, on a macro level, contributing to the model? Let's take a churn use case, where you're looking to understand why customers are churning. It's not just "the likelihood that this customer will churn is 0.7"; what does that mean? So we show what the factors are: for example, maybe the customer called the call center a lot, and maybe the customer's plan is very expensive, some of the factors that really affect whether the customer churns. From a business perspective, we can then decide: maybe we should improve our customer service, maybe we should reduce our pricing. So on a macro level you see those main factors, but you can also dive into the micro level and check how a change in each one of the features impacts the target variable. Let's say a customer called customer service twice versus three times: how does that affect the likelihood to churn? That's super helpful, and that's really where we add the explainability side to the predictions.

And for the predictions themselves: at the end, when you move the model to production and new data is coming in that you want to score, that's where you can integrate the predictions with your business applications, for example Tableau or Power BI. So you can continue to work with your existing business flows and tools as you like, and generate decisions.
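
A minimal sketch of the two levels of explainability she walks through for a churn model, using scikit-learn rather than DataRobot's tooling: permutation importance for the macro-level feature impact, and a per-customer what-if for the micro level. All feature names and data here are synthetic.

```python
# Macro-level feature impact plus a micro-level what-if for one customer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.integers(0, 6, n),     # support_calls
    rng.normal(60, 15, n),     # monthly_cost
    rng.integers(1, 60, n),    # tenure_months
]).astype(float)
y = ((X[:, 0] >= 3) & (X[:, 1] > 65)).astype(int)  # synthetic churn rule

model = RandomForestClassifier(random_state=0).fit(X, y)

# Macro level: which features drive the model overall?
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for name, score in zip(["support_calls", "monthly_cost", "tenure_months"],
                       imp.importances_mean):
    print(f"{name}: importance={score:.3f}")

# Micro level: how does changing one feature move one customer's churn score?
customer = np.array([[3.0, 80.0, 12.0]])
for calls in (2, 3):
    customer[0, 0] = calls
    proba = model.predict_proba(customer)[0, 1]
    print(f"support_calls={calls}: churn probability={proba:.2f}")
```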


Grant
I like what you shared there: explainability is one of those elements that helps with successful adoption and usage of an AI model, an ML model. If we step back, given your experience with customer success, how would you net that out? If you were to look at the patterns, both the good patterns and the anti-patterns around customer success, what have you noticed over time, across lots of markets and industries? What are the success patterns, not only for building the models but for actually getting the value and the outcomes from AI models? What have you seen?


Atalia
So I think this is the million-dollar question, because I always say that it's not enough just to purchase an AI or machine learning tool. It's about whether you actually use it, and whether models really make it to production. This is how we at DataRobot measure our success, and we see ourselves as partners to our customers. We have a professional services organization that includes data scientists who help customers implement the whole process, and we have an AI success division with consultants, including former consultants who now work for DataRobot, who run ideation sessions with customers, really looking to unlock the potential they can get, and helping them through the whole process with our professional services.

The process can be quite long and complex, and I think it really depends on the maturity level of the customer: their skill set, and whether they already have some models in production or are just starting the process. Those things affect the process, because you have to take into consideration existing infrastructure and models versus building everything from scratch, and maybe educating people on what they can get from an AI or machine learning model. So it differs by maturity level. And I think everything eventually starts with change management. A lot of times there is a very motivated executive buyer who is looking to change and to bring AI into the organization, but then how do you really convince the people working on a day-to-day basis that this is a tool that can really help them? Sometimes we see it even with data scientists: they still like to keep their hands on and code by themselves, without really trusting any other tools. So this is where you provide education, with courses on the platform, but also where you offer a model that is not a black box: you have explainability, you have the process, you have documentation, to really let employees trust the process and the output of the product.


Grant
So building that trust in the process sounds like the key thing there, along with success stories. Is there a success story you can share, where you helped an organization go through this and realize some outcomes?


Atalia
Let me think about some of the success stories that are publicly covered. For example, we have a story from Steward Health Care. They are the largest for-profit private hospital operator in the US, and they use machine learning to make big decisions about staff and patients, which ultimately helps them reduce costs and improve patient outcomes and experience. They started the process and really decreased costs the way they wanted: a 1% reduction in registered-nurse hours paid per patient netted $2 million in savings per year for eight of the 38 hospitals in its network. I think that's a great number and also a great effect. I'm always very happy when I see those stories around hospitals and health care, which ultimately provide better outcomes for society as well.


Grant
For a medium-sized organization looking at, hey, how can I apply AI/ML to my business, what would you say to them in terms of adopting a platform like DataRobot? What are the things that they should start doing?


Atalia
So the process, in my opinion, divides into three major stages. One is the why: look internally and think, okay, why do we need to use AI? What are the use cases we're looking to solve, and what is the expected improvement or ROI from those use cases? Once we define those use cases, the next question is the how: okay, we defined the why, now how do we implement it? Understand the skills you have in house, then define what to outsource versus what to build, buy versus build, then choose the right tool and start the implementation. And the end result is the what: once we have the model in production, what are the outcomes we're getting, and how are we making sure the models stay accountable to our business? And always start small: start with one use case, just prove that it works, get adoption across the organization. And then it's going to be like a waterfall, because many times one use case can really amplify the rest of the use cases and create motivation, and some of the outputs of one model will feed another use case, and so on.


Grant
Have you gotten a sense of the general amount of time, or is there a pattern? In other words, let's say I get started today: can I expect to see outcomes in three months? Six months? A year? Where does that sit?


Atalia
I've seen it all. I've seen organizations that can do it quickly, within really two months, and I've seen organizations that are still stuck. Because, going back to what I said earlier, it really depends on the organization itself: whether you have a blocker from IT or from the data scientists, or maybe some requirements that you still need to implement and haven't, and what your prioritization looks like. So it really depends. And also, let's be realistic: it's not just that I decide today, oh, let's start with AI and machine learning, and it works. My existing data needs to be in a specific state so I can really produce a better outcome. Today, with the data-centric AI approach, the cleaner, more organized, and more relevant your data is, the better the outcomes from your model. So you need to be at a certain point in the data transformation process with your organization's data to really start.


Grant
Wow, that's awesome. I love that, Atalia. Thank you so much for taking your time here today with me. Any last comments before we wrap up?


Atalia
For the audience, I'll just close with this: don't be afraid to ask questions. And I always like to ask at the end, are you smarter tomorrow than yesterday? Because we evolve over time. When I was three years old, obviously, I was going to get smarter as I grew up; but at the smaller resolution, are we smarter tomorrow than yesterday? Think about that: how you can be more innovative, how to make your life easier, and how to make changes, pushing your situation and yourself to develop and achieve great outcomes.


Grant
That's a growth mindset. I love that, right? Always be learning; our learning isn't capped. And let's focus on the things we can take moving forward to improve our environment, our situation, the people we serve, and those around us. All right, Atalia, thank you so much for your time, and to your organization for allowing us to chat with you today. Everyone, thanks for listening to another episode of ClickAI Radio, and until next time, go check out DataRobot.


Thank you for joining Grant on ClickAI Radio. Don't forget to subscribe and leave feedback. And remember, to download your free ebook, visit ClickAIRadio.com.
