“Good AI is invisible.” – AI in B2B: a Tech Thursday recap
All photography by Paola Contreras, Studio Team at Neo Financial
On June 15, Tech Thursday brought together a panel of people from companies that build AI-powered products for their B2B clients. The speakers were Chloe Smith, co-founder and CEO at Mercator.AI; Heather Chapple, managing director at AltaML; and Ryan Kazmerik, director of data science at StellarAlgo. The moderator was Drew Gillson, global AI portfolio lead at Google.
I’m Phillipe Burns, the talent marketing lead at Neo Financial, and the lead organizer of Tech Thursday events in Calgary and Winnipeg. The following is my recap of the event’s top ideas and insights, which were condensed and lightly edited for clarity.
Mercator.AI has built a tool that tracks the full lifecycle of a plot of land, so general contractors can keep an eye on potential projects and bid on them.
AltaML builds custom AI solutions for their clients through their Applied Labs.
StellarAlgo uses crowd data from sports teams to build advanced customer analytics those teams can leverage.
Drew: Ryan, as an instructor at Mount Royal University, what are you telling your students now about AI?
Ryan: With students using Copilot, I encourage them all to ask ChatGPT to explain what the code is doing. Today, we’ve scaled down the reading and writing aspects of the VARK model (Visual, Aural, Reading/Writing, Kinesthetic) of learning. Especially in computer science, we don’t do much reading and writing. Students should make sure they understand what the code is actually doing.
Drew: Chloe, how do you communicate how you use AI to your clients?
Chloe: People get scared when you say AI sometimes. I try to make sure they know it can be used in a way that:
Helps and assists them
Augments their work
Can help shoulder the workload as our population ages and fewer people are able to work
Drew: Heather, can you share any use-case success stories from AltaML?
Heather: We recently worked with a large energy company where 15 pounds of fireproofing material fell from one of their construction sites. They engaged our Applied Labs, where we used computer vision to scan for fireproofing. We created a solution that can assess problem areas on construction sites, sometimes better than a human can – though it’s necessary that humans stay closely involved in the process. The tool can be better for safety and better for cost, but without the user telling the computer how it should walk through a space, it just won’t help. Without humans, tools are just tools.
Drew: What are the words you use to talk about the problems you’re solving?
Ryan: I explain the difference between AI and AGI. AI is teaching human-like tasks to a computer; it won’t infer your emotions from the information it’s telling you, and it doesn’t have a conscience. We’re still really far away from AGI. I then unpack it from there. So for sports teams I ask, “What’s the worst part of your job?” The answer is often calling all the season ticket holders. OK, well, AI can project who you should actually be calling and make that part of your job much easier.
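As a rough illustration of that last point (this is not StellarAlgo’s actual model), a call list can be built by scoring each season ticket holder with a predicted renewal risk and ranking them. The scoring rule below is a toy heuristic standing in for a trained classifier, and the names and weights are entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SeasonTicketHolder:
    name: str
    games_missed: int      # games missed this season
    years_as_member: int   # tenure with the team

def renewal_risk(h: SeasonTicketHolder) -> float:
    """Toy stand-in for a trained model's predicted churn probability."""
    score = 0.3 + 0.12 * h.games_missed - 0.03 * h.years_as_member
    return max(0.0, min(1.0, score))

def call_list(holders: list[SeasonTicketHolder], top_n: int = 5) -> list[SeasonTicketHolder]:
    """Rank holders by risk so reps call the people most likely to lapse first."""
    return sorted(holders, key=renewal_risk, reverse=True)[:top_n]

holders = [
    SeasonTicketHolder("A. Singh", games_missed=9, years_as_member=1),
    SeasonTicketHolder("B. Tremblay", games_missed=1, years_as_member=12),
    SeasonTicketHolder("C. Okafor", games_missed=5, years_as_member=3),
]
for h in call_list(holders, top_n=2):
    print(f"Call {h.name} (risk {renewal_risk(h):.2f})")
```

The point isn’t the math; it’s that the model shortens the “call everyone” chore to “call the handful who matter this week.”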
Drew: What are we going to do with these new advanced capabilities?
Chloe: We have some people right now working on projects trying to figure out how to leverage GPT and come back with, “Here’s a use case that we found, here’s how we think it can help a project.” Then we identify if GPT is the best path forward or if we should go another way.
When identifying use cases, it’s important to have a sandbox that experts can play around in. It can help your team think differently about solutions. Find individuals who will push use cases to limits with a business case in mind to find viable solutions.
Heather: Time-to-value is an important thing to keep in mind as well. It’s critical to think about whether we should be using AI at all. The volume right now, with clients wanting an AI solution, is turned up to maximum. It’s kind of like Oprah’s trademark: “You get an AI! You get an AI!” Getting a deep understanding of the data that is available, and whether AI would really make sense in a particular use case, is really important.
Drew: Setting aside the obstacle of getting a clean data set, how would you build an AI?
Ryan: Data science is becoming a microservices architecture – meaning that if data is clean, you can build AIs that build AIs. If data is pristine, you can build microservices. If you have fans with ticketing data but not age, gender, or location, I’ll build a microservice to estimate that information, which can make a predictive AI better.
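A minimal sketch of that enrichment idea, assuming a hypothetical `estimate_demographics` step in place of a real trained model; it only shows the shape of a small service that fills in missing attributes before a downstream predictive model runs:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class FanRecord:
    fan_id: str
    games_attended: int
    avg_ticket_price: float
    age_bracket: Optional[str] = None  # often missing in raw ticketing data

def estimate_demographics(record: FanRecord) -> FanRecord:
    """Hypothetical enrichment step: fill a missing attribute from ticketing signals.
    A real service would call a trained model; this placeholder uses a crude rule."""
    if record.age_bracket is None:
        record.age_bracket = "35-54" if record.avg_ticket_price > 80 else "18-34"
    return record

def enrich(records: list[FanRecord]) -> list[dict]:
    """The 'microservice' boundary: accept raw records, return enriched ones."""
    return [asdict(estimate_demographics(r)) for r in records]

print(enrich([FanRecord("f123", games_attended=14, avg_ticket_price=95.0)]))
```

Keeping the enrichment behind its own boundary is what makes it reusable: any predictive model downstream can call it without caring how the estimates are produced.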
Chloe: Building rapid intelligence tools. If we have perfect data, then instead of building a pre-set onboarding flow for everyone, you can create a personalized experience as your clients go through the flow. You can marry a personalized experience with questions they might ask at each point. You can really build a chameleon product with a customized experience, customized code, and customized outputs. Coming from a PR background, we used to say good PR is invisible. I think a good AI is invisible in the same way.
Drew: How did AltaML influence customers to augment their work?
Heather: LLMs (large language models) are really influencing enterprise data sets. What I hope is that LLMs help you turn off your computer at 4:00 pm instead of being stuck in an ERP (enterprise resource planning) system. Enabled tools = humanity.
Drew: What are some of the obstacles that are in the way of your goals?
Ryan: I think we miss out when AI research areas are misaligned with business cases. Magic happens when research and business opportunities align. If we can pair that research with sound tech, we can unlock some big opportunities.
Heather: Another recent use case I’ll touch on: we recently built a wildfire monitoring tool. It can identify tinder and conditions to help the Alberta government be more predictive about fires. Here, change management is incredibly important; we have to work closely with firefighters to learn what they do, with constant user iteration and feedback. Having a constant feedback loop is key. With the scale of the wildfire problem being so big, the AI will constantly have to learn.
Drew: That’s a great example of a use case where people want it to work, what about when some don’t want it to work?
Chloe: Investors look at team, product thesis, and timing. We couldn’t have done this two years ago. Socratic data has enabled us to do what we’re doing, and we’re on the verge of a generational shift.
Drew: What are some of the ethical considerations of AI-assisted workflows?
Ryan: In sports, we don’t have as bad a bias problem as some other use cases, because sports is an incredibly diverse community. So we have fewer problems than a computer vision tool might have, but PII (personally identifiable information) is certainly a big consideration.
At StellarAlgo, we have a rule of zero PII in all of our pipelines. Why? Because PII makes for bad features. If I’m predicting behavior, I don’t actually care about your last name or your postal code. Something like your distance to the venue, based on radius, is just as predictive as a postal code. I will say that in the age of AI, everyone should be prepared to be a security expert.
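As a small illustration of swapping a PII field for a derived feature (this is not StellarAlgo’s pipeline; the venue coordinates below are just an example), a stored postal code can be replaced with a great-circle distance to the venue computed once and kept as a plain number:

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example venue location (Calgary, for illustration only)
VENUE_LAT, VENUE_LON = 51.0375, -114.0519

def distance_feature(fan_lat: float, fan_lon: float) -> float:
    """Derive a non-PII feature (distance to venue) instead of storing a postal code."""
    return round(haversine_km(fan_lat, fan_lon, VENUE_LAT, VENUE_LON), 1)

print(distance_feature(51.1215, -114.0076))  # e.g. a fan in north Calgary
```

The derived distance carries the predictive signal while the identifying field never enters the pipeline.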
Heather: AltaML is a member of the Responsible AI Institute. That means we take on projects that benefit people and/or the climate, and not ones that don’t. You should have guardrails when you’re working with AI, and always check the work to make sure it’s ethical and responsible. With great power comes great responsibility.
Drew: How will our teams work together in the future?
Chloe: There’s no world anymore where customer experience, marketing, or any other part of an organization doesn’t know how our data works. They’re going to have to talk with our customers and make decisions based on our data. We have to embed AI in our day-to-day; it will elevate us all.
Drew: What is the one lesson you have for everyone here?
Heather: Educate your staff
Ryan: Educate your staff
Chloe: Educate your staff