Test Case Scenario

Error Monitoring Across the SDLC with Mac Clark

Sauce Labs

Can your software handle the pressure when bugs slip through the cracks?

In this episode of Test Case Scenario, Jason Baum and Evelyn Coleman chat with Mac Clark, Senior Solutions Engineer at Sauce Labs, about the dynamic world of shift-left and shift-right testing. Mac shares how gaming and software industries leverage AI-driven testing, real-time error monitoring, and feature flags to catch issues before they snowball into costly problems in production.

You’ll also learn the hidden risks of neglecting continuous testing and how to protect your brand’s reputation by balancing proactive and reactive testing strategies.

Join us as we discuss: 

(00:00) Introduction

(01:12) Shift-right testing vs. shift-left testing

(03:32) The role of error reporting in continuous quality

(06:50) Balancing shift-left and shift-right without cutting corners

(10:06) Using AI-driven testing to reduce developer crunch

(13:46) How Backtrace and real-time error monitoring can save production

(17:48) Crash and error reporting in gaming

(20:41) Avoiding alert fatigue and prioritizing critical errors

(24:01) Scaling error monitoring for large-scale software releases

(27:56) Lightning round


We’d love to hear from you! Share your thoughts in the comments below or at community-hub@saucelabs.com.

SUBSCRIBE and visit us at https://saucelabs.com/community to dig into the power of testing in software development.

Sign up for a free account and start running tests today at https://saucelabs.com/. 

▶ Sauce YouTube channel: /saucelabs

💡 LinkedIn: /sauce-labs

🐦 X: /saucelabs





Jason Baum [00:00:00]:

This is Test Case Scenario with me, your host, Jason Baum. This podcast is the definitive hub for knowledge and stories in the software testing and development communities. If you're new to the channel, hit the subscribe button and let's dive straight into the episode. Hey, everybody. Welcome back to another episode of Test Case Scenario. I'm one of your hosts, Jason Baum. Joining me, as always, is Evelyn Coleman. Evelyn?


Evelyn Coleman [00:00:37]:

Hi, everyone.


Jason Baum [00:00:38]:

It's always great to have you back on the podcast, Evelyn. And we have a special guest today. Joining us today is Mac Clark, a solutions engineer with Sauce Labs. Mac, why don't you say a few words about yourself?


Mac Clark [00:00:51]:

Sure. Thanks, Jason. My name is Mac Clark. I have a game called Fishing on the Fly on Steam. Been here with Sauce Labs for three years. Came on board with the Backtrace side, really, to talk about how to make games better. We're going to probably hint at a little bit of the shift-right technology behind that for the production side, having experienced some of those issues inside of my own game and what I could do to make it better as well.


Mac Clark [00:01:12]:

Before that, I actually was an early person doing full stack development in the fintech space. I'm actually one of the originals in that place as well. Thanks, Jason. I appreciate you having me on board today.


Jason Baum [00:01:24]:

Yeah, absolutely. Really happy that you're here, especially to talk on this topic, which you were hinting at, which is shift-right, sort of like error reporting and monitoring. And we talk a lot about shift-left. I think that's like the buzzword these days, and has been ever since DevOps started to come about. But now I feel like we're hearing more and more about shift-right as a solution, testing in production and all that. And that was like a dirty word for a while, I think, testing in production. Now you're starting to hear more companies doing it, and that's their adopted style. One pretty big company that I won't name, you know, I saw a lot of articles about them shifting to that. So why don't we just start right there? What exactly is shift-right testing, and how is it different from the traditional shift-left testing practice?


Mac Clark [00:02:24]:

Yeah, sure. And I think it's even important to split the difference between those two, right?


Jason Baum [00:02:28]:

Yeah.


Mac Clark [00:02:29]:

If you're testing in production, that can mean using synthetic data moving through your system, and that could be like negative functional tests where you're logging into a system and making sure it doesn't log the inappropriate person in. And then we also have error and crash reporting, which is basically a solution you can almost think of like an insurance policy. So if you have something catastrophic happen in production, you have coverage for those events as well. Traditionally, when you look at the gaming space, games were allowed to get by with a lot of errors and a lot of bugs. And we've talked about it before, but some of these titles have really taken big hits when they've shipped like that. So they need this policy when they can't put enough development time up front, to make sure that once they start to experience something in production, they can get coverage on it, get visibility into it, and solve it as quickly as possible. I have many thoughts on this as well, but the one piece that delineates it from shift-left is, right, shift-left is before you've released.


Mac Clark [00:03:32]:

Then you're thinking about as many things as you can do ahead of time, like black box, functional, all those kinds of testing. And then you have regression once something's released. And then the post-production part, it's a real interesting space and I think it's really important. I continue to bring up CrowdStrike when I talk to people, when I think of arming other QA folks with arguments, not money, but thoughts, about how to keep themselves in a job. Right. Because the first thing that gets cut is QA sometimes, as a cost. What was the cost of the CrowdStrike incident? It was $10 billion for one bug. Right.


Mac Clark [00:04:08]:

That's your high watermark sitting up there and saying, okay, well, where does your organization fall inside of that? And then you'd want to go back to your management and justify that. What was the damage to your brand? So what's the damage to your app? If your app slips from a four or five star rating to three stars, you go from 90% adoption to 50% adoption, and twice as much money on marketing. So what I'm trying to get at here is there's a real return on investment and real justification. It's a win-win for quality software to be shipped at as high a quality as possible from the get-go all the way to the end. It's a win for the consumer, and it's a win for the brand and the business.


Jason Baum [00:04:46]:

Yeah, totally. And I have an article that was published on DevOps.com called "Shift-Left Is Dead," meaning shift-left testing. And I think there are a lot of things wrong with how shift-left in general, and shift-left testing in particular, has been adopted. I think in principle there's a lot that makes sense about it, but what companies are doing wrong is that they haven't brought the QA function in properly, early enough in the process, where they're active contributors to the creation of software; they're still seen as the testers. And when you have that separation, I think inherently shift-left isn't really happening, and with shift-right there are issues there, too. What I think is important, though, isn't shift-left or shift-right. It's kind of this continuous testing throughout the entire SDLC process and emphasizing continuous quality. That is the thing that people should be talking about more and more.


Jason Baum [00:05:55]:

And I think we are hearing that more and more. But, yeah, this is me, more editorializing.


Evelyn Coleman [00:06:01]:

It's okay. I think I agree with you, Jason, that, you know, we're supposed to test all through the life cycle. I think where this comes from is this idea that some folks are taking shift to mean cut, and then we're going right or we're going left, and so we're going to cut the opposite side. And I like to think of it more as sort of a bubble. Like if you've ever seen those models of the moon's effect on the tides, you realize that it's just a bubble of water flowing in one direction, but there's still water all the way around the earth, even when the tide shifts, right? So it's more of your bubble of emphasis for maybe just the time being. Maybe you're shifting that bubble because you realize you have a gap.


Evelyn Coleman [00:06:50]:

Maybe you're not doing any kind of monitoring in production, so you're going to temporarily shift to cover the gap. Maybe you're shifting holistically, and you're saying, our product, whatever type it is, it's going to be more conducive to do more on the right or more on the left. But I don't think it means that we're cutting or it shouldn't. I don't think we should interpret it that way because then we get into trouble.


Jason Baum [00:07:15]:

Yeah, 100% agree.


Mac Clark [00:07:16]:

I think that's why I'm trying to set this bar down and say, hey, quality has a number, it's an investment, and we need to arm people in our industry with that. So when we talk about this high watermark, $10 billion for one bug, and where your organization falls today, it's not meant to cut. It's meant to increase the visibility for your investment. I think of it like a flashlight. Okay, here's all my different types of testing. Even on the shift-left, you have functional or sanity or these types of things, and then you're moving to this continuous quality platform that's basically taking the flashlight and trying to get to as many places where you have darkness and lack visibility, to prevent these bugs from shipping in quality software, to prevent that experience that breaks the user. Fundamentally, the crashes and hangs are horrible. Any place that you can drive or squeeze that from, whether it's left or right or smack in the middle, all of that continuous quality is extremely important, to have that flashlight to shine in all of these areas so you get that coverage.


Evelyn Coleman [00:08:20]:

I have a question on that, and I love your analogy with the flashlight. Given the current economic state, given the resources, do you think from a gaming perspective you would use that flashlight to uncover those dark spots closer to shifting right? Is that kind of what you're positing, or do you think it's case by case?


Mac Clark [00:08:44]:

All of the industries kind of have a common theme, but I do think gaming is a more unique use case. And Jason, we've talked about this, the toxicity in gaming kind of goes hand in hand with it. Shifting left for the gaming industry means you want to prevent the developer crunch that happens when you do your alpha and beta releases. Because once you start throwing players at something, studios don't have big enough QA departments to QA it, so they use players to do that, and that causes a crunch in the developer timeframe. And for me, shifting left in the gaming industry is using AI agents to play your games and not the players, and then using a product like Backtrace, that's traditionally shift-right, but now you can use it to shift left because you have these AI agents and bots going out and playing these games. The bots used to be fight bots, they used to be fly bots.


Mac Clark [00:09:35]:

They used to be one particular type of thing in the game. Today, with the progress that we've seen with machine learning and AI, we are working with a couple of these companies, I'm not going to mention them, but the point here being they have everything-bots now. These bots will learn the entire game and play your entire game for you, and in essence shift that left to get rid of all of that developer crunch in alpha and beta. So it's kind of unique in those terms, Evelyn. But hopefully that answered the question. Or do you want me to keep going?


Evelyn Coleman [00:10:06]:

No, it did. But it also raised this incredible idea of me being able to hire a bot for when I get stuck in my games and I don't have anybody in the house to help. I would like these bots for hire as well as for shifting left.


Mac Clark [00:10:21]:

Well, it will. It will get you banned from most video games.


Jason Baum [00:10:24]:

Bot find me the warp zones in Super Mario Brothers one.


Mac Clark [00:10:29]:

So I play Apex Legends. I used to play competitively. I have about 150 wins. Now I'm just humble bragging, you know; that was when I had more time. But we'd have people come in that were playing with bots, and what you'll see is they'll walk down the screen and they'll just go like this, and they're shooting somebody over here, and in one frame they're dead.


Evelyn Coleman [00:10:48]:

Yeah. For people like me who play in story mode, if I get stuck, I'm really already on the floor. Like, I need a skip button or something. But, yeah, I'm so glad you mentioned that, because it's just making my day to think about these bots running around.


Mac Clark [00:11:05]:

But they do make controllers where the mods to do these things are built into the controllers. It happens that video games tend to use a bone system for their animations. So headshots are about knowing the bone system, and the majority of games named the bones according to those frameworks, so that's built into the controller. So if you were playing in story mode, you could use a modded controller to automatically get headshots in an FPS game, as an example of how that technology works.


Jason Baum [00:11:35]:

Cool. And that's Ruined Gaming, here with Mac Clark. Join us next time when we ruin your favorite cartoon from when you were a kid.


Mac Clark [00:11:46]:

It's gotten more advanced. I mean, today there are people that set up little servers to try to get around anti-cheat systems, and cheating in gaming is a huge deal. They even have ranked players that have gotten caught in tournaments doing it. It's pretty bad. I chose to have a nice game about fishing with no guns. Really, there's butterflies in my game. I enjoy those games to some extent, but I do think there's a level of toxicity in them, and I didn't want to engage in some of that. I certainly don't want to play with players that are cheating.


Mac Clark [00:12:16]:

It's not any fun for anybody.


Evelyn Coleman [00:12:18]:

Does the type of game inform the approach? I mean, you did say for certain types of games you can use bots and shift left. If that's not something that's possible, is that when you shift right?


Mac Clark [00:12:32]:

For gaming in particular, it's kind of a new concept to shift left. And I would say that it's more dealing with the engines themselves. Unity and Unreal are the two predominant engines. I know Godot fans might be mad about that when you say this, but they are the two predominant engines. And the technology around AI agents for both of them to play these games is really more dependent on that engine than on any particular title. Even at Sauce Labs it's the same way we look at it. We're looking at it at the platform level, at what we can do inside of these engines for testing. We have a couple of solutions in both of these places for automated testing for games.


Mac Clark [00:13:07]:

And that gets you more to that shift-left place. When it comes to shift-right for gaming, it has always relied on a player base to push the crashes and errors into production, and then you're reactive. I think if you were to go to a therapist or anybody, they would say being reactive to any situation is kind of a bad thing. You want to try to be proactive and, you know, take that into a different consideration. So if it was me, and I was a director of engineering or quality at a gaming studio, I would be utilizing the full disposal of these tools with the goal of shipping the highest quality software, so that I keep the players happy, so I don't have to spend this money in marketing, I don't have to do these things on that level.


Mac Clark [00:13:46]:

And really, that's about that idea of, again, continuous quality. What can I do to shift left, and how can I handle these things once something does escape? One really cool thing about Backtrace is we have automatic Slack and automatic integrations to notification systems. So you can send an email, you can send a page. So let's say you have a crash, and that's horrible, and suddenly you have 100,000 of them on your day of launch. You can literally have your entire engineering department aware of it the moment it happens, at the threshold it happens at, and be working to address the problem and get it fixed as fast as possible. So it's really about cutting down the mean time to detecting it, and then how long it takes to resolve that issue. We want to compress that as much as possible so that we don't lose players and don't lose trust and don't lose brand identity or brand value.
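
The episode doesn't walk through the integration setup itself, so treat the following as a hedged sketch of the pattern Mac describes, counting crash occurrences and firing a notification the moment a signature crosses a threshold, assuming a placeholder Slack webhook URL and an in-memory counter rather than Backtrace's actual configuration:

```python
import json
import urllib.request
from collections import Counter

# Hypothetical threshold: notify the team once a crash signature
# crosses this many occurrences (e.g., on launch day).
ALERT_THRESHOLD = 100_000
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/EXAMPLE"  # placeholder

crash_counts: Counter[str] = Counter()
alerted: set[str] = set()

def record_crash(fingerprint: str) -> None:
    """Count a crash and notify the team the moment it crosses the threshold."""
    crash_counts[fingerprint] += 1
    if crash_counts[fingerprint] >= ALERT_THRESHOLD and fingerprint not in alerted:
        alerted.add(fingerprint)  # alert once per signature, not 100,000 times
        payload = {"text": f"Crash {fingerprint} hit {crash_counts[fingerprint]:,} occurrences"}
        req = urllib.request.Request(
            SLACK_WEBHOOK_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # fire the Slack notification
```

The alert-once set is what keeps this from becoming the alert fatigue discussed later in the episode: the team learns a threshold was crossed without being paged per event.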


Mac Clark [00:14:32]:

I guess more.


Jason Baum [00:14:33]:

If there's one thing that I have learned from Marcus Merrill, Titus Forkner, Diego Molina, Nikola Avaloka and Mac Clark, it is that bugs are going to get out there, right? No matter how much you test, bugs are going to make it out into production. And sometimes what's even more important than testing is how fast you can roll back, right? And how fast you can catch that error and correct it. So tell us a little bit more about error monitoring, crash and error reporting, and its place with shift-right.


Mac Clark [00:15:06]:

You mentioned something really interesting which I think a lot of people are adopting lately, which is feature flags: the ability, in any software, not just games, to basically say, whoops, you know, I rolled out this piece, and then I'm just going to flip a flag and we can update these builds. So that's really, really interesting.


Jason Baum [00:15:24]:

And with feature flags, it's also how you're naming them, right? The actual labels that you're giving, make sure that they're very specific.


Mac Clark [00:15:31]:

Yeah. Like, I've built this new level and it has a new car that has wheels, and I've just released it. But you know what, it turns out the wheels are sinking into the sand, which happened in a real, very popular title and prevented people from playing. If you simply had a feature flag, you could remove that tire that you just rolled out. Unfortunately, it can be these blend animations: you miss one bone or one particular keyframe and you have these unintended consequences throughout the game, ruining player experiences. You want to be able to label those very specifically and say, kill this particular feature flag. It can't just be rev version 101.6. That doesn't make sense to anybody.
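
Neither speaker names a specific flag system, so here's a minimal, self-contained sketch of the kill-switch pattern Mac describes, with illustrative flag names (a real setup would read flags from a remote service so flipping one doesn't require shipping a new build):

```python
# A minimal feature-flag kill switch, assuming a simple dict-backed store.
# The flag names below are illustrative, not from the episode.
FLAGS = {
    "desert_level_new_car_wheels": True,  # descriptive: you know what killing it does
    "rev_101_6": True,                    # opaque: "doesn't make sense to anybody"
}

def is_enabled(name: str) -> bool:
    """Look up a flag, defaulting to off so unknown flags fail safe."""
    return FLAGS.get(name, False)

def render_car() -> str:
    # If the new wheels sink into the sand in production, flip the flag
    # remotely instead of shipping an emergency build.
    if is_enabled("desert_level_new_car_wheels"):
        return "car with new wheels"
    return "car with legacy wheels"

FLAGS["desert_level_new_car_wheels"] = False  # the "kill this feature" moment
assert render_car() == "car with legacy wheels"
```

The descriptive flag name is the whole point of Mac's naming advice: whoever is on call at 2 a.m. can tell exactly what turning it off will do.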


Mac Clark [00:16:09]:

But as far as what production and error monitoring look like, I think of it as like a sick patient, right? We literally call a section of our solution triage. So it comes through the door, you know. And for me, when I look at this, I wouldn't necessarily triage the thing that's interesting to look at, or just the highest number, right? Say something happens 2 million times in one day, which happens; it could be an ad not being served, right? But the 100,000 crashes that your game's experiencing, absolutely destroying your player experience, that's the one that I would filter to. So the things that I'm going to want insight into immediately are the ones with the highest user impact, in any app, anything you ship. Today we spend more time on our devices, so this digital confidence, and we've talked about it in years past, is key, because these are our experiences. This is an insurance policy, basically, to get coverage into those areas that you otherwise wouldn't and to fix those problems as quickly as possible.


Mac Clark [00:17:10]:

And to me, crashes, hangs, and errors, those are the most important ones, the ones that drastically break the experience. From there, the other key piece of what you want to do is make the remediation as quick as possible. So we have this debug tab. You click the debug part of our tab and it launches into what a developer needs right at their fingertips, all to solve that issue as quickly as possible. The stack traces, the dump files, even potentially source code, breadcrumbs, attributes, every single thing that you could dream of that you want, the log files attached, screenshots, all in one place for you to quickly just go: yep, yep, yep, yep, I need to prioritize, get up, get on it.
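
The field names below are illustrative, not Backtrace's actual report schema; purely as a sketch of what "everything in one place" might bundle for a single crash:

```python
from dataclasses import dataclass, field

# Hedged sketch of a single crash report's contents -- hypothetical fields.
@dataclass
class CrashReport:
    stack_trace: str                                          # where it died
    dump_file: bytes                                          # memory state at the crash
    breadcrumbs: list[str] = field(default_factory=list)      # steps leading up to it
    attributes: dict[str, str] = field(default_factory=dict)  # device, build, level...
    log_tail: str = ""                                        # last lines of the log file
    screenshot: bytes | None = None                           # what the player saw

report = CrashReport(
    stack_trace="NullReferenceException at Car.SpawnWheels",
    dump_file=b"...",
    breadcrumbs=["loaded desert level", "spawned car", "crash"],
    attributes={"platform": "pc", "build": "1.0.6"},
)
```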


Mac Clark [00:17:48]:

Here's my team. Let's go. I think that summarizes it a bit. Was there anything else you were interested in as a piece of that? Because I could drill down into all kinds of different things.


Jason Baum [00:17:59]:

No, no. That's really what I wanted to cover. And then I guess a follow-up question, since we always talk about why something is so great: what are the common challenges teams face when implementing error reporting in production? And then how would you overcome those?


Mac Clark [00:18:15]:

It's interesting because it is a competitive space, right? There are other solutions out there. And this isn't just a why, but I'll give you a common challenge. There was a video game, it was actually an AR app on a phone, and it was in downtown Tokyo. You could hold up your phone and you could see a Japanese anime character on a high-rise building doing things. And it was amazing. People loved it, but it was driving 200 to 300 million errors a month into the system, because the ads, like I mentioned, the ads weren't being served.


Mac Clark [00:18:52]:

It was literally that popular. And that's key: if you're going to launch something, the last thing that you want is to not be able to get visibility into it. And that deals with scalability. Games happen on a scale that is almost unimaginable from the time that it takes to create them. Like Assassin's Creed Black Flag, I think, had 10,000 people working on it for four or five years; 50,000 man-years go into a game. You're talking about one person trying to make a game as a solo dev, and then you think about another game that has 50,000 man-years in it, right? And then you can think about the scale of the player base, games with 100 million players. The primary concept and value needs to be the ability to scale to that use case.


Mac Clark [00:19:35]:

And that's one of the reasons why Backtrace doesn't try to be an analytics solution. It doesn't necessarily look pretty on the front end, because it is fundamentally built to scale unlike any of our competitors. And that really is one of the delineations of what I would look for in a solution. And then after that, that ability to sort, to give me this triage: hey, what are the important problems? And to present that in a way people can understand. We have dropdowns, we have a query window, we have a little quick link that you can send out to basically give people the option to get visibility into those things, regardless of their technical capabilities. And then we offer a SQL query window where you can create really complicated queries to test out ideas and edge cases.
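
The exact schema behind that query window isn't covered in the episode; purely as an illustration of the kind of triage query Mac means, here's a sketch against a made-up errors table, using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE errors (
        fingerprint TEXT, kind TEXT, platform TEXT, occurred_at TEXT
    )
""")
# A few hypothetical events standing in for millions of real ones.
conn.executemany(
    "INSERT INTO errors VALUES (?, ?, ?, ?)",
    [
        ("abc123", "crash", "ps5", "2024-06-01"),
        ("abc123", "crash", "pc", "2024-06-01"),
        ("def456", "hang", "pc", "2024-06-01"),
        ("ghi789", "error", "pc", "2024-06-01"),
    ],
)

# Triage view: crashes and hangs only, grouped and sorted by impact.
rows = conn.execute("""
    SELECT fingerprint, kind, COUNT(*) AS occurrences
    FROM errors
    WHERE kind IN ('crash', 'hang')
    GROUP BY fingerprint, kind
    ORDER BY occurrences DESC
""").fetchall()
for row in rows:
    print(row)
```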


Jason Baum [00:20:17]:

Yeah, and I mean, you kind of touched on this, but alert fatigue, right? That's kind of what you were talking about. Alert fatigue can cause problems unto itself, right? You ignore other issues because you're getting so many alerts, and sometimes those larger errors might be slipping through the cracks. And that's one that you've got to look out for in testing just in general.


Mac Clark [00:20:40]:

It's an excellent point, which is noise. Sauce Labs wants to give you signals that you can act on. And the second use case would be what we call deduplication, which is the ability to take an error that has happened 100 million times, raise it above the others, and show you where it is at that hundred-million mark. We don't take 100 million errors and show them all to you; we deduplicate them to show the error to you once and say it's happened 100 million times, here's where it sits in the stack. We do sort by default by the number of occurrences.
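
Backtrace's actual grouping logic isn't described beyond this, so the following is only a toy sketch of the deduplication idea: fingerprint identical stack traces into one signature, then sort the groups by occurrence count so the dashboard shows one row per error instead of 100 million:

```python
import hashlib
from collections import Counter

def fingerprint(stack_trace: str) -> str:
    """Collapse identical stack traces into one signature (simplified)."""
    return hashlib.sha256(stack_trace.encode()).hexdigest()[:12]

# A stream of raw error events; identical crashes arrive as separate events.
incoming = [
    "NullReferenceException at Car.SpawnWheels",
    "NullReferenceException at Car.SpawnWheels",
    "AdServeError at Ads.Fetch",
    "NullReferenceException at Car.SpawnWheels",
]

groups: Counter[str] = Counter(fingerprint(e) for e in incoming)

# Sort by occurrences, highest first -- the default triage order Mac describes.
for sig, count in groups.most_common():
    print(f"{sig}  x{count}")
```

A production grouper would normalize traces (strip addresses, line numbers, inlined frames) before hashing so near-identical crashes land in the same bucket; the hash here is just the simplest stand-in.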


Mac Clark [00:21:13]:

So that would be the highest occurrence; it would show up first at 100 million. But again, I would take the dropdown and go to crashes and hangs and take a real good look at those as well, just in terms of prioritizing what I would be trying to solve. Let's say something is having problematic issues in production, which is very, very common for gaming. Games don't have the same bar, say, as banks from a regulatory perspective, and so I think gaming tends to see a larger volume of errors.


Jason Baum [00:21:43]:

We have a lot of VPs of Engineering, Directors of Engineering, and QA leaders who listen to the podcast. What would you say to them?


Mac Clark [00:21:56]:

Budget, speed.


Jason Baum [00:21:58]:

These are all things that they have to think about in addition to quality. And I think people feel like, at least from my understanding, and this is going kind of back to what I said in the beginning, it's shift-left or shift-right and never the two shall meet. Or should they meet? And, you know, what would be your kind of argument to that individual? Why should I incorporate error monitoring into my continuous quality strategy? Why is it so important, and should I?


Mac Clark [00:22:38]:

I'm not sure I would stick on the right or the left. I think Evelyn hit on it excellently: when you bubble up one or the other, it's problematic. You're just swaying with the winds of what's popular right now in the QA world. I think it's more about developing a sound strategy for your particular use case and from what you're seeing. And you want to basically have that level of coverage and confidence across continuous quality. I don't really want to advocate for one above the other. I do think they all have their purposes.


Mac Clark [00:23:14]:

If I had to say something to an engineering manager, it would be: create an effective strategy with what you have. Let's say you have budget cuts. What are we seeing? You're seeing a higher degree of automated testing on the left. Okay, great. That doesn't mean cut in particular places. It means increasing coverage with the technology that you have today. Because if you're looking at CrowdStrike, and I'm just going to hammer on it as this $10 billion cost of one bug, I know your organization has a cost of bugs too. Do you know what that cost of quality is? And are you paying for it up front, or are you going to be behind on it when it damages your brand reputation? So that's the justification that I would go to a manager with, like, hey, I know we're going to continue to make technology improvements.


Mac Clark [00:24:01]:

Here's what I suggest: automated testing on the left side, and I suggest we get some coverage over here on the right side; almost think of it like an insurance policy. Here's what we can do in a worst-case scenario, here's what we can do in a best-case scenario, here's how we move all the way through. And if it was me, I'd want to present this in the smartest way possible to have that continuous coverage with what I've been given as my resourcing.


Jason Baum [00:24:25]:

It's an awesome answer.


Evelyn Coleman [00:24:26]:

That was a great answer. When we do talk about that resourcing, Mac, how would a leader in the QA space take into consideration sort of the individual skills of the developers and the QA folks at their disposal when making these types of decisions? Is that a variable in what you just said?


Mac Clark [00:24:49]:

Yeah, I think it is. Luckily, and it's great, I have exposure to, let's call them, trillion-dollar companies, for lack of a better word, right? And I walk into the room, and let's say one has a diagram up in Seattle where guns are pointed at each other as their organizational philosophy, and the other one has 30% continued chopping as their annual plan. So each organization has a special use case, and each organization also treats their cost centers differently. So some organizations will say, we need to grow at all costs, and when they have bad, lean years, they'll cut cost centers. When you cut cost centers, and QA is traditionally looked at as a cost center, where do you go next? And I think that's why it's a variable, because organizations have their own needs, and so you basically cut off your nose to spite your face in some ways. And so what does that mean? That means that from an economies-of-scale perspective, we're here to help. When you decide to hurt yourself and cut massive manual QA processes or whatever, Sauce Labs can help stand in with the automated testing that I mentioned before, or work with you and bring these solutions forward, or other people can as well. But I'm seeing a lot of really large organizations look at QA as a cost center and cut manual testing out.


Jason Baum [00:26:13]:

Cool.


Mac Clark [00:26:13]:

Let's replace it with automated testing. Be smart, but come up with sound strategies across the board. You can look at bugs as risk, so let's mitigate risks. How do you do that smartly? It's about not taking this bubble perspective: shining a flashlight, getting coverage in all the different corners, and then, in the worst case, when it does happen, having something in production to protect yourself.


Jason Baum [00:26:34]:

All right, we're going to do a quick lightning round. Keep answers super short. All right. What's the weirdest bug you've ever encountered in production? And CrowdStrike doesn't count.


Mac Clark [00:26:46]:

No, no, I know. Evelyn, you're first.


Evelyn Coleman [00:26:50]:

I play a very popular video game about space travel and the other day I was stuck outside of my spaceship.


Jason Baum [00:27:00]:

All right, all right, Mac, go.


Mac Clark [00:27:02]:

This is going to date me. But MySpace had a bug that allowed people to add everybody else on MySpace, because they had masked one language with another language, and people figured out how to concatenate strings. That speaks to some of the programmers out there. And that's what brought down the whole platform. That's what destroyed MySpace.


Jason Baum [00:27:26]:

One of my favorite bugs is in a video game where a guy went to catch a touchdown, and I'm pretty sure it was intercepted by the other team. But they were in the end zone and, like, ran it back, but to the same end zone. It's like this constant loop where no one was doing anything; they were kind of just floating in space. That's always fun. That kind of stuff happens, right? And then they roll it back. Now it's easy to catch, to flip the switch.


Jason Baum [00:27:54]:

Right. Which is great. That's good. That's good.


Mac Clark [00:27:56]:

Yeah.


Jason Baum [00:27:57]:

Light mode or dark mode?


Evelyn Coleman [00:27:58]:

Light mode all the way. I'm not on dark mode ever.


Jason Baum [00:28:02]:

All right. Light mode or dark mode?


Mac Clark [00:28:03]:

Both.


Jason Baum [00:28:05]:

Favorite snack while you're debugging? Pizza?


Mac Clark [00:28:09]:

Cause I love pizza.


Jason Baum [00:28:10]:

Pizza.


Mac Clark [00:28:10]:

Pizza's a snack.


Jason Baum [00:28:11]:

Is that a snack? Sure, I guess. Pizza could be breakfast, lunch, dinner, snack.


Mac Clark [00:28:15]:

Europeans hate Americans 'cause we actually eat at our desk at lunch. They're like, what are you doing? But no, I have leftover pizza sometimes.


Jason Baum [00:28:25]:

There you go. What about you, Evelyn?


Evelyn Coleman [00:28:26]:

I don't debug, but I'm a peanut M&M's girly when it comes to focus time.


Jason Baum [00:28:31]:

Yeah. There you go. Thank you so much, Mac, for being on the show. Really appreciate the time and knowledge, as always. And you are always welcome to come back on this show. I feel like you are another consistent panelist, a recurring panelist, I would say, that we have. So I really appreciate you coming on.


Mac Clark [00:28:50]:

Yeah. Thanks for giving me the time to talk about some of these things. It's kind of fun in one way.


Jason Baum [00:28:55]:

Yeah, absolutely. And thank you, Evelyn, as always. And thank you, our listeners, for giving us what's most precious in this world: time. And for joining us on another episode of Test Case Scenario. We'll see you next time. Thank you for joining us on Test Case Scenario. Share your thoughts in the comments. We'll make sure to respond to each and every single one. Don't forget to subscribe and hit that notification bell to keep in touch.


Jason Baum [00:29:30]:

If you missed our last episode, it's popping up on your screen right now. So click it until next time on Test Case Scenario.


