Test Case Scenario
Join us every other week on "Test Case Scenario," presented by Sauce Labs, where our expert panel dives into the exciting and ever-changing landscape of technology, pop culture, and business. Host Jason Baum, Director of Community at Sauce Labs, will lead the discussion with our esteemed recurring panelists: Marcus Merrell, VP of Technology Strategy; Nikolay Advolodkin, Senior Developer Advocate; and Evelyn Coleman, Manager of Implementation Engineering. Get ready to uncover the impact of continuous testing in this thrilling exploration of the tech world!
AI Promised to Boost Productivity—Did It Deliver?
Are AI tools really helping developers, or are they creating more problems than they solve?
In this episode of Test Case Scenario, Jason Baum, Marcus Merrell, and Evelyn Coleman are joined by Titus Fortner, Senior Solutions Architect at Sauce Labs, to unpack the surprising findings from the latest DORA report. Together, they dive into the unexpected decline in productivity following AI adoption and discuss the challenges developers face in balancing automation, innovation, and collaboration.
You’ll also hear their thoughts on the future of software development, the risks of over-relying on AI, and how teams can rethink their strategies to fully harness the potential of these powerful tools.
Join us as we discuss:
(00:00) AI Integration Hampers Productivity: DORA 2024
(03:49) Rapid Evaluation of Tech Tool Impact
(06:41) AI Development: Prioritizing Maintenance Over Writing
(09:26) AI Enhancing Developer Productivity
(17:17) AI as Junior Developer Assistant
(22:24) Embrace Testing in AI Era
(25:02) Rethinking AI's Impact on UI
(26:21) Emily Freeman's AI Warships Talk
We’d love to hear from you! Share your thoughts in the comments below or at community-hub@saucelabs.com.
SUBSCRIBE and visit us at https://saucelabs.com/community to dig into the power of testing in software development.
Sign up for a free account and start running tests today at https://saucelabs.com/.
▶ Sauce YouTube channel: / saucelabs
💡 LinkedIn: / sauce-labs
🐦 X: / saucelabs
Jason Baum [00:00:00]:
This is Test Case Scenario with me, your host, Jason Baum. This podcast is the definitive hub for knowledge and stories in the software testing and development communities. If you're new to the channel, hit the subscribe button and let's dive straight into the episode. Hey, everybody. Welcome back to another episode of Test Case Scenario. I'm your host, Jason Baum. And with me, as always, my co-hosts, Evelyn Coleman and Marcus Merrell. And joining us today, Titus Fortner.
Jason Baum [00:00:41]:
Thanks, everybody. Welcome back, Titus.
Titus Fortner [00:00:44]:
Thank you.
Jason Baum [00:00:45]:
Good to have you back on the show.
Titus Fortner [00:00:47]:
It's fun to be here.
Jason Baum [00:00:48]:
Yeah. Marcus is in the office. Titus, it looks like you're somewhere with a chalkboard or a whiteboard. I mean, I don't think two doors down. Oh, in the office also. Wow.
Marcus Merrell [00:00:58]:
Same building.
Jason Baum [00:00:59]:
Same building. Look at that. They're not sitting together, though. I promise. And, and Evelyn, you're still in your gorgeous house with that beautiful grand piano.
Evelyn Coleman [00:01:09]:
You know me.
Jason Baum [00:01:11]:
Yeah, I think I was on a call with someone sitting in the exact same position in your house. It's weird.
Evelyn Coleman [00:01:16]:
Many people have this exact same house with these furniture.
Jason Baum [00:01:20]:
Yes, it is beautiful, though. Well, thank you for joining us again. And we got a fun episode today. There's an article that came out recently called "Developer Productivity in 2025: More AI, but Mixed Results." And who, who's the author of that one?
Marcus Merrell [00:01:40]:
Jennifer Riggins.
Jason Baum [00:01:41]:
It's real interesting. I suggest that you go read the article and then read the supplemental links. But to summarize, basically what it said was that in 2024, Gen AI became prevalent in software development and leadership was. I think all of us were assuming that that would lead to significant productivity enhancements. Right. That's what we kept hearing. AI's here. And it's like a tool to assist.
Jason Baum [00:02:11]:
It's going to make everything better. It's going to make the mundane more optimized and things are going to get faster. It's going to make all that just, just in general, your day, more productive. And then recently the 2024 DORA report came out. And in the report, it indicated that speed and stability have declined due to AI integration. So not only have we not been more productive, it went backwards. We were actually less productive. So I'm just going to open up the floor to whoever wants to go first and just ask the question, did you see this coming? Is this shocking to you? I would be surprised in 2024, maybe earlier in the year, but as time went on, I think I'm less surprised today.
Jason Baum [00:03:02]:
What about y'all?
Titus Fortner [00:03:03]:
I'm still overall surprised by it. That we're seeing negative productivity. Like they're. Some of the trends that they're talking about in there make complete sense to me. But like I use, you know, AI for open source development and examples that I'm writing all the time and it absolutely makes it faster for me to do specific things. But maybe my use case is also a little special because I know how to write in Java but not in JavaScript, and Claude or ChatGPT makes it really easy to like, hey, this is the Java thing, what's the JavaScript thing? And then, and then dive into that. But you know, if you're building a web app, maybe that's not what you need necessarily. Maybe that's not like I'm getting value out of it.
Titus Fortner [00:03:45]:
That's not the, the standard use case.
Jason Baum [00:03:47]:
Marcus, what do you think?
Marcus Merrell [00:03:49]:
Mostly surprised at how, how quickly we're seeing this. I really didn't, I didn't really feel like we were going to be able to evaluate the impact this quickly. And I'm not really sure. I haven't clicked on the, the primary sources that Jennifer talks about, but I think overall every paragraph I read that discusses another aspect of it is like, oh yeah, that tracks. Yeah, that I could see why that would be the case. My sort of three takeaways, see if I can remember all three by the time I get through this, are: one, it says that developers' biggest problem is with tech debt and documentation and that these tools aren't helping them. That's, that's two takeaways. The third one is the big thing that they say is my problem is not writing the code.
Marcus Merrell [00:04:30]:
My problem is getting the time to write the code. AI is not helping us cancel all the goddamn meetings we have every single day. I have six, seven hours of meetings a day sometimes and it's not helping me get rid of any of those problems. So I'm not surprised that when, when I do get time to code, I sit down and I got to learn a whole new tool set and that's going to slow me down. And the tools, as they output, they're making mistakes. I have to have the knowledge and bandwidth to be able to fix the mistakes of essentially a junior programmer who doesn't know what they're doing. Like, I'm not surprised by any of the findings. I'm surprised by how soon we're seeing it.
Evelyn Coleman [00:05:07]:
Well, I agree with Marcus that I'm surprised we were able to draw these conclusions so quickly and measure some of these things. I'm not surprised yet. I think sometimes you have to go backwards to go forwards. And given the economic climate, all the layoffs and the immaturity of AI right now, a backslide isn't surprising. I think if we don't get traction, like if our wheels never grip the pavement and start to drive us forward in the productivity, that I will be surprised.
Marcus Merrell [00:05:44]:
It just feels like the energy has been pointed at the wrong aspect of the problem. I mean, if I have a productivity tool and my biggest problem is documentation, what I don't want is a tool that sort of has documentation as kind of a side effect, as like a here you go through these hoops, commit this thing, do these, copy, paste. What I wanted to do is write documentation as I am writing the code without a single moment of intervention from me.
Titus Fortner [00:06:10]:
Documentation isn't a problem. Documentation is a solution to a problem. Right? Like no one's like this website sucks because someone didn't write documentation.
Marcus Merrell [00:06:18]:
I'm just saying what the article reported was that developers say that creating documentation is the biggest problem. And I know that ChatGPT is very good at that. So what is the disconnect between its ability to create documentation and developers saying it's not doing that? I think it's probably in the interstitial moments and actions that you have to take to create this documentation and tie it to the code you've just written.
Jason Baum [00:06:39]:
Isn't that something that AI should do well?
Titus Fortner [00:06:41]:
So I think the general gist of some of these and some of these specific quotes that came out of like the DORA report and everything, it's we're focused on the wrong problems with AI. And this is like the talk I gave at Test Bash in Brighton, right? Like what are we prioritizing when we say, ah, this, this AI is going to help me write the code, write the tests, write these things. The bottleneck isn't writing things, it's maintaining them. If AI is not helping you maintain things and write things in an extensible way and help you understand that. And so it's, I think we're seeing the same thing for development as we're, as I'm saying, for, for test automation, right? Like the, I don't think that AI solutions for writing your tests for you is going to be long term successful. At least where we're at right now. When you have AI write it, it's now harder for you to maintain it because you didn't write it. You don't understand what's going on.
Titus Fortner [00:07:39]:
And is it possible in a few years we're going to have AI agents that are able to scan the whole code base and figure out what is the idiomatic way to add this feature so that it matches everything else and document it and make it. And probably. But, man, every time someone's like, oh, yeah, AI is going to replace a developer soon, I'm like, I use AI a lot, but I'm constantly asking questions like, no, I don't think that's right, or that's not working, or I'm pretty sure we need to do this instead of that. And it. Having that conversation is extremely helpful. But it's still a conversation. It's not a, hey, do this, and it's going to be done for me.
Jason Baum [00:08:18]:
You still might not get the right results.
Titus Fortner [00:08:20]:
Well, I think a lot of it is also, like, I have had to maintain a lot of things. I know what good is. And so if what I'm getting back isn't good, I can have it help me massage it to where it's good. But you have to have that background of this is good and this isn't. And I think AI is trying right now to skip past that.
Jason Baum [00:08:40]:
I'm picturing Marcus when we interviewed Jason Huggins on this podcast and we were talking about AI and he said to him, you're going to need to have a lot more knowledge. We're going to need workers who are much more advanced because you're going to need to know when it's spouting bullshit. Right? Like, you need to know this is right, this is wrong, this is. And so it takes a higher caliber type person running the AI. And. And instead of, well, you know, it's supposed to shorten the amount of time for. For more menial jobs. That's actually not true.
Jason Baum [00:09:16]:
Right. It's like you need people who are much more advanced in their careers to be able to interact with AI to know when this is right, this is wrong. Right.
Evelyn Coleman [00:09:26]:
I think it depends, right, though, because we're talking about augmenting developers that have these really big skills in terms of productivity. But, and we've talked about this before on the podcast, there are productivity things that maybe aren't being measured. Right. The productivity of someone who's trying to write code in a language it's not their native language. Like, most things are in English, right. Not everybody who's coding is a native English speaker. AI is particularly good at that sort of thing. So there's probably a lot of things that are more productive, but we're not necessarily.
Evelyn Coleman [00:09:59]:
They're not the big things that we Measure, because they're things for maybe folks who've been. Those are the expectations. Right. The expectation is that you got the job and you're supposed to be able to do it. Nobody cares if it. The email took you half an hour versus it took somebody who's a native English speaker five minutes. But now you can compete.
Jason Baum [00:10:20]:
Totally. But, like, I'm not native in writing Python and I'm going to ask it to generate Python code for me. How the hell do I know it's correct?
Evelyn Coleman [00:10:31]:
I agree, but it's going to be less correct if you don't speak English in the first place.
Jason Baum [00:10:35]:
Yeah, but it's going to actually take me more time because now I have to research what it wrote and did it get it right. And I think to me, that's like, what's actually happening.
Titus Fortner [00:10:43]:
Right.
Jason Baum [00:10:43]:
With developers. Not that they don't necessarily know the code, but they have to test it a whole hell of a lot more.
Titus Fortner [00:10:49]:
It depends. Like, one of the nice things about code is there's a right answer to a certain extent. Right. Like, is it deterministic? As long as you're defining the right thing, it's deterministic. So it's not like, did it explain this properly? And so, like, if you have a couple tests, you can say, yes, this generated the right thing instead of the wrong thing, but that.
Marcus Merrell [00:11:07]:
That makes it easier and harder because. Yeah, because you get complete, correct code which compiles. It's figuring out whether or not it's doing all the little things right that requires the brain that Evelyn's talking about.
Evelyn Coleman [00:11:19]:
Yeah. I don't disagree with anything y'all are saying in terms of productivity. I'm just saying there's probably things that AI has helped with in terms of productivity and gotten right that aren't being measured by DORA metrics that are saving some people's butts.
Marcus Merrell [00:11:34]:
Yeah, absolutely. I mean, Titus and I just this week have both used ChatGPT in each other's presence to be more productive at trying to do some experiments that we're trying to run. It's working beautifully. That's not ever going to go into.
Jason Baum [00:11:45]:
A DORA report. You know when it's really helpful? Outlining. That's a great use of AI. Converting JavaScript to Java, summaries. Well, it's interesting.
Titus Fortner [00:11:56]:
I've used several plugins for my IDE, trying out different things, and none of them let me, like, highlight code, right-click and say, ask a question about this code. Like, it's all explain this code or document this code, or write unit tests for this code. And I'm like, that's not the bottleneck for what's holding us back from delivering more frequently.
Marcus Merrell [00:12:17]:
So I think the magic from these tools, the productivity boost, will come not from its ability to think and analyze an output, but to connect all these little tiny pieces, like peripheral muscles, to the exercise of manipulating code into our IDE. Like right now, it's just hard. You still have to do all this copy and paste and analyze. There's all this stuff you have to do, plus worrying about the content that it's actually producing. It's not really heading in that direction like it should be.
Evelyn Coleman [00:12:44]:
You shouldn't be using it to replace the things that you are already brilliant at. But if you have something that you're really weak in and you can use it to replace that. Yeah, you might not be able to double check it and you might not be able to correct it if it's hallucinating, but you probably wouldn't have gotten it right either if you'd done it. And now at least you're doing something. Is that a problem? Like maybe, maybe it's introducing too many, too many errors and too many bugs. But is it more than you would have introduced by doing something you do poorly?
Jason Baum [00:13:15]:
I don't, I mean, that's what the DORA report basically said. That's what's happening. Right.
Marcus Merrell [00:13:19]:
They're concerned that it's stunting the growth of junior developers and limiting the, the further education of senior developers.
Jason Baum [00:13:26]:
Yep. It's leading to security issues. Like it's faster code writing means, you know, more bugs.
Evelyn Coleman [00:13:33]:
I guess I'm asking is, are the people that are using it, are they using it as an augmentation, are they using it to try to replace themselves? And I think there's a nuance there. I would love to, to be able to separate those two cases out and say, was one group more productive than the other group in terms of like how they use it?
Titus Fortner [00:13:54]:
Well, yeah. Like if executives are like, we're going to fire half your team and you're gonna have to make up with it using AI, you know that you're, you're not going to come out ahead on.
Marcus Merrell [00:14:02]:
That's probably not going to work.
Jason Baum [00:14:03]:
Yeah.
Titus Fortner [00:14:03]:
Well, and especially I think we're also like, these are all brand new tools we're still figuring out how to get the most value out of. So like there's process changes that have to go into this. Right. Like, you're not going to be like, here's an AI tool. And now magically it's going to decrease or increase throughput and, you know, improve velocity. You know, we're going to have to figure out the right ways to do things. And like a lot of companies still don't even have approval to use AI, right? Like there is a company policy saying you can't use AI on any of our products right now. And so I think there's process, there's social change that needs to happen.
Titus Fortner [00:14:37]:
There's, there's process that needs to be improved. I'm curious how some of these agents are going to improve things. We're going to start, we've got the basics now, but hey, let's add more things into the context window when we're, when we're asking questions or trying to do stuff like can we make this idiomatic to the rest of the repo versus like I think there's a lot of potential for where this can go, but I think the challenge is are we going to lose our ability to understand things before we get to the point where the AI can do it all?
Marcus Merrell [00:15:08]:
I guess I'd ask one question pursuant to that in order to try to formulate a hypothesis. Are we worse at writing and word processing because of things like Microsoft Word than we were, or is it truly automating the stuff that's important to automate and leaving us to do the stuff that we want to do, which is write, and not worry about white-out and editing and typewriter ribbons and stuff like that? Like, is that what we're talking about?
Jason Baum [00:15:32]:
Can I give you a different example?
Marcus Merrell [00:15:34]:
Sure, yeah.
Jason Baum [00:15:35]:
Because that's a good example. I was going to say Grammarly.
Titus Fortner [00:15:38]:
Yeah, that's what I want to. When.
Evelyn Coleman [00:15:41]:
Yeah, I would disagree, because my measure for whether or not we've improved using sort of helper tools is around whether or not we're still able to communicate, not necessarily the finer points of grammar, which people have opinions about. And obviously in an academic setting, people want you to speak a certain way and do things. But I wouldn't say that tools like Grammarly have cut down on our ability to communicate points to one another. Especially now that we have, now that we have video conferencing at the touch of a button. The same way that maybe I'm not as good at navigating, but I can still get from point A to point B, maybe even more efficiently than I could 20 years ago. So it depends on how you're measuring. I can't read a map as well as I used to be able to, but I can get to the store much better.
Marcus Merrell [00:16:38]:
What I kind of feel like we're missing here is along, along those lines, like, to me, what, what I'm thinking about when I'm thinking about typewriters and ribbons and corrective tape versus being able to share a Google Doc with you. And the two of us work on this thing real time. It's like they didn't do anything about the content that I'm producing. They're not messing with that one bit. They're helping with all of the, the crappy tedious little tiny things that you have to do in order to collaborate with another author and making that stuff work great. And that is not the tactic that AI has taken so far. They're trying to replace my brain and the things that I output and the things that I'm good at when they should be replacing typewriter ribbons in the equivalent.
Titus Fortner [00:17:17]:
I mean, I guess I kind of look at, at an AI as like, hey, I want, I want a junior dev who just graduated from CS school and has got a lot of book knowledge, but doesn't necessarily know how it applies in the real world. And I'm asking this person, like, did I use this hashmap correctly? Right? Like, and that's the stuff that I'm getting from them. But you have to understand that's their role. You're not going to give that person, you know, your web app to maintain. But it's interesting because I'm kind of looking at that as an opportunity for collaboration or opportunity to give feedback rather than creating. And, you know, this, it's, it's interesting. These articles are talking about, like, how this is, you know, shifting things to the left and we're focused on, on the creation side and all of those things. And I'm just like, I, I think I've said for years now that I'm expecting AI to help more with monitoring than I am with testing.
Jason Baum [00:18:09]:
You know, that's so interesting, Titus, what you said about collaboration. I think that's the piece that's scary to me is like, are we becoming more isolated and we're missing the collaboration between human brains? Because instead of like in, in joke writing, there's the, you punch it up, right? You get a bunch of, of joke writers and you take a joke and it's like, okay, and then you punch it up, right? Same thing with code. Code is. Code is very collaborative, right? Are we asking the AI instead of each other for collaboration and are we getting a worse product?
Titus Fortner [00:18:44]:
But we're already getting a worse product from the pandemic, right? Like the reports that show, like, remote work and what the social changes that have happened. Like, I'm trying to remember that the takeaways I got there was the junior devs are getting less help. The help they're getting tend to be from other, like, female coders are more likely to help junior devs than senior devs. Senior devs actually want to have, like, just leave me alone and let me get my stuff done. Right. And so, like, I think we're already at the point where how's a junior dev supposed to learn if you've got either, you know, a remote or a hybrid work environment? Like, that's better than just trying to figure it out yourself with stack overflow.
Evelyn Coleman [00:19:25]:
Let's for a second imagine a world where this article was the other way around. What if the DORA report had come back and said, everybody is so much more productive because of AI. Is this a situation like, was it the dishwasher or the washing machine? Where the introduction of the washing machine just meant that women would take on even more household tasks? Like, if the developers were more productive using AI tools, would we just be asking them to develop more?
Marcus Merrell [00:20:02]:
Absolutely.
Jason Baum [00:20:03]:
Yeah.
Titus Fortner [00:20:05]:
Yes.
Marcus Merrell [00:20:05]:
And speaking of comedy writing.
Titus Fortner [00:20:08]:
Yes.
Marcus Merrell [00:20:09]:
And these AI companies who have invested hundreds and hundreds of millions of dollars would suddenly start to ask for their due and they would stop subsidizing the tokens and the cost of these things would go up 10x. Both of those things would happen.
Evelyn Coleman [00:20:23]:
So is it a bad thing then? Is this like, we're reading this report as like, wow, AI is really like, you know, ruined things or is on the track to ruin things, but maybe it's just given us some equilibrium.
Jason Baum [00:20:36]:
I think it was expectations. We, we put the wrong expectations on.
Evelyn Coleman [00:20:41]:
It, but do we want to.
Marcus Merrell [00:20:43]:
As a result, people are fixing the wrong. People are concentrating their work and investment on the wrong aspect of things they're trying to fix. I think this is going to improve. I have a feeling 2025 is going to be better than 2024. It's going to improve because people are going to start to figure out it's okay. So the thing that we originally thought it was going to be good at, it's not very good at that. Let's do something else with it. And that will probably, well, maybe the third or fourth iteration will actually work.
Jason Baum [00:21:06]:
It's kind of like crypto, right? How the crypto movement.
Titus Fortner [00:21:11]:
How it's highly.
Marcus Merrell [00:21:12]:
Well, it's highly profitable and there's no more scams and everything. All the fraud's been.
Jason Baum [00:21:17]:
Well, we're, we're going to throw money at it and who cares what, which one's real, which one's snake oil. And then, you know, the real ones will rise to the top eventually and, you know, the fake ones will disappear to a certain extent.
Titus Fortner [00:21:32]:
How much of the decrease in productivity is because managers and executives have already started cutting? Like, we're not going to backfill because we expect AI to make the difference. Right. That's going to have an impact on throughput. If you are getting more value individually from AI, the, the company, the product as a whole isn't. And so yeah, that's the concern is, you know, hey, we want, you know, junior devs to be 10x devs now because they have AI. And that's the expectation.
Jason Baum [00:22:02]:
Thinking as a senior leader, gosh, I don't know. The introduction of AI, doesn't that create a whole lot of risk in a field, in your field, where you're supposed to de-risk, right, as much as possible? Isn't this like a big risk factor?
Marcus Merrell [00:22:20]:
Yeah, which is why I've been talking about it for two years as a bad thing.
Titus Fortner [00:22:24]:
I mean, if anything, like the rise of AI should make people much more, much more testing focused. Right. Like you really, you need something that validates that what you did is what you intended to do and what you did didn't break things in a way that you didn't intend. So some of this is just changing mindset. Right. I know I've talked for years about how moving from a manual testing to automated testing, you can't just do the same manual tests with a computer. You have to completely rethink about what is a computer good at and how can I scale with the computer in a way that wouldn't make sense for a human, but does for, you know, automation, you know, splitting them up and running, running in parallel and, and being able to scale things up instead of thinking about everything from the viewpoint of the user. And I think it's going to be the same thing with AI as we need to think of.
Titus Fortner [00:23:19]:
Like the industry is going to be thinking about how do we change the process, how do we change our methods, how do we collaborate together with AI as part of the team instead of. And yeah, so if we're just going to fire everyone and expect one person to do everything, we're not there yet.
Marcus Merrell [00:23:38]:
Yeah, we're not.
Evelyn Coleman [00:23:39]:
So if you are going to keep everybody, right, you're going to keep your headcount the same. And it's 2025, 2026. AI has suddenly helped tremendously with productivity. We've passed this divot and your workers have this additional time. What would you have them focus on?
Marcus Merrell [00:23:59]:
To help with the risk? Innovation, features to market, getting user value. I don't focus on things like profit and cutting jobs because I'm not Jack goddamn Welch.
Evelyn Coleman [00:24:11]:
Well, I'm saying, wouldn't you want them focusing on risk reduction instead of like, you would keep the innovation linear, but then you'd spend that extra time doing some of this double checking and refinement that you know the AI is not super good at or would you really, you just go straight for innovation?
Marcus Merrell [00:24:27]:
Well, see, I mean, I'm just, I'm not really the right person to ask this, but I'm going to answer anyway. I, I, I don't, I don't think the innovation should be linear. Like we're always asking Jason for more time and budget to do innovation. I just, I think there's so much in this, it's just in our little space that we work in, there's so much innovation to be done to give users more value from the product that we make. I don't want to keep that linear. I want to, I want to keep giving people amazing stuff out of essentially what we've already got, which is kind of my whole, my whole mission here. And to me, if you're not increasing your risk mitigation with that, then you're just not even playing the game, right?
Titus Fortner [00:25:02]:
If we think about just doing what we're doing now, it's, with AI it's going to be faster. We need to be thinking about what more we can do once we have AI. And I, I don't know, I've been throwing around like, are we even going to have UI at all in 10 years? Right? Is AI going to figure out the backend API calls and on the fly generate a custom front end for you, populated with the back end information, and filter out all the stuff I know, I don't care about, right? Like, we could see some really dramatic changes based on having AI in a way that we couldn't if everything had to be done by a person. We have to completely change around how we're thinking about it and how we're leveraging it. So I, I'm expecting that people are going to do the same thing, right? Like, let's offshore all our QA. It's like, oh, that didn't work. Let's bring them all back in the U.S. Like, you know, like, let's fire everyone for AI. I'm like, oh, our competitors that didn't do that are, are doing better right now. Let's hire people back and, and try and get more released.
Marcus Merrell [00:26:09]:
It bothers me so much because I know that the executives who make these bad decisions aren't going to be around for the accountability part. That's what bothers me the most. They're just, they're. They're long gone and they don't have to pay the price, but I do, you know.
Jason Baum [00:26:21]:
So that was Emily Freeman's talk at All Things Open that you were referencing, Titus. It's a great talk if you haven't seen it. That was really at the very beginning when, I mean, AI was relatively new when she gave this talk, you know, she was basically setting it up, comparing it to warships, which is a really, really great talk. I highly recommend you catch that. Wow, this, this is a really good talk. We should really revisit the topic. We should talk about AI more. Thanks, everybody. Titus.
Jason Baum [00:26:55]:
Thanks for visiting us again. Hopefully you come on a few more times, you know, in 2025. Let's make that the resolution. Thanks, Marcus. Thanks, Evelyn. And thank you all for listening to another episode of Test Case Scenario. We will see you next time. Thank you for joining us on Test Case Scenario.
Jason Baum [00:27:23]:
Share your thoughts in the comments. We'll make sure to respond to each and every single one. Don't forget to subscribe and hit that notification bell to keep in touch. If you missed our last episode, it's popping up on your screen right now. Boom. Click it. Until next time on Test Case Scenario.