As controversial as we all know it is, artificial intelligence is here to stay.

So as digital marketers, how do we use it thoughtfully and mindfully?

Joining me to dive deeper on this subject is the brilliant tech educator and author, Avery Swartz. We're taking a closer look at generative AI tools like ChatGPT and their role in marketing and client services.

We explore AI's probability-driven predictions, compare it to an “autocomplete on steroids,” and discuss its limitations, especially its lack of self-awareness. We don’t shy away from the ethical dilemmas surrounding AI, from integrity and environmental impacts to copyright issues.

On top of so much more, we touch on the future of AI in marketing and society, and Avery’s thoughtful approach to technology.

So listen in as we shed light on both the potential and the pitfalls of AI in business and marketing with the always-insightful Avery Swartz.

In this episode of the podcast, we talk about:

  • Avery's transition from web design to tech education
  • Very real AI hallucinations
  • How to use AI tools like ChatGPT without compromising integrity
  • The importance of human creativity and critical thinking in the AI era
  • The future of AI in marketing and its potential impact on businesses
  • The balance between productivity and societal wellbeing

This Episode Was Made Possible By:

Riverside All-in-One Podcast & Video Platform
Visit Riverside and use the code DREA to get 15% off any Riverside individual plan. We use it to record all our podcast interviews!

Social Media Day Summit
Social media is not dead. It is simply evolving. And that's what we're exploring at the Social Media Day Summit.

Join me and my fellow experts on June 30th as we dive into innovative strategies and timeless tactics designed to empower social media marketers, freelancers, agency owners, and anyone else ready to take their social media strategy to the next level. Grab your ticket today for $10!

About the Guest:

Avery Swartz is the founder and CEO of Camp Tech, and author of the best-selling book See You on the Internet: Building Your Small Business with Digital Marketing. Avery is the resident tech contributor on CTV Your Morning and is regularly heard on CBC Radio.  Avery was ranked number 5 on Search Engine Journal's Top 50 Women in Marketing list and hosted the AI Meets World podcast from Microsoft and the Globe and Mail Content Studio.

Websites: https://averyswartz.com/, https://www.camptech.ca/
Instagram: https://www.instagram.com/averyswartz/, https://www.instagram.com/camptechco/
LinkedIn: https://www.linkedin.com/in/averyswartz/, https://www.linkedin.com/company/camp-tech/ 

Resources mentioned:

Check out Avery's upcoming Camp Tech workshops

Watch the Episode Below:

Transcript

Andréa Jones (00:01):
Artificial intelligence, it is here to stay. And in this conversation we're talking about how to use it thoughtfully and mindfully as a human being, interacting with the robots of the internet. And today's guest is Avery Swartz to bring this conversation home. I'm excited about it. Let's dive in. You are listening to the Mindful Marketing Podcast. I'm Andrea Jones.

(00:30):
I've recorded over 300 podcast episodes. Yeah, it's a lot of podcast episodes and I've tried a lot of different virtual recording studios, but my favorite has been Riverside. Riverside makes their virtual recording studio look so professional. My guests love it. Plus, I also low key love recording YouTube videos in here as well because it's so easy to use. My team also loves Riverside because it spits out separate audio and video tracks, making editing easy, breezy, lemon squeezy. And if you want a little magic, they've got this tool called Magic Clips, which uses AI to take your video and turn it into perfect social media-sized videos. I'm talking vertical videos for TikTok and Instagram, Facebook reels, all the places you can post these videos with the captions included, and you don't have to hunt and search for that perfect clip. So if you want to try this out for yourself, click the link that goes with this video. Or if you're listening to the audio on the podcast, it's in the show notes. Okay, click that link. Use the 15% off coupon code. It's Drea, DREA, and try Riverside for yourself. Thank you, Riverside.

(01:41):
Avery, welcome to the show.

Avery Swartz (01:43):
Oh, thank you so much for having me. I'm really excited to be here.

Andréa Jones (01:46):
Yeah, I'm excited to talk all things AI, but I want to go back to the beginning for a sec and talk about you because you have a really strong history in just bringing things digital. You've got a bestselling book to talk about all things digital and small business. You are the tech correspondent for CTV and all things business, but how did this start? What's the origin story?

Avery Swartz (02:10):
Okay. Well, I am a person of a particular age. I'm an eighties kid, so that means that when I was growing up, I could not have said to my parents, mom and dad, when I grow up, I want to be a web developer. They would've said, that is not a thing. So I took a really winding route to get to where I am now. My first stop was actually in fine arts school. I have a fine arts degree, and then I went to design school after that. So I made my way into the web originally through the route of web design. So I was a web designer, I had a web design studio. And then I realized that as much as I love designing, my favorite part was actually just working with clients: talking with them, figuring out what they need. I had a really deep empathy and still do have really deep empathy for how there's this narrative out there that tech is supposed to be easy, that tech is supposed to make things easier for people.

(03:04):
And I think in general overall it does, but that doesn't recognize that there is a bit of a struggle, that there's an adoption cycle, there is a bumpy road. And my favorite part of working with my clients was always just kind of helping them get set up. And then I realized, wait a minute, this is something I could do. So that's when I pivoted to being a tech educator. I own a tech education company where we run practical workshops for small businesses, charities and nice people. Like you said, I wrote a book, I go on TV, but it's all in service of this mission of trying to help non-technical people bring technology into their lives in a way that is thoughtful, in a way that can help them reach their goals. And my role is just to be an educator and an interpreter.

Andréa Jones (03:51):
Yeah. I love everything you do. I feel like I was one of the first people to buy See You on the Internet, the book, because I'm just a fan. And I was like, I want this. Plus, it's gorgeous and it makes sense with your design background, how pretty everything looks too. I love it.

Avery Swartz (04:09):
I did work with the publisher and my publisher, they sent a first draft of the cover of the book, and it was really one of those tech books where it's got all these weird lines and lots of things looking like they're flying around and it's very technical. And I was like, this is awful. I'm like, no offense, but this is not my vibe at all. I was like, we want it to feel friendly. We want it to feel approachable. I was like, can we put an emoji on the cover? Can we have more color? Let's use a fun font. Strip out all of this. And it's always blue, right? It's like dark navy blue and electric blue, and it's like tech internet, zip, zip zap. I'm like, no, no. Get rid of all of that.

Andréa Jones (04:53):
Yeah. Why the blue? It's always blue. You're right.

Avery Swartz (04:56):
It's always blue. Yeah.

Andréa Jones (04:58):
Oh my gosh, I love that. Okay, so switching gears over to AI. I think that AI, specifically generative AI, has shaken us up a lot. It has. It really has. And I feel like listeners of this podcast, we've been talking about the ChatGPTs of the world for a while, so I want to just dive right into the conversation with AI. Why does it feel like sometimes AI is making stuff up? It's like, did you even check the internet on this? You have access to this information. Why did that happen?

Avery Swartz (05:33):
Okay, so to understand why generative AI systems like ChatGPT sometimes make things up, the technical word for it is a hallucination, which I think is hilarious. It sounds like ChatGPT just went on an acid trip, that it's over in the corner, it's on drugs, it's hallucinating, but it goes back to really understanding what a transformer-style generative AI model is. And I know I just used a whole bunch of jargon, but tools like ChatGPT are different from other types of AI. They're different from even other types of machine learning AI, which has been around us for a number of years now. But the way that a tool like ChatGPT works, you've probably heard that it taps into something called a large language model. And a large language model is kind of like its knowledge base. It's all of the information that ChatGPT and other generative AI systems have access to.

(06:30):
But the thing about a large language model is that we've heard that they have basically read every page of the internet. It's read all of the publicly available books. It has scraped information, it's pulled in all kinds of data to get into that large language model. But the large language model is not a search engine. It doesn't behave the way something like Google does. So the way that Google behaves is that it goes through the entire internet with its robots and indexes the internet and looks at every single word, and also the connection between webpages, what's linked to other sites. The way that a large language model works is that it's looking through all of that original information, but it's not really looking so much at the words themselves. It's looking at the connection between the words. So how often certain words tend to go together.

(07:25):
There are patterns in our language. There are words that, based on probability, are more likely to appear together. And so when you interact with a chatbot like ChatGPT, it taps into its large language model, which in this case is called GPT. And what it's doing is it doesn't actually know any of the words. It doesn't know what any of the words mean. It only knows the relationship between the words. So it's using its understanding of probability and it's using its prediction skills to try and give you the next word in the sentence, the next sentence in the paragraph, et cetera, that it thinks is probabilistically most likely to be what you're looking for. So some people say that ChatGPT is autocomplete on steroids, and it is, it's literally just kind of trying to autocomplete itself. And so in that way, it's not aware of the information that it's giving you.
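To make the "autocomplete on steroids" idea concrete, here is a tiny sketch in Python. The three-sentence corpus and the bigram counting are illustrative assumptions, nothing like how GPT models are actually trained, but the sampling step mirrors what Avery describes: the program never knows what the words mean, only how often one word tends to follow another.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the model's training data.
corpus = (
    "social media marketing is about people "
    "social media marketing is about connection "
    "social media posts need a strong caption"
).split()

# Count how often each word follows each other word (a bigram table).
# Real LLMs learn far richer statistics over tokens, but the principle
# is the same: probabilities of what comes next, not meanings.
following = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Pick the next word in proportion to how often it followed `word`."""
    candidates = following[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# "Autocomplete on steroids": extend a prompt one probable word at a time.
text = ["social"]
for _ in range(6):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```

Run it a few times and the continuation changes, because it samples from probabilities rather than looking anything up, which is also why this kind of system can produce text that sounds plausible without being grounded in facts.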

(08:30):
So if you ask it for something, like if you're asking it to help you write a caption for a social media post, that is a situation where there isn't one right answer. There isn't one factual true answer. There's a lot of different ways that you could go with that caption. Some of them might not be the best quality, you might not like them. They might not be on brand, but it's not looking for a particular fact. It's when you ask ChatGPT or another tool like it to go deep on a particular, a vertical, a particular depth of its knowledge where it tries to get into facts, that is where it's more likely to hallucinate because it doesn't have, although here's the big caveat, it will happen less and less as the large language models get more and more sophisticated, and it will happen less and less as generative AI models have something called retrieval augmented generation, which means they can augment what they generate so they can change what they generate by retrieving information from somewhere else.

(09:34):
So for example, if you've used a tool like Perplexity, which I really, really like, Perplexity is using a large language model, but then it's also searching the live web and pulling information in that way and kind of smashing those two things together to give you more accurate answers. And it also gives you citations. It'll say, we found this on this website, and then you have a link and then you can click and you can go and find more. So once you understand that that's actually how the large language models work, they're not Google, they're not a search engine, then you'll be like, oh, okay. So actually, the thing that makes it really good is also the thing that makes it sometimes tell lies.
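The retrieval-augmented generation flow Avery mentions can be sketched in the same hedged spirit. Nothing below is Perplexity's actual code or API; search_web and ask_llm are hypothetical stand-ins. The point is the shape of the pattern: retrieve sources first, put them into the prompt, and ask the model to answer only from those sources and cite them.

```python
# A rough sketch of retrieval-augmented generation (RAG): search first,
# then let the model answer using only what was retrieved, with citations.
# `search_web` and `ask_llm` are hypothetical stand-ins, not a real API.

def search_web(query: str) -> list[dict]:
    """Pretend web search: returns snippets along with their source URLs."""
    return [
        {"url": "https://example.com/ai-copyright", "text": "The US Copyright Office has said ..."},
        {"url": "https://example.com/llm-basics", "text": "Large language models predict the next word ..."},
    ]

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to whichever generative AI model you use."""
    return "An answer grounded in the sources above, with citations like [1] and [2]."

def answer_with_citations(question: str) -> str:
    sources = search_web(question)
    # Augment the prompt with retrieved text so the model is not relying
    # on its training data (and its next-word probabilities) alone.
    context = "\n".join(
        f"[{i + 1}] {s['url']}\n{s['text']}" for i, s in enumerate(sources)
    )
    prompt = (
        "Answer the question using ONLY the numbered sources below, "
        "and cite them by number.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

print(answer_with_citations("Can AI-generated images be copyrighted?"))
```

Tools built this way can show you where an answer came from, which is exactly the citation behaviour Avery describes.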

Andréa Jones (10:16):
Yeah, totally.

Avery Swartz (10:17):
Which is so weird, right? Yeah, so weird.

Andréa Jones (10:21):
I actually, I use ChatGPT quite a bit, especially for coming up with ideas and I have to literally tell it words not to use because it just latches on to certain words like unlock or unleash or there's a few words where I'm like, can we please stop using this word? I'm never going to use it again now because you've ruined it for me. And I feel like that makes sense, just trying to predict what I want it to say. Very interesting. Absolutely.

Avery Swartz (10:51):
Absolutely.

Andréa Jones (10:52):
So when we think about this as a marketer, people fall into two camps. There's the people who are never in a million years, and then there's people who are like, I use it every day and I feel like I'm somewhere in the middle. But then I feel like I have to always explain, I use it every day, but I don't do this because it feels like I'm cheating. I'm not cheating, but I am using this as a shortcut. Why does it feel like we're cheating when we're using these tools?

Avery Swartz (11:18):
Yeah, I mean, there's a real ethical dilemma here and quite a paradox of, I think the more you use generative AI, and the more time you put into it, the more time you put into training a system to get to know you, to be able to write, you give it that feedback like, Hey, in my case, it's the word demystify, the word wield. It always wants to wield things. And I'm like, no, we don't use that verb in my life. So it's like teaching it which words to use, what tone, et cetera, because generative AI is a learning system, and machine learning systems get better. They literally learn and they learn through trial and error, through feedback and through lots of practice. So the more you train it, the better it gets. So anybody that's invested a bit of that time, it is undeniable that this is a tool that can be helpful.

(12:11):
This is a tool that can supercharge your productivity, but then you have that moment of, if you're someone that has any sense of integrity, you've heard that, okay, maybe it was trained on copyrighted data. There's a bit of a gray area there. You're thinking, okay, I'm not really creating this. Can I say that it's something that I actually made? Is that ethical? There are also some environmental concerns with generative AI. The actual computing power required to run these systems is immense. And when you're using computing power, that means you might be using more fossil fuels, you might be using more electricity, et cetera, et cetera. And so there's an environmental impact there as well. So it's kind of like, what is a thoughtful marketer to do?

(12:58):
You want to potentially do better work, you want to do better work in less time, or with that extra time, you could just do more work, I guess. So is it cheating to use ChatGPT? This is one of those conversations that I think is best had with a glass of wine because it almost goes a little bit more into a theoretical conversation. But I was talking with someone recently and they said that it felt like cheating. And I said, okay, let's unpack that a little bit. Is it cheating to use a calculator? And we talked about that a little bit, and I said, I have a daughter who's in middle school, and right now she is at the point where she's allowed to use a calculator in math class, but for a number of grades, she wasn't allowed to use a calculator because it was very important that she knew how to manually do long division because she was understanding the mathematical concepts of addition, subtraction, multiplication, and division.

(13:52):
So in that case, it would be very bad for her to use a calculator, but I use a calculator all the time because it frees up my time and my mental energy to do something that's more intensive, right? Yeah. So the thing about ChatGPT though is that if you completely outsource everything to it, if you've trained it really well so that it can write just like you, and you say, write me five blog posts, and then you just kind of, I don't know, you go and do something else and you come back and you're like, yeah, yeah, that looks good. Good, good. You copy and paste, you publish, you put it right up on the web, you let it do everything for you. I think that's when it feels like cheating, because there's two things. Are you potentially cheating by getting to the final product without you being the one that's going there? So it's like, is that ethical to then turn around and sell it to your clients and to bill people for those hours? But who I think you're really cheating is probably yourself. Because I'm a writer. I actually do like to write. I wrote a book and I wrote the book myself. I didn't write it with artificial intelligence.

(15:06):
When you're going through the process of writing, you're going through the process of thinking, and it's hard. Writing is hard, and it's hard because you're doing more than just stringing together words in a sentence. You're working out your thoughts, you're figuring out, okay, does this make sense here? Is this a logical argument? Do I need more examples over here? If I'm going to counteract my argument, is this flowing? Do I need to have something here that will wrap this up? And there's different types of writing. There's writing an email to my sister-in-law, who cares? ChatGPT can do that. I don't care. But writing a detailed blog post about thoughts in the industry or something, that's the point where I might use AI tools to help me. They might help me to edit a little bit. They might help me to provide a counterbalance to my argument, but I personally will not let them just do everything because it's cheating me out of the thought process.

Andréa Jones (16:10):
Yeah, and I think part of it too is this idea of creating original work as well. I love that I can use ChatGPT to help me with things. How can I wrap this up? Or what's a good title for this? Or what's a hook for this? But when I think about contributing interesting stories, adding flavor, adding perspective, those are things that I don't think ChatGPT can do nor should do. And I've even been playing around with Gemini recently as well. Same thing. I don't want those tools to do those things for me personally because it does feel like cheating. But I did want to mention, one of the things I did recently was, here are my top performing podcast episodes. What are 10 more things I can talk about on the podcast? And it gave me 10 ideas that were terrible and one that was great. And I was like, okay, good. So I feel like there's the calculator shortcut there where I can go relieve my brain of thinking about this for one minute so that I can focus on the things that kind of add more context there, which I love.

Avery Swartz (17:20):
Yeah, there's a sense of getting to know what the tools are really good at and getting to know what they're not good at at all. And then picking and choosing, treat them like a coworker. You'd be like, Hey, I know what I'm good at, so I'm going to focus on that part. I know what you're good at, so I'm going to let you run with this part. I used ChatGPT recently to do some data analysis. I took an Excel spreadsheet that had something like 3000 rows of feedback from a program that we had run at my company, and I took out all of the names and email addresses of people, because I don't want to put confidential information into ChatGPT. And I put it into ChatGPT and I said, read through all of this feedback and do a sentiment analysis. And it did a really good job kind of pulling out key ideas. Now that was me playing to its strengths, and then I played to my strengths by then taking that and then crafting it into a proposal for a new program that I want to run. So I wouldn't let ChatGPT write the proposal, but I would let ChatGPT do the sentiment analysis for me because that would've, oh my gosh, that would've taken me probably six or seven hours to read through all of that, to copy and paste, to highlight commonalities and common threads. It did it in three minutes or something.
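For anyone who wants to try a de-identify-then-analyze workflow like the one Avery describes, a rough sketch might look like the following. The file name, the column name, and the send_to_ai function are assumptions for illustration only; swap in whatever spreadsheet export and AI tool you actually use, and treat the scrubbing as a best-effort precaution, not a guarantee.

```python
import csv
import re

# Best-effort scrub of personally identifiable details before sending
# feedback to an AI tool, mirroring the workflow described above.
# The file name, column name, and pattern are illustrative assumptions.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Remove email addresses (extend with any other sensitive patterns)."""
    return EMAIL.sub("[email removed]", text)

def load_feedback(path: str) -> list[str]:
    """Read the spreadsheet export, keeping only the redacted feedback column."""
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            rows.append(redact(row.get("feedback", "")))
    return rows

def send_to_ai(prompt: str) -> str:
    """Hypothetical stand-in for whichever AI tool you actually use."""
    raise NotImplementedError("Plug in your own AI client here.")

feedback = load_feedback("program_feedback.csv")
prompt = (
    "Do a sentiment analysis of this program feedback and pull out key themes:\n\n"
    + "\n".join(feedback)
)
# send_to_ai(prompt)  # only after confirming nothing identifying remains
```

Dropping the name and email columns before anything leaves your machine is the key design choice here; only the scrubbed text ever reaches the AI tool.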

Andréa Jones (18:42):
Brilliant. I love that. And now I'm like, oh, I need to do this. Yeah,

Avery Swartz (18:47):
It's pretty good at it. Yeah, it's surprisingly good.

Andréa Jones (18:50):
I love that. I love that. Okay, we're going to take a quick break, and when we come back, we have more AI questions coming at you.

(18:56):
[Podcast ad break] We are long past the good old days of MySpace, and despite what those Facebook ads are touting as the magical AI solution antithesis to social media, social media is not dead. It is simply evolving. And that's what we're exploring at the Social Media Day Summit. Join me and my fellow experts on June 30th as we dive into what's working for social media marketers here now. And today we're exploring innovative strategies and timeless tactics that you can use for both yourself and your clients. Grab your ticket today for $10, and I'll see you there. [Podcast Ad break]

(19:35):
And we're back. So you talked a little bit about this already, and I want to dive further on the topic of generative AI and copyright, and who owns all of this, because the latest update to ChatGPT, it can create photos and videos and design things for you, but do we own these things that it's creating?

Avery Swartz (19:59):
Short answer is probably not, or no, depending on where you live in the world. So I'm in Canada, and in Canada, our copyright law does not recognize generative AI. Our copyright law predates generative AI, so Innovation, Science and Economic Development Canada has just wrapped up a consultation process with experts. They're taking all of the results from that, and they're going to be advising and creating policy advice to update copyright law in Canada. So TLDR, it's coming in Canada. We have an AI act that is being worked on at the moment, some copyright updates. But in America, the US Copyright Office has said no. What you create with generative AI cannot be copywritten. Copywritten means like writing copy for a website; I don't think you use that word if you're talking about copyrighting something. So let me rephrase that sentence. The US Patent Office has said, or sorry, you know what I mean?

(21:01):
The Federal Copyright Office in the United States has said that anything that's created with generative AI cannot be applied for a copyright. It cannot be applied for a trademark. So this is a big thing. If you use generative AI tools to make a logo, if you use generative AI tools to make an image that you want to put on social media that will represent a program that you're running, knock yourself out. But if you want to turn around and then say that you own copyright to that image, or if you use ChatGPT to write a book or a white paper or something, it gets really tricky, this thought of, well then who actually owns that intellectual property? Who owns the copyright of that material? So in general, the answer is no, that you basically, either you probably don't, or you definitely don't have the copyright to anything you create.

Andréa Jones (21:52):
I think it's so important to emphasize that because this is where that cheating feeling comes from, because you have to be very careful, especially business owners, marketers, if you're doing this for your clients even. There is some gray area, and then there's, like, oh, no, definitely don't do this at all. And logos, any written copy that's going on websites and things like that, I just would stay away from it if I were you, because it can get you in some serious hot water. But I want to talk specifically about client stuff, and especially, you mentioned earlier you remove names and email addresses to not feed it to the machine because it's not safe, right? Can you talk a little bit more about that as well?

Avery Swartz (22:37):
Yeah. So this is where I started to bump up against the limits of my knowledge: I am not a computer scientist. I am not an AI researcher. I'm not a cybersecurity specialist, but I am an information sponge and I read everything I possibly can, and I talk to a lot of people and I listen as much as I can. And what I hear out there in the kerfuffle, in the chatter, is that we need to be a little bit concerned, we definitely need to be a little safety conscious, about putting any confidential information into ChatGPT. Now, ChatGPT and all the other systems too, Microsoft Copilot, Google Gemini, Perplexity, Claude, Meta AI, all of them, same thing. These systems already know a heck of a lot about me. Anything that you could find out about me by Googling me, I figure is fair game.

(23:32):
So my name, what I do, my business, et cetera, fine. I don't pull that out. But anytime that I'm entering any information that has to do with my clients or people that I work with, that's where if I can, I will strip out names, I'll strip out email addresses, anything that could be personally identifiable. And it's just because the way that these systems work is a black box. We know in general how they work, but we don't know exactly how they come to the exact answer that they come to. And so when you're feeding information in, and I know Microsoft and OpenAI and Google, they have all said that they don't take your material that you type into the system, that they are not taking that and using it as their training data for their large language model. But there's also real world examples, and if you look it up, you'll find them real world examples where people will interact with these systems and all of a sudden it starts spitting out information that it got from somewhere. So I am a bit concerned about putting any sensitive information in there. Instead, I'll just pull it out or I'll rephrase. But that being said, there's still a heck of a lot you can do with these systems without, I'm not going to put all of my client payment information or anything like that into it. I would never do that. Yeah.

Andréa Jones (25:01):
Yeah. I mean, the big guys, they have a lot of information on us already too. So I feel like the less I can give it, the better. One of my good friends works in data privacy now on the government level, and the stuff she tells me, I'm like, OMG, they know a lot about us, and some of the stuff we're volunteering to share. So being very careful and mindful of how we give them that information is, I think, key here, especially with other people's stuff. If you're a marketer and you're working on your client sentiment analysis, for instance, on reviews that they've had, remove those names, remove those email addresses for sure, because you don't want that information being leaked out there. I love that. So on the same tone of using generative AI specifically as a marketer, as a professional, even, I'm thinking about copywriters, I'm thinking about web developers, I'm thinking about designers. Do we need to disclose this for our clients?

Avery Swartz (26:11):
So that's another big question that you can come at from a bunch of different angles. So let's start with the regulatory response. And that is that in Canada and in America right now, you are not legally required to disclose the use of generative AI at the federal level in either of those countries. If you are in the European Union, you do have to disclose the use because the EU has passed their AI Act, which does have different parameters for the disclosure of the use of AI. Like I said, there's one coming in Canada, there's one coming in America, all of this stuff. This is typical: technology moves faster than the law does, so the law has to catch up. But that's just the basics, right? Like regulatory, federal law. Do you have to? Now you've got to kind of get into this question of ethically, do you need to tell your clients?

(27:06):
Now, I would assume that your clients know that you use certain tools, that when you are creating an image for them or some graphic design for their social media posts or for their website, that you are not personally painting an image that will become their Facebook post, that you are using tools. You're using Photoshop, you might be using Canva, you might be using Adobe Premiere to cut some videos. They know that you're using tools. And I think it's generally considered okay to use tools. I think that the real dilemma lies in the value of the work. And this is the thing that becomes really, really thorny, which is that you've got to think about why, and I know we're getting a bit theoretical here, but why are your clients hiring you? What is the value? Is the value the final product? Is the value the getting to the final product?

(28:08):
Is the value that they love working with you because you're so pleasant to work with? Is the value that you can do some analysis and some strategy? But if the value is purely in the final product, then we've got a problem, because we've got robots that can get to the final product, and we've got robots that don't sleep and don't eat. Right now, a lot of them are free or very, very low cost. So it's going to be a race to the bottom really, really fast in terms of just competition, of how can a human compete with a robot if all you care about is the final product? So that right there, I think, is why some people are hesitant to tell their clients that they're using generative AI, because it might open up a conversation where the client, and hopefully your clients are much more respectful than this, but the client could look you in the eye and say, well, what am I paying you for then?

(29:05):
And you better have a really good answer to that question, and hopefully you are bringing so much more value than just the, well, I sliced and diced and made 10 Instagram posts for you that there's more to it than that. But if you can't say that there's more to it than that, then I mean, they might fire you. And frankly, I would probably fire you too. Everybody wants to save money, and if the robot can do just as good of a job, then why not let the robot do it? So it's a really meaty question that really gets into, as marketers, what is our value? And I strongly believe that it is not solely in the final product, but we have to get better at communicating that value to our clients so they don't assume that the value is in the final product.

Andréa Jones (29:57):
You have my wheels spinning over here because this is something I've been trying to conceptualize for myself, and especially I mentor marketers, social media managers. And this question does come up often, especially for clients who are more budget conscious and scrappier and they're trying to do the thing. And frankly, there is a lot that these tools can do. I like to compare it to the Photoshop versus Canva conversation where, go back 10 years ago, if you said as a marketer that you primarily use Canva as your design tool, people were like, what am I paying you for? Oh my gosh, you don't use Photoshop, you don't use the Adobe suite. Whereas now it is very acceptable and encouraged even to use Canva as the tool for a very easy way to whip up something really quickly that doesn't need all the technical aspects of an Adobe product.

(30:53):
So I do think that there's a conversation there, and I also hear a little inkling of an ethical conversation around what are we telling our clients? How are we presenting this information? Because if we are using it, we definitely do need to disclose it. I think from an ethical perspective, having, we're working on this with our agency right now, having a one-sheeter or even a page on our website that's like, here's how we use AI tools in the business. And so I think that there's a lot of things at play there. And like you said, it's moving so much faster than the laws can keep up with it. So we're kind of like the wild, wild west out here, making our own rules as we go before the laws catch up.

Avery Swartz (31:35):
Yeah, absolutely. And it's one of those things where this could be a completely separate podcast episode, and I'm sure you've had this episode and we'll continue to have it, but this kind of gets you into, again, that big conversation about what is your value? What is the value that you bring to your clients? And also how is your fee structure reflecting that value? And is your fee structure aligned with that value? Because, and now here comes some Avery opinions for you. I don't do a lot of client services these days, but I did for many, many years, and I was never a fan of the hourly rate. I'm not a fan of the billable hour. I am not in the business of selling time. I am in the business of selling expertise. I'm in the business of selling ease. I'm in the business of selling strategy.

(32:20):
I am a bit in the business of selling the final product. It's going to be pretty good too. But if you are doing a billable hour with your clients, and if using generative AI literally gets more done in less time, you've got a problem. Either your billable hour, your rate, has to go up, or you are going to have to do something to add value in that hour, because you may actually be able to create, and this is what a lot of studies are showing, that on average, people that use generative AI tools are saving between 25 and 45% of their time. Wow. And so what are you going to do with that time? Are you going to, and this gets into a big conversation of, as a society, what are we going to do with that time? Unfortunately, because we're all just capitalists, we're probably just going to do more work with that time. But wouldn't it be beautiful if we did less work, if we took that time and all of a sudden we all just embraced a four-day work week, or we spent more time with our family and our communities and our pets, doing the things that we love? But we'll probably just do more work.

Andréa Jones (33:29):
The podcast episode that's coming out next month, we're talking about systems and automations, and this is part of that conversation, which is we are in the capitalistic society of improving things, but at the same time, there's something so sweet about having the hobbies, like having the time to do the things. We just do this because we like to frolic and enjoy it. And so I think that hopefully we can also do that with the power of technology and how quickly it's moving, all this free time that we're saving. We can read some romance novels and enjoy a glass of wine in the sunshine. How about that?

Avery Swartz (34:10):
That's right. I love it. I love it. I'll be out in my garden tending to my flowers, which is what I love to do when I'm offline.

Andréa Jones (34:18):
Yes. Beautiful. And speaking of flowers, y'all should follow Avery on social media. She'll post all that stuff there. You can find all the links to Avery's work at onlinedrea.com/309. And hey, guess what? Avery's going to be speaking at our Social Media Day Summit coming up on June 30th. Avery, can you give us a little teaser about what your keynote's going to be about?

Avery Swartz (34:42):
We're going to be talking about this stuff. We're going to get into these meaty issues. I am not an AI hater. I'm also not an AI hype girl. I live in that in-between world, where I think most people are living right now of trying to figure out how to wrestle with this stuff, how to use it in a really thoughtful, ethical way to get the benefits, but also to minimize the risk. So that's the kind of stuff I'm going to be talking about as a marketer, how do you use AI in a really thoughtful way? It's going to be a lot of fun. It's going to be a lively discussion. We're going to have a good time.

Andréa Jones (35:18):
Yay. I'm looking forward to it. Make sure y'all grab your tickets for that. And Avery, if people want to connect with you, tell us a little bit more about Camp Tech and how else I can connect with you.

Avery Swartz (35:27):
Yeah, sure. So Camp Tech, we run a number of different training sessions for companies, for small business groups, but we also run a whole bunch just for the general public. So you can check those out. You can attend them. They're online, so you can attend from anywhere. If you go to camptech.ca/workshops, I think it's workshops, plural. Yeah, I'm fairly certain it is camptech.ca/workshops. Also, like you said, you can find me. I'm an oversharer on the internet. I'm always online. I think there's somebody named Avery Swartz that lives in Tennessee, but too bad for that person because I own that name online. If you look up Avery Swartz, it will be me. Nobody else will show up with that name. I've got it on lock.

Andréa Jones (36:13):
I love that. I think there's something to having a unique-ish name; my name, I can't get anywhere. Thank you so much, Avery, for being on this show today.

Avery Swartz (36:23):
It's been my pleasure.

Andréa Jones (36:24):
And all of you listening, thank you for hanging out with us today. All the links are in the show notes at onlinedrea.com/309, and make sure you give us a five-star rating on Apple Podcasts and Spotify. It helps keep us in the top 100 marketing podcasts, and that's all because of your support. I'll be back at you soon with another episode. Until then, I'll see you on the internet as well. Bye.