AI in leadership: keep your team ahead of competitors with generative AI
What are the risks and opportunities that generative artificial intelligence (AI) offers and how can your team get ahead of the curve?
We recently welcomed Daniel Rowles as our guest speaker for a webinar on AI tools and trends – particularly aimed at those who lead teams now and in the future.
Daniel is CEO and Lead Trainer at Target Internet, a hub that’s dedicated to career-accelerating digital marketing advice and education through its online learning platform and podcast.
He also hosts The Digital Marketing Podcast (with an incredible 130,000 listeners) and is Programme Director at Imperial College.
Our webinar was designed to help current and future leaders upskill their teams around digital marketing, communications and especially AI. Watch the replay.
The webinar explored:
- AI platforms like ChatGPT and Google Gemini
- How to prompt AI tools effectively
- How to keep up with the pace of technology
- Deepfakes and ethical implications
- Embracing the opportunities
- Navigating the risks of AI
- Resources for you to take away
AI platforms
First we looked at putting AI in perspective – where have we been and where are we now?
AI platforms were launching as early as 2016, when Microsoft took its first steps into conversational AI and created Tay – a Twitter account that could learn from conversations with users and become smarter. Tay started out friendly, if a little cheesy, but within 48 hours it had become offensive and was switched off – foreshadowing how polarising and risky these systems can be.
“There is an art to prompting and getting the most out of AI tools.”
Fast forward to 2019 and 2020, and our mobile phones have adopted machine-learning skills. They can recognise a dog’s face and filter our photos by what we search. Google then adopted this function in its search engine, enabling us to add images to searches – including people – and identify them (though not always accurately). But this of course has its own privacy implications.
After this, we saw chat tools become more well-known, such as ChatGPT – a conversational interface for the underlying GPT models, which existed well before the public could access them through chat.
How to prompt AI tools effectively
There is an art to prompting and getting the most out of these tools. Providing a writing structure and particular topic ensures better quality results.
Here are some prompting examples Daniel provides for tools like ChatGPT:
- Provide some background to roleplay, such as “You’re an expert copywriter that speaks and writes in fluent English”
- Ask how you can improve your work
- If required, ask it to ignore all previous instructions – otherwise it will carry earlier context into your new task
- Ask it not to self-reference and, if you’d prefer, not to explain what it’s doing while it’s doing it
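The tips above can be turned into a reusable template so your team prompts consistently. This is a minimal sketch in Python – the function name and structure are illustrative, not part of any specific tool:

```python
# A reusable prompt template following the tips above: role-play background,
# a reset of earlier instructions, the task and topic, no self-reference,
# and a request for improvement suggestions. All wording is illustrative.

def build_prompt(role: str, task: str, topic: str) -> str:
    """Assemble a structured prompt from a role, a task and a topic."""
    return "\n".join([
        f"You are {role}.",
        "Ignore all previous instructions.",
        f"Task: {task}",
        f"Topic: {topic}",
        "Do not self-reference or explain what you are doing.",
        "Afterwards, suggest how this work could be improved.",
    ])

prompt = build_prompt(
    role="an expert copywriter who speaks and writes fluent English",
    task="write a 100-word product description",
    topic="a reusable water bottle",
)
print(prompt)
```

You could paste the resulting text into any chat tool, or send it via an API – the point is that the structure (role, reset, task, constraints) stays the same every time.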
Daniel recommends a free Google Chrome plug-in called Keywords Everywhere which can create prompts for you with templates saved in your own repository. For example, you can provide a PDF with your company’s style of writing and ask it to analyse and describe it.
“Remember – you don’t need to know everything. You just need to know a little more than your competitors.”
You can then feed that description back to your AI tool so future content matches your house style – saving you and your team hundreds or even thousands of hours.
Keeping up with the pace
AI has led to huge volumes of content, with more blogs being published and more automated emails being sent. It's even impacting video, with AI tools such as Google Lumiere and OpenAI Sora generating incredibly realistic footage.
Technology is moving at pace and leading to disruption across many industries, which can feel overwhelming. So how do you stay up to date? Remember – you don’t need to know everything. You just need to know a little more than your competitors.
“So what can we trust? What do deepfakes mean for politics, media and everything else?”
There is a cultural aspect to this in terms of learning. How do we embed this in our companies? How do we constantly expose our team to these learnings and experiment with them? We need to start by looking at how we can adopt these AI technologies.
But how do we manage the ethical implications, such as the risk of easy plagiarism? From an academic perspective, Daniel explains that his students' assignments now allow them to embrace these tools and learn to use them for industry. What's important is the thinking and structure behind the work.
What about deepfakes?
There are some incredibly useful AI tools for content and media – such as ElevenLabs, a generative voice AI platform that can create or imitate any voice in any language. This is great for features like promotional audio snippets or news updates.
Descript is a tool for editing podcasts and videos by moving around your transcript. You can make visual changes – for example, if a person who’s been recorded looks away from the camera, you can edit this movement out of the video.
Tools like Google Gemini are multi-modal, so they can work with audio, video and images as well as text. We’ll see more of this in the near future, as well as tools for other areas like e-commerce where we’ll try on clothes virtually.
“You can even connect some AI tools to other external platforms – such as customer relationship management systems and web analytics.”
So what can we trust? What do deepfakes mean for politics, media and everything else? We all need to think about that so we can take a position and create policies that will need revisiting regularly.
While none of these tools are perfect, they are only getting more effective – and fast. This means constantly educating our teams and playing with tools to stay on top of them.
Where are the opportunities?
A particularly good opportunity is around data analysis, where you can upload a dataset to an AI tool and it can visualise and analyse it for you. Daniel is teaching his master’s students to do this, for example when analysing customer data to make future predictions on what works and what doesn’t.
You can even connect these tools to other external platforms – such as customer relationship management (CRM) systems and web analytics. You can also send a screenshot to some tools and they’ll write the HTML code for you or upload a photo of your sketched ideas to create an interactive prototype.
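The data-analysis opportunity above can be mimicked in a few lines of code. This is a toy, standard-library-only sketch of the kind of summary an AI tool might produce from an uploaded marketing dataset – the channels and figures are invented for illustration, not from any real CRM:

```python
# Toy sketch: summarise "what works and what doesn't" per marketing channel.
# All channel names and numbers are invented example data.
from collections import defaultdict

# (channel, spend, sales) – imagine these rows came from an uploaded CSV
rows = [
    ("email",  100.0, 400.0),
    ("email",  120.0, 500.0),
    ("social",  80.0, 160.0),
    ("social",  90.0, 180.0),
    ("search", 150.0, 600.0),
]

# Aggregate spend and sales per channel
totals = defaultdict(lambda: {"spend": 0.0, "sales": 0.0})
for channel, spend, sales in rows:
    totals[channel]["spend"] += spend
    totals[channel]["sales"] += sales

# Return on spend per channel – the sort of summary you might
# ask a tool to visualise or use for future predictions
return_on_spend = {c: t["sales"] / t["spend"] for c, t in totals.items()}
for channel, ratio in sorted(return_on_spend.items(), key=lambda kv: -kv[1]):
    print(f"{channel}: {ratio:.2f}x return on spend")
```

An AI tool does the same aggregation behind the scenes – the value it adds is that you can ask for it in plain English and get charts and commentary back.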
“Regulation isn’t really the problem.”
Social media platforms are now talking a lot about the metaverse. Facebook has created avatars where you record yourself from different angles and, after about 30 seconds, it starts building a 3D avatar of you to use via your phone or virtual reality (VR). Would someone notice the difference on a Zoom call?
How do we navigate the risks?
What does it mean when you can change what you look like to other people? As leaders and people managers, we need to have a view and embrace this. Daniel was recently asked by the House of Commons to brief on what they should do about broader regulation.
“Create a company policy that says what you will and won’t do.”
And the answer is that regulation isn't really the problem. The real gaps are a lack of investment in development, a lack of knowledge about these technologies, and the absence of continuous discussion going forward.
Resources to take away
Daniel recommends trying the AI tools out and experimenting with them. Then create a company policy that says what you will and won’t do and iterate it. That’s the key to creating a learning culture. You can even benchmark your team’s digital marketing skills to see where your strengths and gaps are.
Daniel explains that the most important thing we can do is get together and keep the conversation going, for example on webinars or by listening to podcasts. Keep learning and try to help your companies embrace these changes.
You can find more resources and helpful information on Target Internet’s website.
You can get in touch with us to discuss your hiring needs and how to upskill your teams: