
Q&A: How Higher Education marketers can use generative AI more effectively and strategically

There’s a lot of hype around generative AI in marketing. Every conference has at least one session about it. It’s all over LinkedIn. People are asking about it whenever we deliver a training session.


In this Q&A, our creative director Chris Silberston speaks to AI expert Jose Ignacio Barrera Malagarriga. They discuss how HE marketers can navigate the rise of generative AI, moving beyond the hype to integrate thoughtful strategies that improve and complement their existing processes.


Jose Ignacio is the AI and Insight Lead at the University of Manchester, where he helps marketing and recruitment teams explore how AI can support creativity, forecasting, and decision-making, while enhancing the university’s predictive analytics capabilities with insight-driven approaches.


Chris is Creative Director at A Thousand Monkeys. He helps universities think more strategically about the words they use, whether that’s through practical tone of voice guidelines, upskilling teams with new tools and techniques, or delivering clear, persuasive copy that engages audiences.

 

It’s tempting to start using AI for everything so you don’t get left behind. But let’s take a step back. What are most people actually using it for?

 

Most universities seem to be actively exploring AI, but very cautiously. There’s a lot of enthusiasm, but also awareness of the risks, especially around data privacy, academic integrity, governance, and the complexity of managing change. Many are partnering with Microsoft and using Copilot, as it offers a secure infrastructure for handling data responsibly.


From what I’ve seen through conferences, webinars, and conversations with peers, AI is currently being used mostly for productivity: summarising documents, brainstorming content, and analysing data.


Increasingly, it’s becoming particularly useful for coding and research. Developers can debug and optimise scripts much faster, and researchers can scale up literature reviews or generate outputs far more efficiently.


Some universities are also starting to experiment with student-facing tools like chatbots for admissions, personalised feedback, or admin support. And you can already see even more use cases emerging, from automated email marketing using AI agents, to call assistance services that weren’t even on the radar two years ago.


In my role, I’ve been helping teams at The University of Manchester explore these kinds of pilots safely, balancing the excitement with the realities of actual implementation. AI is unlocking possibilities that, honestly, felt impossible not long ago. Because things are moving so fast, it’s natural that people feel a bit overwhelmed. Nobody wants to risk being left behind.


My advice? Start experimenting with the tools directly (safely, of course) because that’s how you’ll start to discover where and how AI can really add value to your work.


What’s the best way for marketers to stay up to date with AI, given how fast the field is evolving?

First, don’t panic. Not every new feature is a revolution or relevant to your role. The field is maturing, and while some changes are significant, they’re not always disruptive. Most people probably won’t be out of a job any time soon.


A good mindset is to follow developments strategically. Focus on what actually matters for your work or sector, rather than trying to track everything. There’s often a temptation to chase the next big thing, but many of the most meaningful improvements (like better reasoning, multi-modal capabilities, or more agentic workflows) only really matter if they support your goals.


Personally, I recommend picking two or three core tools and learning them properly. That’s better than trying to dabble in everything.


That’s the same approach I take in my own work. I focus on Copilot, Gemini, and GPT for content generation, coding, reasoning, and productivity. I also experiment with tools like CrewAI and n8n, which allow you to build multi-agent workflows and start exploring automation layers that go beyond pure text generation. These types of orchestration tools are becoming increasingly important as the field moves towards building more complex AI systems.


Finally, to reduce hype fatigue and avoid the constant sense of panic, it helps to pick a few key sources of information and follow them consistently. That way, you avoid the pressure of endlessly browsing for every new update.


If something doesn’t show up in your trusted sources, that’s fine. Truly important breakthroughs tend to surface quickly anyway. In my case, I rely on a mix of peer discussions, YouTube channels, and a few carefully chosen newsletters like Data Points from DeepLearning.AI or IBM’s Think newsletter, sources that cut through the noise and focus on real trends.


This is also the approach I encourage when advising teams on AI adoption: stay focused, be intentional, and keep your learning targeted.

 

Do you think universities are too limited by focusing only on Copilot? Could they be missing out on better-performing models?

It depends entirely on the use case. Copilot is an AI application (it uses GPT in the background), but it’s built for general productivity. If your work is highly specialised, like coding or automation, you might benefit from models specifically trained for those tasks, like Codex, Code Llama, or models with enhanced reasoning capabilities.


The risk for universities is not so much in using Copilot, but in assuming that one single tool covers all AI potential. For basic productivity, it’s very effective. But if you’re looking to build more advanced AI workflows, for example, multi-step processes where different models handle planning, task decomposition, or decision-making, you’d find Copilot inside Office 365 restrictive. That’s where platforms like Azure AI Studio, Copilot Studio, or agent orchestration tools like CrewAI and n8n give you far more flexibility.


So, it's not just about which model you're using, but whether your infrastructure allows you to combine multiple models, customise workflows, and adapt to emerging capabilities.

 

What are some overlooked but high-impact AI use cases for marketers?

Marketers often think of AI in terms of content generation, like writing an email or a social post. But there are dozens of smaller productivity wins people miss.


For example, you can screenshot a web page and ask a model to extract the data into Excel, or use deep research features to run benchmarking by summarising course pages. I’ve also seen people use it to summarise academic papers and speed up literature reviews. And some institutions are even experimenting with AI personas to test how different audiences might respond to content or campaigns. These aren’t headline-grabbing use cases, but they save serious time.


What gets really interesting is when you start combining some of these smaller tasks into multi-step workflows. For example, building AI agents that monitor competitor content, generate synthesis reports, create draft content, and feed insights into marketing planning.


Outside of content and campaign work, I’ve been exploring reasoning models for market research, applying something called judgmental forecasting. This is especially helpful when you don’t have a lot of data to work with and need to bring in qualitative factors to build a forecast. I’ve put together a method that combines models like Gemini, LLaMA, GPT, and DeepSeek to forecast university applications. It’s still early days, so I can’t say yet how accurate it will be, but by the end of 2025, we should have a much better idea of how useful the approach is.

 

How do you evaluate which tasks are suitable for AI?

The first step would be to build a map of your processes. Once you understand which tasks contribute to your strategic goals (whether that’s brand development, lead generation, or user engagement), you can look for areas where AI can complement or accelerate those steps.


That’s different from jumping straight into a tool because it looks promising. If you don’t know what problem you’re solving, you’ll just focus on speed and efficiency, which risks missing the creative and strategic potential of AI.


For instance, you might use AI to visualise a campaign concept as a draft video. You probably wouldn’t want to publish it directly, but it can help communicate your vision to collaborators. That’s a huge creative benefit that goes beyond automation.

 

A problem we often notice with less confident writers is the Einstellung effect – where people keep tweaking something they’re comfortable with rather than trying something new. What are some other risks of over-relying on AI in the creative process?

I think one real danger is that people may lose critical thinking skills if they’re not careful. Recent research has found that the use of AI can weaken critical thinking, mainly because of something called cognitive offloading. Some studies have also found that frequent use of AI can reduce people’s confidence in working independently, meaning there are tasks they may no longer feel comfortable starting without AI support.


Another risk is that creative outputs can start to sound generic if people rely too heavily on AI’s default suggestions, rather than pushing for uniqueness. This can be especially problematic for junior professionals, as there’s a danger that skills like writing, research, or critical analysis don’t fully develop if AI handles those tasks too early in their careers.

Finally, there’s always the risk of amplifying existing biases in the data these models were trained on, which makes critical review even more important.


The key is mindset. AI should be treated as a partner, not a solution. Think of it like a new colleague: it won’t get everything right the first time, but with the right prompts and context, it can become a very capable collaborator.

 

How do you balance productivity with ethical and sustainability concerns?

In terms of environmental concerns, AI’s impact is real, and it's important that we acknowledge it. AI consumes significant energy and water, especially through data centres. But this debate needs a broader perspective, because in our daily lives millions of people also engage in activities that are similarly harmful to the environment, like fast fashion, car use, food waste, or international travel.


I’m from Chile, where, in the Atacama Desert, there are huge areas just piled with unwanted new clothes from fast fashion manufacturers. These piles are so big, you can see them from satellite images.


So, while AI’s footprint shouldn’t be overlooked, I believe the conversation needs to go beyond AI and trigger a much broader reflection on how we approach sustainability as a whole.


Quicker changes will likely come through government regulations, but those often take time and depend heavily on political agendas and what brings more votes. In the meantime, our best approach is to apply to AI the same environmental lens we use for other areas of our lives. Whether you're asking GPT for advice or driving your car, be intentional, responsible, and conscious of the broader trade-offs.

 

What’s your take on AI’s impact on our brands? How should marketing teams think about it strategically?

AI should be part of your brand strategy, not just a tool on the side. If your content is AI-generated, or supported by AI, that affects tone, consistency, and even user trust. So, define upfront: will you use AI to generate video, image, or text content? Will you label AI-generated content? Will it sit alongside human-created content or replace parts of it?


To build any brand, you need consistency in voice, visuals, and values. AI has to align with these, and marketing teams should set those boundaries early to avoid sending conflicting signals.

 

How can marketers future-proof their careers in this AI-driven environment?

The real differentiator won’t be whether you use AI, but how well you use it. In the future, most people won’t be replaced by AI, but by others who use AI better.


That means learning how to prompt effectively, how to combine tools, and how to manage workflows where different models serve different purposes. It's about being able to direct the tools as part of a broader strategy, not just using them reactively.


And perhaps most importantly, marketers need to think in terms of processes and workflows. Map your strategy, identify where AI adds value, and integrate it in a way that complements creativity rather than replacing it.

 

What’s your final advice for higher ed marketers starting their AI journey?

Be deliberate. Start by mapping your marketing activities and identifying which ones are repetitive, strategic, or creative. Then look for AI tools that can support those categories.


Avoid starting with the tool just because it’s new or popular; instead, start with the problem you’re trying to solve. That said, this doesn’t mean you shouldn’t experiment. Sometimes the best way to understand a tool’s potential is simply to try it. Start small, test what it can do, and once you’re more familiar with its capabilities, you can start applying it to more complex projects.


Once AI becomes part of your workflow, treat it as a long-term capability. Build a feedback loop: test new features, evaluate results, and refine how you use it over time. The models will evolve, and so will your understanding of where they fit.


But throughout all of this, always stay in control. AI can strengthen your brand, sharpen your message, and improve your outcomes, but only if you remain the strategist guiding the process. The real advantage won’t come from simply using AI, but learning how to use it thoughtfully, intentionally, and in a way that complements your expertise.

 
 
 
