
Effective Use of Generative Artificial Intelligence (GenAI)

This page contains our recommendations for using GenAI effectively for learning and writing.

Contents of this page:

  1. Find the Mix of Human and AI Contribution that is Good for You
  2. Recommended Use of GenAI to Learn and Write
  3. Prompt Engineering
  4. How Good is GenAI at Academic Writing?
  5. References

Find the Mix of Human and AI Contribution that is Good for You

GenAI can be a powerful tool, but it has to be used with caution if it is to be used effectively (helping you to achieve your goals) and ethically (helping to achieve "good" goals, or at least not causing harm).

Establish the right mix of human and AI contribution

We suggest three areas which should guide your decision-making on this matter: 

  1. Guidelines of the university (or other institutions and recipients of your work, e.g. workplace, publisher etc). The University of Westminster has guidelines about what use of GenAI is acceptable and not acceptable. For a start: 1) having GenAI write your work is not acceptable; and 2) all use of GenAI should be disclosed. (See tab Using GenAI: Ethics and Rules.)
  2. Ethical issues with using AI, including respect for intellectual property and environmental impact. (See tab Using GenAI: Ethics and Rules.) 
  3. Your goals and objectives. For example, why are you studying at university? What do you want to achieve with the coursework you are preparing? Consider that:
    • GenAI can provide an answer fast. However, before employing generative AI, even for non-assessed tasks or independent learning, you should consider the following questions: Can GenAI enhance my understanding, given my main learning objective(s)? Am I confident I can accurately check any GenAI response? Are my questions or prompts to the GenAI well-informed and aligned with my learning objectives? Does the GenAI aid my understanding and my ability to ask further meaningful questions? Am I in control, able to act independently and ultimately make my own decisions? Am I taking responsibility for my work and my decisions, rather than passively relying on the GenAI to lead my academic endeavours?
    • GenAI can help you at different stages of the writing process, but your effort at these stages (including generating ideas, researching, drafting and editing) has educational value and is important for developing and exercising advanced cognitive skills. GenAI will be with us in the future, so it is important to be able to use it, but you should also develop and exercise your cognitive abilities so that you remain independent of AI. We want AI to augment, not replace, your learning and development (Dergaa et al, 2024; Nyholm, 2024).

Use these considerations to work out the best use of GenAI for you.

Recommended Use of GenAI to Learn and Write

Our overall recommendation is that you:

1) Use GenAI for specific tasks, making sure you check the results and think critically about them.

2) Do not use GenAI to create a final output that you will present in your assessment, passing it off as your own. Such use would be:

    a) Academic misconduct

    b) Most probably, of inadequate quality.

Use AI as a tool but do not trust or rely on it. This is likely to be the spirit of our collaboration and coexistence with AI in the near future. 

Below are some ideas for delimited tasks that GenAI can help you with, provided you supervise and critically evaluate the results. These align with the university guidance for use of generative AI.

Generating ideas, drafting an outline

You can ask AI to help you generate ideas around a topic, identify pros and cons of an issue, act as your sounding board/critical friend, or help you overcome "writer's block". Be careful, however, and consider the following:

  • You know best what the question/task you have been given really means and requires. For example, if you ask the GenAI to help you generate an outline for an essay question, it will miss context such as the Learning Outcomes, assessment criteria, instructions given in class, reading list etc.
  • GenAI does not actually "understand".
  • GenAI may not be authorised to express itself on sensitive matters, or to use certain expressions. Its freedom of expression is limited. 

Therefore, be inspired but not constrained by the GenAI suggestions. Challenge yourself to outsmart the AI and think outside the box: consider how you understand the topic, what links you can make, and what you want to say in your work. Produce your own mind map and outline. Check this guide on generating ideas for essay writing.

Explaining something

GenAI can be pretty good at giving you a general, clear introduction to a topic, for your own "consumption".

Be wary however of the following. GenAI: 

  • Can make mistakes.
  • Doesn't have a sense of truth (Haggart, 2023; Paglieri, 2024). See also Jacobsen (2024) on AI being too "heady", not understanding cause and effect.
  • Does not typically recall how it "learnt" something - was it an authoritative source, a blog, or is it simply misremembering/hallucinating? Some GenAI tools, like Copilot and Perplexity, attempt to provide sources for their answers. Note that these engines sometimes confuse sources, and the sources they cite in turn need to be carefully checked for accuracy.
  • Does not critically evaluate the information it was given.
  • Will not go into much depth on a topic, although you can iterate to get more detail.
  • Needless to say, it is not an authority. 

Researching

We advise you to be very cautious when using GenAI to get recommendations on good sources/literature on a certain topic, and to consider:

  • How up to date and comprehensive are the training data the AI is using?
  • Does the AI have access to literature situated beyond the paywall (scholarly journal articles, books etc)?
  • Does the AI have access to the internet? For example, GrammarlyGO doesn't have access to the internet, whereas Copilot and Perplexity do.
  • On what basis is the AI recommending specific sources? The AI most probably cannot recall those sources, and cannot understand or critically evaluate them.
  • The use of some sources can perpetuate bias and work against a plurality and diversity of voices.
  • AI can hallucinate and make up sources that do not exist.

Therefore, while you can check whether AI has any valid ideas for your reading, we recommend learning how to undertake research in a more traditional manner, to complement and check the answers provided by a GenAI. You can check out our page on research skills for more guidance.

Summarising texts 

You can ask AI to summarise longer texts. Be careful though as:

  • To reiterate, AI does not really understand.
  • Much meaning, ideas, craft and references will be lost in a summary.

Improving your language

There is less controversy as to the ethics and effectiveness of using AI to help you polish your language. AI can serve as a great equaliser for those who do not shine at grammar or were not schooled in an English-speaking country. GenAI overall is very proficient at languages and can write in different genres. This is probably how we would all be had we read dozens of terabytes of text data! 

There are no limitations on your use of writing editors like Word and Grammarly, which give suggestions to correct your text. Pay attention, because these programmes are not perfect: they can both miss flawed phrasing (false negatives) and flag acceptable phrasing as incorrect (false positives). Grammarly can be a useful tool to support your writing, but it should not be relied upon exclusively for editing scholarly writing. It is most effective when used in conjunction with critical evaluation by the writer.

You should exercise even more caution when using GenAI to rephrase or rewrite your work:

  • The University of Westminster guidance on GenAI does not allow students to submit a draft piece of writing and ask an AI system to simply re-write it in good English or to re-structure it.
  • The AI may not understand your work, and may therefore rephrase it in a way that alters the meaning.
  • Extensive rephrasing will sound inauthentic and cast doubt on the authenticity of your work.

For further guidance on academic writing please use our Academic Skills guides.

Prompt Engineering

When you use GenAI, prompt engineering can help you to make the most of the interaction.

Prompt engineering is the process of refining input to generative AI systems to produce specific outputs, whether text or images. The following steps will help you create prompts that generate better responses: less likely to be harmful or biased, and more likely to be valuable.

1) Identify the goal

State what you want the generative AI system to help you with and the type of response required. 

Example: You want ChatGPT to act as a personal tutor and explain to you how generative AI models work, which can be a complex topic to learn for non-specialists.

You can start by inputting the following prompt, which is your goal: 'Explain how generative AI models are trained in less than 100 words. Explain this in a way that all users can understand.'

2) Add detail

Provide as much detail as you can, including the intended audience, how much output you would like, how accessible you want it to be, as well as any other potentially relevant context.

Example: 'Explain how generative AI models are trained and developed in less than 100 words. Explain this in an accessible way that all users can understand. The audience for this text is undergraduate and postgraduate university students.'

3) Compose the prompt

Construct a succinct prompt that effectively conveys key information relevant to the task, using the keywords and phrases identified in the previous steps. Include the most relevant industry-specific and topic-related terms in the prompt.

Example: 'Explain how generative AI models are trained and developed, mentioning deep learning and machine learning. Do this in less than 100 words. Explain this in an accessible way that all users would be able to understand. The audience for this text is undergraduate and postgraduate university students.'

4) Test and assess

Evaluate the responses generated, modifying the prompt until the desired output is obtained. This process can be repeated until a successful prompt is created. 

It is also useful to remember that many of these tools are chatbots that remember previous prompts within a session, creating a conversational style of interaction.

For more guidance on prompt engineering, check the university resource on prompting. 

Prompt engineering videos

Prompt engineering videos on LinkedIn Learning 

The Art of Prompt Engineering Video (YouTube)

How Good is GenAI at Academic Writing?

Are you tempted to use GenAI for writing extensive passages of your coursework? 

This would not be allowed according to the guidelines at the University of Westminster.

Furthermore, the GenAI output may not be that good after all. Have you used GenAI already? How do you think GenAI does at "academic writing"? In this section, we try to assess how GenAI fares, at the time of writing, in relation to some key characteristics of academic writing, namely: 1) interpretation of the question/task, 2) use of evidence, 3) referencing, 4) critical thinking, 5) objectivity, 6) logic, 7) academic language. It appears that GenAI can fulfil some of the formal criteria for academic writing (e.g. writing in correct, formal language) but has serious shortcomings in many substantive ones.

1) Interpretation of the question/task

GenAI may struggle to understand what the question/task you have been given really means. For example, if you ask the GenAI to help you generate an outline for an essay question, it may miss context such as the Learning Outcomes, assessment criteria, instructions given in class, reading list etc.

In addition, GenAI does not naturally engage in "problem-building", that is, redefining the problem/question given - although it can attempt to do that if you direct it. Note that complex writing tasks generally require us to refine and interpret a given problem or even, especially in research, identify and define new problems, and formulating (research) questions (Flower and Hayes, 1980). 

2) Use of evidence

Access to evidence

AI chatbots "know" enormous amounts of data, but they are still limited to the data they have access to. Some GenAI models do not have access to the internet. Most cannot go beyond paywalls to "read" scholarly articles and books directly from the source, partly because of copyright issues.

Evaluation of evidence

GenAI does not assess evidence critically and does not undertake fact-checking. It normally operates by correlating words, rather than by understanding, assessing, analysing and reasoning about concepts. Therefore, it can make mistakes if it relies on incorrect, incomplete or "confusing" training data.

Accuracy and hallucinations

GenAI can produce "hallucinations", that is, information that seems realistic but is nonsensical or unfaithful to its source (Ji et al, 2023). The data provided by GenAI cannot be trusted - always verify data using reliable sources. 

3) Referencing

The ability of GenAI tools to provide references varies from model to model. Be careful, as some models, when prompted to provide references, may hallucinate, producing references and citations to publications that do not exist. This made the news in the case of a lawyer who, during a trial, quoted court decisions that ChatGPT had invented for him.

4) Critical thinking 

Software does not think - it simply executes a programme. However, it can be programmed to act as if it thinks critically. So how does GenAI do with analysis and evaluation, two main components of critical thinking? GenAI does attempt to display them, but arguably not at the level of critical thinking expected at university and for academic writing.

  • Analysis: GenAI can identify some relevant aspects of an issue, but it fails at advanced analytical skills including questioning, challenging, considering different perspectives, understanding contexts, doubting, and intellectual curiosity.  
  • Evaluation: GenAI may attempt to evaluate and draw conclusions, but it struggles to devise arguments and say something original that comes from research and interpretation of an issue. 

Here lies an important difference between human and GenAI writing. With writing, humans don't only communicate meaning they find; they also create meaning. This happens as, in the process of writing, we restructure old knowledge on a subject, create new concepts (Flower and Hayes, 1980), and clarify and evaluate aspects of our views (Dunleavy, 2003, p. 136-7).

5) Objectivity

Machines are supposed to be objective but GenAI is only as objective and unbiased as the training data it has been fed. (See tab Using GenAI: Ethics and Rules.) 

6) Logic

You would expect machines to be logical, but GenAI is not logic-based like GOFAI (See tab AI and GenAI).

Interestingly, GenAI outputs have been found to prioritise local coherence (the relationship of meanings or contents of a verbalization to the preceding utterance produced) as opposed to global coherence (how well each sentence of a sample relates to the overall theme of a conversational topic). If ChatGPT makes a mistake, it will keep following the original mistaken assumption into madness (Clark, 2023; Hough and Barrow, 2003). 

Also note that presently many GenAI programmes are not very reliable at maths.

7) Language

GenAI programmes tend to master language. Very rarely will they make a grammar mistake, and they can use different registers. Their writing is clear.

There are some flaws as to language too, however. GenAI writing is formulaic, often using repetitive language. Remember that good academic language has to be formal, but also stylish and colourful, engaging the reader (Sword, 2012, pp.7-8). 

References

Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021) 'On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?', in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). New York: Association for Computing Machinery, pp. 610–623. https://doi.org/10.1145/3442188.3445922

Clark, J. (2023) How not to write like ChatGPT. Technically Product, https://www.technicallyproduct.co.uk/writing/how-not-to-write-like-chatgpt/

Dergaa, I., Ben Saad, H., Glenn, J.M., Amamou, B., Ben Aissa, M., Guelmami, N., Fekih-Romdhane, F. and Chamari, K. (2024) 'From tools to threats: a reflection on the impact of artificial-intelligence chatbots on cognitive health', Frontiers in Psychology, 15, 1259845. https://doi.org/10.3389/fpsyg.2024.1259845

Dunleavy, P. (2003) Authoring a PhD: How to Plan, Draft, Write and Finish a Doctoral Thesis or Dissertation. Basingstoke: Palgrave Macmillan.

Flower, L. and Hayes, J.R. (1980) 'The Cognition of Discovery: Defining a Rhetorical Problem', College Composition and Communication, 31(1), pp. 21–32. https://doi.org/10.2307/356630

Haggart, B. (2023) 'Unlike with academics and reporters, you can't check when ChatGPT's telling the truth', The Conversation, 30 January. https://theconversation.com/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth-198463

Hough, M. and Barrow, I. (2003) 'Descriptive discourse abilities of traumatic brain-injured adults', Aphasiology, 17(2), pp. 183–191.

Jacobsen, R. (2024) Brains Are Not Required When It Comes to Thinking and Solving Problems—Simple Cells Can Do It. Scientific American, February, https://www.scientificamerican.com/article/brains-are-not-required-when-it-comes-to-thinking-and-solving-problems-simple-cells-can-do-it/ 

Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y., Dai, W., Madotto, A. and Fung, P. (2023) 'Survey of Hallucination in Natural Language Generation', ACM Computing Surveys, 55(12), pp. 1–38. https://doi.org/10.1145/3571730

Lawton, G. (no date) Definition: generative AI. TechTarget. Available at: https://www.techtarget.com/searchenterpriseai/definition/generative-AI

Musser, G. (2023) 'How AI Knows Things No One Told It', Scientific American. Available at: https://www.scientificamerican.com/article/how-ai-knows-things-no-one-told-it/ (Accessed: 18 December 2023).

Nyholm, S. (2024) ‘Artificial Intelligence and Human Enhancement: Can AI Technologies Make Us More (Artificially) Intelligent?’, Cambridge Quarterly of Healthcare Ethics, 33(1), pp. 76–88. doi:10.1017/S0963180123000464.

Paglieri, F. (2024) 'Expropriated Minds: On Some Practical Problems of Generative AI, Beyond Our Cognitive Illusions', Philosophy & Technology, 37, 55. https://doi.org/10.1007/s13347-024-00743-x

Sword, H. (2012) Stylish Academic Writing. Cambridge, MA: Harvard University Press. https://doi.org/10.4159/harvard.9780674065093

Tegmark, M. (2017) Life 3.0: Being Human in the Age of Artificial Intelligence. London: Allen Lane.