
Ethical and evaluative use

Using GenAI in ethical ways

GenAI tools can assist us in daily life, at work, while studying and in many other contexts. As with any tool, ethical, evaluative and appropriate use is key.

Creatorship

GenAI can be used to create almost anything, so it can be very tempting to use it to produce your university assignments. However, while you are at university you are expected to be developing your own knowledge and skill set. If you present AI-generated work as your own, you have neither developed nor demonstrated those skills.

Accessibility

Most tools have a free basic account that can be used by anyone, but these usually come with restrictions such as limits on the number of uses within a time frame. Many tools now charge for access to the platform or to premium features. This can create barriers for those who are unable to afford these costs. Due to data security, intellectual property and privacy concerns, staff must not require students to use GenAI for learning and assessment outside of those tools provided by QUT.

Privacy

Like most online and digital tools, GenAI tools have the ability to collect and store data about their users. When signing up, users may unknowingly allow companies to collect this data if the terms and conditions are not read and understood properly. This data can then be used to further train and refine the models or, in some instances, sold to the highest bidder.

Bias

Bias has been a concern in technology for a long time, and generative AI is no different. Bias can exist for many reasons, including: 

  • People inserting their own biases when they create the models
  • The datasets used to train the models, and
  • Generative AI creating biases from how it interprets the data it has been trained on and the questions it is asked.


Academic integrity

At university, your work needs to be approached with honesty and integrity. This means giving credit when it's due and acknowledging contributions, including if and how generative AI has been used in your assignments. 

Individual units at QUT should provide clear guidance on how GenAI may be used in assessment. Some units will allow GenAI tools to be used in a particular context, for example to understand an assessment task, to help with study, or to generate content, while others will not permit any use.

QUT has a policy regarding academic integrity: see the Manual of Policies and Procedures: Academic Integrity, as well as Academic integrity and plagiarism - Student - QUT Portal.

Many library database agreements contain clauses around the use of AI, so please read the conditions of use. 

Accuracy

GenAI tools can generate grammatically correct sentences that sound authentic and true by predicting the next most likely word based on the training data they've been given. If you ask ChatGPT to write a news story covering a company's financial earnings, it is likely to include a non-existent quote from the CFO or CEO, because it knows these stories usually include one. These factual errors are called AI hallucinations. Essentially, the model's job is not to be right 100% of the time; it is to sound convincing most of the time.
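The "predict the next word" idea can be illustrated with a deliberately tiny sketch. This is not a real language model — it is a toy bigram table built from a few invented sentences — but it shows the core point: the model picks whatever word commonly *follows* the previous one in its training data, optimising for plausibility rather than truth.

```python
import random

# Toy "training data" (invented for this illustration).
training_text = (
    "the company reported strong earnings the company reported record "
    "earnings the ceo said the results were strong"
)

words = training_text.split()

# Build a bigram table: for each word, which words followed it in training.
follows = {}
for current, nxt in zip(words, words[1:]):
    follows.setdefault(current, []).append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a word seen after the previous one."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Chooses a statistically plausible continuation, with no notion of truth.
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Every sentence this toy produces is fluent-sounding by construction, yet nothing guarantees it is factual — the same gap, at vastly greater scale, is why LLM output needs verification.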

QUT guidance around the use of GenAI

Critically evaluating AI generated content

GenAI tools can be unreliable, producing content that sounds incredibly credible but often isn't, and they often don't cite their sources.

When reviewing content generated by AI, evaluate it critically, just as you would information from a Google search or an academic database. One way to do this is the CRAAP test; more information on the CRAAP test and how to use it can be found here.

QUT's Study Smart course also has an 'Evaluate' module, which may be useful for evaluating and analysing information from ChatGPT or other generative AI models.

Approaching assessment: appropriateness, attribution and acknowledgement

Attribution and acknowledgement are critical when using GenAI in your assessments. If you use generative AI in any element of your work, the person who marks it needs to know what's yours and what comes from somewhere else.

Check out cite|write for detailed instructions on how to reference generative AI.

In each style, you’ll find instructions under Internet sources > Generative AI (e.g. ChatGPT).

AI and academic publishing

As well as being used by students, AI tools such as ChatGPT have also been used in academic publishing, with some authors even listing, or trying to list, the tool as a co-author. This has raised the question of whether ChatGPT (or any LLM) can be considered an academic author. As authorship in academia is understood differently from authorship of a newspaper article or short story, this has ignited much debate and differing opinions.

In late January 2023, both the Springer Nature group and the Science family of journals published stances on the use of AI in their journals. Nature advised that no LLM or AI tool will be accepted as a credited author on a paper, as these tools cannot take accountability for the work, but it will accept research papers where the tools have been used if appropriate acknowledgement is made. Science also does not allow AI to be credited as an author and has updated its licence and editorial policies to make explicit that text, figures, images, or graphics generated or produced by AI or LLMs cannot be used in submitted work.

Elsevier, which publishes almost 3,000 journals including The Lancet and Cell, has taken a similar stance. While it does not allow AI to be an author, AI can be used to improve the readability and language of an article, as long as the authors declare how the tools have been used.

Limitations and Drawbacks

As described under Accuracy above, GenAI tools generate convincing text by predicting the next most likely word based on their training data. These models, with billions of parameters, have been trained on vast amounts of text from the internet and can do a good job of predicting the next word, but plausibility is not accuracy. In academic writing this means output can be filled with fictional citations and references: the model knows a citation should appear at that point, so it makes one up. These factual errors are called AI hallucinations; the model's job is not to be right 100% of the time, it is to sound convincing most of the time.

Models can also be programmed not to answer certain questions that could be harmful, toxic or, in some instances, political, which many claim leads to bias. OpenAI claims that ChatGPT is politically neutral, but some users report that it would write positive poetry about US President Joe Biden yet refused to do the same for former President Donald Trump. The AI image generator Midjourney has banned a range of words, including some relating to the human reproductive system, from being used in prompts, to prevent people from generating gory, sexual or shocking images. While it can be argued that some or all of these measures are taken to protect users, they do impart a level of bias that users can't bypass.
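The simplest version of the prompt restrictions described above is a blocklist: the tool refuses any prompt containing a banned word. This sketch is purely illustrative — the terms and logic are invented here, and real moderation systems are far more sophisticated — but it shows why such filters are a blunt instrument that users cannot bypass.

```python
# Hypothetical blocked terms, invented for this example.
BLOCKED_TERMS = {"gore", "gory", "blood"}

def is_allowed(prompt: str) -> bool:
    """Return True only if the prompt contains none of the blocked terms."""
    words = set(prompt.lower().split())
    return words.isdisjoint(BLOCKED_TERMS)

print(is_allowed("a peaceful mountain landscape"))  # True
print(is_allowed("a gory battle scene"))            # False
```

Note that the filter applies regardless of intent: a medical illustrator and a bad actor are refused alike, which is exactly the kind of built-in judgement call that critics describe as a source of bias.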

Key considerations

Here are things to keep in mind when using or considering using generative AI.  

  • Is the use of generative AI allowed in my unit? 
  • If AI does this for me, what am I learning/not learning? 
  • How am I going to use this content? 
  • Who owns the content? Who created the model? 
  • Can I verify what has been generated?
  • Can I find out where the information came from? 
  • What biases might be involved? 