Search engines and AI assistants differ not just in how they work, but also in the sources they use:
- Search engines retrieve only from indexed webpages, pointing you to live sites you can evaluate yourself
- AI assistants generate answers using two types of sources:
  - Training data (books, articles, websites, and datasets ingested during development)
  - Live web content accessed through built-in search features
This explains why:
- Search results are transparent lists of links
- AI outputs are blended, conversational answers that may be harder to trace
- Reproducibility is an issue: the same AI query may not always give the same answer
AI assistants may draw on academic material as well as less reliable sources like blogs or Reddit. However:
- Much academic and professional literature remains behind paywalls
- These tools mainly rely on open-access research and freely available web content
- Outputs often blend academic and non-academic material, without transparent, verifiable references
This is why AI-generated content (from tools like ChatGPT, Claude, or Copilot Chat) is not considered an academic source, even if it discusses academic topics. To access paywalled academic and professional literature, use LibrarySearch and subject databases provided by the University Library.
Bottom line: AI is a useful assistant, but it does not replace proper research. For essays or lab reports, you must reference published academic sources. In creative portfolios, you may use AI to brainstorm, but you must still evidence your design research. In professional fields such as Law, Nursing, or Engineering, specialist databases and official sources remain essential because AI assistants cannot access all of the required evidence. These practices also prepare you for Honours projects and postgraduate research.