This presentation introduces the basics of artificial intelligence (AI) and generative AI, including their history and technological foundations. It discusses transformer architecture, large language models, retrieval augmented generation, and the differences between interpretive and generative AI, focusing on the advantages and limitations of these technologies as they are expressed in U-M's GenAI tools: U-M GPT, U-M Maizey, the U-M GPT Toolkit, and GoBlueAI.
Important links:
Slides for this presentation
Slides for the subsequent session: Introduction to the GenAI Instruction Maizey {co-sponsored with Instructor College}
GenAI Instruction Maizey
U-M Library and GenAI Instruction Libguide
Feedback form
Note on context windows (related to discussion at 20:16 in the video or slide 20 in the slide deck):
Information about file upload and context window limits for U-M GPT is available on the U-M GPT In Depth webpage. Aside from file upload limits in U-M GPT (10 files or 50 pages), the context windows for the models available through U-M GPT are large enough that users are unlikely to hit them with a single prompt, or even several. The limit is most likely to be reached over the course of an entire conversation, and U-M GPT notifies users when the context window limit is reached. Apart from the context window, users are limited to approximately 75 prompts per hour for text-based models (GPT-4o, Llama 3.2, etc.) and approximately 10 prompts per hour for image-based models (GPT-Image-1).
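To make the context-window idea concrete, here is a minimal sketch of how one might estimate whether a running conversation still fits in a model's window. The ~4-characters-per-token ratio is a rough heuristic for English text, not the actual tokenizer used by these models, and the 128,000-token window is the figure discussed in the session; exact counts would require the model's own tokenizer.

```python
# Sketch: estimating whether a conversation fits in a context window.
# Assumes a rough heuristic of ~4 characters per token for English text;
# real models count tokens with their own tokenizer.

CONTEXT_WINDOW = 128_000  # tokens, the window size discussed in the session


def estimate_tokens(text: str) -> int:
    """Approximate token count: roughly 4 characters per token."""
    return max(1, len(text) // 4)


def fits_in_window(conversation: list[str], window: int = CONTEXT_WINDOW) -> bool:
    """True if the running total of all turns stays within the window.

    Note that the whole conversation counts against the limit,
    not just the most recent prompt.
    """
    total = sum(estimate_tokens(turn) for turn in conversation)
    return total <= window


conversation = [
    "What is retrieval augmented generation?",
    "RAG combines a retriever over a document store with a generative model...",
]
print(fits_in_window(conversation))  # a short exchange easily fits
```

This illustrates why a single prompt rarely exhausts the window, while a long conversation eventually can: every prior turn stays in the running total.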
Note on historical Maizey context window (related to the discussion about context windows at around 28:33 in the video, or slide 33 in the slide deck):
The version of GPT used in Maizey before GPT-4.1 (GPT-4o) also had a context window of 128,000 tokens. An earlier version (in place in early 2024) had a smaller context window of 8,000 tokens.