Reasoning across a 402-page transcript | Gemini 1.5 Pro Demo
This is a demo of long-context understanding, an experimental feature in our newest model, Gemini 1.5 Pro. It uses a 402-page PDF transcript and a series of multimodal prompts.
The demo is a continuous recording of a live model interaction. Some sequences have been shortened, with the actual response times shown.
Token count details: The input PDF file (326,658 tokens) and image (256 tokens) total 326,914 tokens. The text prompts contribute the remaining tokens, yielding the 327,309-token total shown in the interface.
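As a quick sanity check on those figures, the difference between the on-screen total and the PDF-plus-image count gives the tokens contributed by the text prompts (a minimal arithmetic sketch in Python; the variable names are illustrative only):

pdf_tokens = 326_658    # 402-page PDF transcript
image_tokens = 256      # single image input
total_shown = 327_309   # total token count shown in the interface

# Remaining tokens come from the text prompts
text_tokens = total_shown - (pdf_tokens + image_tokens)
print(text_tokens)      # 395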
To learn more about Gemini 1.5, visit