AI researchers have been developing large language models (LLMs) to better understand learning and cognition. OpenAI's newest model, GPT-4, is one such example, trained at an unprecedented scale of compute and data. In this paper, we report on early experiments with GPT-4, detailing our findings and discussing its capabilities and potential uses.