ChatGPT, a large language model built on the GPT-3.5 architecture, has shown remarkable potential for correcting grammatical errors through in-context learning. This study examines its capabilities in zero-shot and few-shot chain-of-thought (CoT) settings for Grammatical Error Correction (GEC). We evaluate ChatGPT's performance on this task and compare it against other models.
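To make the two in-context settings concrete, the following sketch builds a zero-shot prompt and a few-shot CoT prompt for a GEC query. The prompt wording and the example sentences here are hypothetical illustrations, not the prompts used in the study.

```python
def build_gec_prompt(sentence, examples=None, cot=False):
    """Assemble a GEC prompt for an instruction-following LLM.

    Zero-shot: the instruction and the target sentence only.
    Few-shot CoT: worked examples whose reasoning is spelled out
    before the correction, encouraging step-by-step answers.
    """
    instruction = "Correct any grammatical errors in the sentence below."
    if cot:
        instruction += (" Explain the errors step by step,"
                        " then give the corrected sentence.")
    parts = [instruction]
    # Each few-shot example pairs a source sentence with its
    # reasoning chain and the corrected target.
    for src, reasoning, tgt in (examples or []):
        parts.append(f"Sentence: {src}\n"
                     f"Reasoning: {reasoning}\n"
                     f"Corrected: {tgt}")
    parts.append(f"Sentence: {sentence}")
    return "\n\n".join(parts)


# Zero-shot: no demonstrations.
zero_shot = build_gec_prompt("She go to school every day.")

# Few-shot CoT: one worked demonstration with explicit reasoning.
few_shot_cot = build_gec_prompt(
    "She go to school every day.",
    examples=[(
        "He have a car.",
        "'have' does not agree with the singular subject 'He'.",
        "He has a car.",
    )],
    cot=True,
)
```

The resulting strings would be sent to the model as user messages; only the CoT variant asks the model to articulate its reasoning before emitting the correction.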