🧠 Fixing the 'charmap' Codec Decode Error in litellm (Windows)

If you're working with Python packages like CrewAI or LangChain, you might have run into an unexpected error: UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1980. This is a Windows-specific error caused by reading UTF-8 files with the default cp1252 encoding. In this post, I’ll walk you through exactly why it happens and how I fixed it while building a math tutoring system with OpenAI and CrewAI, even though my own code never uses litellm directly.

2/21/2025 · 2 min read

💥 The Problem
While building a math tutoring system using CrewAI and LangChain with an OpenAI backend (ChatOpenAI), I ran into the following Python error:
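The full message ends with the standard charmap failure (the byte value and offset below are from my run; yours will likely differ):

```text
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1980: character maps to <undefined>
```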

This happened during startup, even though I was not directly importing or using litellm. It turned out that CrewAI depends on litellm internally, especially for tokenizer utilities like anthropic_tokenizer.json.

The core issue boils down to how Python opens files by default on Windows: it uses the cp1252 encoding, which fails when reading files that contain characters outside its range (such as UTF-8 files with non-ASCII content).
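You can check which default your interpreter will use for text files with a one-liner (on most Windows installs this prints cp1252, while Linux and macOS typically report UTF-8):

```python
import locale

# The encoding open() falls back to when you don't pass encoding=...
print(locale.getpreferredencoding(False))  # typically 'cp1252' on Windows, 'UTF-8' on Linux/macOS
```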

🧩 My Setup
I had:
  • A MathTutoringSystem built using ChatOpenAI from LangChain (sketched below).
  • A .env file for managing secrets, including the OPENAI_API_KEY.
  • No direct reference to litellm in my code.
  • crewai installed, which pulled in litellm as a dependency.
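Roughly, the relevant wiring looked like this (a simplified sketch, not my full tutoring code; the import path assumes the langchain-openai package, and the model name is just a placeholder):

```python
from dotenv import load_dotenv            # python-dotenv
from langchain_openai import ChatOpenAI   # pip install langchain-openai

# Pull OPENAI_API_KEY (and any other secrets) out of the local .env file.
load_dotenv()

# The tutoring agents talk to OpenAI through this object; nothing here
# imports litellm. crewai pulls it in on its own.
llm = ChatOpenAI(model="gpt-4o-mini")     # placeholder model name
```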

Even after uninstalling litellm, I couldn’t avoid reinstalling it because crewai required it. But the version available via PyPI was causing the decode error.

🔍 Root Cause
The error trace led me to this line inside litellm:
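From memory, the offending pattern looked roughly like this (the real module, path, and variable names inside litellm differ):

```python
# Inside litellm: open() is called without an encoding,
# so Windows falls back to cp1252.
json_data = json.load(open(tokenizer_json_path))
```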

Python defaults to the locale encoding (cp1252 on most Windows systems) unless told otherwise, so it failed while decoding a UTF-8 encoded file.

✅ The Fix
To fix the issue once and for all, I manually patched the file in my installed package:
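The snippets below are illustrative of the pattern rather than a verbatim copy of litellm’s source; the only change that matters is the added encoding argument.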


Before:
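```python
json_data = json.load(open(tokenizer_json_path))                    # default encoding (cp1252 on Windows)
```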

After:
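```python
json_data = json.load(open(tokenizer_json_path, encoding="utf-8"))  # explicit UTF-8, works on any OS
```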

By explicitly specifying encoding="utf-8", the file now loads correctly, regardless of the operating system or locale settings.

🛠 Optional Bonus: Monkey-Patching Trick (Advanced)

If you don’t want to modify installed packages every time, here’s a monkey-patch you can add at the top of your main script to override json.load() behavior globally:
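Something along these lines works (a rough sketch: the wrapper names are mine, it only helps when json.load is handed a file object that was opened from a real path, and it must run before crewai/litellm are imported):

```python
import json

_original_json_load = json.load

def _utf8_json_load(fp, **kwargs):
    """Re-read named files as UTF-8 before parsing, dodging the cp1252 default."""
    name = getattr(fp, "name", None)
    if isinstance(name, str):
        # The file object came from a real path: read it again with an
        # explicit UTF-8 encoding instead of whatever it was opened with.
        with open(name, encoding="utf-8") as utf8_fp:
            return json.loads(utf8_fp.read(), **kwargs)
    # Anything else (StringIO, sockets, ...) falls back to the original behavior.
    return _original_json_load(fp, **kwargs)

json.load = _utf8_json_load
```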

⚠️ Use this carefully, as it changes the behavior of every json.load() call across your application.


📦 Why You Might Still Need litellm

Even if you’re not using litellm directly, libraries like crewai depend on it under the hood, so you can’t remove it entirely without forking or rewriting parts of CrewAI. The best strategy is to either:

  • Patch the encoding error manually.
  • Install a fixed version from GitHub (see the pip sketch after this list).
  • Let the monkey-patch handle decoding globally.
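For the GitHub route, installing straight from the repository looks roughly like this (assuming the fix you need has already landed on litellm’s main branch):

```bash
pip install --upgrade git+https://github.com/BerriAI/litellm.git
```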

🚀 Final Thoughts


Debugging file encoding issues can be frustrating, but once you understand how Python handles character sets by default, it becomes easier to spot and prevent these problems. I hope this guide helps anyone else stumbling upon the same litellm error on Windows.


If you found this helpful, consider starring the litellm repo on GitHub and contributing fixes upstream!

📬 Let’s Connect

If you're building educational tools, AI tutors, or just debugging tricky library issues, I’d love to hear from you! Feel free to leave a comment or reach out on LinkedIn/Twitter/X.

📁 Hashtags

#Python #Litellm #CrewAI #OpenAI #LangChain #UnicodeDecodeError #Windows #TechTips #DevJourney #CodeFix

Check out the YouTube video for a walkthrough of this fix.