Python 3.11 will be released in October 2022. Even though October is still months away, you can already preview some of the upcoming features, including the new task and exception groups that Python 3.11 has to offer. Task groups let you organize your asynchronous code better, while exception groups can collect several errors happening at the same time and let you handle them in a straightforward manner.
In this tutorial, you'll:
- Install Python 3.11 alpha on your computer, next to your current Python installations
- Explore how exception groups can organize several unrelated errors
- Filter exception groups with except* and handle different types of errors
- Use task groups to set up your asynchronous code
- Test smaller improvements in Python 3.11, including exception notes and a new internal representation of exceptions
There are many other improvements and features coming in Python 3.11. Check out what's new in the changelog for an up-to-date list.
Free Download: Click here to download free sample code that demonstrates some of the new features of Python 3.11.
Python 3.11 Alpha
A new version of Python is released in October each year. The code is developed and tested over a seventeen-month period before the release date. New features are implemented during the alpha phase, which lasts until May, about five months before the final release.
About once a month during the alpha phase, Python's core developers release a new alpha version to show off the new features, test them, and get early feedback. Currently, the latest alpha version of Python 3.11 is 3.11.0a7, released on April 5, 2022.
Note: This tutorial uses the seventh alpha version of Python 3.11. You might experience small differences if you use a later version. In particular, a few aspects of the task group implementation are still being discussed. However, you can expect most of what you learn here to stay the same through the alpha and beta phases and in the final release of Python 3.11.
The first beta release of Python 3.11 is just around the corner, planned for May 6, 2022. Typically, no new features are added during the beta phase. Instead, the time between the feature freeze and the release date is used to test and solidify the code.
Cool New Features
Some of the currently announced highlights of Python 3.11 include:
- Exception groups, which will allow programs to raise and handle multiple exceptions at the same time
- Task groups, to improve how you run asynchronous code
- Enhanced error messages, which will help you more effectively debug your code
- Optimizations, promising to make Python 3.11 significantly faster than previous versions
- Static typing improvements, which will let you annotate your code more precisely
- TOML support, which allows you to parse TOML documents using the standard library
There's a lot to look forward to in Python 3.11! For a comprehensive overview, check out Python 3.11: Cool New Features for You to Try. You can also dive deeper into some of the features listed above in the other articles in this series.
In this tutorial, you'll focus on how exception groups can handle multiple unrelated exceptions at once and how this feature paves the way for task groups, which make concurrent programming in Python more convenient. You'll also get a peek at some of the other, smaller features that'll be shipping with Python 3.11.
Installation
To play with the code examples in this tutorial, you'll need to install a version of Python 3.11 onto your system. In this subsection, you'll learn about a few different ways to do this: using Docker, using pyenv, or installing from source. Pick the one that works best for you and your system.
Note: Alpha versions are previews of upcoming features. While most features will work well, you shouldn't depend on any Python 3.11 alpha version in production or anywhere else where potential bugs will have serious consequences.
If you have access to Docker on your system, then you can download the latest version of Python 3.11 by pulling and running the python:3.11-rc-slim Docker image:
$ docker pull python:3.11-rc-slim
Unable to find image 'python:3.11-rc-slim' locally
latest: Pulling from library/python
[...]
$ docker run -it --rm python:3.11-rc-slim
This drops you into a Python 3.11 REPL. Check out Run Python Versions in Docker for more information about working with Python through Docker, including how to run scripts.
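If you want to try one of your own scripts with the containerized interpreter, one option is to mount your current directory into the container. The following is only a minimal sketch, with your_script.py standing in for one of your own files and $PWD assuming a Unix-like shell:
$ docker run -it --rm -v "$PWD:/app" -w /app python:3.11-rc-slim \
      python your_script.py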
The pyenv tool is great for managing different versions of Python on your system, and you can use it to install Python 3.11 alpha if you like. It comes in two variants: pyenv for Linux and macOS, and pyenv-win for Windows.
Use pyenv install --list to check which versions of Python 3.11 are available. Then, install the latest one:
$ pyenv install 3.11.0a7
Downloading Python-3.11.0a7.tar.xz...
[...]
The installation may take a few minutes. Once your new alpha version is installed, you can create a virtual environment where you can play with it.
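Exactly how you do that depends on your setup. As a rough sketch, assuming that pyenv's shims are active in your shell, something like the following works on Linux and macOS, with venv as an arbitrary name for the environment folder:
$ pyenv local 3.11.0a7
$ python -m venv venv
$ source venv/bin/activate
(venv) $ python --version
Python 3.11.0a7
On Windows, activate the environment with venv\Scripts\activate instead of using the source command.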
You can also install Python from one of the pre-release versions available on python.org. Choose the latest pre-release and scroll down to the Files section at the bottom of the page. Download and install the file corresponding to your system. See Python 3 Installation & Setup Guide for more information.
Many of the examples in this tutorial will work on older versions of Python, but in general, you should run them with your Python 3.11 executable. Exactly how you run the executable depends on how you installed it. If you need help, see the relevant tutorial on Docker, pyenv, virtual environments, or installing from source.
Exception Groups and except* in Python 3.11
Dealing with exceptions is an important part of programming. Sometimes errors happen because of bugs in your code. In those cases, good error messages will help you debug your code efficiently. Other times, errors happen through no fault of your code. Maybe the user tries to open a corrupt file, maybe the network is down, or maybe authentication to a database is missing.
Usually, only one error happens at a time. It's possible that another error would've happened if your code had continued to run. But Python will typically only report the first error it encounters. There are situations where it makes sense to report several bugs at once though:
- Several concurrent tasks can fail at the same time.
- Cleanup code can cause its own errors.
- Code can try several different alternatives that all raise exceptions.
In Python 3.11, a new feature called exception groups is available. It provides a way to group unrelated exceptions together, and it comes with a new except* syntax for handling them. A detailed description is available in PEP 654: Exception Groups and except*.
PEP 654 has been written and implemented by Irit Katriel, one of CPython's core developers, with support from asyncio maintainer Yury Selivanov and former BDFL Guido van Rossum. It was presented and discussed at the Python Language Summit in May 2021.
This section will teach you how to work with exception groups. In the next section, you'll see a practical example of concurrent code that uses exception groups to raise and handle errors from several tasks simultaneously.
Handle Regular Exceptions With except
Before you explore exception groups, you'll review how regular exception handling works in Python. If you're already comfortable handling errors in Python, you won't learn anything new in this subsection. However, this review will serve as a contrast to what you'll learn about exception groups later. Everything you'll see in this subsection of the tutorial works in all versions of Python 3, including Python 3.10.
Exceptions break the normal flow of a program. If an exception is raised, then Python drops everything else and looks for code that handles the error. If there are no such handlers, then the program stops, regardless of what the program was doing.
You can raise an error yourself using the raise keyword:
>>> raise ValueError(654)
Traceback (most recent call last):
...
ValueError: 654
Here, you explicitly raise a ValueError with the description 654. You can see that Python provides a traceback, which tells you that there's an unhandled error.
Sometimes, you raise errors like this in your code to signal that something has gone wrong. However, it's more common to encounter errors raised by Python itself or some library that you're using. For example, Python doesn't let you add a string and an integer, and raises a TypeError if you attempt this:
>>> "3" + 11
Traceback (most recent call last):
...
TypeError: can only concatenate str (not "int") to str
Most exceptions come with a description that can help you figure out what went wrong. In this case, it tells you that your second term should also be a string.
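One straightforward way to fix this particular error is to convert one of the operands so that both have the same type:
>>> "3" + str(11)
'311'
>>> int("3") + 11
14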
You use try…except blocks to handle errors. Sometimes, you use these to just log the error and continue running. Other times, you manage to recover from the error or calculate some alternative value instead.
A short try…except block may look as follows:
>>> try:
...     raise ValueError(654)
... except ValueError as err:
...     print(f"Got a bad value: {err}")
...
Got a bad value: 654
You handle ValueError exceptions by printing a message to your console. Note that because you handled the error, there's no traceback in this example. However, other types of errors aren't handled:
>>> try:
...     "3" + 11
... except ValueError as err:
...     print(f"Got a bad value: {err}")
...
Traceback (most recent call last):
...
TypeError: can only concatenate str (not "int") to str
Even though the error happens within a try…except block, it's not handled because there's no except clause that matches a TypeError. You can handle several kinds of errors in one block:
>>> try:
...     "3" + 11
... except ValueError as err:
...     print(f"Got a bad value: {err}")
... except TypeError as err:
...     print(f"Got bad types: {err}")
...
Got bad types: can only concatenate str (not "int") to str
This example will handle both ValueError and TypeError exceptions.
Exceptions are defined in a hierarchy. For example, a ModuleNotFoundError is a kind of ImportError, which is a kind of Exception.
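You can verify this hierarchy yourself with issubclass() or by inspecting the method resolution order of an exception class:
>>> issubclass(ModuleNotFoundError, ImportError)
True
>>> issubclass(ImportError, Exception)
True
>>> ModuleNotFoundError.__mro__
(<class 'ModuleNotFoundError'>, <class 'ImportError'>, <class 'Exception'>,
 <class 'BaseException'>, <class 'object'>)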
Note: Because most exceptions inherit from Exception, you could try to simplify your error handling by using only except Exception blocks. This is usually a bad idea. You want your exception blocks to be as specific as possible, to avoid unexpected errors occurring and messing up your error handling.
The first except clause that matches the error will trigger the exception handling:
>>> try:
...     import no_such_module
... except ImportError as err:
...     print(f"ImportError: {err.__class__}")
... except ModuleNotFoundError as err:
...     print(f"ModuleNotFoundError: {err.__class__}")
...
ImportError: <class 'ModuleNotFoundError'>
When you try to import a module that doesn't exist, Python raises a ModuleNotFoundError. However, since ModuleNotFoundError is a kind of ImportError, your error handling triggers the except ImportError clause. Note that:
- At most one except clause will trigger
- The first except clause that matches will trigger
If you've worked with exceptions before, this may seem intuitive. However, you'll see later that exception groups behave differently.
While at most one exception is active at a time, it's possible to chain related exceptions. This chaining was introduced by PEP 3134 for Python 3.0. As an example, observe what happens if you raise a new exception while handling an error:
>>> try:
...     "3" + 11
... except TypeError:
...     raise ValueError(654)
...
Traceback (most recent call last):
...
TypeError: can only concatenate str (not "int") to str
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
...
ValueError: 654
Note the line During handling of the above exception, another exception occurred. There's one traceback before this line, representing the original TypeError caused by your code. Then, there's another traceback below the line, representing the new ValueError that you raised while handling the TypeError.
This behavior is particularly useful if you happen to have an issue in your error handling code, because you then get information about both your original error and the bug in your error handler.
You can also explicitly chain exceptions together yourself using a raise…from statement. While you can use chained exceptions to raise several exceptions at once, note that the mechanism is meant for exceptions that are related, specifically where one exception happens during the handling of another.
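For reference, this is what explicit chaining with raise…from looks like. Compared with the implicit chaining above, only the sentence between the two tracebacks changes:
>>> try:
...     "3" + 11
... except TypeError as err:
...     raise ValueError(654) from err
...
Traceback (most recent call last):
...
TypeError: can only concatenate str (not "int") to str
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
...
ValueError: 654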
This is different from the use case that exception groups are designed to handle. Exception groups will group together exceptions that are unrelated, in the sense that they happen independently of each other. When handling chained exceptions, you're only able to catch and handle the last error in the chain. As you'll learn soon, you can catch all the exceptions in an exception group.
Group Exceptions With ExceptionGroup
In this subsection, you'll explore the new ExceptionGroup class that's available in Python 3.11. First, note that an ExceptionGroup is also a kind of Exception:
>>> issubclass(ExceptionGroup, Exception)
True
As ExceptionGroup is a subclass of Exception, you can use Python's regular exception handling to work with it. You can raise an ExceptionGroup with raise, although you probably won't do that very often unless you're implementing some low-level library. It's also possible to catch an ExceptionGroup with except ExceptionGroup. However, as you'll learn in the next subsection, you're usually better off using the new except* syntax.
In contrast to most other exceptions, exception groups take two arguments when they're initialized:
- The usual description
- A sequence of sub-exceptions
The sequence of sub-exceptions can include other exception groups, but it can't be empty:
>>> ExceptionGroup("one error", [ValueError(654)])
ExceptionGroup('one error', [ValueError(654)])
>>> ExceptionGroup("two errors", [ValueError(654), TypeError("int")])
ExceptionGroup('two errors', [ValueError(654), TypeError('int')])
>>> ExceptionGroup("nested",
... [
... ValueError(654),
... ExceptionGroup("imports",
... [
... ImportError("no_such_module"),
... ModuleNotFoundError("another_module"),
... ]
... ),
... ]
... )
ExceptionGroup('nested', [ValueError(654), ExceptionGroup('imports',
[ImportError('no_such_module'), ModuleNotFoundError('another_module')])])
>>> ExceptionGroup("no errors", [])
Traceback (most recent call last):
...
ValueError: second argument (exceptions) must be a non-empty sequence
In this example, you're instantiating a few different exception groups that show that exception groups can contain one exception, several exceptions, and even other exception groups. Exception groups aren't allowed to be empty, though.
Your first encounter with an exception group is likely to be its traceback. Exception group tracebacks are formatted to clearly show you the structure within the group. You'll see a traceback when you raise an exception group:
>>> raise ExceptionGroup("nested",
...     [
...         ValueError(654),
...         ExceptionGroup("imports",
...             [
...                 ImportError("no_such_module"),
...                 ModuleNotFoundError("another_module"),
...             ]
...         ),
...         TypeError("int"),
...     ]
... )
  + Exception Group Traceback (most recent call last):
  | ...
  | ExceptionGroup: nested (3 sub-exceptions)
  +-+---------------- 1 ----------------
    | ValueError: 654
    +---------------- 2 ----------------
    | ExceptionGroup: imports (2 sub-exceptions)
    +-+---------------- 1 ----------------
      | ImportError: no_such_module
      +---------------- 2 ----------------
      | ModuleNotFoundError: another_module
      +------------------------------------
    +---------------- 3 ----------------
    | TypeError: int
    +------------------------------------
The traceback lists all exceptions that are part of an exception group. Additionally, the nested tree structure of exceptions within the group is indicated, both graphically and by listing how many sub-exceptions there are in each group.
You learned earlier that ExceptionGroup doubles as a regular Python exception. This means that you can catch exception groups with regular except blocks:
>>> try:
...     raise ExceptionGroup("group", [ValueError(654)])
... except ExceptionGroup:
...     print("Handling ExceptionGroup")
...
Handling ExceptionGroup
This usually isn't very helpful, because you're more interested in the errors that are nested inside the exception group. Note that you're not able to directly handle those:
>>> try:
...     raise ExceptionGroup("group", [ValueError(654)])
... except ValueError:
...     print("Handling ValueError")
...
  + Exception Group Traceback (most recent call last):
  | ...
  | ExceptionGroup: group (1 sub-exception)
  +-+---------------- 1 ----------------
    | ValueError: 654
    +------------------------------------
Even though the exception group contains a ValueError, you're not able to handle it with except ValueError. Instead, you should use the new except* syntax to handle exception groups. You'll learn how that works in the next subsection.
Filter Exceptions With except*
There have been attempts at handling multiple errors in earlier versions of Python. For example, the popular Trio library includes a MultiError exception that can wrap other exceptions. However, because Python is primed toward handling one error at a time, dealing with MultiError exceptions is less than ideal.
The new except* syntax in Python 3.11 makes it more convenient to gracefully deal with several errors at the same time. Exception groups have a few attributes and methods that regular exceptions don't have. In particular, you can access .exceptions to obtain a tuple of all sub-exceptions in the group. You could, for example, rewrite the last example in the previous subsection as follows:
>>> try:
...     raise ExceptionGroup("group", [ValueError(654)])
... except ExceptionGroup as eg:
...     for err in eg.exceptions:
...         if isinstance(err, ValueError):
...             print("Handling ValueError")
...         elif isinstance(err, TypeError):
...             print("Handling TypeError")
...
Handling ValueError
Once you catch an ExceptionGroup, you loop over all the sub-exceptions and handle them based on their type. While this is possible, it quickly gets cumbersome. Also note that the code above doesn't handle nested exception groups.
Instead, you should use except* to handle exception groups. You can rewrite the example once more:
>>> try:
...     raise ExceptionGroup("group", [ValueError(654)])
... except* ValueError:
...     print("Handling ValueError")
... except* TypeError:
...     print("Handling TypeError")
...
Handling ValueError
Each except* clause handles an exception group that's a subgroup of the original exception group, containing all exceptions matching the given type of error. Consider this slightly more involved example:
>>> try:
...     raise ExceptionGroup(
...         "group", [TypeError("str"), ValueError(654), TypeError("int")]
...     )
... except* ValueError as eg:
...     print(f"Handling ValueError: {eg.exceptions}")
... except* TypeError as eg:
...     print(f"Handling TypeError: {eg.exceptions}")
...
Handling ValueError: (ValueError(654),)
Handling TypeError: (TypeError('str'), TypeError('int'))
Note that in this example, both except* clauses trigger. This is different from regular except clauses, where at most one clause triggers at a time.
First, the ValueError is filtered from the original exception group and handled. The TypeError exceptions remain unhandled until they're caught by except* TypeError. Each clause is only triggered once, even if there are more exceptions of that type. Your handling code must therefore deal with exception groups.
You may end up only partially handling an exception group. For example, you could handle only ValueError from the previous example:
>>> try:
...     raise ExceptionGroup(
...         "group", [TypeError("str"), ValueError(654), TypeError("int")]
...     )
... except* ValueError as eg:
...     print(f"Handling ValueError: {eg.exceptions}")
...
Handling ValueError: (ValueError(654),)
  + Exception Group Traceback (most recent call last):
  | ...
  | ExceptionGroup: group (2 sub-exceptions)
  +-+---------------- 1 ----------------
    | TypeError: str
    +---------------- 2 ----------------
    | TypeError: int
    +------------------------------------
In this case, the ValueError is handled. But that leaves two unhandled errors in the exception group. Those errors then bubble out and create a traceback. Note that the ValueError is not part of the traceback because it's already been handled. You can see that except* behaves differently from except:
- Several except* clauses may trigger.
- except* clauses that match an error remove that error from the exception group.
This is a clear change from how plain except works, and may feel a bit unintuitive at first. However, the changes make it more convenient to deal with multiple concurrent errors.
You can split exception groups manually, although you probably won't need to:
>>> eg = ExceptionGroup(
...     "group", [TypeError("str"), ValueError(654), TypeError("int")]
... )
>>> eg
ExceptionGroup('group', [TypeError('str'), ValueError(654), TypeError('int')])
>>> value_errors, eg = eg.split(ValueError)
>>> value_errors
ExceptionGroup('group', [ValueError(654)])
>>> eg
ExceptionGroup('group', [TypeError('str'), TypeError('int')])
>>> import_errors, eg = eg.split(ImportError)
>>> print(import_errors)
None
>>> eg
ExceptionGroup('group', [TypeError('str'), TypeError('int')])
>>> type_errors, eg = eg.split(TypeError)
>>> type_errors
ExceptionGroup('group', [TypeError('str'), TypeError('int')])
>>> print(eg)
None
You can use .split() on exception groups to split them into two new exception groups. The first group consists of errors that match a given error, while the second group consists of those errors that are left over. If any of the groups end up empty, then they're replaced by None. See PEP 654 and the documentation for more information if you want to manually manipulate exception groups.
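A related method is .subgroup(), which returns only the matching part and leaves the original group untouched. As a small illustration, using the same kind of group as above:
>>> eg = ExceptionGroup(
...     "group", [TypeError("str"), ValueError(654), TypeError("int")]
... )
>>> eg.subgroup(TypeError)
ExceptionGroup('group', [TypeError('str'), TypeError('int')])
>>> print(eg.subgroup(KeyError))
None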
Exception groups won't replace regular exceptions! Instead, they're designed to handle the specific use case where it's useful to deal with several exceptions at the same time. Libraries should clearly differentiate between functions that can raise regular exceptions and functions that can raise exception groups.
The authors of PEP 654 recommend that changing a function from raising an exception to raising an exception group should be considered a breaking change because anyone using that library needs to update how they handle errors. In the next section, you'll learn about task groups. They're new in Python 3.11 and are the first part of the standard library to raise exception groups.
You've seen that it's possible, but cumbersome, to deal with exception groups within regular except blocks. It's also possible to do the opposite. except* can handle regular exceptions:
>>> try:
...     raise ValueError(654)
... except* ValueError as eg:
...     print(type(eg), eg.exceptions)
...
<class 'ExceptionGroup'> (ValueError(654),)
Even though you raise a single ValueError exception, the except* mechanism wraps the exception in an exception group before handling it. In theory, this means that you can replace all your except blocks with except*. In practice, that would be a bad idea. Exception groups are designed to handle multiple exceptions. Don't use them unless you need to!
Exception groups are new in Python 3.11. However, if you're using an older version of Python, then you can use the exceptiongroup backport to access the same functionality. Instead of except*, the backport uses an exceptiongroup.catch() context manager to handle multiple errors. You can learn more about catching multiple exceptions in How to Catch Multiple Exceptions in Python.
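Roughly, the backport's catch() takes a mapping from exception types to handler callables, where each handler receives the matching exception group. The sketch below mirrors the kind of usage shown in the project's documentation, so check the exceptiongroup docs for the authoritative details:
from exceptiongroup import ExceptionGroup, catch

def handle_value_errors(eg: ExceptionGroup) -> None:
    # Called with an exception group holding the matching ValueError instances
    print(f"Handling ValueError: {eg.exceptions}")

with catch({ValueError: handle_value_errors}):
    raise ExceptionGroup("group", [ValueError(654)])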
Asynchronous Task Groups in Python 3.11
You learned about exception groups in the previous section. When would you use them? As noted, exception groups and except* aren't meant to replace regular exceptions and except.
In fact, chances are that you don't have a good use case for raising exception groups in your own code. They'll likely be used mostly in low-level libraries. As Python 3.11 gets more widespread, packages that you rely on may start raising exception groups, so you may need to handle them in your applications.
One of the motivating use cases for introducing exception groups is dealing with errors in concurrent code. If you have several tasks running at the same time, several of them may run into issues. Until now, Python hasn't had a good way of dealing with that. Several asynchronous libraries, like Trio, AnyIO, and Curio, have added a kind of multi-error container. But without language support, it's still complicated to handle concurrent errors.
If you'd like to see a video presentation of exception groups and their use in concurrent programming, have a look at Łukasz Langa's presentation How Exception Groups Will Improve Error Handling in AsyncIO.
In this section, you'll explore a toy example that simulates analyzing several files concurrently. You'll build the example from a basic synchronous application where the files are analyzed in sequence up to a full asynchronous tool that uses the new Python 3.11 asyncio task groups. Similar task groups exist in other asynchronous libraries, but the new implementation is the first to use exception groups in order to smooth out error handling.
Your first versions of the analysis tool will work with older versions of Python, but you'll need Python 3.11 to take advantage of task and exception groups in the final examples.
Analyze Files Sequentially
In this subsection, you'll implement a tool that can count the number of lines in several files. The output will be animated so that you get a nice visual representation of the distribution of file sizes. The final result will look something like this:
[Animation: each file gets its own row in the terminal, with a growing line of boxes and a final line count]
You'll expand this program to explore some features of asynchronous programming. While this tool isn't necessarily useful on its own, it's explicit so that you can clearly see what's happening, and it's flexible so that you can introduce several exceptions and work toward handling them with exception groups.
Colorama is a library that gives you more control of output in your terminal. You'll use it to create an animation as your program counts the number of lines in the different files. First, install it with pip:
$ python -m pip install colorama
As the name suggests, Colorama's primary use case is adding color to your terminal. However, you can also use it to print text at specific locations. Write the following code into a file named count.py:
# count.py

import sys
import time

import colorama
from colorama import Cursor

colorama.init()

def print_at(row, text):
    print(Cursor.POS(1, 1 + row) + str(text))
    time.sleep(0.03)

def count_lines_in_file(file_num, file_name):
    counter_text = f"{file_name[:20]:<20} "
    with open(file_name, mode="rt", encoding="utf-8") as file:
        for line_num, _ in enumerate(file, start=1):
            counter_text += "□"
            print_at(file_num, counter_text)
    print_at(file_num, f"{counter_text} ({line_num})")

def count_all_files(file_names):
    for file_num, file_name in enumerate(file_names, start=1):
        count_lines_in_file(file_num, file_name)

if __name__ == "__main__":
    count_all_files(sys.argv[1:])
The print_at() function is at the heart of the animation. It uses Colorama's Cursor.POS() to print some text at a particular row or line in your terminal. Next, it sleeps for a short while to create the animation effect.
You use count_lines_in_file() to analyze and animate one file. The function opens a file and iterates through it, one line at a time. For each line, it adds a box (□) to a string and uses print_at() to continually print the string on the same row. This creates the animation. At the end, the total number of lines is printed.
Note: Positioning your terminal cursor with Colorama is a quick way to create a simple animation. However, it does mess with the regular flow of your terminal, and you may experience some issues with text being overwritten.
You'll have a smoother experience by clearing the screen before analyzing the files and by setting the cursor below your animation at the end. You can do this by adding something like the following to your main block:
# count.py

# ...

if __name__ == "__main__":
    print(colorama.ansi.clear_screen())
    count_all_files(sys.argv[1:])
    print(Cursor.POS(1, 1 + len(sys.argv)))
You can also change the number that's added to the second argument of Cursor.POS() here and in print_at() to get a behavior that plays nicely with your terminal setup. When you find a number that works, you should do similar customizations in later examples as well.
Your program's entry point is count_all_files(). This loops over all filenames that you provide as command-line arguments and calls count_lines_in_file() on them.
Try out your line counter! You run the program by providing files that should be analyzed on the command line. For example, you can count the number of lines in your source code as follows:
$ python count.py count.py
count.py             □□□□□□□□□□□□□□□□□□□□□□□□□□□□ (28)
This counts the number of lines in count.py. You should create a few other files that you'll use to explore your line counter. Some of these files will expose that you're not doing any exception handling at the moment. You can create a few new files with the following code:
>>> import pathlib
>>> import string
>>> chars = string.ascii_uppercase
>>> data = [c1 + c2 for c1, c2 in zip(chars[:13], chars[13:])]
>>> pathlib.Path("rot13.txt").write_text("\n".join(data))
38
>>> pathlib.Path("empty_file.txt").touch()
>>> bbj = [98, 108, 229, 98, 230, 114, 115, 121, 108, 116, 101, 116, 248, 121]
>>> pathlib.Path("not_utf8.txt").write_bytes(bytes(bbj))
14
You've created three files: rot13.txt, empty_file.txt, and not_utf8.txt. The first file contains the letters that map to each other in the ROT13 cipher. The second file is a completely empty file, while the third file contains some data that's not UTF-8 encoded. As you'll see soon, the last two files will create problems for your program.
To count the number of lines in two files, you provide both their names on the command line:
$ python count.py count.py rot13.txt
count.py             □□□□□□□□□□□□□□□□□□□□□□□□□□□□ (28)
rot13.txt            □□□□□□□□□□□□□ (13)
You call count_all_files() with all the arguments provided at the command line. The function then loops over each file name.
If you provide the name of a file that doesn't exist, then your program will raise an exception that tells you so:
$ python count.py wrong_name.txt
Traceback (most recent call last):
...
FileNotFoundError: [Errno 2] No such file or directory: 'wrong_name.txt'
Something similar will happen if you try to analyze empty_file.txt or not_utf8.txt:
$ python count.py empty_file.txt
Traceback (most recent call last):
...
UnboundLocalError: cannot access local variable 'line_num' where it is
not associated with a value
$ python count.py not_utf8.txt
Traceback (most recent call last):
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 2:
invalid continuation byte
Both cases raise errors. For empty_file.txt, the issue is that line_num gets defined by iterating over the lines of the file. If there are no lines in the file, then line_num isn't defined, and you get an error when you try to access it. The problem with not_utf8.txt is that you try to UTF-8-decode something that isn't UTF-8 encoded.
In the next subsections, you'll use these errors to explore how exception groups can help you improve your error handling. For now, observe what happens if you try to analyze two files that both raise an error:
$ python count.py not_utf8.txt empty_file.txt
Traceback (most recent call last):
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 2:
invalid continuation byte
Note that only the first error, corresponding to not_utf8.txt, is raised. This is natural, because the files are analyzed sequentially. That error happens long before empty_file.txt is opened.
Analyze Files Concurrently
In this subsection, you'll rewrite your program to run asynchronously. This means that the analysis of all the files happens concurrently instead of sequentially. It's instructive to see your updated program run:
[Animation: the rows of boxes for all the files grow at the same time]
The animation shows that lines are counted in all the files at the same time, instead of in one file at a time like before.
You achieve this concurrency by rewriting your functions into asynchronous coroutines using the async and await keywords. Note that this new version still uses old async practices, and this code is runnable in Python 3.7 and later. In the next subsection, you'll take the final step and use the new task groups.
Create a new file named count_gather.py with the following code:
# count_gather.py

import asyncio
import sys

import colorama
from colorama import Cursor

colorama.init()

async def print_at(row, text):
    print(Cursor.POS(1, 1 + row) + str(text))
    await asyncio.sleep(0.03)

async def count_lines_in_file(file_num, file_name):
    counter_text = f"{file_name[:20]:<20} "
    with open(file_name, mode="rt", encoding="utf-8") as file:
        for line_num, _ in enumerate(file, start=1):
            counter_text += "□"
            await print_at(file_num, counter_text)
    await print_at(file_num, f"{counter_text} ({line_num})")

async def count_all_files(file_names):
    tasks = [
        asyncio.create_task(count_lines_in_file(file_num, file_name))
        for file_num, file_name in enumerate(file_names, start=1)
    ]
    await asyncio.gather(*tasks)

if __name__ == "__main__":
    asyncio.run(count_all_files(sys.argv[1:]))
If you compare this code to count.py from the previous subsection, then you'll note that most changes only add async to function definitions or await to function calls. The async and await keywords constitute Python's API for doing asynchronous programming.
Note: asyncio is the library for doing asynchronous programming that's included in Python's standard library. However, Python's asynchronous computing model is quite general, and you can use other third-party libraries like Trio and Curio instead of asyncio.
Alternatively, you can use third-party libraries like uvloop and Quattro. These aren't replacements for asyncio. Instead, they add performance or extra features on top of it.
Next, note that count_all_files() has changed significantly. Instead of sequentially calling count_lines_in_file(), you create one task for each file name. Each task prepares count_lines_in_file() with the relevant arguments. All tasks are collected in a list and passed to asyncio.gather(). Finally, count_all_files() is initiated by calling asyncio.run().
What happens here is that asyncio.run() creates an event loop. The tasks are executed by the event loop. In the animation, it looks like all the files are analyzed at the same time. However, while the lines are counted concurrently, they're not counted in parallel. There's only one thread in your program, but the thread continuously switches which task it's working on.
Asynchronous programming is sometimes called cooperative multitasking because each task voluntarily gives up control to let other tasks run. Think of await as a marker in your code where you decide that it's okay to switch tasks. In the example, that's mainly when the code sleeps before the next animation step.
Note: Threading achieves similar results but uses preemptive multitasking, where the operating system decides when to switch tasks. Asynchronous programming is typically easier to reason about than threading, because you know when tasks may take a break. See Speed Up Your Python Program With Concurrency for a comparison of threading, asynchronous programming, and other kinds of concurrency.
Run your new code on a few different files and observe how they're all analyzed concurrently:
$ python count_gather.py count.py rot13.txt count_gather.py
count.py             □□□□□□□□□□□□□□□□□□□□□□□□□□□□ (28)
rot13.txt            □□□□□□□□□□□□□ (13)
count_gather.py      □□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□ (31)
As your files animate in your console, you'll see that rot13.txt finishes before the other tasks. Next, try to analyze a few of the troublesome files that you created earlier:
$ python count_gather.py not_utf8.txt empty_file.txt
Traceback (most recent call last):
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 2:
invalid continuation byte
Even though not_utf8.txt and empty_file.txt are now analyzed concurrently, you only see the error raised for one of them. As you learned earlier, regular Python exceptions are handled one by one, and asyncio.gather() is limited by this.
Note: You can use return_exceptions=True as an argument when awaiting asyncio.gather(). This will collect exceptions from all your tasks and return them in a list when all tasks are finished. However, it's complicated to then handle these exceptions properly, because they're not using Python's normal error handling.
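As a rough sketch, that manual handling could look something like the following inside count_all_files(), where you check each result yourself after all the tasks have finished:
# Sketch: collect results and exceptions, then inspect them by hand
results = await asyncio.gather(*tasks, return_exceptions=True)
for file_name, result in zip(file_names, results):
    # gather() preserves task order, so results line up with file_names
    if isinstance(result, Exception):
        print(f"Error while counting {file_name}: {result!r}")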
Third-party libraries like Trio and Curio do some special error handling that's able to deal with multiple exceptions. For example, Trio's MultiError wraps two or more exceptions and provides a context manager that handles them.
More convenient handling of multiple errors is exactly one of the use cases that exception groups were designed to handle. In your counter application, you'd want to see a group containing one exception per file that fails to be analyzed, and have a simple way of handling them. It's time to give the new Python 3.11 TaskGroup a spin!
Control Concurrent Processing With Task Groups
Task groups have been a planned feature for asyncio for a long time. Yury Selivanov mentions them as a possible enhancement for Python 3.8 in asyncio: What's Next, a presentation he gave at PyBay 2018. Similar features have been available in other libraries, including Trio's nurseries, Curio's task groups, and Quattro's task groups.
The main reason the implementation has taken so much time is that task groups require properly dealing with several exceptions at once. The new exception group feature in Python 3.11 has paved the way for including asynchronous task groups as well. They were finally implemented by Yury Selivanov and Guido van Rossum and made available in Python 3.11.0a6.
In this subsection, you'll reimplement your counter application to use asyncio.TaskGroup instead of asyncio.gather(). In the next subsection, you'll use except* to conveniently handle the different exceptions that your application can raise.
Put the following code in a file named count_taskgroup.py:
# count_taskgroup.py

import asyncio
import sys

import colorama
from colorama import Cursor

colorama.init()

async def print_at(row, text):
    print(Cursor.POS(1, 1 + row) + str(text))
    await asyncio.sleep(0.03)

async def count_lines_in_file(file_num, file_name):
    counter_text = f"{file_name[:20]:<20} "
    with open(file_name, mode="rt", encoding="utf-8") as file:
        for line_num, _ in enumerate(file, start=1):
            counter_text += "□"
            await print_at(file_num, counter_text)
    await print_at(file_num, f"{counter_text} ({line_num})")

async def count_all_files(file_names):
    async with asyncio.TaskGroup() as tg:
        for file_num, file_name in enumerate(file_names, start=1):
            tg.create_task(count_lines_in_file(file_num, file_name))

if __name__ == "__main__":
    asyncio.run(count_all_files(sys.argv[1:]))
Compare this to count_gather.py. You'll note that the only change is how tasks are created in count_all_files(). Here, you create the task group with a context manager. After that, your code is remarkably similar to the original synchronous implementation in count.py:
def count_all_files(file_names):
    for file_num, file_name in enumerate(file_names, start=1):
        count_lines_in_file(file_num, file_name)
Tasks that are created inside a TaskGroup are run concurrently, similar to tasks run by asyncio.gather(). Counting files should work identically to before, as long as you're using Python 3.11:
$ python count_taskgroup.py count.py rot13.txt count_taskgroup.py
count.py             □□□□□□□□□□□□□□□□□□□□□□□□□□□□ (28)
rot13.txt            □□□□□□□□□□□□□ (13)
count_taskgroup.py   □□□□□□□□□□□□□□□□□□□□□□□□□□□□□ (29)
One great improvement, though, is how errors are handled. Provoke your new code by analyzing some of your troublesome files:
$ python count_taskgroup.py not_utf8.txt empty_file.txt
+ Exception Group Traceback (most recent call last):
| ...
| ExceptionGroup: unhandled errors in a TaskGroup (2 sub-exceptions)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "count_taskgroup.py", line 18, in count_lines_in_file
| for line_num, _ in enumerate(file, start=1):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 2:
| invalid continuation byte
+---------------- 2 ----------------
| Traceback (most recent call last):
| File "count_taskgroup.py", line 21, in count_lines_in_file
| await print_at(file_num, f"{counter_text} ({line_num})")
| ^^^^^^^^
| UnboundLocalError: cannot access local variable 'line_num' where it is
| not associated with a value
+------------------------------------
Note that you get an Exception Group Traceback with two sub-exceptions, one for each file that fails to be analyzed. This is already an improvement over asyncio.gather(). In the next subsection, you'll learn how you can handle these kinds of errors in your code.
Yury Selivanov points out that the new task groups offer a better API than the old asyncio.gather(), as task groups are "composable, predictable, and safe." Additionally, he notes that task groups:
- Run a set of nested tasks. If one fails, all other tasks that are still running would be canceled.
- Allow to execute code (incl. awaits) between scheduling nested tasks.
- Thanks to ExceptionGroups, all errors are propagated and can be handled/reported.
(Yury Selivanov, Source)
In the next subsection, you'll experiment with handling and reporting errors in your concurrent code.
Handle Concurrent Errors
You've written some concurrent code that sometimes raises errors. How can you handle those exceptions properly? You'll see examples of error handling soon. First, though, you'll add one more way that your code can fail.
The problems in your code that you've seen so far all raise exceptions before the analysis of the file begins. To simulate an error that may happen during the analysis, say that your tool suffers from triskaidekaphobia, meaning that it's irrationally afraid of the number thirteen. Add two lines to count_lines_in_file():
# count_taskgroup.py

# ...

async def count_lines_in_file(file_num, file_name):
    counter_text = f"{file_name[:20]:<20} "
    with open(file_name, mode="rt", encoding="utf-8") as file:
        for line_num, _ in enumerate(file, start=1):
            counter_text += "□"
            await print_at(file_num, counter_text)
    await print_at(file_num, f"{counter_text} ({line_num})")
    if line_num == 13:
        raise RuntimeError("Files with thirteen lines are too scary!")

# ...
If a file has exactly thirteen lines, then a RuntimeError is raised at the end of the analysis. You can see the effect of this by analyzing rot13.txt:
$ python count_taskgroup.py rot13.txt
rot13.txt            □□□□□□□□□□□□□ (13)
+ Exception Group Traceback (most recent call last):
| ...
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "count_taskgroup.py", line 23, in count_lines_in_file
| raise RuntimeError("Files with thirteen lines are too scary!")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| RuntimeError: Files with thirteen lines are too scary!
+------------------------------------
As expected, your new triskaidekaphobic code balks at the thirteen lines in rot13.txt. Next, combine this with one of the errors you saw earlier:
$ python count_taskgroup.py rot13.txt not_utf8.txt
rot13.txt            □
+ Exception Group Traceback (most recent call last):
| ...
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "count_taskgroup.py", line 18, in count_lines_in_file
| for line_num, _ in enumerate(file, start=1):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 2:
| invalid continuation byte
+------------------------------------
This time around, only one error is reported even though you know both files should raise an exception. The reason you get only one error is that the two issues are raised at different times. One feature of task groups is that they implement a cancel scope. Once one task fails, the other tasks in the same task group are canceled by the event loop.
Note: Cancel scopes were pioneered by Trio. The final implementation of cancel scopes and which features they'll support in asyncio is still being discussed. The following examples work in Python 3.11.0a7, but things may still change before Python 3.11 is finalized.
In general, there are two approaches that you can take to handle errors inside your asynchronous tasks:
- Use regular try…except blocks inside your coroutines to handle issues.
- Use the new try…except* blocks outside your task groups to handle issues.
In the first case, errors in one task will typically not affect other running tasks. In the second case, however, an error in one task will cancel all other running tasks.
Try this out for yourself! First, add safe_count_lines_in_file() which uses regular exception handling inside your coroutines:
# count_taskgroup.py

# ...

async def safe_count_lines_in_file(file_num, file_name):
    try:
        await count_lines_in_file(file_num, file_name)
    except RuntimeError as err:
        await print_at(file_num, err)

async def count_all_files(file_names):
    async with asyncio.TaskGroup() as tg:
        for file_num, file_name in enumerate(file_names, start=1):
            tg.create_task(safe_count_lines_in_file(file_num, file_name))

# ...
You also change count_all_files() to call the new safe_count_lines_in_file() instead of count_lines_in_file(). In this implementation, you only deal with the RuntimeError raised whenever a file has thirteen lines.
Note: safe_count_lines_in_file() doesn't use any specific features of task groups. You could use a similar function to make count.py and count_gather.py more robust as well.
Analyze rot13.txt and some other files to confirm that the error no longer cancels the other tasks:
$ python count_taskgroup.py count.py rot13.txt count_taskgroup.py
count.py             □□□□□□□□□□□□□□□□□□□□□□□□□□□□ (28)
Files with thirteen lines are too scary!
count_taskgroup.py   □□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□ (37)
Errors that are handled don't bubble up and affect other tasks. In this example, count.py and count_taskgroup.py were properly analyzed even though the analysis of rot13.txt failed.
Next, try to use except* to handle errors after the fact. You can, for example, wrap your event loop in a try…except* block:
# count_taskgroup.py

# ...

if __name__ == "__main__":
    try:
        asyncio.run(count_all_files(sys.argv[1:]))
    except* UnicodeDecodeError as eg:
        print("Bad encoding:", *[str(e)[:50] for e in eg.exceptions])
Recall that except* works with exception groups. In this case, you loop through the UnicodeDecodeError exceptions in the group and print their first fifty characters to the console to log them.
Analyze not_utf8.txt together with some other files to see the effect:
$ python count_taskgroup.py rot13.txt not_utf8.txt count.py
rot13.txt            □
count.py             □
Bad encoding: 'utf-8' codec can't decode byte 0xe5 in position 2
In contrast to the previous example, the other tasks are canceled even though you handle the UnicodeDecodeError. Note that only one line is counted in both rot13.txt and count.py.
Note: You can wrap the call to count_all_files() inside a regular try…except block in the count.py and count_gather.py examples. However, this will only allow you to deal with at most one error. In contrast, task groups can report all errors:
$ python count_taskgroup.py not_utf8.txt count_taskgroup.py empty_file.txt
count_taskgroup.py   □
Bad text: ["'utf-8' codec can't decode byte 0xe5 in position 2"]
Empty file: ["cannot access local variable 'line_num' where it i"]
This example shows the result of having several concurrent errors after you expand the code in the previous example to deal with both UnicodeDecodeError and UnboundLocalError.
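That expanded code isn't shown here, but a sketch of it could look like the following, with the print labels matching the output above. The examples after this note go back to the version that handles only UnicodeDecodeError:
# count_taskgroup.py

# ...

if __name__ == "__main__":
    try:
        asyncio.run(count_all_files(sys.argv[1:]))
    except* UnicodeDecodeError as eg:
        print("Bad text:", [str(err)[:50] for err in eg.exceptions])
    except* UnboundLocalError as eg:
        print("Empty file:", [str(err)[:50] for err in eg.exceptions])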
If you don't handle all exceptions that are raised, then the unhandled exceptions will still cause your program to crash with a traceback. To see this, switch count.py to empty_file.txt in your analysis:
$ python count_taskgroup.py rot13.txt not_utf8.txt empty_file.txt
rot13.txt            □
Bad encoding: 'utf-8' codec can't decode byte 0xe5 in position 2
+ Exception Group Traceback (most recent call last):
| ...
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "count_taskgroup.py", line 21, in count_lines_in_file
| await print_at(file_num, f"{counter_text} ({line_num})")
| ^^^^^^^^
| UnboundLocalError: cannot access local variable 'line_num' where it is
| not associated with a value
+------------------------------------
You get the familiar UnboundLocalError. Note that part of the error message points out that there's one unhandled sub-exception. There's no record in the traceback of the UnicodeDecodeError sub-exception that you did handle.
You've now seen an example of using task groups in order to improve the error handling of your asynchronous application, and in particular being able to comfortably handle several errors happening at the same time. The combination of exception groups and task groups makes Python a very capable language for doing asynchronous programming.
Other New Features
In every new version of Python, a handful of features get most of the buzz. However, most of the evolution of Python has happened in small steps, by adding a function here or there, improving some existing functionality, or fixing a long-standing bug.
Python 3.11 is no different. This section shows a few of the smaller improvements waiting for you in Python 3.11.
Annotate Exceptions With Custom Notes
You can now add custom notes to an exception. This is yet another improvement to how exceptions are handled in Python. Exception notes were suggested by Zac Hatfield-Dodds in PEP 678: Enriching Exceptions with Notes. The PEP has been accepted, and an early version of the proposal was implemented for Python 3.11.0a3 to Python 3.11.0a7.
In those alpha versions, you can assign strings to a .__note__ attribute on an exception, and that information will be made available if the error isn't handled. Here's a basic example:
>>> try:
...     raise ValueError(678)
... except ValueError as err:
...     err.__note__ = "Enriching Exceptions with Notes"
...     raise
...
Traceback (most recent call last):
...
ValueError: 678
Enriching Exceptions with Notes
You're adding a note to the ValueError before reraising it. Your note is then displayed together with the regular error message at the end of your traceback.
Note: The rest of this section was updated on May 9, 2022 to reflect changes to the exception notes feature that were made available with the release of Python 3.11.0b1.
During discussions of the PEP, .__note__ was changed to .__notes__, which can contain several notes. A list of notes can be useful in certain use cases where keeping track of individual notes is important. One example of this is internationalization and translation of notes.
There is also a new dedicated method, .add_note(), that can be used to add these notes. The full implementation of PEP 678 is available in the first beta version of Python 3.11 and later.
Going forward, you should write the previous example as follows:
>>> try:
...     raise ValueError(678)
... except ValueError as err:
...     err.add_note("Enriching Exceptions with Notes")
...     raise
...
Traceback (most recent call last):
...
ValueError: 678
Enriching Exceptions with Notes
You can add several notes with repeated calls to .add_note() and recover them by looping over .__notes__. All notes will be printed below the traceback when the exception is raised:
>>> err = ValueError(678)
>>> err.add_note("Enriching Exceptions with Notes")
>>> err.add_note("Python 3.11")
>>> err.__notes__
['Enriching Exceptions with Notes', 'Python 3.11']
>>> for note in err.__notes__:
...     print(note)
...
Enriching Exceptions with Notes
Python 3.11
>>> raise err
Traceback (most recent call last):
...
ValueError: 678
Enriching Exceptions with Notes
Python 3.11
The new exception notes are also compatible with exception groups.
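For example, since ExceptionGroup is itself an exception, you can attach notes to a group just like you would to any other exception:
>>> eg = ExceptionGroup("group", [ValueError(654)])
>>> eg.add_note("Raised while testing Python 3.11")
>>> eg.__notes__
['Raised while testing Python 3.11']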
Reference the Active Exception With sys.exception()
Internally, Python has represented an exception as a tuple with information about the type of the exception, the exception itself, and the traceback of the exception. This changes in Python 3.11. Now, Python will internally store only the exception itself. Both the type and the traceback can be derived from the exception object.
In general, you won't need to think about this change, as it's all under the hood. However, if you need to access an active exception, you can now use the new exception() function in the sys module:
>>> import sys
>>> try:
...     raise ValueError("bpo-46328")
... except ValueError:
...     print(f"Handling {sys.exception()}")
...
Handling bpo-46328
Note that you usually won't use exception() in normal error handling like above. Instead, it's sometimes handy to use in wrapper libraries that are used in error handling but don't have direct access to active exceptions. In normal error handling, you should name your errors in the except clause:
>>> try:
...     raise ValueError("bpo-46328")
... except ValueError as err:
...     print(f"Handling {err}")
...
Handling bpo-46328
In versions prior to Python 3.11, you can get the same information from sys.exc_info():
>>> try:
...     raise ValueError("bpo-46328")
... except ValueError:
...     sys.exception() is sys.exc_info()[1]
...
True
Indeed, sys.exception() is identical to sys.exc_info()[1]. The new function was added in bpo-46328 by Irit Katriel, although the idea was originally floated in PEP 3134, all the way back in 2005.
Reference the Active Traceback Consistently
As noted in the previous subsection, older versions of Python represent exceptions as tuples. You can access traceback information in two different ways:
>>> import sys
>>> try:
...     raise ValueError("bpo-45711")
... except ValueError:
...     exc_type, exc_value, exc_tb = sys.exc_info()
...     exc_value.__traceback__ is exc_tb
...
True
Note that accessing the traceback through exc_value and exc_tb returns the exact same object. In general, this is what you want. However, it turns out that there has been a subtle bug hiding around for some time. You can update the traceback on exc_value without updating exc_tb.
To demonstrate this, code up the following program, which changes the traceback during handling of an exception:
 1# traceback_demo.py
 2
 3import sys
 4import traceback
 5
 6def tb_last(tb):
 7    frame, *_ = traceback.extract_tb(tb, limit=1)
 8    return f"{frame.name}:{frame.lineno}"
 9
10def bad_calculation():
11    return 1 / 0
12
13def main():
14    try:
15        bad_calculation()
16    except ZeroDivisionError as err:
17        err_tb = err.__traceback__
18        err = err.with_traceback(err_tb.tb_next)
19
20        exc_type, exc_value, exc_tb = sys.exc_info()
21        print(f"{tb_last(exc_value.__traceback__) = }")
22        print(f"{tb_last(exc_tb) = }")
23
24if __name__ == "__main__":
25    main()
You change the traceback of the active exception on line 18. As you'll soon see, this wouldn't update the traceback part of the exception tuple in Python 3.10 and earlier. To show this, lines 20 to 22 compare the last frame of the tracebacks referenced by the active exception and the traceback object.
Run this with Python 3.10 or an earlier version:
$ python traceback_demo.py
tb_last(exc_value.__traceback__) = 'bad_calculation:11'
tb_last(exc_tb) = 'main:15'
The important thing to note here is that the two line references are different. The active exception points to the updated location, line 11 inside bad_calculation(), while the traceback points to the old location inside main().
In Python 3.11, the traceback part of the exception tuple is always read from the exception itself. Therefore, the inconsistency is gone:
$ python3.11 traceback_demo.py
tb_last(exc_value.__traceback__) = 'bad_calculation:11'
tb_last(exc_tb) = 'bad_calculation:11'
Now, both ways of accessing the traceback give the same result. This fixes a bug that has been present in Python for some time. Still, it's important to note that the inconsistency was mostly academic. Yes, the old way was wrong, but it's unlikely that it caused issues in actual code.
This bug fix is interesting because it lifts the curtain on something bigger. As you learned in the previous subsection, Python's internal representation of exceptions changes in version 3.11. This bug fix is an immediate consequence of that change.
Restructuring Python's exceptions is part of an even bigger effort to optimize many different parts of Python. Mark Shannon has initiated the faster-cpython project. Streamlining exceptions is only one of the ideas coming out of that initiative.
The smaller improvements that you've learned about in this section exemplify all the work that goes into maintaining and developing a programming language, beyond the few items stealing most of the headlines. The features that you've learned about here are all related to Python's exception handling. However, there are many other small changes happening as well. What's New In Python 3.11 keeps track of all of them.
Conclusion
In this tutorial, you've learned about some of the new capabilities that Python 3.11 will bring to the table when it's released in October 2022. You've seen some of its new features and explored how you can already play with the improvements.
In particular, you've:
- Installed an alpha version of Python 3.11 on your computer
- Explored exception groups and how you use them to organize errors
- Used except* to filter exception groups and handle different types of errors
- Rewritten your asynchronous code to use task groups to initiate concurrent workflows
- Tried out a few of the smaller improvements in Python 3.11, including exception notes and a new internal representation of exceptions
Try out task and exception groups in Python 3.11! Do you have a use case for them? Comment below to share your experience.