Here's what mine look like...
~$ sudo zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data  5.44T  2.75T  2.69T         -     3%    50%  1.00x  ONLINE  -
Here you can see my 'data' pool is at 50% capacity (the CAP column).
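If you only want particular columns, zpool list lets you pick them with -o (just a sketch using my pool name, swap in yours):
~$ sudo zpool list -o name,size,allocated,free,capacity,health data
You can also query single pool properties, e.g. 'sudo zpool get capacity data'.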
~$ sudo zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0 in 4h1m with 0 errors on Sun Aug 16 06:01:14 2020
config:

        NAME                                           STATE     READ WRITE CKSUM
        data                                           ONLINE       0     0     0
          mirror-0                                     ONLINE       0     0     0
            sda                                        ONLINE       0     0     0
            sdb                                        ONLINE       0     0     0
          mirror-1                                     ONLINE       0     0     0
            sdd                                        ONLINE       0     0     0
            sdc                                        ONLINE       0     0     0
        logs
          ata-OCZ-AGILITY3_OCZ-7FM2M2KI1395XLWG-part5  ONLINE       0     0     0
        cache
          ata-OCZ-AGILITY3_OCZ-7FM2M2KI1395XLWG-part6  ONLINE       0     0     0

errors: No known data errors
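That scan line comes from the last scrub; if you want to run one by hand and then watch it, something like this should do it (using my pool name 'data' as the example):
~$ sudo zpool scrub data
~$ sudo zpool status -v data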
~$ sudo zfs list -o space
NAME         AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
data         2.52T  2.75T         0    128K              0      2.75T
data/backup  2.52T  50.1G     28.0G   22.1G              0          0
data/home    2.52T  29.6G     19.7G   9.92G              0          0
data/images  2.52T   959G     13.8G    945G              0          0
data/media   2.52T  1.72T     34.2G   1.69T              0          0
data/www     2.52T  14.7G      106M   14.6G              0          0
Here you can see that for my larger datasets (images, media) the snapshots account for only a small share of the used capacity, since that data is relatively static. Compare that to home and backup, where changing data within the dataset causes the snapshot used space to grow larger than the live data in the dataset itself.
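As far as I know, 'zfs list -o space' is just shorthand for the usedby* properties, so you can pull the same breakdown explicitly if you want to compare datasets (pool name is mine, adjust to suit):
~$ sudo zfs list -r -o name,used,usedbysnapshots,usedbydataset,usedbychildren data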
BTW you can list the snapshots without doing 'sudo zpool set listsnapshots=on alexandria', for example:
sudo zfs list -o space -t snapshot
or
sudo zfs list -t snapshot
or
sudo zfs list -t all
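If you do end up digging through snapshots, sorting them by the space they hold makes the big ones obvious, for example (again assuming a pool called 'data'):
~$ sudo zfs list -t snapshot -r -o name,used,referenced -s used data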
But I don't think that ZFS pool and its datasets are the cause of your issues; it might be that the ZFS logs have consumed the space.
What does df -h show for your system?
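Alongside df -h, a quick du on the usual log locations would show whether it's plain files eating the space (these paths are just common culprits, not anything specific to your setup):
~$ df -h
~$ sudo du -xsh /var/log /var/cache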