Encryption and tokenization are often mentioned together, and both are effective data obfuscation technologies. But they are not the same and are not interchangeable. In some cases, such as electronic payments, they are used together to secure the end-to-end process.

# Encryption
Mathematically transforms plaintext into ciphertext using an encryption algorithm and a key.
Scales to large data volumes; only a small encryption key is needed to decrypt the data.
Used for structured fields as well as unstructured data such as entire files.
Ideal for exchanging sensitive data with third parties who share the key.
Format-preserving encryption schemes come with a trade-off of lower cryptographic strength.
Original data leaves the organization, but in encrypted form.
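The key property above — one small key reversibly transforms data of any size — can be sketched with a toy XOR cipher. This is an illustration only, not a production cipher (real systems would use a vetted algorithm such as AES); the function name and key size here are arbitrary choices for the example.

```python
import secrets

def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Repeat the key across the plaintext and XOR byte-by-byte.
    # XOR is its own inverse, so the same call decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

# A small random key handles any volume of data.
key = secrets.token_bytes(16)
ciphertext = xor_encrypt(b"4111 1111 1111 1111", key)
recovered = xor_encrypt(ciphertext, key)
assert recovered == b"4111 1111 1111 1111"
```

Note that the ciphertext is the same length as the plaintext but mathematically derived from it, which is why anyone holding the key — including a third party — can recover the original.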

# Tokenization
Randomly generates a token value for plaintext and stores the mapping in a database.
Difficult to scale securely while maintaining database performance as the token vault grows.
Used for structured data fields such as payment card or Social Security numbers.
Difficult to use for exchanging data, because it requires direct access to the token vault that maps the token values.
Format can be maintained without any diminished strength of the security.
Original data never leaves the organization, satisfying certain compliance requirements.
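The points above can be sketched as a minimal in-memory token vault. The class name and digit-only token format are assumptions for illustration; a real vault would be a hardened, access-controlled database, and the tokens here simply preserve the field's length and character set without any mathematical link to the original value.

```python
import secrets

class TokenVault:
    """Toy token vault: random tokens, mapping stored in-process."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        # Reuse the existing token so equal values tokenize consistently.
        if value in self._value_to_token:
            return self._value_to_token[value]
        # Random digits preserve the format with no loss of security,
        # since the token carries no relationship to the original.
        token = "".join(secrets.choice("0123456789") for _ in value)
        while token in self._token_to_value:  # avoid collisions
            token = "".join(secrets.choice("0123456789") for _ in value)
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        # Reversal requires direct access to the vault's mapping --
        # there is no key that a third party could use instead.
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert vault.detokenize(token) == "4111111111111111"
```

Because the original value never appears outside the vault, only tokens circulate through downstream systems, which is what lets tokenization satisfy compliance requirements that encryption alone cannot.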