
UUID vs GUID

TL;DR. UUID and GUID are the same 128-bit identifier defined by the same RFC. “GUID” is Microsoft’s name for it. There’s one historical wrinkle: Microsoft’s binary serialization order differs from everyone else’s. Otherwise they’re interchangeable.

They’re the same thing

Both UUID (Universally Unique Identifier) and GUID (Globally Unique Identifier) refer to a 128-bit number, almost always written as 32 hex characters in five groups separated by hyphens:

xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx

The M nibble encodes the version (1, 3, 4, 5, 7, …) and the top bits of the N nibble encode the variant (which family of UUID specs the value belongs to). The rest is either timestamp, randomness, or hash bytes depending on the version.
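
For example, Python’s standard-library uuid module exposes both fields directly (a quick illustrative sketch, not part of the spec itself):

import uuid

u = uuid.uuid4()
print(u)                           # canonical 8-4-4-4-12 hex string
print(u.version)                   # 4 — read from the M nibble
print(u.variant == uuid.RFC_4122)  # True — from the top bits of the N nibble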

The standard is RFC 4122 (2005), updated by RFC 9562 (2024) which added v6, v7, v8, and the max UUID.

Why two names?

Microsoft adopted UUIDs early (in COM/OLE in the early ’90s) and called them GUIDs. The name stuck inside the Windows ecosystem: .NET’s type is System.Guid, SQL Server’s column type is uniqueidentifier (with NEWID() generating one), and the Win32 API exposes CoCreateGuid. Outside Windows — in the JVM, in Postgres, in Linux, in JavaScript — the name is “UUID”.

When you see a 128-bit hex identifier in the wild, you can treat the two names as synonyms. There’s no behavior difference at the value level.

The byte order gotcha

There’s exactly one place the names matter: binary serialization.

RFC 4122 (and RFC 9562) specify that every field is serialized big-endian, most-significant byte first. Microsoft’s GUID struct instead holds the first three fields as native little-endian integers (one 32-bit and two 16-bit values), and COM, OLE documents, and SQL Server write them out that way. So the same value 00112233-4455-6677-8899-aabbccddeeff serializes as:

RFC 4122:    00 11 22 33  44 55  66 77  88 99  aa bb cc dd ee ff
Microsoft:   33 22 11 00  55 44  77 66  88 99  aa bb cc dd ee ff

If you’re parsing GUIDs from a Windows binary file, an OLE document, or a SQL Server uniqueidentifier extracted as bytes, you need to byte-swap the first three groups. If you’re working with the canonical hex string, you don’t.
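
Python’s standard-library uuid module exposes both layouts, which makes the difference easy to see (a sketch, not tied to any particular file format):

import uuid

u = uuid.UUID("00112233-4455-6677-8899-aabbccddeeff")

print(u.bytes.hex(" "))     # 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff  (RFC order)
print(u.bytes_le.hex(" "))  # 33 22 11 00 55 44 77 66 88 99 aa bb cc dd ee ff  (Microsoft order)

# Round-trip bytes that were written in the Microsoft layout:
assert uuid.UUID(bytes_le=u.bytes_le) == u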

In code

Language / runtime   What it’s called           Standard generator
C# / .NET            Guid                       Guid.NewGuid()
SQL Server           uniqueidentifier           NEWID(), NEWSEQUENTIALID()
Java                 java.util.UUID             UUID.randomUUID()
Go                   uuid.UUID (google/uuid)    uuid.New()
PostgreSQL           uuid                       gen_random_uuid() (v4), uuidv7() (v18+)
MySQL                CHAR(36) or BINARY(16)     UUID()
Python               uuid.UUID                  uuid.uuid4(), uuid.uuid7() (3.14+)
JavaScript           string                     crypto.randomUUID()

Should I worry about the difference?

For 99% of work: no. If your stack is end-to-end string-based (UUIDs in JSON, in URLs, as text columns), the names mean nothing and the values move freely between systems.

The one case to think carefully about:

Storing UUIDs as raw bytes in SQL Server (a uniqueidentifier read out as binary, or a binary(16) column written from C#’s Guid.ToByteArray()) and reading them back from another platform.

Both paths lay the value out in Microsoft order. If you read those bytes back from C#, the byte order is hidden behind Guid’s constructor and everything works. If you read the same bytes from a Java or Python migration script, the first three groups appear byte-swapped and the strings look wrong. Convert via the canonical hex form, not raw bytes, when crossing platform boundaries.
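
A minimal sketch of that pitfall in Python, using the standard-library uuid module (the byte value below is made up for illustration):

import uuid

# 16 raw bytes as a Microsoft-order source would hand them over
# (illustrative value; the first three groups are byte-swapped).
raw = bytes.fromhex("33221100554477668899aabbccddeeff")

print(uuid.UUID(bytes=raw))     # 33221100-5544-7766-8899-aabbccddeeff  — looks wrong
print(uuid.UUID(bytes_le=raw))  # 00112233-4455-6677-8899-aabbccddeeff  — the intended value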

Practical advice

Treat the two names as synonyms and use whichever your platform prefers. Move identifiers between systems as the canonical hex string whenever you can. Byte order only matters when you handle raw 16-byte values from a Microsoft-order source; in that case, byte-swap the first three groups.

Try it

The UUID generator on this site speaks both names equally — output is the canonical hex string, which works as both a UUID and a GUID anywhere.