NFS with Azure File Sync: Crazy Smart Option for UNIX / Linux File Access

Someone asked me this week about options for file access on a UNIX host, and I of course thought first about NFS (Samba would be an option too, but this sounded like an older host).  There’s a lot of interest in getting the data up to Azure as well so it can be used by other systems (Linux and maybe Windows), which got me thinking…

A preview of NFS protocol support was recently announced for several Azure services (here):

  • NFS 4.1 support on Azure Files – optimized for random-access workloads with in-place data updates, and provides full POSIX file system support. Azure Files is built on the same hardware as the Azure Blobs premium tier, with software differences to provide full NFS 4.1 support and high performance for random-access workloads.
  • NFS 3.0 support on Azure Blobs – best suited for large-scale, read-heavy, sequential-access workloads where data is ingested once and minimally modified afterward (large-scale analytic data, backup and archive, NFS apps for media rendering, genomic sequencing, etc.). It offers the lowest total cost of ownership.

…but both options are in preview right now (Winter of 2020), which isn’t always a first choice for production workloads… they are coming soon though!
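
If you do want to kick the tires on the Azure Files NFS preview from a Linux host in the meantime, the mount looks something like this – a minimal sketch, where the storage account “myaccount” and share “myshare” are placeholders, and the exact options are worth verifying against the preview docs:

    # Mount an NFS 4.1 Azure file share (preview) on Linux.
    # "myaccount" and "myshare" are hypothetical placeholders.
    sudo mkdir -p /mnt/myshare
    sudo mount -t nfs -o vers=4,minorversion=1,sec=sys \
        myaccount.file.core.windows.net:/myaccount/myshare /mnt/myshare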

While Azure NetApp Files (ANF) already offers high-performance NFS access to files in Azure (and it could not be simpler to set up and use!), I think it’s worth pointing out that Azure File Sync with a Windows Server VM can also be used to provide high-performance NFS access in Azure, on premises, or both via its multi-master replication capabilities.

I understand that Windows Server isn’t the first thing that pops into your head when you want a huge, high-performance NFS server, but combining it with Azure File Sync gives it some rather interesting capabilities.

Why Use Azure File Sync?

Azure File Sync provides the capability to synchronize file hierarchies on NTFS between Windows Server and Azure file shares, including the ability to tier and recall requested files.  This allows an existing Windows Server to function much like a storage gateway to a large Azure file share.

Existing file servers can use Azure File Sync to seamlessly copy files to Azure, and ultimately use Azure Backup to create periodic snapshots of the files which can be rapidly restored (if desired).

Setting up Windows as a file server with SMB and NFS (to connect UNIX and Linux hosts as well as Windows clients), plus Azure File Sync, might look something like this:

[Diagram: a Windows file server exposing both SMB and NFS shares, with Azure File Sync replicating to an Azure file share]
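
Getting the Windows side of that picture ready is mostly a matter of adding the right roles – a minimal PowerShell sketch (feature names are worth double-checking on your server version):

    # Add the File Server (SMB) and Server for NFS roles.
    Install-WindowsFeature -Name FS-FileServer, FS-NFS-Service -IncludeManagementTools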

Having “multi-protocol” access to the file system over both NFS and SMB may not be your requirement, but it is a bonus for some environments.

Once Azure File Sync is in place (here’s a walkthrough of how to set it up), all files written to NTFS on the file server (regardless of the protocol) will be replicated to the Azure file share… where they could (today) be accessed directly via SMB, or replicated to another Windows Server elsewhere (in this case Azure) to provide SMB and NFS access to the files:

[Diagram: two Windows file servers – one on premises, one in Azure – syncing with the same Azure file share and serving SMB and NFS clients]
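
If you’d rather script the sync topology than click through the portal, the Az.StorageSync PowerShell module covers it – here’s a rough sketch (all resource names are placeholders, and I’d verify the parameters against the current module):

    # Create the sync service, sync group, and cloud endpoint.
    # All names are placeholders; $storageAccount is assumed to come
    # from an earlier Get-AzStorageAccount call.
    $svc   = New-AzStorageSyncService -ResourceGroupName "rg-files" `
                 -Name "mysyncservice" -Location "eastus"
    $group = New-AzStorageSyncGroup -ParentObject $svc -Name "mysyncgroup"
    New-AzStorageSyncCloudEndpoint -ParentObject $group -Name "cloud" `
        -StorageAccountResourceId $storageAccount.Id -AzureFileShareName "myshare"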

Depending on your usage patterns, the file servers could have relatively little local storage, yet provide access to TBs or PBs of Azure Files storage with cloud tiering enabled.

Just a Little About Cloud Tiering

Cloud tiering in Azure File Sync allows the file system hierarchy to sync down to file servers as file “stubs” without all the associated data – reducing the time to file system “dial tone”.  All the files will appear to be on the local server, but their contents are only recalled when users request them – significantly reducing the bandwidth necessary for a full, local restore.

[Screenshot: enabling cloud tiering on an Azure File Sync server endpoint]
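
Continuing the PowerShell sketch from above, the tiering knobs live on the server endpoint – again hedged, since I’m quoting the parameter names from memory ($group comes from the earlier snippet, and $registeredServer is assumed to come from Register-AzStorageSyncServer, run on the file server itself):

    # Create the server endpoint with cloud tiering enabled:
    # keep 20% of the volume free, and tier files untouched for 30 days.
    New-AzStorageSyncServerEndpoint -ParentObject $group -Name "server1" `
        -ServerResourceId $registeredServer.ResourceId -ServerLocalPath "D:\Data" `
        -CloudTiering -VolumeFreeSpacePercent 20 -TierFilesOlderThanDays 30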

NFS Versus SMB with File Sync – They Work Together Very Well!

This doesn’t need to be an either-or situation with SMB and NFS – on Windows you can share out the same files and directories with both if you want to!

Server Manager in Windows shows that very effectively – here’s a little server I have at home with shares pointed to the same drive and directory for both protocols:

[Screenshot: Server Manager showing SMB and NFS shares pointing at the same drive and directory]
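
You don’t have to click through Server Manager to get there, either – creating the same dual-protocol share from PowerShell might look like this (the name, path, and wide-open permissions are just my home-lab example):

    # Share the same directory over both SMB and NFS.
    # Lab-friendly permissions - tighten these for production.
    New-SmbShare -Name "Stuff" -Path "D:\Stuff" -FullAccess "Everyone"
    New-NfsShare -Name "Stuff" -Path "D:\Stuff" -Permission readwrite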

NFS and SMB do use different security and mounting/mapping mechanisms, and expose different information to the clients – let me show you one interesting thing.

I connected a Windows 10 client to the “Stuff” share as drive Z: using NFS and did a DIR of the “Videos” folder:

[Screenshot: NFS mount command and DIR output for the Videos folder]
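
For reference, that was roughly these commands (fileserver stands in for my server’s name, and the Client for NFS feature is assumed to be installed on the Windows 10 machine):

    REM Mount the NFS share as Z: (Client for NFS), then list the folder.
    mount \\fileserver\Stuff Z:
    dir Z:\Videos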

Nothing special to see.  I then also mapped drive X: to the same location via SMB:

[Screenshot: NET USE command and DIR output for the same folder over SMB]
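
For completeness, the SMB side of that is just the classic:

    REM Map the same share as X: over SMB and list the same folder.
    net use X: \\fileserver\Stuff
    dir X:\Videos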

The DIR looks similar, but still a tiny bit different – what’s up with the “( )” around the file sizes?

When you dig into the actual details, you see more about the files… here are the properties of one of the files exposed over NFS – I cannot tell the file is tiered to Azure:

[Screenshot: file properties over NFS – no indication the file is tiered]

…and here’s what I see via SMB:

[Screenshot: file properties over SMB – Size and Size on disk differ]

Those “( )” are letting me know that the Size and “Size on disk” are different – I don’t think NFS communicates “size on disk” over the protocol, but that doesn’t matter… the file is tiered to Azure.
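
If you’d rather not squint at DIR output, the file attributes give tiering away over SMB – tiered files carry the reparse point (L) and offline (O) attributes, which attrib will show (the path and file name here are hypothetical, following my example above):

    REM A tiered file shows the L (reparse point) and O (offline) attributes.
    attrib X:\Videos\SomeVideo.mp4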

Using either NFS or SMB, the file is accessible and usable to my client… even though its contents are stored in an Azure file share – saving me LOTS of space on my server!