This architecture is fairly common. It's not at all unusual to have VMs access storage directly, especially when they need to share a common volume. It also comes up with apps that require a shared storage device or direct access to storage: MSCS (with iSCSI in the guest), SnapDrive/SnapManager for xxx (RDM or iSCSI in the guest), Oracle RAC (yes, yes, I know about the support caveats), web server farms, etc.
In your example, you don't have to have a separate storage VLAN; VMs can access storage from the same VLAN as the ESXi VMkernels. The gotcha is making sure your exports for the VM datastores are tight. Even that doesn't really stop a VM admin from spoofing an ESXi VMkernel IP address and gaining root access to your datastores. If you put the VM storage network on a separate VLAN and subnet, you can export the Linux volumes to just that subnet. For the Linux storage IPs, you can either use DHCP or a script that reuses the same last octet as each VM's primary address.
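For example, here's a minimal sketch of what subnet-scoped entries might look like in a 7-Mode /etc/exports (volume names and subnets are hypothetical, so adjust to taste):

    # ESXi datastore volume: rw/root only for the VMkernel subnet
    /vol/vm_datastore   -sec=sys,rw=192.168.10.0/24,root=192.168.10.0/24
    # Linux guest volume: exported only to the VM storage subnet
    /vol/linux_data     -sec=sys,rw=192.168.20.0/24,root=192.168.20.0/24

Run exportfs -a afterward to re-export. If you go the script route inside the guests, something along these lines would derive the storage IP from the existing last octet (interface names and subnet are again made up):

    # take the last octet of the VM's primary (eth0) address...
    last_octet=$(ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1 | cut -d. -f4)
    # ...and reuse it on the storage interface (eth1)
    ip addr add 192.168.20.${last_octet}/24 dev eth1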
The other option is to VLAN the existing (user/intranet/world-facing) interfaces of the VMs and route that traffic to the storage network. If you don't already have the portgroup, vSwitch, and physical switch configuration for it, that may be more complicated than setting up the back-end / storage network.
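Either way, the ESXi side of it is just a VLAN-tagged portgroup on an existing vSwitch. A rough esxcli sketch (portgroup name, vSwitch, and VLAN ID are made up for illustration):

    # create a portgroup for the VM storage network
    esxcli network vswitch standard portgroup add --portgroup-name=VM-Storage --vswitch-name=vSwitch1
    # tag it with the storage VLAN
    esxcli network vswitch standard portgroup set --portgroup-name=VM-Storage --vlan-id=20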
On the filer side, adding another VLAN / subnet to an existing port or ifgrp (f.k.a. VIF) that's already serving VLANs is pretty trivial. Don't forget to add the VLAN to the allowed VLANs on the switch ports on both sides (filer and ESXi).
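On a 7-Mode filer that's roughly the following (ifgrp name, VLAN ID, and address are hypothetical, and you'd add the same lines to /etc/rc to make them persistent):

    # add the storage VLAN to an existing ifgrp and address it
    vlan create ifgrp1 20
    ifconfig ifgrp1-20 192.168.20.5 netmask 255.255.255.0

And on the physical switch ports, Cisco-style (port number made up):

    interface GigabitEthernet1/0/10
     ! trunk port to the filer ifgrp
     switchport trunk allowed vlan add 20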
I hope this helps!