The usual scenario is a LAN with the internal servers (e.g. the file server), secured by firewall solutions, and a DMZ with external servers such as WWW or FTP, also secured with a firewall.
Yep, a tried and trusted architecture.
Everybody tells you: isolate the file server from public networks, don't use SMB or AppleShare across the Internet. Ok.
But we have the following scenario: we are at a university here, and students want to access their data from the Internet. So there must be some kind of Internet access to the file server. Here are my questions:
- Why do people run FTP servers to share files, but tell me that CIFS (SMB) and AppleShare are "insecure" on public networks? Both encrypt passwords... and FTP doesn't encrypt data either (?). It is much simpler for users to use the same protocol (SMB/AppleShare) on the university network and at home (and FTP doesn't keep the type and creator information that is important for the Mac clients).
You're misunderstanding what 'everybody' tells you. When they say "isolate the file server from public networks, don't use SMB or AppleShare across the Internet", they mean:

* Isolate the file server from public networks.
* Make sure the data on the file server, which is assumed to be sensitive, doesn't leave the private network.
* Don't allow access to the file server across the Internet without adequate protection. Don't use cleartext protocols such as SMB or AppleShare.

FTP and HTTP aren't alternatives to CIFS or AppleShare; they're worse when you're talking about file server access. FTP servers are good for providing free access to public data, same with HTTP servers. You *can* use HTTPS for secure, i.e. strongly encrypted and mutually authenticated, access to file resources. Or you can employ VPN techniques.
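As a concrete illustration of the 'adequate protection' point: an SMB session can be carried over an encrypted SSH tunnel instead of exposing the SMB port to the Internet directly. A sketch only; all host and user names below are made up:

```shell
# On the client at home: forward a local port through a bastion/gateway
# host to the file server's SMB port (139).
#
#   ssh -N -L 1139:fileserver.example.edu:139 student@gateway.example.edu
#
# Then point the SMB client at the local end of the tunnel:
#
#   smbclient -p 1139 -U student //localhost/homes
```

The cleartext SMB traffic then only travels inside the SSH tunnel, and the gateway enforces the authentication.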
- I don't want to have one external server and one internal one. I'm almost sure that precisely the file I need when connecting from the outside will always be on the internal server ;-), and how do I explain to our users that they have one account but are to store data on two file servers... Is the only solution to have one internal and one external file server, not connected?
It is usually sensible, security-wise, to keep a file server with sensitive data well away from the Internet, away from any direct contact. The same applies to databases, BTW. If you were to ask me, you'd need very good reasons to place such a beast in a network that's accessible from the Internet without at least a sound application layer gateway in between. The alternative is to place a server in the DMZ which is supplied with copies of (*only*) the necessary data stored on the internal box. It makes things less transparent to users, yes, but I'm sure it can be made entirely usable.
- If I really install a second, external file server, what about linking it into the internal one? I could create a subdir "internet_box" in users' home dirs, pointing to their NFS-mounted directories on the external server. They could then decide whether or not to make their files Internet-accessible (some will keep all their data on the external server while seeing only one file server, while others, who only work from the university network, won't use this directory at all). What about this scenario?
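For concreteness, the arrangement described here might look as follows on the internal server, assuming the external box exports its home directories via NFS; every host name and path is hypothetical:

```shell
#!/bin/sh
# Mount the external server's export (run once, as root):
#   mount -t nfs extfs.example.edu:/export/homes /mnt/internet_box

# link_boxes HOMES MNT: give each user who has a directory on the
# external server an 'internet_box' symlink in his home directory.
link_boxes() {
    for dir in "$1"/*; do
        user=$(basename "$dir")
        if [ -d "$2/$user" ] && [ ! -e "$dir/internet_box" ]; then
            ln -s "$2/$user" "$dir/internet_box"
        fi
    done
}

# e.g.:  link_boxes /home /mnt/internet_box
```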
We security folks typically don't like to use NFS or other remote filesystem schemes, including SMB, across security boundaries. It's not as bad to use them across the DMZ-internal net boundary as it is between the Internet and a private net, but it still gives us very creepy feelings. If you need to use NFS or SMB, SMB is preferred security-wise.
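If NFS has to cross that boundary anyway, the export should at least be as narrow as possible: one directory, one host, read-only, with remote uids squashed. A hypothetical Linux-style entry; the path and host name are made up:

```
# /etc/exports on the internal server
# ro: read-only; root_squash/all_squash: map remote root and all other
# remote uids to an unprivileged account.
/export/internet_box  dmz-fs.example.edu(ro,root_squash,all_squash)
```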
How do you implement such installations? We are going to expand our students' computer lab soon, and I want to have a clear structure of servers and networks beforehand.
I'd:

* Set up a second file server in the DMZ and a replication scheme to (automatically) transfer, i.e. push, the relevant data from the internal machine to the DMZ.
* Arrange for the students to be able to access the DMZ server via VPN, HTTPS or SMB-over-SSL. Full authentication would be mandatory.
* If files need to be modified from the outside, I'd try to use something with the check-out/check-in functionality of CVS. Since this would probably involve inbound communication between the DMZ server and the file server, I'd put a strong focus on security, making sure the DMZ server can't make the file server do anything it shouldn't, and also making sure that it can't perform a DoS against anything important.
* If files need to be inserted from the outside, I'd try to go with an 'incoming' area on the DMZ server, from which the internal server periodically pulls and subsequently removes new data, which it then sorts into destination directories. These may then be published on the DMZ server in part or in whole.

Ideally, I'd use a document management system. It'd probably be most appealing to users and most practical if it used WWW techniques.

Cheers
Tobias
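The 'incoming' area scheme from the last point could be sketched like this. The fetch step, the naming convention ('user_filename') and all paths are assumptions for illustration only:

```shell
#!/bin/sh
# In real life the internal server would first fetch the DMZ incoming
# area into a local staging directory, e.g. via rsync over ssh.

# pull_incoming STAGING DEST: sort new files into per-user destination
# directories and remove them from the staging area.
pull_incoming() {
    for f in "$1"/*; do
        [ -f "$f" ] || continue
        # naive sorting rule: leading 'user_' prefix of the file name,
        # e.g. 'alice_report.txt' goes to DEST/alice/
        name=${f##*/}
        user=${name%%_*}
        mkdir -p "$2/$user"
        mv "$f" "$2/$user/"
    done
}
```

Since the internal server pulls, deletes and sanitises, the DMZ box never gets to write directly into the internal filesystem.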