[BBLISA] A question on DHCP "shoulds".

Rudie, Tony Tony.Rudie at fmr.com
Mon Aug 31 10:11:43 EDT 2009


Interesting.

How well does AIX play in a completely-DHCP world?  And what about clustering?  We use Veritas Cluster Server and Oracle RAC.  (The clustering is used on several OS platforms.)  And how does 3DNS fit in?

 - Tony Rudié 

-----Original Message-----
From: bblisa-bounces at bblisa.org [mailto:bblisa-bounces at bblisa.org] On Behalf Of John Hanks
Sent: Monday, August 31, 2009 9:57 AM
To: Michael Tiernan; Back Bay LISA
Cc: Edward Ned Harvey
Subject: Re: [BBLISA] A question on DHCP "shoulds".

On Mon, Aug 31, 2009 at 9:15 AM, Edward Ned Harvey<bblisa3 at nedharvey.com> wrote:
>> I know that I can tell a DHCP server that the machine with MAC address
>> [bla] is to always get IP address [foo].  This seems straightforward,
>> but the question is: if the machine with MAC address [bla] treats its
>> IP address as statically assigned, as in, it's hard-coded into the
>> configuration/startup scripts, does that "violate" (for lack of a
>> better term) the rules of DHCP?
>
> Absolutely no problem.  I do this all the time, and here are the reasons
> why:

I'm going to take the opposing viewpoint, if only to make this a more
lively discussion.

My opinion is that the only machines in an environment that should be
set statically are the DHCP and DNS servers and, if these are
virtualized, the hosts that make up the virtualization
infrastructure. My view of a network infrastructure places DHCP and
DNS at the foundation. If I find myself layering complexity on top
later, like making many static IP address assignments, I prefer to
step back and fix the underlying foundation in a way that preserves
centralized control of IP address assignment.

> If a Linux machine is a DHCP client, then the machine will assign itself
> whatever hostname the DHCP server says.  It will go modify its own
> "hosts" file, and resolv.conf, and sysconfig/network.  I have a specific
> requirement: the "hosts" file must contain both the unqualified name and
> the FQDN of the host: "10.1.1.50  myserver  myserver.example.com".  But
> if the hosts file is created by DHCP, that gets removed.  IMHO, I would
> call that OS damage.  (A server should be totally static, and resilient,
> and behave well regardless of other servers, within reasonable limits.)
> Which means - sure, that's no problem for laptops, but servers ... that's
> a big no-no.

I'd take the different path of configuring the DHCP client on the
servers to build a hosts file that meets the requirement. dhclient and
dhcpcd both have hooks for pre- and post-configuration scripts, and I
wouldn't be surprised if this particular problem were already solved
by those hooks in most distributions, although I have no real example
to point to, as the default /etc/hosts has been sufficient for me so
far (knock on wood; Murphy's law will likely kick me in the groin with
this later today).
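
For what it's worth, here's a rough sketch of the kind of hook I mean,
assuming ISC dhclient on a Debian-style layout (the path, the file name
and the $new_* variables follow dhclient-script conventions; check your
distribution before trusting any of it):

    # /etc/dhcp/dhclient-exit-hooks.d/hosts-fqdn   (hypothetical name)
    # Sourced by dhclient-script after a lease event; rebuilds the
    # "IP  shortname  FQDN" line whenever we get or renew a lease.
    case "$reason" in
      BOUND|RENEW|REBIND|REBOOT)
        if [ -n "$new_ip_address" ] && [ -n "$new_host_name" ]; then
          fqdn="$new_host_name.$new_domain_name"
          # remove the line we wrote last time, then append a fresh one
          sed -i '/# managed by dhclient hook/d' /etc/hosts
          echo "$new_ip_address  $new_host_name  $fqdn  # managed by dhclient hook" >> /etc/hosts
        fi
        ;;
    esac

dhcpcd has its own hook mechanism that can do the same job.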

> So all Linux servers get statically assigned IPs.
>
> Now - I never want to accidentally assign some other server the same IP
> address.  So obviously the static IP addresses are assigned *outside* of
> the dynamic pool.  But just to be really, really sure ... I create a DHCP
> reservation for each server, which will never be used because the server
> will never request DHCP.
>
> By creating the reservation, I ensure it can never be assigned, even by
> accident, to any other system.

What you have found comfort in is the exact reason I prefer the
opposite solution. Enforcing this level of documentation upkeep, even
(especially?) when I have been the only admin in a one-admin shop, has
proven, umm, let's say, difficult.  (I accept that I am irresponsible
in these matters, hence the tendency toward self-enforcing solutions.)
If every system gets its assignment from the DHCP server, I have no
choice but to maintain properly configured and up-to-date DHCP and DNS
configurations. I envy those of you who have the discipline to
maintain unenforced documentation systems, but it's just not something
I do well.

DHCP and DNS are not emerging technologies; they are mature and
stable, and in my view there is no reason not to exploit them fully
when managing a server environment. Backup DHCP servers are simple to
set up for static ranges, so reliable, robust and redundant service is
easy to maintain. DNS is distributed by design and, again, easy to
make robust and redundant.
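
To make the "simple to set up" part concrete, a per-server reservation
in ISC dhcpd is only a few lines (the host name, MAC and address below
are made up):

    # dhcpd.conf -- one host block per server; no dynamic pool involved
    host myserver {
      hardware ethernet 00:11:22:33:44:55;   # the server's NIC
      fixed-address 10.1.1.50;               # always hand out this address
      option host-name "myserver";
    }

And because fixed-address reservations carry no lease state, a second
dhcpd with the same set of host blocks gives you a warm spare without
even touching the failover protocol.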

This is one of those 'many ways to skin a cat' areas, and I suspect
there are as many answers as there are sysadmins. My suggestion would
be to invest some time in learning how DHCP and DNS can not just
assign addresses and names but become a foundational part of how you
track and manage your systems, which will make your life easier down
the road. The one thing about them that irritates me is the lack of a
well-done open source project to integrate the management of the two
services.
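
They do already talk to each other, though: ISC dhcpd can push dynamic
DNS updates into BIND, which at least keeps the name and address
bookkeeping in one place. A rough sketch of the dhcpd side, with the
zone name, key name and secret as placeholders (the matching zone in
named.conf needs an "allow-update { key DHCP_UPDATER; };" statement):

    # dhcpd.conf
    ddns-update-style interim;
    ddns-domainname "example.com.";

    key DHCP_UPDATER {
      algorithm hmac-md5;
      secret pRP5FapFoJ95JEL06sv4PQ==;   # placeholder -- generate your own
    }

    zone example.com. {
      primary 127.0.0.1;                 # address of the BIND master
      key DHCP_UPDATER;
    }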

jbh

_______________________________________________
bblisa mailing list
bblisa at bblisa.org
http://www.bblisa.org/mailman/listinfo/bblisa



