An Accelerated Introduction to Solaris 10: Part 2
10 Mar '06 - 16:40 by benr
Here we are in the second installment of our accelerated look at Solaris 10 and OpenSolaris. These blog entries are intended as a way to get experienced and intelligent sysadmins up to speed with Solaris as quickly as possible. If you're the kind of user/admin who knows how to look in a manual or man page, then you've only got one problem when learning a new OS: finding the name of the tool you want! If you know what you're looking for, you can learn about it quickly. Now, back to the fun...
On Linux you can learn a lot about your system by digging around in /proc. On Solaris we too have a /proc filesystem, but you'll only find information regarding running processes in there; it's both available for you to dig through yourself and used by the "ptools" ('man proc' for more detail). If you want to examine the devices on your system, you can use several different tools.
On SPARC systems you can use the prtdiag tool found in /usr/platform/sun4u/sbin/ (substitute your specific platform for 'sun4u') to learn all sorts of things about your system, such as memory interleaving, IO devices, system temperatures and other goodies. But on Solaris/X86 you don't have that available, because there are so many variants of X86.
For examining your hardware, you'll probably be most at home with Xorg's excellent scanpci tool, found in /usr/X11/bin/. In addition, the Solaris tool prtconf can be used to look at the system through Solaris's eyes. prtconf will show you how much memory you have, outline the devices, and display the associated driver and instance of each device. Sadly, prtconf can be hard to read, since it reports devices based on the driver name; if you don't know what the driver names are, you can't tell what's there. The tool becomes much more useful when you supply the -v flag to make it verbose, but then the amount of output is so large that you can easily get lost. In practice, about the only time prtconf is actually useful is when you're trying to figure out why some device isn't being seen or isn't working.
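Since prtconf's driver-name-based output is the hard part, here's a sketch of pulling the attached drivers and their instance numbers out of it with awk. The sample text below is a hand-made approximation of prtconf's format, not output captured from a real system:

```shell
# Hand-made sample resembling prtconf output (nodes with "instance #N",
# plus one node whose driver isn't attached).
sample='pci, instance #0
    pci1022,7450, instance #1
        pci108e,534d (driver not attached)
        nge, instance #0
        nge, instance #1'

# List each attached driver with its instance number.
printf '%s\n' "$sample" | awk '
/instance #/ {
    gsub(/,$/, "", $1)   # strip the trailing comma from the node name
    sub(/#/, "", $NF)    # strip the leading # from the instance number
    print $1, $NF
}'
```

On a real box you'd feed `prtconf` itself into the pipe instead of the sample text.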
Solaris has two different device filesystems:
- /devices: Contains 'physical' paths to devices. The following example path is for a fibre channel device (notice the WWN after ssd), on the first GBIC (fp@0,0), of a QLogic HBA (qla), on the first PCI bus:
- /dev: Contains 'logical' paths to devices, which are typically just symlinks into the /devices tree.
Storage devices are accessible in two forms: via a block device (the "normal" way, found in /dev/dsk/) and via a raw device (also called a character device, found in /dev/rdsk/). Because of Solaris's storage framework, all storage devices are accessed in the same way, as SCSI devices. On Linux, "hda1" means the first partition on the first IDE disk. On Solaris, "c0t0d0s0" means controller 0, target 0, LUN 0, slice 0. On Solaris we call "partitions" slices, and slice 2 typically represents the full disk, just as "hda" with no partition number typically represents the full disk on a Linux system.
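The cXtYdZsN scheme decodes mechanically, which a few lines of plain shell parameter stripping can show (the device name here is made up, not tied to any real disk):

```shell
# Decode a Solaris cXtYdZsN device name into its four parts.
dev=c0t3d0s2                         # hypothetical device name

ctrl=${dev#c};  ctrl=${ctrl%%t*}     # controller number
targ=${dev#*t}; targ=${targ%%d*}     # target number
disk=${dev#*d}; disk=${disk%%s*}     # disk (LUN) number
slice=${dev##*s}                     # slice number

echo "controller=$ctrl target=$targ disk=$disk slice=$slice"
# → controller=0 target=3 disk=0 slice=2
```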
Using the format tool, you can view, partition, and modify disks. You can create filesystems using mkfs just like on Linux, although where Linux uses the argument -t (fstype) to select the filesystem, Solaris uses -F (fstype); the use is the same, just the flag is different.
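To make the flag difference concrete, here's a side-by-side sketch of the same operation on each OS. The command strings are only echoed, never run, and the device names are hypothetical; note that on Solaris you'd typically point mkfs at the raw (rdsk) device:

```shell
# The same "make me a filesystem" request, phrased for each OS.
linux_cmd="mkfs -t ext3 /dev/hda1"             # Linux: -t selects the fs type
solaris_cmd="mkfs -F ufs /dev/rdsk/c0t0d0s0"   # Solaris: -F, on the raw device

echo "Linux:   $linux_cmd"
echo "Solaris: $solaris_cmd"
```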
On Solaris there is very, very rarely any reason to reboot! You can ask the system to search for new devices, load the drivers, and create the device files simply by using the devfsadm command. Read the man page before using it. Alternatively, if you really do like to reboot, you can either boot the system with -r or simply touch /reconfigure to make the system reconfigure at the next boot. Some other SAs and I strongly discourage configuring hardware at boot, because it's much more difficult to fix something that breaks if the system isn't booted. In almost all cases I prefer to use devfsadm -vC, which searches for devices and also cleans up old, unused devices in the process.
On a Linux system you typically have a file or two to edit in order to set up networking properly, or even a GUI to wrap it all for you. On Solaris the network setup is "integrated" into the various system files in /etc. Here is a rundown of how you'd manually set up a new interface on Solaris:
- Add the name of the system to /etc/nodename. It will be the only thing in this file. This is the common name of the system regardless of how many hostnames the system really has.
root@anysystem devices$ cat /etc/nodename
anysystem
- Add the hostname and IP address to /etc/hosts.
- Add any appropriate subnet masks to /etc/netmasks. These netmasks are based on the various networks your system might be part of. For 192.168.100.0/24 you'd add the line: "192.168.100.0 255.255.255.0"
- Add the hostname that you added to /etc/hosts to /etc/hostname.(interface). That is, if you wanted to add a new IP to the "nge0" (nVidia Gigabit Ethernet) interface, you'd put "192.168.100.42 cuddlistic1" in /etc/hosts and "cuddlistic1" in /etc/hostname.nge0. The hostname is the only thing that goes in that file.
- Add DNS information to /etc/resolv.conf
- Add the default router address to /etc/defaultrouter. If your gateway was 192.168.1.254 you'd "echo 192.168.1.254 > /etc/defaultrouter". Again, the IP of the router is the only thing that goes in that file.
- Now either reboot or, preferably, svcadm restart svc:/network/physical.
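The whole static-setup checklist above can be sketched as a script. To keep it runnable (and inspectable) anywhere, this version writes into a temporary staging directory rather than the real /etc; the hostname, addresses, and interface name are all made up:

```shell
# Staged version of the manual Solaris network setup. Writes to $STAGE
# instead of /etc so it can be run safely on any box.
STAGE=$(mktemp -d)

host=cuddlistic1        # hypothetical hostname
ip=192.168.100.42       # hypothetical address
iface=nge0              # hypothetical interface

echo "$host"                        > "$STAGE/nodename"          # system name
echo "$ip $host"                   >> "$STAGE/hosts"             # name -> IP
echo "192.168.100.0 255.255.255.0" >> "$STAGE/netmasks"          # subnet mask
echo "$host"                        > "$STAGE/hostname.$iface"   # bind name to iface
echo "192.168.100.254"              > "$STAGE/defaultrouter"     # hypothetical gateway

ls "$STAGE"
```

On a real Solaris box you'd write the same contents into the real /etc files and then restart svc:/network/physical.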
If you wanted to use DHCP instead, you'd simply ensure that the primary hostname was in /etc/nodename and then, instead of all that other stuff, touch /etc/dhcp.(interface). So if you want nge0 to use DHCP, just touch /etc/dhcp.nge0 and you're done.
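The DHCP variant is short enough to sketch the same way, again staged under a temp dir instead of the real /etc, with a made-up interface name:

```shell
# DHCP variant: just a nodename plus an empty dhcp.<interface> marker file.
STAGE=$(mktemp -d)

echo anysystem > "$STAGE/nodename"
touch "$STAGE/dhcp.nge0"    # empty file; its existence is the configuration

[ -f "$STAGE/dhcp.nge0" ] && echo "nge0 will use DHCP"
```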
This all might seem like too much work and very tedious at first, but it really is much cleaner because everything ties together. On most Linux systems the /etc/hosts file isn't all that important: you could remove your system's hostname from it and everything would keep working. On Solaris, however, you can change your IP address simply by editing /etc/hosts and restarting the interfaces. Personally, although the Solaris method is a little longer, I find it much easier to manage a large number of interfaces this way than having everything about each interface in separate files.
You can check the routing and interface stats just like you do anywhere else, using netstat, and of course, ifconfig works like you expect it should.
Solaris provides IP Multipathing (IPMP) as a way to create failover interfaces, so that if one interface dies (or the switch it's attached to), traffic fails over to a second interface. You can read about IPMP here. Also, in Solaris 10 Update 1 the /sbin/dladm tool was added, which lets you create VLANs and aggregations (aka trunking or bonding).
Almost every UNIX platform has some variety of LVM, the "Logical Volume Manager", available. Many OSes offer alternatives as well, such as the Veritas Volume Manager or Linux-RAID. The Solaris equivalent to LVM is the Solaris Volume Manager (SVM), which was once upon a time known as DiskSuite. All the SVM commands start with meta: for instance, metainit creates a new SVM object, such as a volume, and metastat outputs the current status of SVM.
SVM is a powerful tool, but can take some time and effort to learn and become proficient with, just like LVM does. Linux-RAID might not be a very good software RAID tool, but I'll admit that nothing else (except for ZFS, which isn't currently in a GA Solaris release) is as easy.
There are two things you should know before starting to use SVM: A) you must create a small partition on each disk (or at least 2 or 3 of them) to contain copies of the configuration database (the metadb), and B) SVM doesn't yet allow vanity names, so everything has to be named "dNNN"; volumes get boring names like "d100" instead of what you would normally expect, like "NotPr0nVol". The distributed database copies are annoying because unless you thought to create the partitions (only about 10MB is needed) ahead of time, you'll be in a bind. The naming convention is lame, but it's being worked on... in fact, it's one of my pet projects when I have time to work on it.
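To give a feel for the metadb requirement and the dNNN naming together, here's the usual two-way-mirror recipe. The commands are held as strings and only printed (so this block is safe to run on any box); the slice names and volume numbers are made up, and on a real Solaris system you'd run each line as root:

```shell
# The canonical SVM mirror-building sequence, as echoed strings.
# Slices (c0t0d0s7, c0t0d0s4, ...) and volume numbers are hypothetical.
svm_steps='metadb -a -f c0t0d0s7 c0t1d0s7
metainit d101 1 1 c0t0d0s4
metainit d102 1 1 c0t1d0s4
metainit d100 -m d101
metattach d100 d102'

# metadb creates the state-database replicas on the small partitions;
# each "metainit dNNN 1 1 <slice>" builds a one-slice submirror;
# "metainit d100 -m d101" makes a one-way mirror; metattach adds the
# second submirror and kicks off the resync.
printf '%s\n' "$svm_steps"
```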
The topic of managing SVM is too large for this blog entry, so I'll blog about it some other time. However, know that in the future you'll also have ZFS available, which can handle software RAID itself. So, moving forward, if you wanted a new software RAID, I'd recommend ZFS. If you wanted to mirror an existing filesystem without modifying it, you'd go the SVM route.