INFO-VAX Sat, 05 Jul 2008 Volume 2008 : Issue 372

Contents:
  Great External Article on A-A Clustering
  Re: Happy Independence Day
  Re: Happy Independence Day
  Re: NTP on OpenVMS using TCPIP services
  Re: Tru64 file system source code now open source
  Re: VMS SAN Primer
  Re: VMS SAN Primer
  Re: VMS SAN Primer

----------------------------------------------------------------------

Date: Sat, 5 Jul 2008 00:24:23 +0000
From: "Main, Kerry"
Subject: Great External Article on A-A Clustering
Message-ID:

The readers of this list may be interested in this external article
extracted from Sue's latest update.

http://www.availabilitydigest.com/public_articles/0306/openvms.pdf

One of the best articles I have seen on active-active OpenVMS clustering.

Regards

Kerry Main
Senior Consultant
HP Services Canada
Voice: 613-254-8911
Fax: 613-591-4477
kerryDOTmainAThpDOTcom
(remove the DOT's and AT)

OpenVMS - the secure, multi-site OS that just works.

------------------------------

Date: Fri, 04 Jul 2008 18:49:04 -0400
From: JF Mezei
Subject: Re: Happy Independence Day
Message-ID: <486ea8e3$0$5427$c3e8da3@news.astraweb.com>

> I hope you'll be celebrating Independence Day today with the same vigor
> as I.

As far as I know, Québec has not yet declared independence, so we can't
celebrate independence :-)

(Although Québec City celebrated its 400th birthday yesterday)

------------------------------

Date: Fri, 04 Jul 2008 21:34:54 -0400
From: "Richard B. Gilbert"
Subject: Re: Happy Independence Day
Message-ID: <3c2dnWenpJvBTfPVnZ2dnUVZ_oninZ2d@comcast.com>

JF Mezei wrote:
>> I hope you'll be celebrating Independence Day today with the same vigor
>> as I.
>
> As far as I know, Québec has not yet declared independence,

Well, what are you waiting for! Vive Québec Libre!

------------------------------

Date: Fri, 04 Jul 2008 21:11:15 -0400
From: JF Mezei
Subject: Re: NTP on OpenVMS using TCPIP services
Message-ID: <486eca38$0$5405$c3e8da3@news.astraweb.com>

Jeffrey H.
Coffield wrote:
> In the TCPIP$NTP.CONF file it says that "peer" is Client/Server mode,
> where "the local host wants to obtain time from the remote server and is
> willing to supply time to the remote server."
>
> Using "server" is Client mode; it says "the local host wants to obtain
> time from the remote server but it is not willing to provide time to the
> remote server."

This is what the documentation says, and that is what one would expect.
However, I have found that you need to use "server" on the TCP/IP Services
version for it to work.

I have a "server" line configured. This gets VMS to go and fetch the time
from the remote server, make its stratum 1 higher, and then be willing to
serve it to other nodes on my LAN.

When I had "peer", VMS would fetch the time from the remote server, but
make itself stratum 15, which means that other nodes on the LAN would
refuse to use its time. (In other words, VMS became useless as a local
time server.)

------------------------------

Date: Sat, 05 Jul 2008 04:45:09 +0200
From: Michael Kraemer
Subject: Re: Tru64 file system source code now open source
Message-ID:

ChrisQ wrote:
> A quality ANSI standard C compiler is very useful, especially if you
> need to (for example) build all the GNU tools. Though binaries might
> have been available, I always find it good practice to build everything
> from scratch, to find your way around the system and to impose your own
> structure on it all. I do mainly embedded work here, and Tru64 and
> Alpha were never a supported platform for the GNU tools in a cross
> environment, so you had to roll your own.

I don't think this is generally true. When DEC Unix was in better shape
than today, binaries of the usual GNU stuff were available, no need to
build them from scratch. Even nowadays one can find some of them, e.g. at
"thewrittenword".

> Trying to do that with the usually broken cc that comes with many
> Unices was a nonstarter.

As said, from the mid-90s (at the latest) onwards, most Unices came
without cc.
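For reference on the NTP point above: a minimal TCPIP$NTP.CONF sketch of the working setup JF describes. The hostname is a placeholder, not from the original message; check the TCP/IP Services management documentation for the full option set.

```
# TCPIP$NTP.CONF -- minimal configuration reported to work:
# "server" (not "peer") makes VMS fetch time from the remote host,
# run at the remote's stratum + 1, and still serve other LAN nodes.
server ntp1.example.com
```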
It wasn't necessary if one just wanted to run apps rather than do
development. And I don't find the cc (bundled or ordered) of "most
Unices" any more broken than DEC's. In fact, cc in DEC Unix versions
3.2/4.0 had quite a few bugs which surfaced even with relatively simple
apps.

> You might ask, why do embedded development on Alpha? But the sheer
> interactive speed and compile performance of Alpha was just so
> seductive compared to PC and Sparc offerings at the time, I just
> didn't want to use anything else.

I might ask why one would do development on a platform with a rather
limited offering of cross-development tools, compared to PCs and Sun
boxes.

> That + a long history working with DEC kit etc.

This might be the real reason rather than technical merits.

> You can't expect the suits to understand all that, but if technical
> excellence, design flair and sound engineering doesn't drive progress,
> what does?

Money?

>>> Tru64 was written from the start to be a modern, secure, 3rd
>>> generation Unix,
>>
>> But it came way too late, just as Alpha.
>
> Here we must disagree - Alpha wasn't too late in the early to mid
> nineties.

It was. Two or three years after POWER, the other latecomer in the game.

> It was doing very well

Hardly. Otherwise it (and probably DEC) would still be alive.

> thanks and world class, but I mustn't get started on that thread
> again :-).

We had that before, yes :-)

> What's dead is dead and the world moves on...

Well, speculating about the past and what could have been costs nothing,
except your leisure time, of course.

> Regards,
>
> Chris

------------------------------

Date: Fri, 04 Jul 2008 21:19:45 -0400
From: JF Mezei
Subject: Re: VMS SAN Primer
Message-ID: <486ecc4e$0$1903$c3e8da3@news.astraweb.com>

Paul Lentz wrote:
> I sorta knew there couldn't be much difference...

But wait a minute, don't SANs use very different terminology? They talk
about switches, fabric, etc.
And don't SANs have many capabilities such as RAID, the ability to
combine physical disks into a single drive, or to partition a single
drive into multiple drives?

Do SANs provide any concept of shared locking? Can a node request that a
block on a drive be locked against writes by other nodes? Or is it pretty
much a total free-for-all, with SANs just blindly executing requests on
any drive from any node?

(I would assume that SANs have the ability to provide "views", meaning
that a particular node would have a defined list of disks it can access?
Or can it go and peek at disk drives that have been assigned to other
nodes?)

Seems to me that there would be a large number of management issues to
deal with that would not be needed in the case of a VMS cluster. A VMS
cluster offers a single security concept, shared locking, etc. When you
have separate nodes accessing drives in a SAN, those are no longer
applicable.

------------------------------

Date: Fri, 04 Jul 2008 21:11:54 -0500
From: David J Dachtera
Subject: Re: VMS SAN Primer
Message-ID: <486ED86A.22C858C2@spam.comcast.net>

JF Mezei wrote:
>
> Paul Lentz wrote:
>
> > I sorta knew there couldn't be much difference...
>
> But wait a minute, don't SANs use very different terminology? They talk
> about switches, fabric etc.

True. However, the terminology has become very confused (and confusing).

When folks say "SAN", they really mean "storage array". When folks say
"fibre channel", they really mean "storage area network" (SAN - as in the
interconnecting infrastructure).

...and "a separate 'fabric'" equates roughly to a VSAN (Virtual Storage
Area Network), a corollary to a VLAN. VSANs take "zoning" to another
level, as it were. On CI or over Ethernet, the VMS equivalent would be
the cluster ID number.

> And don't SANs have many capabilities such as RAID, the ability to
> combine physical disks into a single drive, or to partition a single
> drive into multiple drives?

Yes. Think: "HSG" or SWXCR.
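To make the zoning/masking idea in this thread concrete, here is a toy model of LUN masking. This is not any vendor's API; all WWIDs and LUN names are invented for illustration.

```python
# Toy model of SAN LUN masking: the array grants each host bus adapter
# (HBA), identified by its World Wide ID, visibility of an explicit set
# of LUNs. All WWIDs and LUN names below are invented placeholders.

MASKING_TABLE = {
    "5000-1FE1-0001-AAAA": {"LUN_1", "LUN_2"},  # VMS cluster node A
    "5000-1FE1-0001-BBBB": {"LUN_1", "LUN_2"},  # VMS cluster node B (shares A's disks)
    "5000-1FE1-0002-CCCC": {"LUN_3"},           # unrelated Windows host
}

def visible_luns(hba_wwid):
    """Return the LUNs the array presents to this HBA; unmapped HBAs see nothing."""
    return MASKING_TABLE.get(hba_wwid, set())

# Two cluster nodes can be masked to see the same LUNs, but the array does
# nothing to coordinate their writes -- that is the distributed lock
# manager's job on the hosts, not the SAN's.
shared = visible_luns("5000-1FE1-0001-AAAA") & visible_luns("5000-1FE1-0001-BBBB")
print(sorted(shared))               # ['LUN_1', 'LUN_2']
print(visible_luns("unknown-wwid")) # set()
```

The point of the sketch: masking answers JF's "views" question (visibility is an explicit allow-list per HBA), but it says nothing about shared locking, which remains the host cluster's responsibility.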
> Do SANs provide any concept of shared locking?

Does a CI provide such a concept? ...shared SCSI...?

> Can a node request that a block on a drive be locked against writes by
> other nodes?

Within the confines of an operating "domain" such as a VMS cluster,
certainly. However, it requires a distributed lock manager.

> Or is it pretty much a total free-for-all, with SANs just blindly
> executing requests on any drive from any node?

In so far as "drive" and "node" are virtual concepts, yes. However, there
is no "magic" which enables sharing. Read on...

> (I would assume that SANs have the ability to provide "views", meaning
> that a particular node would have a defined list of disks it can
> access?

Yes and no. "LUNs" (remember: FC is just a way to carry the SCSI protocol
over a light "beam") are "mapped" to specific fibre adapters ("FA" for
short, in the parlance) on the storage array, and "masked" for access by
specific HBAs (by WWID).

> Or can it go and peek at disk drives that have been assigned to other
> nodes?)

Zoning, mapping and masking restrict "visibility" between specific HBAs
and LUNs.

> Seems to me that there would be a large number of management issues to
> deal with that would not be needed in the case of a VMS cluster. A VMS
> cluster offers a single security concept, shared locking, etc. When you
> have separate nodes accessing drives in a SAN, those are no longer
> applicable.

Well, you're confusing SANs with MSCP-served storage.

The best way to think of a storage array is as if a tremendously talented
SWXCR were housed in a rack/frame with a fairly large number of physical
drives. The physical drives are grouped together by the array manager (a
person, that is) into virtual devices. Think: RAIDsets, mirrored RAIDsets
(5+1, for example) and mirrored stripe sets. Quite literally, a superset
of what's available on an HSJ, HSZ or HSG.

Those virtual devices are then presented to specific hosts via zoning,
mapping and masking.
...however, it is just storage. A LUN. It's still up to the host
operating environment to manage that storage. Such management is NOT the
array's job in a FCSF/SAN any more than it would be in an HSJ on a
CI-based storage array. The array simply presents storage.

Each LUN appears to the host as if it were a separate "SCSI" device. A
"LUN" may occupy a portion of each disk in a disk group (in EVA
parlance), for example. VMS, Windows, UX, AIX, etc. only "see" a SCSI
device over FC ($1$DGAnnnnn:), while the actual storage presented may
consist of a RAIDset or a stripeset, with or without mirroring (on the
array, not HBVS).

There's no "magic" in a FCSF SAN which can allow incompatible operating
environments to either co-exist or share storage devices. The
limitations of each operating environment transcend the storage domain,
regardless.

Clear as mud, eh? Thought so...

D.J.D.

------------------------------

Date: Fri, 04 Jul 2008 21:19:32 -0500
From: David J Dachtera
Subject: Re: VMS SAN Primer
Message-ID: <486EDA34.A4819971@spam.comcast.net>

Paul Lentz wrote:
>
> "Ed Wilts" wrote in message
> news:954ba91a-4d5d-4a5e-961e-79689a2f132e@w8g2000prd.googlegroups.com...
>
> > On Jul 4, 2:08 am, "Paul Lentz" wrote:
> >
> > >> Storage Works boxes and HS series controllers. But never got to
> > >> touch anything officially labeled SAN. Can anybody point me in the
> > >> direction to get started to become a VMS Alpha SAN know-it-all???
> >
> > I managed a CI-based cluster for a bunch of years and the migration to
> > a modern SAN was painless. If you have the concepts down, the rest is
> > just syntax and experience. My VMS hosts still lead the pack with
> > boot-from-SAN. Other than our VMware blades, the other platform
> > admins have not yet switched to a boot-from-SAN approach.
> >
> > For VMS systems booting from the SAN, wwidmgr is the tool you have to
> > know - there's a dedicated manual for it.
>
> I sorta knew there couldn't be much difference...
> It was the part where I needed to know what tools are used, and all you
> guys helped me with that.

WWIDMGR is the piece within the Alpha console which allows you to teach
SRM how to find the boot device it should present to the OS.

For VMS, if your system disk is shadowed, you can set up two possible
paths to each of two shadow-set members, for a total of four "devices".
Then you specify those "devices" as the possibilities for the value of
the bootdef_dev environment variable. The system should be able to find
at least one of the four possibilities.

Some hardware - IBM p-Series machines, for example - has trouble finding
its boot device on fibre channel. So, boot-from-SAN is not always
supported/supportable. Deal with each case individually.

D.J.D.

------------------------------

End of INFO-VAX 2008.372
************************
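As a footnote to the boot-from-SAN message above, the SRM console sequence looks roughly like this. The unit ID and device path names are placeholders, the "!" annotations are explanatory only, and the exact syntax varies by platform; the dedicated WWIDMGR User's Manual is the authoritative reference.

```
>>> set mode diag                    ! wwidmgr requires console diagnostic mode
>>> wwidmgr -show wwid               ! list the FC units the console can see
>>> wwidmgr -quickset -udid 100      ! make unit 100 (placeholder) a boot candidate
>>> init                             ! re-initialize so the new device names appear
>>> show device                      ! the dga100.* paths should now be listed
>>> set bootdef_dev dga100.1001.0.1.0,dga100.1002.0.1.0
>>> boot
```

With a shadowed system disk, bootdef_dev can list up to four such paths (two per shadow-set member), as described in the message above.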