INFO-VAX Sun, 12 Oct 2008 Volume 2008 : Issue 551

Contents:
  Re: How to create a shareable image on IA64 using Pascal
  Re: Status of Intel's Common System Interconnect ?

----------------------------------------------------------------------

Date: Sun, 12 Oct 2008 17:24:16 +0800
From: "Richard Maher"
Subject: Re: How to create a shareable image on IA64 using Pascal
Message-ID:

Hi Bob,

I've got absolutely nothing against people touting for business in a forum
like this, and if spreading a bit of FUD around helps drive some anxious or
insecure customers in your direction, then more power to ya! But when
customers (and anything left that resembles a development community) are
put off from using VMS for a shared-memory application because "there be
dragons" or "we use *nix for that", then I've had enough!

On *nix, and specifically with Java, you can find all sorts of
documentation discussing threading issues and why wait/notify is now the
recommended thread-synchronization method, plus heaps of discussion *and
examples* in development forums about performance vs atomicity and mutex
granularity. Yet on VMS, all one hears is "I wouldn't do that if I were
you" or "We know something you don't know nah nah nanah nah" :-(

If you are really that concerned about what can go wrong with a
global-section application, then why don't you bring out a Technical
Journal article with the "right" way to do it? Perhaps a statistics global
section that is being hammered by hundreds of concurrent processes,
discussing lightweight locking techniques vs the DLM? Incorporate the new
InfiniBand API for cluster propagation and I'll read it myself!

Otherwise, let it be known that sharing data between processes via a
global section (whether an installed PSECT, $crmpsc, or VLM) is something
that has been done on VMS since about year dot. Maybe Larry Ellison agrees
with you, and that's why he doesn't offer Cache Fusion on VMS?

I see this as essential and basic functionality that people should be able
to receive support for, and not the usual "As seen at Gezelter labs" or
"While I was speaking to Kofi and Boutros Boutros the other day at my
address to the G7".

Regards Richard Maher

"Bob Gezelter" wrote in message
news:8a3dbb4d-bfc9-433b-ba9f-efa9b3ae84ec@q26g2000prq.googlegroups.com...
> On Oct 8, 5:52 am, "Richard Maher" wrote:
> > Hi Adrian,
> >
> > (Not so much for your benefit as for others that may be reading here)
> >
> > Please do not listen to the incompetent filth that have responded
> > here! Heaven forbid that VMS has to contend with the "can't handle
> > shared memory" slur on top of everything else :-(
> >
> > There is nothing wrong with "COMMON" areas and shared PSECTs! Tell the
> > usual suspects to "you know what"!
> >
> > Ooh! Global Sections are spooky; leave them to Unix! What about Cache
> > Fusion? What about an operating system that offers functionality?
> > Global Sections "a bridge too far"?
> >
> > You all make me sick!
> >
> > Regards Richard Maher
> >
> > "Ade" wrote in message
> > news:rYlGk.35222$0D6.20078@newsfe01.ams2...
> > > Hi,
> > >
> > > Does anyone have a quick example of how to create a shareable image
> > > using Pascal on IA64 (or please correct the code below)? It is
> > > intended that this image, when installed with
> > > /open/header/share/write, will be used simply as a data repository
> > > for other applications which are linked together with it.
> > > All the examples I have seen in the documentation refer to shared
> > > memory between linked object files rather than against a shareable
> > > image. Any help you can give me would be greatly appreciated.
> > >
> > > Regards,
> > > Adrian Birkett
> > >
> > > My example:
> > >
> > > $ edit common.pas
> > > { potential common storage module }
> > > [environment ('common')]
> > > module common;
> > > type
> > >     common_r = record
> > >         my_int : integer;
> > >         my_str : varying [10] of char;
> > >     end;
> > > end.
> > >
> > > $ pascal common.pas
> > > $ link common/share
> > > $ install add/open/head/share/write disk1:[dir]common.exe
> > > $ define common disk1:[dir]common.exe
> > >
> > > $ edit prog1.pas
> > > { simple data writer program }
> > > [inherit ('disk1:[dir]common')]
> > > program prog1(input, output);
> > > var common : [common] common_r;
> > > begin
> > >     common.my_int := 1000;
> > >     common.my_str := 'Hello';
> > > end.
> > >
> > > $ pascal/debug/nooptim prog1
> > > $ link prog1, sys$input/opt
> > > disk1:[dir]common.exe/share
> > > $ run prog1    ! step through to end
> > >
> > > [in a different process]
> > >
> > > $ edit prog2.pas
> > > { simple data reader }
> > > [inherit ('disk1:[dir]common')]
> > > program prog2(input, output);
> > > var common : [common] common_r;
> > > begin
> > >     writeln('int = ', common.my_int);
> > >     writeln('str = ', common.my_str);
> > > end.
> > >
> > > $ pascal/debug/nooptim prog2
> > > $ link prog2, sys$input/opt
> > > disk1:[dir]common.exe/share
> > > $ run prog2    ! noting that prog1 is still running in your other process
> > > int = 0
> > > str =
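[A note on the example above: as posted, the module defines only a type, so
each program's "var common : [common] common_r" creates a private COMMON
psect in its own image, which is consistent with prog2 printing zero. The
sketch below is one commonly suggested repair; the variable name, the
SYMBOL_VECTOR psect export, and the INSTALL qualifiers are assumptions for
illustration, not taken from this thread, so treat it as untested.]

$ edit common.pas
{ sketch: the module owns the storage, not just the type }
[environment ('common')]
module common;
type
    common_r = record
        my_int : integer;
        my_str : varying [10] of char;
    end;
var
    { [common] places this in an overlaid psect named SHARED_DATA }
    shared_data : [common, volatile] common_r;
end.

$ pascal common
$ link/share=common.exe common, sys$input/options
! export the psect so partner images bind to this image's copy ...
SYMBOL_VECTOR = (SHARED_DATA = PSECT)
! ... and mark it shared and writable so all processes map one copy
PSECT_ATTR = SHARED_DATA, SHR, WRT
$ install add disk1:[dir]common.exe /open /header_resident /shared /writable

[prog1 and prog2 would then drop their local "var common" declarations and
refer to shared_data.my_int and shared_data.my_str directly through the
inherited environment.]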
> Richard,
>
> I normally do not respond to comments of this sort, but a clarification
> is in order. As I read it, none of the posts in this thread said
> "OpenVMS cannot do it". What was said was "In almost all cases, this is
> a dangerous practice".
>
> I have NEVER claimed that OpenVMS cannot handle shareable storage,
> merely that in my extensive, 30+ year experience, I have seen far more
> (actually, overwhelmingly more) incorrect implementations of shared
> memory management than I have seen correctly done ones. Synchronization
> errors (e.g., race conditions) are devilishly difficult to identify and
> eliminate, and virtually impossible to reproduce on demand. Avoiding
> deadlocks and synchronization starvation in complex systems is even more
> difficult.
>
> Even something as simple as setting a switch variable can be fraught
> with hazard. Many programmers who first learned to program using COBOL
> (and indeed, those working in COBOL) use strings to store switch values.
> In one of my AST talks (admittedly multithreaded -- AST and regular; not
> multiprocessor, which is worse) I reminded attendees that the MOVC3 and
> MOVC5 instructions are interruptable, and thus can be interrupted by a
> number of events, including AST delivery. The "simple" act of changing a
> switch string from "YES" to "NO " could give rise to intermediate values
> of "YO ", "YE ", "NES", "NOS", etc. Nobody in the audience had
> considered that possibility (admittedly, several members of Engineering
> who heard about my example found it amusing). The same thing can happen
> with inadvertently misaligned data. At one conference, someone overheard
> my discussion of that hazard and realized that it was the likely reason
> why his large-scale simulation system sporadically produced invalid
> floating point values.
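[The "YES"/"NO " hazard is easy to sketch. The fragment below is purely
illustrative -- the names are invented and VMS Pascal value initializers
are assumed -- but it contrasts the string switch with an aligned longword
flag:]

{ illustrative only: a string switch vs an aligned longword flag }
program switch_demo(output);
type
    yn_string = packed array [1..3] of char;
const
    flag_yes = 1;
    flag_no  = 0;
var
    str_flag : [volatile] yn_string := 'YES';
    int_flag : [volatile] integer   := flag_yes;
begin
    { A string store is a multi-byte copy (MOVC3 on VAX, a byte loop
      elsewhere); an AST or another CPU reading str_flag mid-copy can
      observe 'YO ', 'NES', and friends. }
    str_flag := 'NO ';

    { A naturally aligned longword store completes as a unit on VAX,
      Alpha, and Itanium, so a reader sees only flag_yes or flag_no.
      This prevents torn values only; ordering across several shared
      variables still needs interlocked operations or the DLM. }
    int_flag := flag_no;

    writeln('flag is now ', int_flag:1);
end.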
> These problems are exacerbated when the underlying architecture or
> processor is changed and the timing relationships alter. I have even
> seen situations where a change in the I/O configuration caused timing
> issues.
>
> My overall recommendation is to proceed with EXTREME care. A LINK or a
> simple test case is far different from multiple processes doing a lot of
> work at high speed. It is all too easy to jump into shared memory as a
> way to avoid complexity and become mired in a quagmire. Since I have not
> seen Adrian's code base, I do not know if this is a highly dynamic
> global area or a set of read-only parameters. Even a set of shared
> performance counters can become complex.
>
> If the application is something along the lines of a large-scale cache
> (e.g., RMS global buffers, a DBMS, a very high performance system), then
> I would consider the option of global storage, after:
> - the fact of the global storage was hidden behind an API (or at least
>   a set of macros)
> - a first implementation of the API was done using a resource monitor
>   model (a separate process that implemented the data store)
> - the performance issue was severe enough to justify the extensive
>   developer time and review to make sure that the implementation was
>   valid and reliable.
>
> As noted, I had a most entertaining conversation more than a decade ago
> on the implementation of a stock trading message switch, where the
> client took exception to my not using shared commons (I used DECnet
> logical links, a client subroutine shareable library, and a central
> datalink and server process; in effect, a non-privileged device
> driver/ACP model) to implement data link management. They even wanted me
> to guarantee that if the performance was not adequate, I would
> reimplement things at no charge.
>
> I did the implementation on an AlphaStation 3000, and the bottleneck was
> somewhere around 10K transactions/minute. I was limited by the trace
> code (using DECterm) and the link to the trading switch. I probably
> could have gone quite a bit further with a little work. The beta went
> into production without any problems.
>
> I had a similar experience several years earlier with an application
> that was supposed to support multiple terminals. It was quite a bit more
> cost-effective to install the single-user image as a shareable image
> than it was to spend man-years multithreading the application. It was
> also far more resilient.
>
> Rule 1: Architect to allow for the complex solution, but use a less
> complex solution if at all possible. Context switches between processes
> may be costly, but system crashes in the middle of the day are far
> costlier.
>
> Thus my comment that, if one is in the applications business, one should
> "tread with care".
>
> - Bob Gezelter, http://www.rlgsc.com
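[Bob's first bullet -- hide the storage behind an API -- can be made
concrete with a small sketch. The module below is invented for
illustration; nothing in it comes from this thread. The point is that
callers cannot tell whether the implementation is a resource-monitor
process or a mapped global section:]

{ invented interface sketch: the storage is hidden behind two routines }
[environment ('stats_api')]
module stats_api;
{ Callers never touch the shared storage directly. A first
  implementation can send requests to a separate server (resource
  monitor) process; a later one can map a global section. Swapping
  one for the other never changes the callers. }
procedure stats_bump(counter_id : integer); external;
function stats_read(counter_id : integer) : integer; external;
end.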
------------------------------

Date: Sun, 12 Oct 2008 02:49:03 -0700 (PDT)
From: IanMiller
Subject: Re: Status of Intel's Common System Interconnect ?
Message-ID: <8bd8ead3-a47c-45de-ab32-da8762e5124e@d45g2000hsc.googlegroups.com>

On Oct 12, 1:20 am, JF Mezei wrote:
> I lost track of what has and hasn't been released yet. Has Intel
> released any CSI (or whatever its name might be this week) based
> systems yet on either the 8086 or IA64 architectures?
>
> If so, have any vendors started to assemble such systems and started
> to market/sell them?
>
> Is this something that is coming real soon now, or have there been
> delays that put this well into next year?
>
> During a presentation 2 years ago, an HP guy had said that HP might
> not adopt CSI for its IA64-based systems, since it has its own
> proprietary chips. Does anyone know if this is still true? Or will HP
> adopt CSI for both 8086 and IA64 systems?
>
> At the motherboard level, would components such as those used to
> build "blades" have to be changed to interface between the blade
> system architecture and the motherboard's CSI interface? Or are those
> connected at a higher level of abstraction and not affected by CSI?

Several seconds spent searching intel.com revealed that the Common System
Interconnect is now called QuickPath (or QuickPath Interconnect, or even
QPI), and:

"Starting in 2008 with its next generation microarchitectures, code-named
Nehalem and Tukwila, Intel is incorporating a scalable shared memory
architecture (also known as non-uniform memory access, or NUMA). Intel's
new system architecture and platform technology will be called Intel(R)
QuickPath Technology."

I leave it as an exercise to the reader to search hp.com for QuickPath,
Nehalem, and Tukwila.

According to the VMS public roadmap, VMS V8.4 will run on Tukwila.

------------------------------

End of INFO-VAX 2008.551
************************