List of Archived Posts

2001 Newsgroup Postings (01/19 - 02/20)

Java as a first programming language for cs students
Review of Steve McConnell's AFTER THE GOLD RUSH
FCC rulemakings on HDTV
Power failure during write (was: Re: Disk drive behavior (again))
Now early Arpanet security
Attribbute debugging quote?
Java as a first programming language for cs students
Scarcasm and Humility was Re: help!
"HAL's Legacy and the Vision of 2001: A Space Odyssey"
"HAL's Legacy and the Vision of 2001: A Space Odyssey"
Review of the Intel C/C++ compiler for Windows
Review of the Intel C/C++ compiler for Windows
Now early Arpanet security
Now early Arpanet security
IBM's announcement on RVAs
Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
FW: History Lesson
HELP
First OS?
HELP
Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
what is interrupt mask register?
what is interrupt mask register?
HELP
HELP
So long, comp.arch
z900 and Virtual Machine Theory
perceived forced conversion from cp/m to ms-dos in late 80's
occupational nightmares [was: Now early Arpanet security
z900 and Virtual Machine Theory
John Mashey's greatest hits
[OT] How old are us guys? (was: First OS?)
John Mashey's greatest hits
[OT] Currency controls (was: First OS?)
John Mashey's greatest hits
Why SMP at all anymore?
John Mashey's greatest hits
John Mashey's greatest hits
First OS?
John Mashey's greatest hits
John Mashey's greatest hits
what is interrupt mask register?
First OS?
First OS?
what is interrupt mask register?
PC Keyboard Relics
PC Keyboard Relics
IBM 705 computer manual
Stealth vs Closed
Kildall "flying" (was Re: First OS?)
Kildall "flying" (was Re: First OS?)
Stealth vs Closed
IBM 705 computer manual
Why SMP at all anymore?
I am fed up!
Checkpoint better than PIX or vice versa???
Disks size growing while disk count shrinking = bad performance
monterey's place in computing was: Kildall "flying" (was Re: First OS?)
Disks size growing while disk count shrinking = bad performance
z/Architecture I-cache
Java as a first programming language for cs students
Java as a first programming language for cs students
Java as a first programming language for cs students
10 OF THE BEST
Original S/360 Systems - Models 60,62 70
weather biasing where engineers live (was Re: Disk power numbers)
Z/90, S/390, 370/ESA (slightly off topic)
Digital signature w/o original document
Z/90, S/390, 370/ESA (slightly off topic)
Z/90, S/390, 370/ESA (slightly off topic)
7090 vs. 7094 etc.
Z/90, S/390, 370/ESA (slightly off topic)
Z/90, S/390, 370/ESA (slightly off topic)
Disks size growing while disk count shrinking = bad performance
Inserting autom. random signature
Inserting autom. random signature
Z/90, S/390, 370/ESA (slightly off topic)
Disks size growing while disk count shrinking = bad performance
36-bit MIME types, PDP-10 FTP
Disks size growing while disk count shrinking = bad performance
Z/90, S/390, 370/ESA (slightly off topic)
Disks size growing while disk count shrinking = bad performance
what makes a cpu fast

Java as a first programming language for cs students

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Java as a first programming language for cs students
Newsgroups: alt.folklore.computers
Date: Fri, 19 Jan 2001 19:30:58 GMT
jmfbahciv writes:
Nah, probably not odd. That's just style. In our shop, the problem was that, if the main code was written first, there was never any time to rewrite it to do it right. JMF would do his hacks on weekends. In his mind, what he did on the weekends was for his pleasure and not to be "construed as a commitment of Digital Equipment Corp." (in-house joke). So no one would know that the main code had ever been written.

... something that i ran across 20 years ago.

The Psychology of Change in Development Methodology by Gerald
M. Weinberg (Ethnotech, Inc., Lincoln, Nebraska, copyright 1979,
Ethnotech, Inc.).

+ + +
Ford's Fundamental Feedback Formula
+ + +
 . . .
Back in the 1920's, Congress was investigating river pollution
and invited Henry Ford, as a top industrialist, to testify. When Ford
took the stand, he chided the Congress for wasting good tax money on
complicated anti-pollution laws. "All you need," he said, "was one
simple law to clear up all polluted rivers."
 . . .
Anyone can take any amount of water from any river for any purpose
whatsoever, as long as they return the water upstream from where they
took it out
. . .
---Simply put, Ford's Fundamental Feedback Formula says, if you
want people to change what they're doing, make sure they are fed back
the consequences of what they're doing.
 . . .
Bibliography:

Weinberg, Gerald M., An Introduction to General Systems Thinking, John
Wiley & Sons, New York, 1975

Weinberg, Gerald M. & Weinberg, Daniela, On the Design of Stable
Systems, John Wiley & Sons, New York, 1979

Weinberg, Gerald M., The Psychology of Computer Programming, Van
Nostrand-Reinhold, New York, 1971

Freedman, Daniel P., and Weinberg, Gerald M., EthnoTECHnical Review
Handbook, 2nd Edition, Ethnotech, Inc., Lincoln, Nebraska, 1979

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Review of Steve McConnell's AFTER THE GOLD RUSH

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Review of Steve McConnell's AFTER THE GOLD RUSH
Newsgroups: comp.software-eng,comp.programming,alt.folklore.computers
Date: Fri, 19 Jan 2001 19:51:06 GMT
spinoza9999 writes:
The problem is that the American inability to write software has a broad cultural base which includes a virtualization of knowledge. Partly a result of the deliberate destruction of the public education system and partly a result of computerization, people increasingly seem to regard one narrative as good as another.

... from a long ago discussion about american industry (in general, not just software industry) ... one might consider the recent Internet IPO boom consistent with some of these observations.

The May 4, 1981 issue of Time magazine carried an article entitled, The Money Chase -- Business School Solutions May be Part of the U.S. Problem.

They are, in other words, a professional managerial caste that considers itself trained--and therefore destined--to take command of the nation's corporate life. This might prove a misfortune of some magnitude. For although the M.B.A.s generally see themselves as the best and the brightest, a growing number of corporate managers look on them as arrogant amateurs, trained only in figures and lacking experience in both the manufacture of goods and the handling of people. Worse, the flaws in the polished surface of the M.B.A. now appear to reflect flaws in the whole system of American business management, in its concepts, its techniques, its values and priorities. The M.B.A., then, is both a cause and a symptom of some fundamental problems afflicting the U.S. economy.

Yet even as the corporate recruiters scramble to sign up these young paragons, there is increasingly widespread criticism of M.B.A.s, their training and their functions. The indictment reads like this:

M.B.A.s are too expensive.

They are too aggressive.

They lack loyalty.

But the most fundamental criticism is that the misjudgments and mistakes characteristic of U.S. management as a whole can be blamed at least in part on the managerial methods and ideas of the up-and-coming M.B.A.s. There has been, these critics say, too much emphasis on short-term profit, not enough on long-range planning; too much on financial maneuvering, not enough on the technology of producing goods; too much on readily available markets, not enough on international development. Admits Lee J. Seidler, a Wall Street securities analyst and professor at the New York University Graduate School of Business Administration: 'It may be that some of the basic tools we've been teaching in business schools for 20 years are inordinately biased toward the short term, the sure payoff.'

Why is it, for example, that so much of U.S. plant and equipment has become considerably older than that of Japan? To newly assertive foreign experts, the misguided emphasis on short-term profit seems to blind U.S. managers to the need for more research and development; moreover they appear unable to develop strategies for dealing with long-range problems of chronic inflation and soaring energy costs. And why has quality been declining? Partly because U.S. professional managers have cared less about what they produce than about selling it -- and less about selling than about bookkeeping and tax-law legerdemain and building conglomerates that sometimes fall in ruins. 'For much of the trouble of the American economy,' says Akio Morita, chairman of Sony, 'American management has to take the responsibility.'


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

FCC rulemakings on HDTV

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FCC rulemakings on HDTV
Newsgroups: alt.folklore.computers
Date: Fri, 19 Jan 2001 23:00:20 GMT
jmaynard@thebrain.conmicro.cx (Jay Maynard) writes:
On Fri, 19 Jan 2001 08:58:38 -0800, Lars Poulsen wrote:
>Technically, we should just drop the HDTV transition.

Amen. HDTV is an answer in search of a question.


there is a line somewhere about a camel being a horse designed by committee

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Power failure during write (was: Re: Disk drive behavior (again))

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Power failure during write (was: Re: Disk drive behavior (again))
Newsgroups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.arch
Date: Thu, 25 Jan 2001 06:19:20 GMT
"Bill Todd" writes:
Since I never received an unambiguous answer to the original question, that's hard to say. My guess would be that if indeed it ever happens at all the likelihood of seeing it in any given power failure is quite low. But the answers left some ambiguity whether the disk ever actually did something questionable. The original assertion may have been that during a power failure host memory lost function and clamped to send nulls, rather than its previous contents, out over the still-active bus to the still-active disk, in which case the issue, while real, has nothing to do with, e.g., the disk's geometry, or number of sectors written. And IIRC another response quoted a problem with ext2fs on Linux, but left it unclear whether that problem was simply that ext2's fast-and-loose write-back policies left multi-sector updates interrupted in the middle of the write effectively trashed (again, nothing the disk can reasonably guard against).

i started this with the observation that disk drives with this characteristic existed from the 60s up thru at least the 80s ... and that some of the issues might be similar to incomplete multi-sector writes (especially when moving to a logical filesystem record size larger than the disk physical sector size, after previously having had the record size the same as the sector size and relying on consistent sector writes) ... not necessarily that any current disks suffer from this characteristic.

i have no direct data on the current generation of disks, other than that many unix vendors "qualified" scsi disks as to their power-failure sector write characteristic (either a sector is completely written correctly or not at all). such a qualification program possibly implies that not all disks meet such qualification (i.e. a power-failure sector write may result in conditions other than the two states in the qualification).
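
as an aside, the usual software guard when multi-sector write atomicity can't be counted on is to make each logical record self-verifying. a minimal sketch in C of that idea (record layout, field names and checksum are hypothetical, not from any particular filesystem):

/*
 * Minimal sketch: each logical record carries its own checksum, assuming
 * only that individual 512-byte sector writes are atomic, so a torn
 * multi-sector write shows up at read/recovery time instead of being
 * silently trusted. All names and sizes here are illustrative.
 */
#include <stddef.h>
#include <stdint.h>

#define SECTOR_SIZE    512
#define RECORD_SECTORS 8                      /* logical record = 8 sectors */
#define RECORD_SIZE    (SECTOR_SIZE * RECORD_SECTORS)

struct record {
    uint32_t seq;                             /* write sequence number      */
    uint32_t checksum;                        /* covers seq + payload       */
    uint8_t  payload[RECORD_SIZE - 8];
};

/* simple additive checksum; a real system would use a CRC */
static uint32_t record_checksum(const struct record *r)
{
    uint32_t sum = r->seq;
    for (size_t i = 0; i < sizeof r->payload; i++)
        sum = (sum << 1) + r->payload[i];
    return sum;
}

/* recovery-time check: a mismatch means the record was interrupted
 * mid-write (some sectors old, some new) and must be discarded or
 * rebuilt from a log rather than used as-is */
int record_is_intact(const struct record *r)
{
    return record_checksum(r) == r->checksum;
}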

random refs:
https://www.garlic.com/~lynn/2001.html#38
https://www.garlic.com/~lynn/2000g.html#43
https://www.garlic.com/~lynn/2000g.html#44
https://www.garlic.com/~lynn/2000g.html#47
https://www.garlic.com/~lynn/2001.html#6

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

Now early Arpanet security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Now early Arpanet security
Newsgroups: alt.folklore.computers
Date: Fri, 26 Jan 2001 02:35:19 GMT
Ken McMonigal writes:
At Case-10 in 1973, I (legally) accessed the terminal room, on an upper floor of an unlocked Computer Engineering building, via a hallway door with rubber-button circular lock (so no limit on issuing of keys). (Ways to defeat this type of lock have been posted on the Internet) And I remember walking into the -10 machine room (which included the IMP) several times. So, that door either had no lock on it or was propped open.

following reference ... note there was a finger slip in the following; it was ucsd, not ucsb.

https://www.garlic.com/~lynn/99.html#45

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Attribbute debugging quote?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Attribbute debugging quote?
Newsgroups: alt.folklore.computers
Date: Fri, 26 Jan 2001 02:29:14 GMT
Brian Inglis writes:
Big S/390s call out and tell the CE centre the history, the symptoms and the probable FRUs to bring along. The CE centre calls to schedule a timely downtime if required. The systems are engineered with internal redundancy (not customer functionality) to not require maintenance during the average time spent on one site.

the mainframe industry also has a reporting infrastructure that will gather and summarize all errors for all machines.

in the early '90s, one of the people watching those statistics for a particular new model noticed that a specific kind of (recoverable) error had occurred a total of 15-20 times (this was the aggregate total across all machines over a period of a year) when they were only expecting an aggregate total of 3-5 such errors.

random refs:
https://www.garlic.com/~lynn/94.html#24
https://www.garlic.com/~lynn/2000.html#21
https://www.garlic.com/~lynn/2001.html#22

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Java as a first programming language for cs students

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Java as a first programming language for cs students
Newsgroups: alt.folklore.computers
Date: Fri, 26 Jan 2001 03:10:52 GMT
Lars Poulsen writes:
Gerald Weinberg is wonderful. Never met him but I like his books. The only psychologist I ever read who understands what it is we do. Your bibliography omitted a booklet called something like "Are your lights on?"

are your lights on is (C) 1990 ... my reference was literally a reposting of a posting I had made in spring of 1981

random refs:

http://www.geraldmweinberg.com/index.html
http://www.geraldmweinberg.com/books.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Scarcasm and Humility was Re: help!

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Scarcasm and Humility was Re: help!
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 26 Jan 2001 18:25:09 GMT
jchausler writes:
Never saw a green one and only saw one 2311 drive in yellow. This was in a shop which apparently didn't care much about color as there was a mix of red and blue although at least 2/3rds were blue.

CSC had 45 2314 drives connected to the 360/67 ... five 8+1 drive strings and a short 5 drive string. CSC had the IBM/CE repaint the 2314 control panel covers so that each string was color coded, which made it much simpler to locate a particular pack/drive.

random refs:
https://www.garlic.com/~lynn/94.html#32
https://www.garlic.com/~lynn/99.html#121
https://www.garlic.com/~lynn/2000b.html#82

some 360/67 machine room pictures (some in color, at newcastle, not CSC)

https://web.archive.org/web/20030813223021/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_67/index.html

b&w picture showing 8+1 2314 string

https://web.archive.org/web/20030820135303/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_67/29.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

"HAL's Legacy and the Vision of 2001: A Space Odyssey"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "HAL's Legacy and the Vision of 2001: A Space Odyssey"
Newsgroups: comp.sys.super,rec.arts.books,rec.arts.sf.misc,sci.space.policy,alt.folklore.computers
Date: Sun, 28 Jan 2001 16:30:24 GMT
Dag Spicer wrote on 24 Jan 2001:
The Computer Museum History Center is delighted to present:

The HAL 9000 Computer and the Vision of 2001: A Space Odyssey

David G. Stork Chief Scientist


bcs had hired me for the summer of 69 to come in and help set up some of their computers ... i guess i was something like BCS employee 40-45. BCS had been formed something like 6 months previously but was still putting together its operation (although, in theory all data centers were in the process of being assimilated by bcs). I was still an undergraduate ... but earlier in the year they had talked me into putting together a one week class and they brought in their technical people during spring break.

Anyway, I saw 2001 that summer in a theater in downtown seattle.

I happened to be in seattle again during the summer of '99 and got to go see 2010 right after the opening at the same theater. local papers had a story about paul allen restoring the theater specifically for the opening of 2010.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

"HAL's Legacy and the Vision of 2001: A Space Odyssey"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "HAL's Legacy and the Vision of 2001: A Space Odyssey"
Newsgroups: comp.sys.super,rec.arts.books,rec.arts.sf.misc,sci.space.policy,alt.folklore.computers
Date: Sun, 28 Jan 2001 16:39:41 GMT
Anne & Lynn Wheeler writes:
bcs had hired me for the summer of 69 to come in and help set up some of their computers ... i guess i was something like BCS employee 40-45. BCS had been formed something like 6 months previously but was still putting together its operation (although, in theory all data centers were in the process of being assimilated by bcs). I was still an undergraduate ... but earlier in the year they had talked me into putting together a one week class and they brought in their technical people during spring break.

that summer you could periodically see (i believe) 747 serial #3 flying over/around seattle. it had a long "boom" out the front of its nose. it was getting FAA flight certification.

you could also go see the 747 mock-up; jetways, internal seating, etc. The one thing that I distinctly remember from the 747 mock-up tour was that they claimed that the 747 would have so many passengers that passenger loading/unloading would never be done with fewer than four jetways (two from each side of the plane).

when was the last time you saw a 747 served by four jetways?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Review of the Intel C/C++ compiler for Windows

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Review of the Intel C/C++ compiler for Windows
Newsgroups: comp.arch
Date: Sun, 28 Jan 2001 18:50:58 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
1) Backbone and switch connexions, when they are multiplexing multiple Fast Ethernet connexions. This seems to be 90% of its use at present. Obviously a good idea.

2) When the Gigabit Ethernet card is 'intelligent' enough to optimise memory access and do the framing itself. Reports are that SGI's does that, but I have no data on anyone else's.


there was a presentation at the '89 ietf meeting (held at stanford) on gigabit requirements

random refs:

https://www.garlic.com/~lynn/93.html#32
https://www.garlic.com/~lynn/94.html#31

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Review of the Intel C/C++ compiler for Windows

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Review of the Intel C/C++ compiler for Windows
Newsgroups: comp.arch
Date: Sun, 28 Jan 2001 19:50:06 GMT
Anne & Lynn Wheeler writes:
there was a presentation at the '89 ietf meeting (held at stanford) on gigabit requirements

random refs:

https://www.garlic.com/~lynn/93.html#32
https://www.garlic.com/~lynn/94.html#31


corrected random refs (93.html not 94.html)

https://www.garlic.com/~lynn/93.html#31

some of the people back then working on high-speed pipelined protocol engines for efficient network activity ... had also worked on SGI's pipelined geometry engine for graphics.

random refs:
http://www2.ics.hawaii.edu/~blanca/nets/xtp.html
https://web.archive.org/web/20020213141258/http://www2.ics.hawaii.edu/~blanca/nets/xtp.html
http://www.netstore.com.au/cbbooks/020/0201563517.shtml
https://web.archive.org/web/20020803062824/http://www.netstore.com.au/cbbooks/020/0201563517.shtml
http://www.prz.tu-berlin.de/docs/html/prot/protocols/xtp.html
https://web.archive.org/web/20020227024449/http://www.prz.tu-berlin.de/docs/html/prot/protocols/xtp.html
http://citeseer.nj.nec.com/Architecture/Clusters/
https://web.archive.org/web/20020520125342/http://citeseer.nj.nec.com/Architecture/Clusters/
http://www.cis.ohio-state.edu/htbin/rfc/rfc1453.html
https://web.archive.org/web/20020428220442/www.cis.ohio-state.edu/cgi-bin/rfc/rfc1453.html
http://www.mentat.com/xtp/xtp.html
https://web.archive.org/web/20020611191726/http://www.mentat.com/xtp/xtp.html
http://www.ca.sandia.gov/xtp/biblio.html
https://web.archive.org/web/20020423214256/http://www.ca.sandia.gov/xtp/biblio.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Now early Arpanet security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Now early Arpanet security
Newsgroups: alt.folklore.computers
Date: Sun, 28 Jan 2001 22:08:26 GMT
don@news.daedalus.co.nz (Don Stokes) writes:
6310 was a nice mid-80s separate screen & keyboard thing, emulated a Hazeltine 1500, ADM-3A and a couple of other things. (One of the emulations looked enough like a VT52 to use with DEC stuff, with only an occasional screen refresh.)

sometime in '79, ibm came out with the 3101 ascii display terminal ... separate screen that looked a little like the PC B&W monitor and a fat/thick keyboard (somewhat lighter than the 3277 keyboard) & a power/control case (possibly half as thick as the original PC case). you could get the 3101 with an optional hardcopy printer that could either slave to what was going on the screen or work in simple print screen mode.

at the time i had been using a cdi miniterm (300 baud, "heat" sensitive paper) as a home terminal (I had a dedicated "internal" corporate phone line installed in my house).

the very first 3101 versions were just simple dumb terminal emulation. enhancements came out sometime in '80 for full-screen block-mode. early keyboards also had something like four pfkeys (more like vt101) ... enhanced 3101-2 model (in early '80) was more like 3278 keyboard, 12 "alt" code pfkeys across the top and 13-24 w/o-alt pfkeys on the right.

In feb. 1980, I did a special terminal output driver for 3101 and adm3 displays at our location ... that replaced four or more consecutive blanks with cursor positioning (which just about doubled the effective thruput of most output). There was only a single control character difference between the 3101 implementation and the adm3 implementation.
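
a sketch in C of the general idea behind that driver (not the original code; the escape sequence shown is ANSI "cursor forward", ESC [ n C, purely for illustration -- the actual 3101 and adm3 sequences differed, and per the above differed from each other by only a single control character):

/*
 * Replace runs of four or more consecutive blanks with a cursor-positioning
 * sequence, which is shorter than sending the blanks at terminal speeds.
 */
#include <stdio.h>

static void emit_cursor_forward(int n)
{
    printf("\033[%dC", n);                 /* assumed ANSI sequence          */
}

/* write one output line, compressing blank runs of length >= 4 */
void write_line_compressed(const char *line)
{
    int run = 0;
    for (const char *p = line; *p; p++) {
        if (*p == ' ') {
            run++;
            continue;
        }
        if (run >= 4)
            emit_cursor_forward(run);      /* cheaper than 'run' blanks      */
        else
            while (run-- > 0)
                putchar(' ');
        run = 0;
        putchar(*p);
    }
    putchar('\r');                         /* trailing blanks never sent     */
    putchar('\n');
}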

Sometime in march '80, i replaced cdi miniterm w/3101 at home ... I think the same week that I got a note from Amdahl saying that they had Unix up and running production on one of their mainframes.

random refs:
https://www.garlic.com/~lynn/99.html#69
https://www.garlic.com/~lynn/2000g.html#17

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Now early Arpanet security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Now early Arpanet security
Newsgroups: alt.folklore.computers
Date: Sun, 28 Jan 2001 22:45:57 GMT
other 3101 refs:
http://www.cs.utk.edu/~shuford/terminal/ibm_ascii_terminals_news.txt
http://www.cs.utk.edu/~shuford/terminal/ibm.html
http://www.georgiasoftworks.com/term.htm
https://web.archive.org/web/20010422114437/http://www.georgiasoftworks.com/term.htm

list of terminal types refs:
http://blackroses.textfiles.com/hacking/tnet3.txt

http://tools.ietf.org/html/rfc1091.txt
http://tools.ietf.org/html/rfc930.txt

http://tools.ietf.org/html/rfc884.txt

random ref (that include 3101); An Encyclopedia of Macintosh Faults (11/84)
http://semaphorecorp.com/ss/ss18.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM's announcement on RVAs

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's announcement on RVAs
Newsgroups: bit.listserv.ibm-main
Date: Mon, 29 Jan 2001 05:37:39 GMT
rhawkins@SINGNET.COM.SG (Ron & Jenny Hawkins) writes:
I think the problems with IBM Storage Architecture come from building Enterprise Storage to cater for the limitations of OS/390 and z/OS, and then having to retrofit the box for the Unix market. Far better z/OS introduced a software layer to virtualise the IO subsystem (ala Veritas Volume Manager). I think this has a far greater potential to realize the Iceberg dream. Wouldn't you like to have the choices that the Unix people have had for years?

note that IBM actually had something similar in LVM several years prior to VVM.

circa 1989 ...
o OSF has decided to build its operating system (i.e. OSF/1) kernel from a Mach 2.5 / 4.3BSD kernel base, with a BSD "Tahoe" fast file system (FFS), and with Encore symmetric multiprocessing support (including parallelization of the file system and of networking). The AIXv3 logical volume manager (LVM) will be ported to the OSF/1 kernel, and the OSF/1 kernel will provide a shared library and loader system based on ideas from AIXv3 (with extensions). Run-time loading of modules into the kernel will be supported. (Note that the LVM will support extensible logical disks and disk mirroring; however, the OSF/1 file system will not make use of logical volume extensibility.)

and posting to comp.arch.storage in 1993 ...


From: rogerk@veritas.com (Roger B.A. Klorese)
Subject: Re: Veritas Volume Manager (was Disk striping (RAID -1 ? :-))
Date: Mon, 12 Jul 93 21:57:41 PDT

Alan Rollow - Alan's Home for Wayward Tumbleweeds. writes:
>
>I'm interested in more information about the Veritas Volume Manager.  I'd
>assume that it provides the basic lvm functions of bad block revectoring,
>volume concatentation and mirroring.  Does it provide any other services
>such as Striping (RAID-0) or RAID-4/5?
>

I should probably introduce myself here; I'm the Product Marketing
Manager responsible for VERITAS Volume Manager (VxVM) and VERITAS
Visual Administrator (VxVA).  But I used to be the support and training
specialist for the products until very recently, so I'm not a complete
liar and my mind isn't totally mush.  ;-)

VxVM is not layered on LVM -- it descends partly from some technology
developed at Tolerant Systems in the mid-1980s -- so it does not take
exactly LVM's approach.  It does not do bad block revectoring, for
example, depending on underlying drivers to do so.  On the other hand,
it also does not have as rigid a set of constraints on extent (we call
them subdisk) size, supports more on-line administration capabilities,
and is generally more powerful.  It supports concatenation, mirroring
and striping; RAID-5 is under development, as are more functions you'd
need to sign an NDA for me to tell you about.  ;-)

VxVM 1.2, the current technology version, also supports a notion of
disk identity, which facilitates moving drives between addresses and
adapters without modifying the volume configuration; it also supports
manual (and with some simple programming, semi-automatic) cutover of
disks between shared-port systems when one of the systems fails.
(Sequent's CLUSTERS product is based partly on an earlier VxVM
version.)  This also assists with our simplified disk
evacuation/replacement/hot-sparing capability.

We also have capabilities for analyzing performance to the subdisk
level, as well as moving on-line data without interruption of
availability.  Extensions to our VxFS file system to support VxVM
integration make bringing a file system to a stable state easy; this,
in conjunction with our transaction-based configuration management,
make it possible to use VxVM for disk backup with no interruption of
availability.

As our technology is supplied as source to our OEM customers, the
versions, capability sets, and bundles are not identical from vendor to
vendor.  Some offer only VxVM, others VxVM and VxVA; some bundle it,
others offer it layered.  We also offer VxVM and VxVA for SCO under
our own label, through distributors.

>If it does provide Striping, how is the performance; both through-put
>and bandwidth?

We will have a performance brief, based on UnixWare and AIM-3,
available soon.  Our own simulated multi-reader loads show a throughput
improvement of about 3x from a single drive to a four-way stripe on
commodity SCSI disks; obviously, your actual mileage will vary with OS,
system, drives, etc.

>Does the user interface have a nice pretty (and hopefully functional)
>GUI and a command line interface for those times the system is stuck
>in single user mode?

...or remotely.  Yes, of course.  VxVA is a Motif application that
supports full VxVM function.  A full command line environment is
available too, as well as a library (on most platforms), plus a
disk-functions menu.  Some vendors also support character menus; we use
a mkdev script for SCO, and SVR4 supports OA&M extensions.

>I'll collect responses mailed to alan@nabeth.cxo.dec.com and
>summarize.

Feel free.  Or write me at rogerk@veritas.com.
--
ROGER B.A. KLORESE                                             VERITAS Software
4800 Great America Parkway      Santa Clara, CA 95054      +1 408-727-1222 x310
rogerk@veritas.com                               {apple,pyramid}!veritas!rogerk
There is a kind of success that is indistinguishable from panic. -- Edgar Degas

--
Alan Rollow                             alan@nabeth.cxo.dec.com

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Linux IA-64 interrupts [was Re: Itanium benchmarks ...]

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
Newsgroups: comp.arch
Date: Mon, 29 Jan 2001 14:39:27 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Watch out for that "of course"! If I remember correctly, one early version of MVS allowed page tables to be pageable, on the grounds that they were designing for 48-bit addressability back in 1980. This was implemented and backed off, at least according to reports, on precisely the grounds that I described - yes, it mostly worked, but it was so fiendish as to be unmaintainable economically.

i put page'able tables into VM/370 in the early '70s and it shipped with the rest of the resource manager product that i did in '76. i dynamically calculated when real storage was being constrained and dissolved the pagetable, invalidated the associated segment table pointer, and "paged" the associated backing store table. Since I never paged a "segment" that had pages in real storage ... all the associated page table entries (for the segment) were uniformly invalid ... so it was easy to dissolve the page table and just use the segment table invalid entry.

vm/370 tightly bound the page table (for a segment) and the associated backing store table (i.e. location of pages on disk). after dissolving the page table, it was only really necessary to page out the backing store table (for a segment, since the strategy only bothered to page a table for a segment that had no allocated pages in real memory and so all the associated page table entries were uniformly invalid).
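
a rough sketch in C of the idea (hypothetical structures and names, not the actual VM/370 data areas): if a segment has no pages resident, every page table entry is already invalid, so the page table can simply be freed, the segment table entry marked invalid, and only the tightly bound backing-store table needs to be written out.

/* Illustrative only: all structures and names are assumptions. */
#include <stdlib.h>

#define PAGES_PER_SEGMENT 16

struct pte { unsigned frame; int valid; };

struct segment {
    int         valid;                           /* segment-table valid bit */
    struct pte *page_table;                      /* NULL once dissolved     */
    unsigned    backing_slot[PAGES_PER_SEGMENT]; /* disk location per page  */
};

/* stub: the real code paged the backing-store table out to its own slot */
static void page_out_backing_table(struct segment *seg) { (void)seg; }

/* called when real storage is constrained and the owning virtual memory
 * has been quiescent: dissolve tables for segments with no resident pages */
void dissolve_if_unreferenced(struct segment *seg)
{
    if (!seg->valid || seg->page_table == NULL)
        return;

    for (int i = 0; i < PAGES_PER_SEGMENT; i++)
        if (seg->page_table[i].valid)
            return;                        /* a page is resident; leave it  */

    free(seg->page_table);                 /* all PTEs invalid: dissolve    */
    seg->page_table = NULL;
    seg->valid = 0;                        /* next touch faults on segment  */
    page_out_backing_table(seg);           /* only table that must go out   */
}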

a normal time-sharing operation might have several hundred quiescent virtual memories belonging to users that were logged on but hadn't actually typed/done something in the last several minutes. The associated tables for quiescent virtual memories could represent 10-20% of available real storage. Bringing in the backing store table and reconstituting the page table when a virtual memory re-activated was just a very small additional percentage of "page overhead" compared with bringing the actual virtual memory pages into real storage.

strictly batch MVS operations would tend to have little or no quiescent virtual memories ... just batch job virtual memory appearing, running to completion and then dissolving.

Original MVS design point called for a nominal rate of 4-5 page faults per second (not only were quiescent virtual memories hardly an issue but quiescent virtual pages were also hardly an issue).

At least one of the large VM/370-based time-sharing services used the page'able table strategy as part of cluster process migration ... i.e. all tables migrated to disk and then reconstituted on a different processor complex within the same (shared disk) cluster. Later they enhanced the support so that the tables could flow over communication links (allowing process migration between processor complexes that didn't have direct shared disk connectivity in common).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Linux IA-64 interrupts [was Re: Itanium benchmarks ...]

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
Newsgroups: comp.arch
Date: Mon, 29 Jan 2001 15:17:42 GMT
Anne & Lynn Wheeler writes:
i put page'able tables into VM/370 in the early '70s and it shipped with the rest of the resource manager product that i did in '76. i dynamically calculated when real storage was being constrained and dissolved the pagetable, invalidated the associated segment table

the strategy also involved not constituting a table at all until actually needed. when a virtual memory was initially formed ... none of the associated page tables &/or backing store tables were actually built until there was a page fault for the first virtual memory page in a segment (i.e. the corresponding segment table entry was just left flagged as invalid).
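
a companion sketch in C of the lazy-build side (hypothetical structures again, not the VM/370 code): the segment table entry is left invalid when the virtual memory is created, and the page table is only constituted on the first page fault that touches the segment.

#include <stdlib.h>

#define PAGES_PER_SEGMENT 16

struct pte { unsigned frame; int valid; };
struct segment { int valid; struct pte *page_table; };

/* segment-fault handler: build the table on first touch, all entries
 * invalid, then let normal page-fault handling continue */
struct pte *segment_fault(struct segment *seg)
{
    if (!seg->valid) {
        seg->page_table = calloc(PAGES_PER_SEGMENT, sizeof(struct pte));
        if (seg->page_table == NULL)
            return NULL;                   /* out of real storage           */
        seg->valid = 1;
    }
    return seg->page_table;
}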

work was initially done in the early '70s on the vm/370 base but didn't actually ship to customers until '76 (although it was distributed extensively internally ... there were a large number of internal installations, for instance the internal network was larger than the internet/arpanet until around '85 and the vast majority of the nodes in the internal network were vm/370).

random refs:

https://www.garlic.com/~lynn/94.html#52
https://www.garlic.com/~lynn/99.html#126
https://www.garlic.com/~lynn/99.html#180

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Linux IA-64 interrupts [was Re: Itanium benchmarks ...]

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
Newsgroups: comp.arch
Date: Mon, 29 Jan 2001 16:13:03 GMT
... and for really heavily constrained real storage ... it would switch to a mode where, if a virtual memory had any significant execution pause, all of the segment table entries would be flagged invalid, the real tables left in place ... but the page table header time-stamped. then not only would all tables for a quiescent virtual memory be de-constituted ... but "active" virtual memories would be scanned for individual segments that were quiescent (based on pseudo-LRU scanning of time-stamps in individual page table headers).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Linux IA-64 interrupts [was Re: Itanium benchmarks ...]

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
Newsgroups: comp.arch
Date: Mon, 29 Jan 2001 17:50:29 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Ah! MVS used a very similar technique, as far as I recall, probably derived from yours. It is clearly a neat solution to a serious problem. What is more, it does NOT have the problems that I was referring to, because there is no way that a single page can need activating at a time that its page table entry is paged out (as you point out) :-)

I would tend to call your technique swappable page tables, but all terminology is debatable.

The reports that I heard of wrt MVS were of an all singing, all dancing, fully and automatically pageable, page table system. I saw a description, but cannot be certain that the code was ever released (or even implemented, though I think it was). It sounded horrific.


POK had a difference of opinion at the time. besides ludlow and company doing the initial aos prototype using CCWTRANS from cp/67 and testing on 360/67 ... they also had a large performance modeling group that I interacted with from time-to-time. I tried to heavily indoctrinate them in LRU replacement algorithms ... but one of the things the modeling group came up with was that performance would be better if they selected non-changed pages for replacement prior to selecting changed pages (i.e. only having to invalidate a real memory slot w/o first having to copy it out to disk is more efficient, right?). Absolutely no talking them out of it.

It wasn't until about '79 that MVS realized that it was selecting highly used, shared system services pages for replacement before private, changed task data pages ... and corrected the "optimization" that the performance modeling group had come up with.
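
to make the point concrete, a toy sketch in C of the two selection policies (hypothetical, not the MVS or VM code): preferring non-changed frames saves a write at steal time, but it systematically victimizes shared, read-only pages (e.g. system services code) no matter how heavily they are referenced.

#include <stddef.h>

struct frame { int referenced; int changed; };

/* reference-bit (clock) selection: ignore the changed bit when choosing a
 * victim; the cost of writing a dirty page is paid only when that page
 * really is the least recently used. */
size_t select_victim(struct frame *frames, size_t n, size_t *clock_hand)
{
    for (;;) {
        size_t i = *clock_hand;
        *clock_hand = (*clock_hand + 1) % n;
        if (!frames[i].referenced)
            return i;                      /* not recently used: steal it   */
        frames[i].referenced = 0;          /* give it another trip around   */
    }
}

/* the "optimization" described above: take unreferenced *and* unchanged
 * frames first, falling back to changed frames only on a second pass */
size_t select_victim_prefer_clean(struct frame *frames, size_t n)
{
    for (int pass = 0; pass < 2; pass++)
        for (size_t i = 0; i < n; i++)
            if (!frames[i].referenced && (pass == 1 || !frames[i].changed))
                return i;
    return 0;                              /* everything referenced: punt   */
}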

About the same time, MVS discovered a number of other interesting anomalies in its handling of virtual memory. One was that when tasks went idle at certain times, it would flush all associated pages (including forced writes) and place the pages in an available pool. What they found was that under light to intermediate load it was always doing the page writes even tho under those load conditions the pages were always being "reclaimed" (i.e. the original task needed the pages before there was any reason to re-allocate the real storage to some other task). They determined that they could use some dynamic adaptive stuff to only do pre-writes of the pages when there was a high probability that the pages weren't going to be reclaimed.

The MVS group then approached me ... since it was such a significant performance discovery for MVS, it should also be possible to retrofit it to VM. I had to inform them that I had never not done things that way, i.e. the initial implementation that I had done 10 or so years previous as an undergraduate had already implemented pools and processes in that manner.

I also told them the tale of TSS/360 from that period ... TSS/360 had a sense of interactive tasks and batch tasks. Interactive tasks would have their virtual memory pages pre-staged from the 2311 disks to the 2301 fixed head "drum" prior to task activation. On leaving the interactive queue (either going inactive or transitioning to non-interactive), associated pages would be cleaned from the 2301 back to the 2311. This was in addition to forcing the pages into & out of real memory. Not only was TSS/360 doing the "fixed" algorithm real storage management used by MVS ... but it was also managing migration of pages to/from 2311<->2301 in a fixed manner ... w/o regard for any contention for the resource and/or any dynamic adaptive characteristics.

Some time I may relate the joke I played in the VM resource manager: because the MVS resource manager did something in a particular way, I had to implement something similar before I could ship to customers.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

FW: History Lesson

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FW: History Lesson
Newsgroups: bit.listserv.ibm-main
Date: Mon, 29 Jan 2001 16:25:00 GMT
Rick.Fochtman@BOTCC.COM (Rick Fochtman) writes:
Actually, in my experience, adding more programmers to a late project makes it even later. Generally takes too long to get new staff 'ramped up' into the project.

i once saw something about the intellectual productivity of collections of people

max(IQ0, IQ1, ..., IQn)

sum(IQ0, IQ1, ..., IQn)/n

min(IQ0, IQ1, ..., IQn)

min(IQ0, IQ1, ..., IQn)/n

i.e. some collections tend to have a productivity equivalent to that of the member with the lowest IQ divided by the number of members (i.e. it tends to zero).

i'm not sure if it was written up ... but management with "disasters" frequently were allocated additional (in some cases, large numbers of) people to finish ... which resulted in their empire growing and increasing their executive level (there frequently is this perverse method of rewarding poorly performing projects). management that consistently came in on time and under budget might be viewed as not having had to deal with hard problems ... as opposed to far exceeding requirements.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

HELP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: HELP
Newsgroups: alt.folklore.computers
Date: Mon, 29 Jan 2001 18:45:00 GMT
Tom Van Vleck writes:
Most 7090 card reading was done by a 1401 next to it, which could read column binary (optional feature) or BCD. Reading these decks onto tape for input to the 7090 was sometimes called "pre-store." Column binary format, for FMS anyway, was also usually 72 columns of binary data followed by serialization, and each card said where to load its data, so dropping the deck was not always a disaster. I wrote programs to handle entire 960-bit "card images" for the social scientists.

i got a summer job to implement the 1401 (709-front-end) unit record function on a 360/30. the 2540 reader/punch and 1403 printer were moved from the 1401 to the 360/30. It was a valuable learning exercise because I got to design my own device drivers, interrupt handlers, storage allocation, task manager, monitor, etc.

normal operation for the 360/30 was to do a read/feed/select-stacker combo i/o operation in a tight loop. BCD (non-binary) 709 decks were easily handled with 360 ebcdic. the problem was column binary ... i.e. all 12 rows/holes of a column were arranged into a pair of six-bit bytes .... which were invalid ebcdic punch combinations.

For the 709-frontend application, I had to do an (ebcdic) read (no-feed) of 80 columns into 80 bytes. If that got an error, I would reread the card with a column binary read (which on the 360 read the 80 columns into 160 bytes). When I got a successful read, i would do a separate feed/select-stacker.

2540 i/o operations (1 byte, 8-bit op code) ... from gx20-1703-7


read, feed, select stacker      S S D 0 0 0 1 0
read                            1 1 D 0 0 0 1 0
read, feed (1400-compatibility) 1 1 D 1 0 0 1 0
feed, select stacker            S S 1 0 0 0 1 1
PFR, punch, feed, select stack  S S D 0 1 0 0 1
punch, feed select stack        S S D 0 0 0 0 1

where "S S" bits selects stacker

0 0     stacker R1
0 1     stacker R2
1 1     stacker RP3   (middle shared stacker between punch & reader)

where "D" bit selects
0       EBCDIC
1       col. binary
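
a sketch of the read logic in C, using the command codes from the table above (this is just the control flow, not an actual channel program; issue_ccw() is a hypothetical stand-in for building the CCW, starting the I/O and checking ending status):

#include <stdint.h>

#define CMD_READ_NOFEED_EBCDIC  0xC2   /* 1 1 0 0 0 0 1 0: read, no feed, D=0 */
#define CMD_READ_NOFEED_BINARY  0xE2   /* 1 1 1 0 0 0 1 0: read, no feed, D=1 */
#define CMD_FEED_SELECT_R1      0x23   /* 0 0 1 0 0 0 1 1: feed, stacker R1   */

/* hypothetical stand-in, stubbed so the sketch is self-contained: pretend
 * every command ends cleanly; the real code examined channel/unit status */
static int issue_ccw(uint8_t cmd, void *buf, unsigned len)
{
    (void)cmd; (void)buf; (void)len;
    return 0;
}

/* read one card: EBCDIC first, falling back to column binary on an error,
 * and only then feed the card and select a stacker (R1 chosen arbitrarily).
 * Returns the number of data bytes (80 or 160), or -1 on a hard error. */
int read_card(uint8_t *buf)
{
    int len;

    if (issue_ccw(CMD_READ_NOFEED_EBCDIC, buf, 80) == 0)
        len = 80;                          /* ordinary BCD/EBCDIC card       */
    else if (issue_ccw(CMD_READ_NOFEED_BINARY, buf, 160) == 0)
        len = 160;                         /* column binary: 2 bytes/column  */
    else
        return -1;                         /* unrecoverable read error       */

    issue_ccw(CMD_FEED_SELECT_R1, buf, 0); /* card already read: feed/stack  */
    return len;
}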

The 2540 was a desk-sized box, about 40in(?) high, with the reader on the right side and the punch on the left side. In the middle were five bins that processed cards went into. The left two bins/stackers were exclusive to the punch. The right two bins/stackers were exclusive to the reader, and the middle bin/stacker could be selected by both the punch and reader.

I once did a student registration application using standard manila-colored cards (containing the student registration information to be read) and red-striped cards in the punch. A student registration card would be read (into the middle stacker) and some preliminary validation performed. If a problem was detected, a blank (red-striped) card was "punched" right behind it.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

First OS?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First OS?
Newsgroups: alt.folklore.computers
Date: Mon, 29 Jan 2001 23:20:09 GMT
QSB@QRM-QRN.net (Allodoxaphobia) writes:
Such software was/were also called "control programs" and "monitors". Redundantly, one O/S was called "Control Program Monitor".

in fact, CP/67 stands for control program/67 and CMS originally stood for Cambridge Monitor System (but was subsequently updated to be Conversational Monitor System).

there are probably other refs from Melinda's history

https://www.leeandmelindavarian.com/Melinda#VMHist

note however both CP/67 and CMS postdate os/360.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

HELP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: HELP
Newsgroups: alt.folklore.computers
Date: Mon, 29 Jan 2001 23:23:18 GMT
Anthony M Lenton writes:
Are you sure the 2540 was moved from the 1401. AIUI 2540 was released with S/360. The 1402 reader, similar to the 2540, was the normal reader on a 1401 but would not function on a 2821 controller which was necessary to attach 1403 & 2540 to a 360/30. -- Tony Lenton

not positive ... i was just a student and they let me have the 360/30 and the gear and told me it had come from the 1401. before i started ... they had been running the 30 in 1401 emulation mode (i.e. booting and running the 1401 "MPIO" program deck from the reader with the 30 in 1401 hardware emulation mode).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Linux IA-64 interrupts [was Re: Itanium benchmarks ...]

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
Newsgroups: comp.arch
Date: Mon, 29 Jan 2001 23:23:56 GMT
Paul Repacholi writes:
User, P0 and P1 tables were page/swappable. Parts of the kernel are also pageable, but only at lower CPU priorities.

the original cp/67 kernel was non-pageable. I had done a hack on cp/67 the summer i worked at BCS that divided the cp/67 kernel into two sections: the "fixed" portion and a non-fixed portion. The CP/67 kernel ran in real, non-relocate mode ... but the hack i put in added some code to the cp/67 linkage supervisor that would check the "to" address and, if it was above the line, do a "virtual->real" translate on the address. If the operation failed, then the paging supervisor was called to fetch the (kernel) page. Because the supervisor was running in non-virtual memory mode, the kernel code that was being paged had to be carefully segmented into 4k chunks (somewhat like OS/360 transient SVCs that had to fit in 2k chunks, i.e. i had previously done a lot of optimization effort on various OS releases, including transient SVCs ... and didn't see why I couldn't effectively add a similar capability to cp/67; i didn't have to deal with all the sys1.svclib PDS gorp tho).
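
a rough sketch in C of the linkage-supervisor check (hypothetical names, sizes and layout; the original was S/360 assembler inside the CP/67 kernel). The kernel runs in real, non-relocate mode, so the translation is done in software against the kernel's own page table, and a missing 4k kernel chunk is fetched before the branch is taken.

#include <stdint.h>

#define FIXED_LINE      0x40000u        /* end of resident kernel (assumed) */
#define PAGE_SIZE       0x1000u         /* pageable kernel chunks are 4k    */
#define PAGEABLE_PAGES  64              /* size of pageable area (assumed)  */

struct kpte { uint32_t real_frame; int resident; };

static struct kpte kernel_page_table[PAGEABLE_PAGES];

/* stub: the real code called the paging supervisor to read the 4k chunk
 * from disk and fill in real_frame */
static void page_in_kernel_page(struct kpte *pte)
{
    pte->resident = 1;
}

/* translate a kernel "to" address before transferring control to it */
void *kernel_call_target(uint32_t vaddr)
{
    if (vaddr < FIXED_LINE)
        return (void *)(uintptr_t)vaddr;     /* fixed kernel: already real  */

    struct kpte *pte = &kernel_page_table[(vaddr - FIXED_LINE) / PAGE_SIZE];
    if (!pte->resident)
        page_in_kernel_page(pte);            /* fetch the pageable chunk    */

    /* software virtual->real translation for the pageable portion */
    return (void *)(uintptr_t)(pte->real_frame | (vaddr & (PAGE_SIZE - 1u)));
}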

While this hack never shipped in the standard CP/67 product, it was incorporated into the initial transition from cp/67 to vm/370 (shipping in the initial release of vm/370). Basically, there was a set of "kernel" virtual memory tables built for locating pageable kernel chunks, even tho the tables were never actually used to execute in virtual memory mode. In the original CP/67 hack, I had also copied the kernel symbol table into the kernel pageable space ... something that didn't survive in the transition to VM/370 release one ... but I later put it back in as part of some debugging tool activity.

2-3 years later, when I did the pageable tables for each virtual memory ... I built a similar abbreviated, miniature virtual memory table for each virtual address space ... and used that to map the main tables for purposes of managing the tables on disk and moving them back & forth between real storage and disk. Virtual address tables were copied between the miniature virtual memory (that was used to move them into & out of real storage) and areas of fixed storage. Fixed storage only needed to contain the actual tables in active use and so got quite a bit of compression (compared to leaving them laid out in their miniature address space).

somewhat unrelated was that there was an abbreviated pseudo task structure built to go along with the "kernel" virtual memory tables for kernel "paging". As part of the resource manager effort in the early '70s (some of which was just porting CP/67 work to VM/370 that hadn't shipped in the standard product), I also totally rebuilt the kernel serialization primitives ... part of this involved re-assigning "hung" activity to the system pseudo task in order to totally eliminate the VM/370 equivalent of zombie tasks. Later I also made use of it when I redid the I/O supervisor to make it bulletproof for the disk engineering labs.

random refs:
https://www.garlic.com/~lynn/2001b.html#8
https://www.garlic.com/~lynn/2000f.html#66
https://www.garlic.com/~lynn/99.html#32
https://www.garlic.com/~lynn/99.html#130
https://www.garlic.com/~lynn/2000.html#75
https://www.garlic.com/~lynn/93.html#0
https://www.garlic.com/~lynn/94.html#2
https://www.garlic.com/~lynn/99.html#198
https://www.garlic.com/~lynn/2000c.html#69
https://www.garlic.com/~lynn/95.html#3
https://www.garlic.com/~lynn/96.html#18
https://www.garlic.com/~lynn/97.html#15
https://www.garlic.com/~lynn/99.html#31
https://www.garlic.com/~lynn/94.html#18

with regard to the kernel symbol table ... both CP/67 and VM/370 used a modified version of the 360 BPS loader to initialize a kernel in memory. The loader then transferred control to some specialized code that wrote the memory image to disk. Normal machine booting reread this memory image back from disk. The BPS loader builds the full symbol table as part of getting everything correctly initialized. A little-known fact was that as part of the BPS loader transferring control to the application, a pointer to the symbol table & count of entries was passed in registers. All that was needed was to copy the full BPS loader symbol table to the end of the kernel and write it to disk with the rest of the kernel (there was little or no ordering of the entries in the BPS symbol table, so a little sorting was done as part of the copy).
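
a small sketch in C of that copy step (hypothetical types; the original was S/360 assembler, and the BPS loader passed the table address and entry count in registers). The caller is assumed to have left room at the end of the kernel image buffer.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct loader_sym {
    char     name[8];                 /* blank-padded external symbol name  */
    uint32_t addr;                    /* resolved load address              */
};

static int by_name(const void *a, const void *b)
{
    return memcmp(((const struct loader_sym *)a)->name,
                  ((const struct loader_sym *)b)->name, 8);
}

/* append a sorted copy of the loader's symbol table to the end of the
 * kernel image about to be written to disk; returns the new image length */
size_t append_symbol_table(uint8_t *kernel_image, size_t kernel_len,
                           const struct loader_sym *syms, size_t nsyms)
{
    struct loader_sym *copy = (struct loader_sym *)(kernel_image + kernel_len);
    memcpy(copy, syms, nsyms * sizeof *syms);
    qsort(copy, nsyms, sizeof *syms, by_name);   /* "a little sorting" done
                                                    as part of the copy     */
    return kernel_len + nsyms * sizeof *syms;
}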

random refs:
https://www.garlic.com/~lynn/94.html#11
https://www.garlic.com/~lynn/99.html#135
https://www.garlic.com/~lynn/2000b.html#32
https://www.garlic.com/~lynn/2001.html#8
https://www.garlic.com/~lynn/2001.html#14

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

what is interrupt mask register?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what is interrupt mask register?
Newsgroups: comp.arch
Date: Mon, 29 Jan 2001 23:35:03 GMT
handleym@ricochet.net (Maynard Handley) writes:
What sort of signals does the system (as a whole) provide to users that these bad things are happening? I suspect (based simply on standard CS and engineer mentality) that the problem is considered solved if a report goes to the OS which logs it somewhere. That's fine if someone is reading the logs, which may or may not occur. Again, the "engineering" response to this is to claim "operator error", but that seems to me an unwillingness to actually solve the problem.

typically on the mainframe systems there is a huge body of recovery & retry logic for most every anomalous condition ... in addition to the error going into logrec/erep for recording.

then there are detailed reports that go to all sorts of people based on the recording ... including stuff about soft errors and predictive failure analysis for pre-emptive service.

then the mainframe world has reporting services that provide industry wide reporting across the whole install base.

I have this story (see ref) where somebody watching the industry-wide reporting noticed that there had been something like 15-20 total errors of a particular kind during a period of a year (this was the aggregate total of errors across all installed, operating machines over a period of a year, not an avg. per machine per interval ... but the sum across all machines for the whole year) ... and they had only been expecting something like 3-5 such errors across all machines for a period of a whole year (imagine recording every SCSI bus error that might occur anywhere in the world for every machine with a SCSI bus and then generating industry-wide reports on every such machine in the world).

random ref:
https://www.garlic.com/~lynn/2000.html#21
https://www.garlic.com/~lynn/2001.html#22

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

what is interrupt mask register?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what is interrupt mask register?
Newsgroups: comp.arch
Date: Mon, 29 Jan 2001 23:54:29 GMT
handleym@ricochet.net (Maynard Handley) writes:
What sort of signals does the system (as a whole) provide to users that these bad things are happening? I suspect (based simply on standard CS and engineer mentality) that the problem is considered solved if a report goes to the OS which logs it somewhere. That's fine if someone is reading the logs, which may or may not occur. Again, the "engineering" response to this is to claim "operator error", but that seems to me an unwillingness to actually solve the problem.

stuff nominally doesn't percolate up to the user unless it is impossible to mask the error/condition (retries, etc).

on the other hand, for mainframes with a "batch" heritage there tends to be a huge infrastructure for application programming traps; scenarios where evolving application programs started out accepting some default system abnormal termination (with appropriate error messages) ... but for some class of applications this proved inadequate and they needed application-implementation-specific error masking. for this they would specify a trap and do application-specific recovery.

This methodology tended to be somewhat heuristic and evolved along with "programmable operator" recovery heuristics (i.e. installations installing automated operator message trapping and specifying installation-specific recovery/masking methodologies).

A simple example I've used within the past couple of years is disk space exhaustion by a SORT process in a complicated series of tasks for a complex commercial application. Various UNIXes didn't even propagate the error indication up to the sort application level; as a result, the sorted file was truncated with no indication to subsequent steps. Many commercial mainframe shops specify a standard system recovery process in the case of disk space exhaustion ... and, as a last resort, if all available processes fail, a definitive indication is propagated.
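
for comparison, a small C illustration of the UNIX-side failure mode (not taken from any particular SORT implementation): unless every write(), fsync() and close() return code is checked, disk space exhaustion quietly produces a truncated output file that downstream steps will happily read.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* write a fully checked output file; returns 0 on success, -1 on any
 * error (including disk full), so the caller can fail the step */
int write_step_output(const char *path, const char *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    size_t done = 0;
    while (done < len) {
        ssize_t n = write(fd, buf + done, len - done);
        if (n < 0) {                        /* ENOSPC shows up here ...     */
            fprintf(stderr, "write %s: %s\n", path, strerror(errno));
            close(fd);
            return -1;
        }
        done += (size_t)n;
    }

    int err = 0;                            /* ... or here, when delayed    */
    if (fsync(fd) < 0)                      /* allocation defers the error  */
        err = errno;
    if (close(fd) < 0 && err == 0)
        err = errno;
    if (err) {
        fprintf(stderr, "finish %s: %s\n", path, strerror(err));
        return -1;
    }
    return 0;
}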

somewhat orthogonal ... but comment about effectiveness of programmable operator in business critical 7x24 application:

https://www.garlic.com/~lynn/99.html#71

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

HELP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: HELP
Newsgroups: alt.folklore.computers
Date: Wed, 31 Jan 2001 00:59:44 GMT
ehrice@his.com (Edward Rice) writes:
What was the target operating environment? Why did you have to go all the way down to device drivers and interrupt handlers -- even BOS would do some of that for you and all the higher-up systems would do it all.

the 360/30 had PCP (I think release 3 thru 6) running and i used it to assemble and do early testing of the program. however, they wanted to overlap card->tape as well as overlap tape->printer/punch. so that drove things to "basic" mode rather than "queued" mode and doing wait/post on ECBs. getting the I/O program with enuf overlap then drove it down to building my own CCWs and doing EXCP/SVC0. The core program was about a box of (2000) cards, statements, comments, etc. Doing any of the standard DD stuff required having a DCB macro. Turns out the core program assembled in about 10 minutes ... but each DCB macro took an extra six minutes of elapsed time to assemble (and i needed five DCB macros: two tapes, one printer, one reader, one punch; you could watch the front panel lights and recognize when it was doing a DCB macro), which added 30 minutes to the elapsed assembly time (stretching it to 40 minutes).

Since I was already doing buffering and multi-threaded logic with wait/post ECBs ... along with building the CCWs for EXCP/SVC 0 ... if I put in a little direct hardware support and used the BPS loader with the resulting text deck ... elapsed assembly time was cut from 40 minutes to 10 minutes (I could reasonably get three turn-arounds per hr instead of one). Also, with the operating system out of the way ... I could play games with the full 64kbytes of memory for buffers, trying to keep tapes, printer, punch, and reader running at full speed. I also had somewhat better control over error retry strategies; I can't say that the system of the time was all that robust.

I would get the 360/30 from 8am saturday until 8am monday for testing. Doing the additional direct hardware support took a week or two of debugging during the week ... but it could double testing turn-around.

It got so that for the 48hrs on the weekend ... I just lived machine language and punch cards. A number of times, recognizing simple program patches ... I could fan the assembler output punch cards, recognize the appropriate card, put the card into an 029 and DUP a new card and multi-punch the new "binary" patch (this could be done in a couple of minutes rather than the 10+ minutes it took to reboot PCP & re-assemble). Since I wasn't sleeping those 48hrs, I couldn't claim to dream in machine language ... but during the weekend I lived machine language (not english) and punch card codes (I didn't need the printing on the top of a card to read the card).

Oh, boy -- let's do a code review of a 35-year-old program that none of us has a copy of!

Lynn, why did you Read EBCDIC and then, if called for, Read Binary? Why not simply issue a Read Binary and then, if the 7-9 punch wasn't there, put the bits together into EBCDIC? That should have been trivial, since EBCDIC was designed with card-punches in mind.


it worked ... and most of the stuff was bcd ... very little was binary ... and it got so I could do it at full reader speed ... after I got the elastic buffering between the reader and the tape drives down.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

HELP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: HELP
Newsgroups: alt.folklore.computers
Date: 30 Jan 2001 22:23:45 -0700
ehrice@his.com (Edward Rice) writes:
What was the target operating environment? Why did you have to go all the way down to device drivers and interrupt handlers -- even BOS would do some of that for you and all the higher-up systems would do it all.

the other (excuse?) was that I used what they said was available. I had just finished my first computer class, introduction to fortran, the spring semester, and they gave me this as a summer job. The fortran class used 709 ibsys ... you punched cards and were told what an ibsys control card looked like. You might have 30-40 cards in your fortran program, a rubber band around it, and you turned it in at an outside window, came back later & got your cards back with some paper wrapped around them with another rubber band.

I had to learn 360 machine language, 360 assembler, operating system macros, load & execute, OS JCL, hardware I/O operations, tapes, punches, printers, how to operate the machine, PCP system services (like bsam, qsam, wait/post), etc. in a couple days ... with just standard OS/360 documentation and no direction (other than the fact that the tapes i generated for the 709 had to be the same as what 1401 MPIO generated).

Along the way, I also picked up how to clean tape drives, how to take the reader & punch apart, clean them and put them back together again, power cycle the machine, etc. Also, when things wouldn't power sequence correctly ... go around to each of the controllers, put them into CE-mode ... re-power the machine ... and then bring up each controller individually. Say it is midnight sunday and you've been at it for 40 hrs straight and the machine stops working ... and the university isn't paying for off-shift maint. coverage. There is another 8 hrs of dedicated time ... so you start reading the FE manuals and trying various combinations of things to get it back into usable condition.

The university's normal operation finished production at 8am saturday and things were normally shut down. For the summer, they gave me a key and I got the machine room for 48 hrs (until 8am monday); typically not only was I the only person in the machine room, but frequently, for most of the time, the only person in the building. I typically brown bagged some stuff and worked 48hrs straight thru. This even continued into the fall semester, but it was sometimes hard to make monday classes after having already been up for 48hrs straight.

One of the reasons that I was duping cards & multi-punching binary program patches was that I didn't get a manual on the loader and learn about REP cards until 8 weeks or so into the job (but by that time I could read & type (multi-punch) binary cards almost as fast as generating REP cards).

It probably wasn't until near the end of the summer that anybody from ibm bothered to even mention that there was something else besides OS/PCP (and the BPS loader deck that I had accidentally stumbled across).

Starting around the end of the summer, IBM started assigning brand-new system engineers to the university (for tutoring?). They would stay around for 3-4 months ... and then we would get a new one or two.

the wage was the same as the one I had washing dishes the previous year (student jobs are like that) ... but the work was somewhat more interesting.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

So long, comp.arch

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: So long, comp.arch
Newsgroups: comp.arch
Date: Wed, 31 Jan 2001 16:00:10 GMT
mash@mash.engr.sgi.com (John R. Mashey) writes:
I've been winding down @ SGI since August, and 1/31/01 is my last day. I'm off to be a partner @ Sensei Partners, a brand-new early stage Venture Capital firm ... I've enjoyed the discussions over the last 15 years or so, but I expect to be helping build companies rather than computers, so I bid you all adieu and good luck. I will still be mash, but @heymash.com.

not too long ago, I almost reposted as a reference

From: mash@mips.COM (John Mashey)
Newsgroups: comp.arch
Subject: Re: R6000 vs BIT SPARC
Date: 29 Nov 89 03:14:57 GMT
Reply-To: mash@mips.COM (John Mashey)
Organization: MIPS Computer Systems, Inc.

In article rathmann@eclipse.Stanford.EDU (Peter K. Rathmann) writes:
>
> Ok, R6000 vs BIT SPARC is too early to call, but other comparisons
> might be even more interesting.  The R6000 based system was billed in
> the popular press as a mainframe, so does anyone have any numbers
> comparing it to more typical mainframes like the IBM 3090 or big
> Amdahl machines?
>

Please note: it's called an "Enterprise Server" rather than a mainframe;
the 6280 brochure doesn't ever say "mainframe", on purpose.
The claim is that the CPU performance is mainframe-uniprocessor class,
and that it's got pretty reasonable I/O.

According to a posting of Mike Taylor's a while back,
a few relevant numbers would be, for the fastest scalar mainframe for
which I have numbers:
                      Amdahl 5990 CPU         RC6280
Cycle time            10ns                    15ns (@67Mhz)
LLNL 64, Harmonic     11.9 MFlops             8.8 MFLOPS
LINPACK, 64, FORT     14.5 MFLOPS             10.3 MFLOPS
LINPACK, 32, FORT     16.8 MFLOPS             13.9 MFLOPS
Dhrystones (1.1)      90Kish?                 109K, -O3

... snip ... quite long winded

-john mashey  DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:         {ames,decwrl,prls,pyramid}!mips!mash  OR  mash@mips.com
DDD:          408-991-0253 or 408-720-1700, x253
USPS:         MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

z900 and Virtual Machine Theory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z900 and Virtual Machine Theory
Newsgroups: bit.listserv.ibm-main
Date: Thu, 01 Feb 2001 00:08:58 GMT
d10jhm1@US.IBM.COM (Jim Mulder) writes:
Fear not. They understood, as shown below. Keep in mind that if we consider the Virtual Machine Concept for (S/360 and its descendants) to have begun in 1967 (CP67), and IEF (SIE) to have come out with XA around 1982, then the era of the software VM implementation was 15 years, and the IEF era has already been 19 years.

the part with CP/67 was with the model 360/67 ... but it had been done before with cp/40 on a 360/40 dating back to 1965. the 360/40 had a custom hardware modification for virtual memory ... but the basic method of providing virtual machines was the native 360 architecture (although virtual memory assisted in address space isolation).

there was an early virtual machine implemented on (I believe) a 155 prior to the availability of 370 virtual memory that used a base/bound address relocation mechanism (required contiguously allocated storage) that provided address space isolation (i think the feature was part of something to do with running DOS in a region under MVT? ... been awhile). It was done by a Seattle IBM SE on the Boeing account.

The issue in the 360 & 370 era was a very clear & strict delineation between problem mode & supervisor mode (with no supervisor state information leakage into problem state) and a single operation that provided for transition between

1) supervisor state & problem state
2) hypervisor address space and virtual machine address space

along with the ability of the hypervisor to address the virtual machine address space.

For cp/67 and vm/370 the method used was to run the hypervisor in non-translate (real) addressing mode and rely on a PSW bit to switch between relocate mode and non-relocate mode and, at the same time, on a PSW bit to switch between supervisor state and problem state ... with a clean architecture that didn't allow leakage of supervisor state information into problem state.

ECPS (virtual machine microcode assist available in the mid-70s) introduced with the 138/148 provided for virtual timer support (i.e. timers that ran at virtual machine speed rather than wall-clock speed).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

perceived forced conversion from cp/m to ms-dos in late 80's

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: perceived forced conversion from cp/m to ms-dos in late 80's
Newsgroups: alt.folklore.computers
Date: Thu, 01 Feb 2001 18:05:20 GMT
B W Spoor writes:
That's why I _still_ program in Cobol, despite it not being fashionable on small systems.

Never came across REXX, but I have used Pascal in the past although I never really liked it.


in the past decade I had a job to port a 50,000-line pascal application from one platform to another. my impression was that the majority of pascal implementations haven't been targeted at large, complex applications; most of them are targeted much more as teaching tools &/or at simple PC applications (i.e. I had done quite a bit of programming in the early '80s with turbo pascal on the ibm/pc and got into some mainframe pascal in the mid-80s). For one platform I was constantly submitting bug reports and talking to the pascal support team ... in what I thot was going to be a simple & straight-forward port.

I first used REXX in the very late '70s, when it was still "REX", before the name change and release as a product.

randoms refs:

https://www.garlic.com/~lynn/94.html#11
https://www.garlic.com/~lynn/94.html#22
https://www.garlic.com/~lynn/2000b.html#29
https://www.garlic.com/~lynn/2000c.html#41

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

occupational nightmares [was: Now early Arpanet security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: occupational nightmares [was:  Now early Arpanet security
Newsgroups: alt.folklore.computers
Date: Thu, 01 Feb 2001 19:00:54 GMT
jata@aepiax.net (Julian Thomas) writes:
No - you have it wrong - it was 'pretty little Polly Nomial' who was chased by curly Pi.

i happened to run across the file on a floppy some time ago

IMPURE MATHEMATICS In which it is related how Curly Pi accosted little Polly Nomial.

... snip ...

The moral of our sad story is this:

"If you want too keep your expressions convergent, never allow them a single degree of freedom..."

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

z900 and Virtual Machine Theory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z900 and Virtual Machine Theory
Newsgroups: bit.listserv.ibm-main
Date: Thu, 01 Feb 2001 19:29:06 GMT
edjaffe@PHOENIXSOFTWARE.COM (Edward E. Jaffe) writes:
CP's handling of STIDP is not a consideration. That is purely a software implementation issue. What's being discussed here is the ability of a host control program to obscure all hints at the true machine state during guest operation (without IEF of course). That's strictly a hardware issue. That capability was lost in ESA/370. Prior to that, the Virtual Machine Concept was still in-tact, no matter what CP returned for STIDP.

an issue is whether anything in a virtual machine can directly access privileged machine state in a way that would leak unwanted information into the virtual machine &/or change privileged state info in a way that allows it to affect the supervisor and/or other virtual machines.

the ability to tell that the machine is running in a virtual machine as opposed to a real machine has been around since (at least) CP/67 release 3, with various x'83'/diagnose support.

the logic was that the x'83' instruction implementation was defined in the architecture as being model dependent; effectively, a 360 "virtual machine" model was defined and the operation of various x'83' features was defined specific to the cp/67 (and later vm/370) virtual machine model.

just accessing bits in the PSW isn't an issue ... the 24-bit address PSW bits with ILC/CC/PM were available as part of BAL/BALR and could be set with SPM

http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/7%2e5%2e72

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

John Mashey's greatest hits

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John Mashey's greatest hits
Newsgroups: comp.arch
Date: Fri, 02 Feb 2001 17:02:50 GMT
jfc@mit.edu (John F Carr) writes:
Based on my experience, IBM policy is that it is OK to leak information to and through trade magazines. About 10 years ago I asked the IBM representative at MIT Project Athena about their hot, new, unannounced RISC workstation. He told me he couldn't leak the information to me directly but he could give me a copy of the magazine on his desk which had a good article on the RS/6000.

was it charlie (CAS) ... i.e. of compare&swap fame?

there was this joke about competitive analysis and leaked information ... that the company had so many (in some cases, competing) projects going on ... nobody was really sure which ones might ever actually make it to first customer ship (FCS) ... any leaked information was more likely to severely confuse any industry watchers. At one time, there were supposedly 450+ different individuals required to sign off on any product announcement (and any one could veto, which would then require escalation to override).

leaking wasn't ok ... it was just that the company was so big and so many people were watching that there were bound to be some leaks, which then got written up and published (one source being things like references to various flights, i.e. the austin/san jose nonstops, and cleaners being paid for stuff left behind on the plane).

minor historical note ... in the '80s one of my kids was at UT and working for AA in austin ... and submitted the "suggestion" for the first(?) austin/sanjose nonstop.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

[OT] How old are us guys? (was: First OS?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [OT] How old are us guys? (was: First OS?)
Newsgroups: alt.folklore.computers
Date: Fri, 02 Feb 2001 17:35:10 GMT
Jim Esler writes:
I was the dishwasher until I went to college. --

i was a dishwasher even after i went to college (the dining hall opened at 6am, so I had to get up at 5am to be on duty) ...

random ref:

https://www.garlic.com/~lynn/2001b.html#27

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

John Mashey's greatest hits

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John Mashey's greatest hits
Newsgroups: comp.arch
Date: Fri, 02 Feb 2001 20:56:44 GMT
"Stephen Fuld" writes:
OK, I'll bite! In the late 1960s-early 1970s I worked on a "true" SMP Univac 1108. The multiple processors (OK, we only had two and the max supported was 3) shared acces to the memory and were run by a single copy of the OS. Each of its processors was treated identically. Each could do I/O. At the time, the IBM shop next door had "clusters" of processors that shared peripherals but not memory (Attached support processors - ASP) and I believe there was a some model of the 360 that supported "attached proccesors" which were sort of like the predecessors of numeric co-processors (no I/O capabilities). What am I missing?

The 360/65 came in a two-processor SMP version, sharing common memory and with both processors able to do I/O. The 360/65 caveat was that the I/O subsystem hardware was not shared. The configuration could be cleaved and each processor run as a totally independent operation with half the memory, its own power supply, i/o subsystem, etc. In order to "share" I/O ... the implementation relied on the same hardware that was used to create "clusters" of 360s ... i.e. 2-4 "tailed" devices and/or controllers. For devices that were connected to only a single channel, an I/O initiation might have to be queued for the one processor with connectivity to the device.

The 1966 enhancement to the 360/65 ... the 360/67 ... had a "channel controller" for the multiprocessor environment (designed for 2-4 processors, but I know of no 4-processor configurations that were built and of only one three-processor configuration). The channel controller provided all processors access to all I/O channels (somewhat minimizing the need to have each processor with its own I/O channel to each controller/device). The channel controller was also a performance boost, implementing independent paths to memory (a single-processor configuration had single-ported memory where there could be simultaneous processor & I/O contention for the memory bus).

SMP synchronization was supported by the standard 360 instruction TS ... test and set for atomic operation.

One drawback was that the OS/360 implementation of SMP support on the 360/65 had a spin-lock to enter the kernel.
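
for illustration only (a sketch of mine, not the OS/360 code): the kernel-entry serialization just described is a test-and-set spin lock; the C11 atomic_flag below stands in for the 360 TS instruction.

/* sketch: a test-and-set style spin lock used to serialize kernel entry */
#include <stdatomic.h>

static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;

static void kernel_enter(void)
{
    /* like TS: atomically set the lock byte; spin while it was already set */
    while (atomic_flag_test_and_set_explicit(&kernel_lock, memory_order_acquire))
        ;   /* spin */
}

static void kernel_exit(void)
{
    /* release: clear the lock byte so another processor can enter */
    atomic_flag_clear_explicit(&kernel_lock, memory_order_release);
}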

Both TSS/360 (shipped code) for 360/67 smp and cp/67 (available, but not standard shipped code) had much finer-grain locking implementations. I believe MIT Lincoln Labs had one of the first 360/67 two-processor SMPs. The only 3-processor 360/67 that I'm aware of was for Lockheed (I believe for the manned orbital lab, approx. '68).

The fine-grain locking work on CP/67 by CAS ... led to the definition of the compare&swap instruction (the mnemonic chosen because it was charlie's initials). CAS became part of the standard 370 architecture ... but a prereq was that a programming model had to first be devised for CAS with applicability to non-SMP configurations (i.e. the stuff that was originally generated for the CAS programming notes showing use of atomic CAS for threaded code, enabled for interrupts, running in single-processor & SMP configurations).
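
a hedged sketch of that programming-notes pattern (mine, in C, not the original notes): read the old value, compute the new one, and retry the compare-and-swap until no other thread (or interrupt-enabled path) has changed the word in between.

/* sketch of the compare-and-swap retry pattern: usable from threaded code
   enabled for interrupts, on uniprocessor or SMP, without a lock */
#include <stdatomic.h>

static _Atomic unsigned long counter;

static void counter_increment(void)
{
    unsigned long old = atomic_load(&counter);
    /* on failure the current value is reloaded into 'old' (much as CS reloads
       the first operand register), so just retry with the fresh value */
    while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
        ;   /* retry */
}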

random refs:
https://www.garlic.com/~lynn/93.html#14
https://www.garlic.com/~lynn/94.html#02
https://www.garlic.com/~lynn/94.html#28
https://www.garlic.com/~lynn/94.html#45
https://www.garlic.com/~lynn/98.html#8
https://www.garlic.com/~lynn/98.html#16
https://www.garlic.com/~lynn/99.html#88
https://www.garlic.com/~lynn/99.html#89
https://www.garlic.com/~lynn/99.html#176
https://www.garlic.com/~lynn/99.html#203

370s continued the 360 SMP design ... two processors, each with its own independent I/O capability. In the early '80s IBM introduced the 3081 SMP ... it wasn't a traditional IBM two-processor configuration because it was packaged in the same box and shared some common components, which precluded it being operated as two independent processors. It was possible to combine two 3081s into a four-processor 3084 (where it was possible to decouple the two 3081s and operate them independently).

for dates of some 360 models:


https://web.archive.org/web/20010218005108/http://www.isham-research.freeserve.co.uk/chrono.txt

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

[OT] Currency controls (was: First OS?)

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [OT] Currency controls (was:  First OS?)
Newsgroups: alt.folklore.computers
Date: Fri, 02 Feb 2001 21:08:38 GMT
javnews@earthlink.net (John Varela) writes:
On Fri, 2 Feb 2001 15:30:37, Lars Poulsen wrote:

Last weekend I listened to a former Pan-Am captain who blamed the failure of Pan-Am on unfair competition from national airlines [1], including currency controls. He told the group that France limits (present tense!) the amount of money a resident can take out of the country, and that the purchase of a ticket on a foreign airline counts against the currency limit while a ticket on Air France does not. I found that hard to believe of any western European country at any time in the last 30 or 40 years, but had no facts to argue with him.

So: are there currency controls in effect in western Europe today?

[1] I thought Pan-Am's principal problem was that they were until near the end barred from carrying domestic traffic, while domestic airlines were given overseas routes. Thus one could fly, say, American from Topeka to Frankfurt, but to fly Pan-Am one had to change terminals at JFK.


in the early 80s, I used to fly the monday night non-stop red-eye from SFO to JFK at least once a month (and come back friday afternoon). PanAm, TWA, and American had flights all at about the same time. I don't know what started it ... but PanAm then sold the "pacific" to United (including many of the planes ... PanAm 747 "clippers" got repainted in United colors). The press releases at the time were that PanAm felt it could be more profitable if it concentrated on expanding the east-coast/European traffic (and got out of the pacific & the west coast). The crash sort of brought all of that to a halt.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

John Mashey's greatest hits

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John Mashey's greatest hits
Newsgroups: comp.arch
Date: Fri, 02 Feb 2001 21:42:55 GMT
"Stephen Fuld" writes:
You're right, since the 1108 did not have cache, some of the problems went away. Univac's first cached CPU was the 1100/80 in 1978 (?). It could also be SMP'd. They solved the cache coherency problem by having the cache in a separate cabinet, with interfaces for all the CPUs and arbitration, etc. Since there was only one copy of the data in cache, no coherency problems :-).

the higher end 370 machines (155, 165, 158, 168) and the 303x machines in the 70s had caches, and for SMP the cache synchronization tended to run at a higher rate than the processor ... with the processor slowed down by 15% (compared to a uniprocessor) to allow for cache synchronization and cross-cache invalidation signals (i.e. the raw hardware speed of a two-processor SMP was calculated at 2*.85 of a single processor, not taking into account any actual cache line churn and software overhead).

In the early '80s, the 3081 was (initially) supposed to come only in a two-processor configuration. However, TPF (the follow-on to ACP, a specialized IBM operating system used by the airline industry and some financial operations) didn't have multiprocessor support and needed flat-out performance. A slightly stripped down 3081, called the 3083, was eventually offered with a single processor running 15% faster than a standard 3081 processor (the 3081 had introduced a new "IBM" SMP concept because it wasn't a standard "IBM" two-processor configuration, i.e. it couldn't be cleaved into two independently operating single processors, which was characteristic of all the standard IBM 360, 370, & 303x SMPs).

One of the features for the 3033 ... which had an SMP impact ... was the re-appearance of the IPTE instruction (invalidate page table entry). While IPTE, ISTE, ISTO were in the original 370 architecture ... they were dropped because one of the original models in the 370 line had difficulty with the implementation (so they were dropped from all 370 models). 370 virtual memory management (as shipped) had to get by with the PTLB instruction (purge look-aside buffer). The re-appearance of the IPTE instruction in the 3033 was defined such that not only did it turn on the invalid bit in the page table entry and selectively invalidate any associated entry in the hardware look-aside buffer ... it would do so for all TLBs in an SMP complex.

The issues going from the standard two-processor to the 3084 four-processor were significant for performance because rather than getting cache-invalidate signals from one other processor, it was now possible to get cache invalidates from three other processors.

Some of the operating system activity in the 3081 time-frame was system enhancements to make sure nearly all kernel storage allocation was done on cache-line boundaries and in cache-line increments (this at least minimized independent processors operating on different data that happened to be co-located in the same cache line).

I always believed that the 370 SMP issues with cross-cache invalidates were one of the main reasons behind the 801 activities steadfastly not supporting cache synchronization. It wasn't until the break for the 601 with Motorola, et al, that cache synchronization efforts appeared. The only RIOS implementation was OAK, using the RIOS 0.9 chip and four processors in a box. Shared memory was achieved by flagging segments as either shareable or not-shareable. Bus activities involving memory flagged as shareable bypassed the cache (there was no associated memory line caching).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Why SMP at all anymore?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why SMP at all anymore?
Newsgroups: comp.arch
Date: Fri, 02 Feb 2001 22:01:50 GMT
jonah@dlp.CS.Berkeley.EDU (Jeff Anderson-Lee) writes:
It struck me as I was reading through posts on old SMP designs that current processors are potentially bus/memory latency/bandwidth limited. So what is the benefit of doing SMP anymore? Seemingly it just puts more pressure on an already scarce resource. But people keep making SMP designs (at least dual/quad P-III/alpha boards).

So what am I missing? How does the cache help enough to avoid problems? Or are most workloads just not bandwidth limited yet? Seemingly dual processors could help reduce context save/restore in come cases where a few jobs are passing data through a pipe or shared memory, but I'd guess that SMP would be hitting the wall soon.

[And no, this isn't a homework troll. :-)]


note that there is a difference between memory-latency limited and memory-bandwidth limited. both SMP and out-of-order execution are techniques for getting more out of a memory subsystem where the overall system is memory-latency limited (but there still remains available memory bandwidth).

the early '70s saw some work on a "3081-like" 360/195 SMP ... i.e. not everything in the complex was doubled. The problem then was pipeline drain, typically because of branch stall. The solution was to add dual i-stream support, with dual instruction counters, a dual set of registers, etc., and pipeline operation with a one-bit tag indicating the i-stream. There was a lower probability of the 195 pipeline draining (because of branches) with two i-streams feeding it (instead of just one).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

John Mashey's greatest hits

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John Mashey's greatest hits
Newsgroups: comp.arch
Date: Sat, 03 Feb 2001 01:18:11 GMT
Anne & Lynn Wheeler writes:
the higher end 370 machines (155, 165, 158, 168) and the 303x machines in the 70s had cache and SMP tended to run the cache synchronization at higher rate that the processor ... and slow down the processor by 15% (compared to uniprocessor) to allow for cache synchronization and cross-cache invalidation signals (i.e. raw hardware speed of a two-processor SMP was calculated at 2.85 of a single processor, not taken into account any actual cache line churn and software overhead).

i should say that they started out more like a 10% penalty for cache synchronization (i.e. the hardware of a two-processor was closer to 1.8 times the performance of a single uniprocessor rather than 1.7 times).

all of the 360s, 370s, etc. had (very) strong memory consistency. I was fortunate to be the rep to some SCI activities early on ... scalable coherent interface, with weak(er) memory consistency and directory-based cache management ... used (at least) by DG, Convex (exemplar), and Sequent for their scalable processor designs. somewhere in the archives i probably have SCI drafts (in addition to the SCI definition for asynchronous operation implementing memory consistency, there are also definitions for implementing low-latency asynchronous i/o operations).

random refs:
http://sdcd.gsfc.nasa.gov/ESS/eazydir/inhouse/mobarry/mobarry_sc96/index.html
https://web.archive.org/web/20010418013541/http://sdcd.gsfc.nasa.gov/ESS/eazydir/inhouse/mobarry/mobarry_sc96/index.html
http://www.hcs.ufl.edu/carrier/
http://www.dg.com/about/html/sci_interconnect_chipset_and_a.html
https://web.archive.org/web/20010803142306/http://www.dg.com/about/html/sci_interconnect_chipset_and_a.html
http://sunrise.scu.edu/WhatIsCoherence.html
https://web.archive.org/web/20020713112901/http://sunrise.scu.edu/WhatIsCoherence.html
http://wwwbode.informatik.tu-muenchen.de/events/scieurope2000/index.html
http://sci.web.cern.ch/SCI/
https://web.archive.org/web/20021016160814/http://sci.web.cern.ch/SCI/
http://hmuller.home.cern.ch/hmuller/sci.htm
https://web.archive.org/web/20020118150218/http://hmuller.home.cern.ch/hmuller/sci.htm
http://ei.cs.vt.edu/~history/Parallel.html
http://www.sequent.com/hardware/numaq2000/
http://www.nersc.gov/~jed/talks/net-tutorial/sld062.htm
http://www.mpi.nd.edu/downloads/mpidc95/abstracts/html/fleischman/
https://web.archive.org/web/20020324095607/http://www.mpi.nd.edu/downloads/mpidc95/abstracts/html/fleischman/
http://www.scizzl.com/

and at the other end ... random ref:
https://www.garlic.com/~lynn/99.html#182
https://www.garlic.com/~lynn/2000b.html#45
https://www.garlic.com/~lynn/2000c.html#9
https://www.garlic.com/~lynn/2000c.html#21
https://www.garlic.com/~lynn/2000e.html#22

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

John Mashey's greatest hits

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John Mashey's greatest hits
Newsgroups: comp.arch
Date: Sat, 03 Feb 2001 16:23:46 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
I don't understand the 7.7 at all. Was it a typo?

finger check ... the reference was to the previous note that was 2 times .85 ... i.e. I was attempting to say 1.8 instead of 1.7
A colleague of mine studied the 370/165 microcode, and found a loophole. I wrote some code to test his hypothesis, and it was really there. We never did report it to IBM, on the grounds that we regarded any program that could suffer from it as being broken beyond redemption. If you started an I/O operation into memory, waited until one byte was changed (in a spin loop), and then updated a byte a short distance on, you could get the cache out of step with the memory. I forget the exact constraints, but they were pretty tight.

if i had to guess, 165 i/o microcode had fetched the cache line and had a local copy someplace ... possibly the distance was more than a doubleword and less than a cache line.
As that was the worst memory/cache inconsistency he found, I can agree that its consistency can be regarded as strong ....

there were other funnies. 360 instructions did bounds checking (starting & ending addresses) before starting the instruction. 370 introduced "long" instructions that were supposed to execute incrementally (effectively a byte at a time). "move character long" could specify a zero-length "from" argument, a zero pad character, and up to a 16mbyte-length "target" argument ... which could be used to do things like clear a region of memory (and test the amount of memory available). The 370/125 & 370/115 incorrectly implemented the "long" instruction microcode to do the bounds checking prior to starting the instruction. Doing the bounds checking first meant that none of the instruction was executed, rather than execution proceeding up to the point/address where there was some sort of storage access issue.

the "long" instructions were also defined as interruptable and restartable (i.e. i/o interrupt could interrupt a "long" instruction ... in which case progress to date would be reflected in the registers and then the instruction could be restarted from the point it was interrupted).

in the smp world, there are a number of instructions that fetch, update, and store results ... but are not defined to be atomic like test&set and compare&swap. The MVS kernel made extensive use of immediate instructions to update flags (like or-immediate & and-immediate) or even the "execute" versions of same (i.e. load the argument for the or-immediate instruction into a register and use the "execute" instruction to execute the or-immediate ... i.e. the argument of the or-immediate instruction is taken from the register rather than directly from the instruction). Because their use was so pervasive, MVS petitioned that the or-immediate and and-immediate instructions be implemented as atomic (even tho the architecture specifically stated that they are non-atomic). This wasn't a problem with os/360 since it used a spin-lock to enter the kernel ... you wouldn't encounter kernel code on both processors interleaving access to the same byte ... but the MVS kernel going to finer-grain locking ran into a problem.
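
a hedged sketch (mine, in C, not MVS code) of why that mattered under fine-grain locking: a plain read-modify-write of a flag byte on two processors can lose an update, while an atomic fetch-or cannot.

/* sketch: a non-atomic or-immediate style update vs. an atomic one */
#include <stdatomic.h>

static unsigned char flags;                /* plain byte: OI-style update */
static _Atomic unsigned char atomic_flags; /* atomic flag byte */

static void set_flag_unsafe(unsigned char bit)
{
    flags |= bit;          /* fetch, OR, store: two CPUs can interleave here
                              and one processor's bit can be lost */
}

static void set_flag_safe(unsigned char bit)
{
    atomic_fetch_or(&atomic_flags, bit);   /* single atomic read-modify-write */
}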

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

First OS?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First OS?
Newsgroups: alt.folklore.computers
Date: Sat, 03 Feb 2001 22:26:05 GMT
Brian Inglis writes:
The fact that it had console, reader, punch, printer peripheral assignments validates that some part of its design was based on CP/67 or later versions of the IBM VM Control Program. Anyone remember enough of both to say how much of a connection there was, other than the name?

I never saw enuf of CP/M to form an opinion. I seem to remember some people that I knew who had worked on CP/67 saying they had done something or another in conjunction with CP/M ... as well as ???? (something MP?).

In any case, not too long ago ... I ran across somebody's statement that one of the V-something? real-time systems appeared to be substantially a port of the VM/370 network monitor to C ... the C code followed the same logic as the 360 assembler code and the comments were supposedly carried over directly from the 360 assembler, down to the same misspellings.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

John Mashey's greatest hits

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John Mashey's greatest hits
Newsgroups: comp.arch
Date: Sun, 04 Feb 2001 16:00:44 GMT
Bruce Hoult writes:
At least one of the Power features listed above is shared with the PDP-11, namely that subroutine calls store the return address in a register rather than on the stack. The PDP-11 let you use any register (but I seem to recall that R6 was traditional) while the Power/PowerPC uses a single Link register. Combined with a protocol that passes arguments in registers this is great for making small leaf functions fast (and use no stack at all). It also helps you do threaded interpreters, and helps unify CPS (Continuation-Passing Style) and "normal" code.

standard 360 instructions for subroutine call were BAL/BALR (branch and link and branch and link register). In fact, there has been sporadic criticism of the 360-genre of doing things that way and not having any stack support at all.

Typical use


         L     R15,=A(subrout)
         BALR  R14,R15

&

         BAL   R14,subrout

... where R14 & R15 are symbolics for registers 14 & 15

Bits 32-63 of the 360 PSW (program status word) were stored in the first operand (the link register):


bits
32-33      ILC        ... instruction length code
34-35      CC         ... condition code
36                    ... fixed-point overflow mask
37                    ... decimal overflow mask
38                    ... exponent underflow mask
39                    ... significance mask
40-63                 ... instruction address (of following instr)

subroutine return typically returned via the address in R14 after first setting the condition code (w/o having changed and/or needing to restore the program interrupt masks, although some subroutines might have found it necessary to change one or more of the program interrupt masks and then restore them).
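
a hedged sketch (mine) of picking those fields out of the word BAL/BALR leaves in the link register, using the bit layout in the table above (24-bit addressing mode); the sample value is hypothetical.

/* sketch: decode the BAL/BALR-saved word per the PSW bit table above */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t r14 = 0x50012345u;              /* hypothetical saved link word */

    unsigned ilc     = (r14 >> 30) & 0x3;    /* PSW bits 32-33 */
    unsigned cc      = (r14 >> 28) & 0x3;    /* PSW bits 34-35 */
    unsigned pgmmask = (r14 >> 24) & 0xF;    /* PSW bits 36-39 */
    uint32_t retaddr =  r14 & 0x00FFFFFFu;   /* PSW bits 40-63: next instr */

    printf("ILC=%u CC=%u mask=%X return=%06lX\n",
           ilc, cc, pgmmask, (unsigned long)retaddr);
    return 0;
}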

some of the 360 dates.


Ann.  FCS
IBM DOS/360       6???? 6????     SCP FOR SMALL/INTERMEDIATE SYSTEMS
IBM OS/360        64-04 6????     PCP - SINGLE PARTITION SCP FOR S/360
IBM S/360-30      64-04 65-05 13  SMALL; 64K MEMORY LIMIT, MICROCODE CTL.
IBM S/360-40      64-04 65-04 12  SMALL-INTERMED.; 256K MEMORY LIMIT
IBM S/360-50      64-04 65-08 16  INTERMED.-LARGE
IBM S/360-60      64-04  N/A      LARGE - NEVER SHIPPED
IBM S/360-62      64-04  N/A      LARGE - NEVER SHIPPED
IBM S/360-70      64-04  N/A      LARGE - NEVER SHIPPED
IBM S/360-92      64-08           VERY LARGE SCIENTIFIC S/360
IBM S/360-91      64-11 67-10     REPLACES 360/92
CDC 6800          64-12           LARGE SCIENTIFIC SYSTEM - LATER 7600
IBM OS/360        65-?? 68-??     MFT - FIRST MULTIPROGRAMMED VERSION OF OS
IBM 2314          65-?? 65-04     DISK: 29MB/DRIVE, 4DR/BOX REMOV. MEDIA $890/MB
IBM S/360-65      65-04 65-11 07  MAJOR LARGE S/360 CPU
IBM S/360-75      65-04 66-01 09  LARGE CPU; NO MICROCODE; NOT SUCCESSFUL
IBM S/360-95      65-07 68-02     THIN FILM MEMORY - /91, /92 NOW RENAMED /91
IBM S/360-44      65-08 66-07 11  INTERMED. CPU,;SCIENTIFIC;NO MICROCODE
IBM S/360-67      65-08 66-06 10  MOD 65+DAT; 1ST IBM VIRTUAL MEMORY
IBM PL/I LANG.    66-?? 6????     MAJOR NEW LANGUAGE (IBM)
IBM S/360-91      66-01 67-11 22  VERY LARGE CPU; PIPELINED
IBM PRICE         67-?? 67???     PRICE INCREASE???
IBM OS/360        67-?? 67-12     MVT - ADVANCED MULTIPROGRAMMED OS

where ("Ann." is announce, & FCS is first customer ship) from:

https://web.archive.org/web/20010218005108/http://www.isham-research.freeserve.co.uk/chrono.txt

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

John Mashey's greatest hits

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John Mashey's greatest hits
Newsgroups: comp.arch
Date: Sun, 04 Feb 2001 16:41:39 GMT
Anne & Lynn Wheeler writes:
standard 360 instructions for subroutine call were BAL/BALR (branch and link and branch and link register). In fact, there has been sporadic criticism of the 360-genre of doing things that way and not having any stack support at all.

standard OS/360 register convention was

r15     linkage "to" address
r14     linkage "from" address
r13     "save" area
r12     module "base" addressing

the os/360 module convention was that the calling module supplied a "savearea" pointer in R13 that could be utilized by the subroutine.

typical entry into a subroutine or module called for


         STM   R14,R12,12(R13)

save registers r14 & r15, followed by r0-r12 starting at displacement 12 bytes off R13

then


         LR    R12,R15

move the linkage "to" address into R12, the standard module base register

then if the called module expected to do any calling of other modules itself, it would do


         GETMAIN

system call for storage allocation of a new savearea (returned in R1)

         ST    R1,8(R13)
         ST    R13,4(R1)
         LR    R13,R1

forward/backward chain the saveareas and switch to the new save area. A non-reentrant subroutine (i.e. one not expecting to be called again before exiting) might use a static data area inside the subroutine for a savearea rather than dynamically allocating one.

At module exit, if a dynamic savearea had been allocated, it would do something like


         LR    R1,R13
         L     R13,4(R13)
         FREEMAIN

otherwise, it would just back-chain the saveareas (restore R13 from the back pointer, with no FREEMAIN needed).

And then it would typically set the return condition code and do a


         LM    R14,R12,12(R13)
         BR    R14

The above, including the condition code setting, was frequently done by the standard assembler macro:

         RETURN  R14,R12,RC=0

or

         RETURN  R14,R12,RC=(15)

i.e. set the return condition code, restore R14 thru R12, and then return.
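
for reference, a hedged C sketch (mine, not from the post) of the standard 72-byte save area layout implied by the offsets used above: the back pointer at +4, the forward pointer at +8, and R14, R15, R0-R12 starting at +12 in STM order.

/* sketch of the register save area implied by the offsets above:
   STM R14,R12,12(R13) fills 'regs'; the +4/+8 words are the backward and
   forward chain pointers set up after GETMAIN */
#include <stdint.h>

struct savearea {
    uint32_t reserved;   /* +0  : not used by this convention        */
    uint32_t back;       /* +4  : caller's save area (ST R13,4(R1))  */
    uint32_t forward;    /* +8  : callee's save area (ST R1,8(R13))  */
    uint32_t regs[15];   /* +12 : R14, R15, R0..R12 in STM order     */
};                       /* 72 bytes total                           */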

examples of 360 assembler


http://www.cuillin.demon.co.uk/nazz/trivia/hw/hw_assembler.html
https://web.archive.org/web/20021119223305/http://www.cuillin.demon.co.uk/nazz/trivia/hw/hw_assembler.html
http://members.tripod.co.uk/nvcs/Samples/nvcsget.htm
https://web.archive.org/web/20020622062618/members.lycos.co.uk/nvcs/Samples/nvcsget.htm
http://jm.acs.virginia.edu/~drs8h/articles/storupdt.txt
http://www.itc.virginia.edu/~drs8h/articles/idcupdt.txt
http://www.kcdata.com/~pagarner/pgprexit
http://webpunkt.com/piotr/sju/cus1122/program1/SAMPLE

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

what is interrupt mask register?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what is interrupt mask register?
Newsgroups: comp.arch
Date: Tue, 06 Feb 2001 18:18:06 GMT
Sander Vesik writes:
That's exactly why comparing to scsi is bad.

the comparison wasn't a technology issue; it was with respect to how disk drives were attached to the system and the error recording methodology for that attachment, and trying to relate it to some more commonly available computing systems and their disk attachment methodology.

obviously there may be major technology implementation differences between how disks are attached to mainframes and how disks may be connected to servers ... and the difficulty of relating mainframe implementation technology to various workstation/server implementation technology.

so the abbreviated analogy didn't work ... possibly a long-winded analogy will:

Imagine the whole industry of all workstations and (non-mainframe) servers manufactured.

Imagine that each of these computers does local error recording.

Imagine that all the operations responsible for these installed computers forward a copy of the error reports, in machine readable format (supporting a number of different methodologies), to an industry reporting service.

The industry reporting service aggregates the information from the error reports for the whole industry and publishes industry-wide reporting information.

Imagine that one of the manufacturers selected some fairly common recoverable error condition (for the analogy ... you choose whichever one it is, something that shows up relatively frequently in the error log) that is included in the error report and logging ... and decided that they wanted to engineer their next product so that the total number of errors of this type reported for a period of a year across all machines bought and installed by customers was less than 5.

Imagine that the manufacturer then had several people whose fulltime job it was to follow the industry reports and compare engineering predictions against actually reported values.

Imagine that these people, employed full time by the manufacturer, found that instead of 3-5 total errors (aggregate across all machines for a period of a year) there were 15-20 such reported errors.

Imagine that the manufacturer then took a significant course of action to find and correct whatever problem was responsible for the difference between the expected 3-5 total aggregate errors across all machines for a period of a year and the reported 15-20 total aggregate errors across all machines for a period of a year.

For purposes of the analogy, the particular type of error chosen isn't so important, other than it should be a relatively common error. For purposes of the abbreviated analogy, the SCSI-bus component in server/workstation implementations roughly corresponds to the component under discussion in the mainframe scenario (regardless of whether the SCSI-bus can be re-engineered to meet such a requirement).

Using the SCSI-bus component as the analogy, the manufacturer would then redesign and re-implement that component in an attempt to achieve the goal of 3-5 total aggregate errors per year for all machines. If necessary, that means eliminating all SCSI-bus support in all of their products and replacing it with something else that does meet the requirement (and in that case, over a period of a couple of years, all manufacturers eliminate support for the SCSI-bus in all of their products in order to also meet the same requirement). Or, if the SCSI-bus is retained, then re-engineering the SCSI-bus so that it would, in fact, meet the requirement.

The original posting & quick analogy wasn't to actually compare the current mainframe component hardware against the current SCSI component ... it was sufficient to show that the SCSI component possibly hasn't yet been re-engineered to meet the same objectives, that possibly workstation/server manufacturers don't have dedicated people tracking industry-wide reporting for all machines, and/or that the workstation/server industry doesn't support an industry-wide service which collects all error reports for all machines and generates summary reports.

I would claim that, for the purpose of the matter at hand, the SCSI comparison was actually quite good ... because it does serve the same functional purpose as the mainframe component under discussion, and the industry hasn't re-engineered it so that there is an aggregate of 3-5 total errors across all machines over a period of a year. It would appear that the SCSI component, in fact, has a significantly higher error rate.

The analogy wasn't for the purposes of looking at the reasons for the errors; the analogy was about an industry's tolerance for errors, and typically with a very low tolerance for something comes extensive QA and auditing procedures supported by the industry to verify/authenticate error quality.

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

First OS?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First OS?
Newsgroups: alt.folklore.computers
Date: Wed, 07 Feb 2001 19:42:47 GMT
Eric Chomko writes:
I would believe you if the IBM 5100 system had an OS and the first two characters in its name were 'CP'. Fact is that IBM had to go shopping for an 8 bit OS for the IBM PC because they didn't have one.

the people I heard talk about doing something with CP/M (& MP/M) had worked with CP/67 and VM/370 ... but I have no knowledge that they were ever directly associated with IBM.

Around that time, the VM Waterloo Share tape had possibly more lines of code than were in the ibm distribution of the vm kernel (i.e. there were possibly many times more non-IBMers writing code for vm than there were IBMers writing code for VM).

I was once involved in a 20-person collection of people on the west coast (growing to 40) that was going to do software for the ibm/pc; it was sometime before the ibm/pc had been announced, and we contacted boca about allowing us to be responsible for doing the software. boca said that it was just fine ... that they were in the hardware business and not doing software and it was just fine if we took responsibility for doing software. about once a month ... we re-verified that it was ok if we took responsibility for software, and boca said just fine.

after working in this manner for six months or so, boca called to say that they "owned" all responsibility for both hardware and software inside ibm and that if there were any people interested in doing software-related stuff for the ibm/pc they had to transfer to boca.

I've seen a number of projects that were subcontracted to an outside organization with the "responsible" organization owning the contract, rather than allowing another internal organization to have responsibility.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

First OS?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First OS?
Newsgroups: alt.folklore.computers
Date: Wed, 07 Feb 2001 20:46:38 GMT
Anne & Lynn Wheeler writes:
I've seen a number of projects that were subcontracted to an outside organization with the "responsible" organization owning the contract rather than allow another internal organization have responsibility.

slightly related (some of the same people on the west coast) but different circumstances and different outcome

https://www.garlic.com/~lynn/2000g.html#40

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

what is interrupt mask register?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what is interrupt mask register?
Newsgroups: comp.arch
Date: Wed, 07 Feb 2001 21:00:26 GMT
Sander Vesik writes:
But isn't this totally misleading and invalid comparison? Among all the the other things, not all "scsi errors" are errors related to the scsi channels themselves at all. IIRC it also covers: any time somebody tried to access a cdrom drive with no media inside which also is used for detecting if there is media inside if a drive dies while being connected, you get loads of them if there is a buffer overrun in writing a cd and loads of others.

Global logging of that technology just doesn't make sensxe.


as an aside ... in an error-sensitive market segment, the hardware and the reports would be engineered so that normal operational indicators are filtered out from what are considered errors, and the engineering and reporting would also be able to classify the remaining errors as to type & category.

as mentioned, there was specific new engineering done for a particular kind of error in this class of components, with the expectation that total aggregate errors (of the specific type) would be reduced to 3-5 across all machines for a period of a year.

the really remarkable thing in an error-sensitive market segment is that not only is the engineering important, but just as important (or possibly more so) is an industry service that is able to audit and report errors across all vendor hardware.

In that sense it is analogous to TPC and/or SPECmarks in the performance- and/or thruput-sensitive market, or the Common Criteria for the security-sensitive market segment (i.e. an industry service providing auditable results as to different vendors' standing with regard to the variable of interest to the market). There is also nothing that precludes there being overlap between the performance/thruput-sensitive market segment, the error-sensitive market segment, and/or the security-sensitive market segment.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PC Keyboard Relics

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC Keyboard Relics
Newsgroups: alt.folklore.computers
Date: Wed, 07 Feb 2001 21:06:46 GMT
jchausler writes:
I remember it on 029 keypunch keyboards. I don't recall what or if I did anything with it as I was not normally using machines which understood EBCDIC but an older code. Does anyone know what combination of holes were punched in a card to get the "not" character? I can then check my UNIVAC 1108 card code to internal (fielddata?) table.

from the trusty gx20-1703-7 green card:

ebcdic NOT is x'5F' & an 11-7-8 punch

BCD x'5F' is a triangle shaped symbol with 7-track tape encoding 'B 8421'

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PC Keyboard Relics

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC Keyboard Relics
Newsgroups: alt.folklore.computers
Date: Thu, 08 Feb 2001 17:48:36 GMT
"Ville Jorma" writes:
I think that the System Request key had some kind of function in early versions of OS/2 but I'm not shure. Anyone remember?

Microsoft CodeView 1.0 used SysRq equally to Ctrl-Break. It often worked even if the target programme tried to disable breaking.


sysreq from 327x terminals ...

excerpt from
http://tools.ietf.org/html/rfc1576.txt

3270 terminals in the IBM SNA network environment have two sessions with the host computer application. One is used for communicating with the host application, the other is used for communicating with the SSCP (System Services Control Point) that links the terminal with the appropriate host computer. For the purposes of TN3270, this distinction is not apparent or relevant since there is actually only a single telnet session with the host computer or server. On an IBM SNA network, the 3270 terminal has a special key that toggles between the two sessions (SYSREQ). A brief discussion on how some telnet servers deal with this is included.

random other refs:
http://www.cfsoftware.com/a/log/a-dianew.htm
https://web.archive.org/web/20020307071843/http://www.cfsoftware.com/a/log/a-dianew.htm
http://www.di3270.com/news/tn3270bs.htm
https://web.archive.org/web/20020504045955/http://www.di3270.com/news/tn3270bs.htm
http://support.banyan.com/3270SNA/3270SNA4.htm
https://web.archive.org/web/20010420232549/http://support.banyan.com/3270SNA/3270SNA4.htm
http://www.rs6000.ibm.com/doc_link/en_US/a_doc_lib/3270hcon/hconugd/Cust_HCON_sysmgmt_3270Host.htm
http://www.nsainc.com/products/mainframe/dsnstn.html
http://www.austin.ibm.com/doc_link/en_US/a_doc_lib/3270hcon/hconugd/Keyboard_3270Host.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM 705 computer manual

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 705 computer manual
Newsgroups: alt.folklore.computers
Date: Fri, 09 Feb 2001 15:15:57 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
For that matter, can anyone offer information (or informed speculation) about the decision sometime around 1970 (?) to use Courier as the typeface for some of the hardware and software documentation? Monospaced fonts have their place, but not as the one-and-only body text for a long publication. The impression given by the manual design was one of cheapness (in all senses of the word).

starting with CP/67, some number of manuals started being kept in script and printed on a 1403 with a TN chain and then reproduced (& with a film ribbon for "high" quality printing). script was a runoff-like language done at csc for cms by madnick at least by '67 (possibly '66; I didn't see cp/67 until jan. '68).

As CP/67 (and then VM) & CMS percolated thruout the company, more and more work, development, products and documentation was done on it, resulting in more documentation being run off on 1403s or 2741s (and then sent for reproduction).

"G", "M", & "L" (also at CSC), then added GML (acronym for generalized markup langugate, but actually their initials) support to script in the early '70s (eventually begatting SGML, HTML, XML, etc).

HONE (first a CP/67 platform, later migrating to VM/370) support for all the branch offices and the field started in the early '70s ... with customer contracts, proposals, and additional documentation all being done in script (with lots of boiler-plate stuff provided by hdqtrs).

It wasn't until 6670s & 3800s that script got proportional font capability.

random refs:
https://www.garlic.com/~lynn/2000c.html#30
https://www.garlic.com/~lynn/2000e.html#0
https://www.garlic.com/~lynn/2000e.html#1

There were some limited projects that had connected photo offset printers, like at ykt research

https://www.garlic.com/~lynn/2001.html#2

I also have a copy of the Share "Towards More Usable Systems: The LSRAD Report" ... Large Systems Requirements for Application Development, Dec. 1979; printed with proportional fonts. The acknowledgments:


Dana M. Becker          Bell Labs, Naperville
Jay A. Jonekait         Tymshare, Cupertino
Julie A. Lovinger       General Motors Research, Warren
Bruce S. Marshal        Perkin-Elmer, Danbury
Jay Michlin             Exxon, Florham Park
William Pashwa          Bell Labs, Piscataway
Marc Sewell             BCS, Seattle
Carol A. Schneier       IBM, DSD, POK
Richard W. Suko         IBM, DSD, POK
Albert B. Wilson        BCS, Seattle

The LSRAD task force would like to thank our respective employers for
the constant support they have given us in the form of resources and
encouragement. We further thank the individuals, both within and
outside SHARE Inc, who reviewed the various drafts of this report. We
would like to acknowledge the contribution of the technical editors,
Ruth Ashman, Jeanine Figur, and Ruth Oldfield, and also of the
clerical assistants, Jane Loverlette and Barbara Simpson.

Two computer systems proved invaluable for producing this
report. Draft copies were edited on the Tymshare VM system. The final
report was produced on the IBM Yorktown Heights experimental printer
using the Yorktown Formatting Language under VM/CMS.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Stealth vs Closed

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Stealth vs Closed
Newsgroups: comp.security.firewalls
Date: Fri, 09 Feb 2001 17:33:06 GMT
Christer Palm writes:
Frank S wrote:

I should comment that I think such "scanning" should be, and will become, illegal. It is the same as someone walking down the street and trying to open the door of each car he passes.

Or... is it more like walking down the street and LOOKING at the door of each car he passes?

Actually it's more like walking down the street asking each car if the door is open. If it's stupid enough to say yes, well...


the standard says that the response should be an ICMP port-not-available ... that is what is mostly called closed. stealth doesn't return anything ... which can be considered not according to the standard.
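
a rough python sketch of the difference as a probe sees it ... this assumes a linux-style stack where the ICMP port-not-available coming back from a closed UDP port surfaces as a "connection refused" error on a connected UDP socket, while a stealth/dropped probe just times out (the host & port below are made up):

import socket

def probe_udp(host, port, timeout=2.0):
    # connected UDP socket so the kernel will hand back any ICMP error
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.connect((host, port))
    try:
        s.send(b"probe")
        s.recv(512)                      # any answer at all
        return "open (something answered)"
    except ConnectionRefusedError:
        return "closed (ICMP port unreachable came back)"
    except socket.timeout:
        return "stealth/filtered (no response at all)"
    finally:
        s.close()

# e.g. probe_udp("192.0.2.1", 12345)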

it is more like a blind person walking down the street and banging numerous times on every car until you hear something that sounds like a door and then checking to see if the door opens. This can leave residual damage ... like scratches ... or possibly dents if a crowbar is being used to do the banging (even if the person never discovers a door ... or is able to open any door discovered).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Kildall "flying" (was Re: First OS?)

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Kildall "flying" (was Re: First OS?)
Newsgroups: alt.folklore.computers
Date: Fri, 09 Feb 2001 20:09:07 GMT
Eric Chomko writes:
Maybe it was Monterey, which is next to Pacific Grove. Salinas is about 10 miles from Monterey, inland from the Monterey peninsula. PG is where DRI was located, I believe.

The article I read was sketchy at best.

Eric


i thot it was on the monterey side of the airport ... there is something of a hi-tech corridor of companies along a strip there. I had a number of visits to a company located there not too long ago and supposedly the complex had previously been a/the DRI location.

a good part of pacific grove is the monterey presidio and the defense language institute and then houses ... and then asilomar (with 17 mile drive on the other side). You take the route around thru monterey along the bay, or go further south on 101 and 68 over the hill ... or go thru the middle of monterey over the hill, skirting the presidio.

PG is also a reference to NPG school in downtown monterey.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Kildall "flying" (was Re: First OS?)

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Kildall "flying" (was Re: First OS?)
Newsgroups: alt.folklore.computers
Date: Fri, 09 Feb 2001 20:11:52 GMT
Eric Chomko writes:
Okay, how about Pacific Grove? Find out how John Denver died. Was it REALLY engine trouble in his ultra-light?

Eric


I don't know ... we were on the beach near the front of asilomar and could see it start down ... didn't really know what was going on until we heard it later on the news while we were eating dinner at the red lion in carmel.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Stealth vs Closed

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Stealth vs Closed
Newsgroups: comp.security.firewalls
Date: Fri, 09 Feb 2001 20:26:40 GMT
Philip Stromer writes:
That's certainly a charitable description for a hacker. :-)

some use canes, some use crowbars, and some use pretty big sledgehammers (and some travel down the street with a crane and a wrecking ball). soundproofing things so they don't know when they are actually hitting a car may mitigate some destruction but won't necessarily eliminate it.

the idea of ICMP port-not-available is a standard protocol operation in the scenario where a person actually is expecting a specific service and is told that it doesn't exist and/or is not operational at the moment.

not responding at all is, at least partially, based on the theory that they get bored and go bother somebody else.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM 705 computer manual

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 705 computer manual
Newsgroups: alt.folklore.computers
Date: Sat, 10 Feb 2001 15:20:35 GMT
jsaum@world.std.com (Jim Saum) writes:
A lot of the 360 hardware doc was formatted by more traditional means, e.g., the 360 POP was conventionally typeset and offset-printed.

- Jim Saum


in the 68-70 period, as cp/67 became more prevalent (i was an undergraduate at a university ... which was the 3rd cp/67 installation in jan. '68, and part of the CP/67 announcement at the spring '68 share meeting in houston), stuff started transitioning to script; by 70 or 71 the "red book" was in script with conditionals for generating either the "red book" or the POP ... i.e. the "red book" sort of had the POP at its core ... but unannounced instructions and lots of engineering, design, and trade-off considerations only showed up when the "red book" version was printed.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Why SMP at all anymore?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why SMP at all anymore?
Newsgroups: comp.arch
Date: Sat, 10 Feb 2001 17:25:34 GMT
Casper.Dik@Holland.Sun.Com (Casper H.S. Dik - Network Security Engineer) writes:
Cray != Cray. The Cray division that designed the E10K had precious little to do with the any other Cray divsion, so to claim that it is a "Cray" is stretching the truth as much as that it is a "in-house Sun" or "SGI" design. This division of Cray used to be Floating Point Systems, and they did SPARC systems with vector FP units (IIRC) before they were acquired by Cray.

random stuff from the archives
Date: Sat, 16 Jan 88 09:03:30 CST
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Review - Spang Robinson Supercomputing V1/N3

Summary of The Spang Robinson Report on Supercomputing and Parallel Processing Volume 1 No. 3

Minisupercomputers, The Megaflop Boom (Lead Story)

There have been a total of 850 minisupercomputer systems installed since 1981. (This article considers Floating Point Systems' general-purpose version of its array processor the first entrant to this market.)

Possible new entrants to this race are Supertek (a Cray-compatible product), Cydrome's Dataflow-based system, Quad Design (a spin-off from Vitesse and still hunting for venture capital), Gould Computer Systems Division, Celerity, Harris (rumors only), Digital Equipment (project in force but no announcements as to exact nature). They estimate sales for this year at between 275 million and 300 million.

50 percent of minisuper customers required DEC system compatibility. 10 percent required Cray compatibility, while more buyers were concerned with SUN compatibility than with Cray compatibility.

The Consortium for Supercomputer Research estimates a total of 500 Cray-1 equivalent units by 1992 world-wide for academic research and development. Seventy percent of all applications were migrated from VAX-class machines, 12 percent from workstations, 10 percent from mainframes and eight percent from Cray machines.

_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+

Shorts: Cray Research has been authorized to sell an X-MP/14 to the Indian Ministry of Science and Technology.

NEC announced the sale of a single-processor SX-2 to the Bangalore Science Research center.

MASSCOMP received a large contract to sell its units to the government, presumably the National Security Agency.

Three PhD's and a PhD candidate are part of Parasoft, a consulting company specializing in software for hypercube architectures.

The Consortium for Supercomputer Research has released Volume II of the series, "An Analysis and Forecast of International Supercomputing." It concludes that a supercomputer can cost its owner more than 50 million dollars over the first five years, not counting applications.

Encore Computer Corporation is using VMARK's PICK compatible applications environment on its Multimax line.

Amdahl's 1400E now runs at 1700 MFLOPS and thus has the highest single-processor performance in the industry.

Ridge's new 5100 has 2MFLOP performance on the LINPACK benchmark and 14 million Whetstones per second.

MASSCOMP now runs 8 processors and thus is a 20 MIP machine.

Numerical Algorithms Corporation has released a version of its NAG Fortran Library for the 3090 Vector facility.
^&^&^&^&^&^&^&^&^&^&^&^&^&^&^&^&^&^&^&^&^&^&^&^&^&
This issue also has a table of Superminicomputers listing various information. Here are the number of installations:

Alliant                  171
Celerity                 just shipping
Convex                   200
ELXSI                     80
FPS                      365
Gould                      6
Multiflow                  5
Scientific Computing      25
Supertek                 not shipping yet

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

I am fed up!

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I am fed up!
Newsgroups: alt.folklore.computers
Date: Mon, 12 Feb 2001 16:14:04 GMT
Dennis Ritchie writes:
So X.25 in fact did have a noticeable influence on the link-level layer distributed with UUCP by 1979 (and which was indeed part of the transport and session-layer for early Usenet). Chesson's paper isn't around, although the one by Nowitz and Lesk on UUCP in general is, in the Unix 7th Edition manual:

i've got hardcopy some place (just can't find where) ... quick search engine yields

http://www.uucp.org/papers/chesson.html

I somewhat worked with greg in the late '80s & early '90s as a corporate rep to the XTP technical advisory board.

random ref:
http://www.mentat.com/xtp/xtp-overview.html
https://web.archive.org/web/20020102112427/http://www.mentat.com/xtp/xtp-overview.html
http://www.ca.sandia.gov/xtp/
https://web.archive.org/web/20020209014400/http://www.ca.sandia.gov/xtp/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Checkpoint better than PIX or vice versa???

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Checkpoint better than PIX or vice versa???
Newsgroups: comp.security.firewalls
Date: Wed, 14 Feb 2001 16:06:47 GMT
charles.0272@worldnet.no.spam.att.net (Charles Wilt) writes:
Well ICSA certainly considers application gateways, or proxies, to be a type of firewall. In fact, if I remember correctly ;-), the term firewall was originally applied to an application gateway. I believe that the dynamic and stateful inspection packet filtering firewalls are a much more recent development.

circa '95 or so ... firewalls were application gateways ... the big thing was attempting to catch buffer-overrun exploits and sanitizing permissible types of requests.

packet filtering routers tended to only handle simple packet filtering with static rules, although the rules in some cases might not be very simple. cisco tended to be at the much more complex end of the scale, and a well known problem was that it was relatively common to find that the sense of a rule had been inverted (permitting instead of denying).

The big deal at the time was at least filtering ip-address spoofing (not allowing internet-origin packets with an internal ip source address). there was talk at the time about ISPs doing the inverse/similar filtering ... i.e. not allowing incoming packets with a source address different from what was assigned to their client/customer (but there was some issue about whether commonly used ISP routers had sufficient horsepower to implement such a practice).
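
a minimal python sketch of the two anti-spoofing checks (not any particular router's rule syntax; the interface names and address blocks are made-up examples):

from ipaddress import ip_address, ip_network

INTERNAL_NETS = [ip_network("10.0.0.0/8")]          # assumed internal block
CUSTOMER_BLOCK = ip_network("198.51.100.0/24")      # assumed customer assignment

def drop_at_firewall(iface, src):
    # internet-origin packet claiming an internal source address = spoofed
    return iface == "outside" and any(ip_address(src) in n for n in INTERNAL_NETS)

def drop_at_isp(iface, src):
    # customer packet with a source outside the assigned block = spoofed
    return iface == "customer" and ip_address(src) not in CUSTOMER_BLOCK

# drop_at_firewall("outside", "10.1.2.3")   -> True (drop)
# drop_at_isp("customer", "203.0.113.9")    -> True (drop)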

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Disks size growing while disk count shrinking = bad performance

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disks size growing while disk count shrinking = bad performance
Newsgroups: comp.arch.storage
Date: Wed, 14 Feb 2001 20:39:00 GMT
Malcolm Weir writes:
Yeah, me too! I have an application that uses disk as a buffer between an incoming data stream and an off-line archive. It is getting somewhat annoying to have to buy huge amounts of additional space since I cannot reduce the number of spindles...

> Are others just biting the bullet and buying lots more disks to keep
> the disk number the same to maintain performance?

Bluntly, yes. The good news is that the cost per MB is shrinking, so the investment remains the same, and you simply end up with "wasted" space for the same dollars...


not a new problem. at one time we actually looked at an engineering tweak for "high-performance" drives that you could charge extra for (circa 1980, about the time of the introduction of the 3380). The idea is that avg. arm access is based on using the whole platter; using only half the platter would make the avg. arm access look better (as well as reduce arm contention) and allow charging an extra 10% for the feature. this was for operations that didn't have enuf control to do their own management of arm performance & contention.
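
back-of-envelope on why the half-platter tweak helps (a python sketch; the cylinder count is just illustrative, and seek *time* improves somewhat less than seek distance since settle time doesn't shrink):

import random

def avg_seek_distance(cylinders, trials=200_000):
    # average distance between two independent, uniformly placed requests
    total = 0
    for _ in range(trials):
        a, b = random.randrange(cylinders), random.randrange(cylinders)
        total += abs(a - b)
    return total / trials

full = avg_seek_distance(900)        # whole pack
half = avg_seek_distance(450)        # same data confined to half the cylinders
print(round(full), round(half))      # roughly 300 vs 150 ... i.e. about C/3 vs C/6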

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

monterey's place in computing was: Kildall "flying" (was Re: First OS?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: monterey's place in computing was: Kildall "flying" (was Re: First OS?)
Newsgroups: alt.folklore.computers
Date: Thu, 15 Feb 2001 20:17:04 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Actually Eric, I do not get the sense that these are all spook issues. I did a session at the Asilomar conference last year on secrets, declassified former top-secrets at John's request. I gave examples of redacting, brought our Enigma, and so forth. I don't have a clearance. I refuse one. The spooks had different classification policies from the nuclear (DOE and DTRA) community, and those apparently differ from the CBW community, and there must be a slew of other centers of practice.

all sigops used to be held at asilomar ... and then one meeting there was this midnight session in one of the side cottages where the east-coasties claimed it wasn't fair to college students from east coast colleges ... and the meetings started wandering all over.

Last asilomar sigops i was at was early '90s. had a wonderful evening session taking over the monterey aquarium. I remember having an hr or two of running discussion (argument?) with one of the current leading m'soft cluster people. At the time he was at DEC and was strongly claiming that it wasn't practical to do business-critical clustering on open/commodity platforms.

my wife and I had been running our own high-availability cluster multiprogramming skunk works for a couple years and we took the exact opposite position.

random refs:
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Disks size growing while disk count shrinking = bad performance

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disks size growing while disk count shrinking = bad performance
Newsgroups: comp.arch.storage
Date: Fri, 16 Feb 2001 00:13:01 GMT
"Bill Todd" writes:
Only if there are virtually no accesses to the colder data on the rest of the disk (which Lynn wasn't assuming was present, but my explanation did). In real-world use, biasing the clumped active data more toward the outside edge as the inactive data became relatively colder would make some sense, but given that full-stroke seeks are well into the linear-latency seek region (last I looked at the numbers, anyway) the only data you ever want a full stroke away from the most-active data is data that will almost never be accessed (especially if the most-active data consumes only a small fraction of the total disk capacity, since the latency difference between smallish seeks of slightly differing distances is minimal, so much of the latency actually being head-settling time - though if your active data is heavily large-transfer-oriented, the better bandwidth on the outer tracks also has some value, in contrast to database-like applications where the transfer time is swamped by the access latency).

we actually had some IP that reduced full seek traces in real time and could be used for all sorts of purposes. there was a big cache simulation project that looked at memory/cost/benefit trade-offs from real-world data (i.e. track cache, device cache, controller cache, etc). One of the things that we showed was that for the 3880-13 controller "track" cache (which was being promoted with numbers claiming a 90% hit rate), it was all coming from the cache performing a track-buffer function (i.e. sequential access with 10 records per track, reading one record at a time; on the first record read, the cache preread the whole track and then got 9 hits on the subsequent reads; the same result would have been obtained with a track buffer per drive, and no large cache would be needed).
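
the track-buffer arithmetic is simple enough to write down (python; R=10 records/track is the case described above):

def sequential_hit_rate(records_per_track):
    # purely sequential reads: miss on the first record pre-reads the
    # whole track, then the next R-1 reads are hits
    return (records_per_track - 1) / records_per_track

print(sequential_hit_rate(10))    # 0.9 ... the claimed 90% hit rate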

there was an application that would look at data re-org for load-balancing and minimal arm seek. turns out a lot of real-world access is very bursty, i.e. it is actually possible to have multiple different sets of high-use data on the same drive because the access patterns don't overlap (i.e. some random arm motion followed by very little motion for an extended period). The problem here is that there can be non-linear effects with increases in concurrent load ... akin to cache-thrashing effects ... i.e. you lose locality of disk arm access with contention for the same arm by different applications accessing different data.

the application supported adding drives, taking away drives, and moving from one kind of drive to another. in moving from 3350 to 3380, the built-in model showed that (on the avg) if you completely filled a 3380 with the data migrated from 3350s, overall performance would degrade (even tho the 3380 was significantly faster than the 3350; the problem of course: the increase in storage capacity was larger than the increase in performance/thruput). Something similar was observed in the earlier migration from 3330 to 3350.

this has been an ongoing problem since (at least) the 60s.

In any case, the model showed that the approx. threshold was to only fill the 3380s to 80% (with 3350 data) in order to give the same performance as obtained with the 3350 configuration. of course it was possible to populate the remaining 20% with really cold data ... but administrative people frequently couldn't reconcile the two issues .... that a "waste" of 20% of the disk space improved overall system thruput by a significant percent ... and that the overall value of the increased system thruput was worth at least an order of magnitude more than the value of the "wasted" disk space.
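
a back-of-envelope version of that threshold reasoning (python sketch; the 3.0/2.4 ratios below are purely illustrative, not the actual 3350/3380 numbers):

def max_fill_fraction(capacity_ratio, thruput_ratio):
    # if accesses scale with the amount of data sitting under the arm,
    # holding per-arm load at the old level means only filling
    # thruput_ratio/capacity_ratio of the new drive
    return min(1.0, thruput_ratio / capacity_ratio)

print(max_fill_fraction(capacity_ratio=3.0, thruput_ratio=2.4))   # 0.8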

Tweaking the engineering so that a special model of the disk had only a subset of the capacity, and was charged more for, addressed some of the administrative vision deficiencies .... and it was very important in these scenarios that the device cost more, so that there was perceived increased value. We weren't actually allowed to do it, but it addressed a real-world operational problem/opportunity.

random refs:
https://www.garlic.com/~lynn/94.html#43
https://www.garlic.com/~lynn/95.html#8
https://www.garlic.com/~lynn/93.html#28
https://www.garlic.com/~lynn/2001.html#58

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

z/Architecture I-cache

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z/Architecture I-cache
Newsgroups: bit.listserv.ibm-main
Date: Fri, 16 Feb 2001 00:30:18 GMT
Chris_Blaicher@BMC.COM (Blaicher, Chris) writes:
Eric,

The answer to your concern is, it depends.

If you build it once and use it many times, you will only hit the overhead once. Now if you build this generated code, and you build it non-reentrant where, let's say, you have a save/work area at the end of it, then you will hit the problem.

A cache line can exist in both the I-cache and the D-cache but only as long as the D-cache line is not modified. All those constants that you have in your reentrant programs do not cause a problem because they do not get changed. (Another reason to group all the constants and LTORGs together is that you will have fewer duplicated cache lines.)

John McKowen, and all the others that have a bunch of non-reentrant code, really should think about converting them sooner rather than later to reentrant.

Another thing that is aggravating this problem a little is that the cache line length for the z-Series is 256 bytes, where before it was 128 bytes. It is probably a small grain of salt in the wound, but it is one more grain.


note that MVS (& other systems) had a lot of work done on them in the 3081/3084 time-frame to allocate on cache-line boundaries and in multiples of cache lines. the issue was a big performance hit when different processors accessed different data that happened to co-exist in the same cache line (lots of cross-cache invalidation moving data back & forth between the different processor caches). some of the cache synchronization/performance issues between I & D caches are the same as the cache synchronization/performance issues between different processor caches.

in the risc/harvard architectures with separate I & D caches, store-into data cache, and no I & D cache synchronization ... there are specific instructions ... typically used by (at least) loaders ... that invalidate i-cache lines and push data-cache lines to memory ... so that when a loaded program is finally branched to, its instructions (which started life in the data cache) have been flushed to memory and an i-cache miss will pick them up.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Java as a first programming language for cs students

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Java as a first programming language for cs students
Newsgroups: alt.folklore.computers
Date: 15 Feb 2001 18:23:29 -0700
Anne & Lynn Wheeler writes:
+ + + Ford's Fundamental Feedback Formula + + +

ot ... but my all-time favorite feedback ... boyd & OODA-loops

... misc. ref from the archive
Austin American-Statesman, Monday, Jan. 23, 1989, pg. 4 (of the new capital business section),

"Successful Players Must Learn To Maneuver Quickly"
On Excellence - Tom Peters ...

Retired Air Force Col. John Boyd, generally considered one of the most inventive military strategists of this century, would fancy Carter's style. Boyd, a founder of what is loosely called the Military Reform Movement, was a Korean War fighter ace. He attempted in the 1950s to determine why the kill rate in Korea by our F-86, when up against the Russian-built MiG-15, was about 10 to 1. Though the MiG accelerated faster, climbed quicker and negotiated tighter turns, the F-86's high-powered hydraulic system allowed the plane to reverse a turn, shift and dive away or snap backward more rapidly than its opponent. This superior "transition performance" caused the enemy pilot to become increasingly disoriented, performing more and more inappropriate responses until he eventually set himself up for a kill. This revelation inspired Boyd to reassess 2,000 years of military history, including battles such as Marathon, Cannae, Thermopylae and the German success against the French in 1940. He distilled these lessons into what is now commonly called the Boyd Cycle, or OODA-Loop (for Observation, Orientation, Decision, and Action). Boyd says the victors go through the OODA-Loop faster than the vanquished. By "destroying the enemy's world view," winners cause losers to respond incorrectly to events.


....

& the War, Chaos, and Business web pages:
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/

random other refs:
https://www.garlic.com/~lynn/94.html#8
https://www.garlic.com/~lynn/99.html#120
https://www.garlic.com/~lynn/2000c.html#85
https://www.garlic.com/~lynn/2000e.html#33
https://www.garlic.com/~lynn/2000e.html#34
https://www.garlic.com/~lynn/2000e.html#35
https://www.garlic.com/~lynn/2000e.html#36

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Java as a first programming language for cs students

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Java as a first programming language for cs students
Newsgroups: alt.folklore.computers
Date: Fri, 16 Feb 2001 02:23:17 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Anne & Lynn Wheeler (lynn@garlic.com) writes:

> ot ... but my all-time favorite feedback ... boyd & OODA-loops

Seems like required reading for the marketing/sales groups.

one time trying to sponsor one of his talks, it was suggested to me that maybe the only people present should be the competitive analysis people.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Java as a first programming language for cs students

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Java as a first programming language for cs students
Newsgroups: alt.folklore.computers
Date: Fri, 16 Feb 2001 19:11:25 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Seems like required reading for the marketing/sales groups.

one marketing version ...
http://www.worksys.com.au/agile.htm
https://web.archive.org/web/20020204160744/http://worksys.com.au/agile.htm

the us news & world report article during desert storm referred to the (then) crop of majors & cols. as boyd's jedi knights

a couple views from chuck spinney
http://radio-weblogs.com/0107127/stories/2002/12/23/genghisJohnChuckSpinneysBioOfJohnBoyd.html
http://home.twcny.rr.com/dysonm/ooda_loop.html
https://web.archive.org/web/20020122113552/http://home.twcny.rr.com/dysonm/ooda_loop.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

10 OF THE BEST

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 10 OF THE BEST
Newsgroups: alt.folklore.computers
Date: Fri, 16 Feb 2001 20:01:11 GMT
Charles Richmond writes:
I agree...except this does not address the "dumbing down" of "adult" books. A lot of "adults" think that books are better if they are smaller and contain less information. This leads to bad editions of once useful books...where a lot of content was omitted, and the print made larger...to make the subject seem easier. This actually only destroys all the good information and examples one needs to understand the subject matter IMHO...

in the early 90s we were working with some people from one of the large western land grant universities and they made the observation that in the previous 20 years, college text books had gone thru two "dumbing-down" iterations.

random other refs:
https://www.garlic.com/~lynn/2000f.html#11

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Original S/360 Systems - Models 60,62 70

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Original S/360 Systems - Models 60,62 70
Newsgroups: bit.listserv.ibm-main
Date: Fri, 16 Feb 2001 22:22:35 GMT
PA7280@UTKVM1.UTK.EDU (Ben Alford) writes:
Thanks to someone on the list I've found a neat article by Gene Amdahl, Gerrit Blaauw, and Fred Brooks, republished from one of the 1964 IBM Journal of R & D titled "Architecture of the IBM System/360." (see http://www.research.ibm.com/journal/rd/441/Amdahl.pdf )

It refers to the original 6 models of the S/360 as the 30, 40, 50, 60, 62 and 70. I knew about the 30-50, 44, 65, 67, 75, 85, 91, 95, etc.

What happened to the 60, 62 & 70? Thanks!


60->65 62->67 70->75

my impression was that an upgrade in memory technology drove the transition. i don't know what was planned for the originals ... but (at least) the 65/67 ended up with 750ns core memory.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

weather biasing where engineers live (was Re: Disk power numbers)

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: weather biasing where engineers live (was Re: Disk power numbers)
Newsgroups: comp.arch
Date: Sat, 17 Feb 2001 17:58:28 GMT
Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
A search on www.weatherbase.com does show that my idea of dry is clearly biased:

relative humidity         yr avg  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
Denver morning:              66    62   66   67   66   70   68   68   68   66   63   66   63
Denver evening:              39    49   44   39   35   38   34   33   34   32   35   47   50
San Francisco morning:       85    81   83   81   82   89   89   92   93   87   81   82   80
San Francisco evening:       66    63   63   61   61   68   72   74   73   66   60   63   63

(They don't have figures for San Jose, and I don't know how those would compare to San Francisco.)


On an approach to SFO one time, somebody up in the cabin gave an interesting commentary about the weather around sanfran. Seems that as the south valley heats up (i.e. gilroy, garlic capital of the world, etc), the air rises. This creates lower pressure which attempts to "suck in" air to replace it. Because of the range of hills on both sides of the valley, the first significant place the eco-system can obtain incoming air is the break at the golden gate. As a result there is a natural air-conditioning effect pulling cool ocean air thru the golden gate and past sanfran to replace the hot air rising in the south valley (the south valley opens up, and so the mass of hot air rising between the two lines of hills is much larger than the mass of air rising further up the valley). it is somewhat of a natural feedback system: the hotter it gets in the area, the more hot air rises in the south valley and the more cool air is pulled in thru the golden gate to keep sanfran cool (which could keep sanfran 20 degrees cooler than san jose).

there can be boundary conditions where it has been cool in the area and the temperature rises dramatically, and sanfran gets some hot days until the air-conditioning effect kicks in.

there are secondary effects between the bay and south valley also.

I used to periodically do some work for Santa Teresa Labs ... which was about 5-6 miles south of where i lived at the time ... and I would ride my bike. Got head-winds both in the morning going south and coming back north in the evening. At night, the bay was hotter than the south valley and so air flowed from the south valley to the bay. Around 11am the equation would switch and the south valley would be pulling air off the bay (& if the differential was large enuf, would in turn pull air past the golden gate bridge). This would continue on into the evening (causing a slight cooling effect on san jose compared to the south valley).

The 11-noon period was also frequently when san jose airport would switch the landing/taking-off pattern.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Z/90, S/390, 370/ESA (slightly off topic)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Sat, 17 Feb 2001 23:13:28 GMT
MichelC@jeeves.be (Michel Castelein) writes:
S/370 --> XA:
- With XA, the I/O configuration is managed by the hardware - the I/O engine (Channel Subsystem) is renamed the DYNAMIC Channel Subsystem (DCS) - instead of by MVS (running on the CPU[s]). This causes the Start-I/O (SIO) machine instruction to be replaced by Start Subchannel (SSCH).
- XA introduces bimodal addressing mode (AMODE), i.e. 24- and 31-bit virtual addresses. Real storage is accessed by means of 31-bit addresses.


note that the 360/67 had both a channel controller (i.e. all processors accessed all channels) and 32-bit addressing. both were dropped in the 370.

The channel controller was partially re-introduced in the 303xs (before the 3081) ... with the most significant effect on the 158; i.e. the 158->3031, 168->3032, and in effect the 3033 was the 168 wiring diagram remapped to newer chip technology.

each 303x channel controller was a 158 processor with the 370 instruction set removed (leaving only the i/o channel support). The significant effect on the 158->3031 was that the same 158 processor was no longer doing double duty, executing the 370 instruction set as well as all the channel code (effectively there was now a dedicated processor for each function). Some benchmarks showed the 158->3031 with a 40% thruput improvement.

3033 also introduced another anomaly, "sort-of" 26-bit real addressing. The 370 page table entry had two unused bits. It was possible to configure a 3033 with more than 16mbytes of real storage. Channel command IDALs could address more than 16mbytes but standard 370 instructions couldn't. It was possible to read/write pages in the greater-than-16mbyte region ... and it was possible for virtual address pages to have their corresponding real pages above the 16mbyte line (with the relocation hardware providing the 24bit virtual address translation to a 26bit real address). Standard instructions were all 24bit (whether in real or virtual mode).
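
the arithmetic (a python sketch, assuming 4k pages): a 12-bit byte offset plus a page-frame number extended by the two previously-unused page table entry bits to 14 bits gives 26-bit real addresses, i.e. up to 64mbytes of real storage, while instructions themselves still only form 24-bit addresses:

PAGE_BITS = 12                        # 4096-byte pages

def real_address(frame_14bit, offset):
    assert 0 <= frame_14bit < 2**14 and 0 <= offset < 2**PAGE_BITS
    return (frame_14bit << PAGE_BITS) | offset

print(hex(real_address(0x3fff, 0xfff)))          # 0x3ffffff ... top of 64mbytes
print(2**26 // 2**20, "mbytes real vs", 2**24 // 2**20, "mbytes 24-bit")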

308x/XA got dedicated real-time processors for doing the I/O as well as a change in the I/O architecture that allowed out-board queuing of requests. SSCH allowed the outboard engine to mask various transient SIO busy conditions (i.e. queue and redrive when available) ... eliminating a number of interrupts and the associated processing (as well as the bad effects of asynchronous interrupts on cache utilization). Having an outboard real-time engine handle queuing ... also shortened device, controller, & channel redrive latency (given MVS interrupt processing & I/O scheduling, a significant lapse could occur between when a device finished the previous operation and when the next operation was actually started). Note that 370 did add SIOF alongside SIO (elapsed time for the SIO instruction required signaling all the way out to the device; elapsed time for the SIOF instruction only required interaction with channel processing).

The XA support for multiple channel paths was also off-loaded into the real-time processors, which had a much better algorithm/implementation than what had been in the 370 code. I once did a highly optimized 370 code implementation for multiple channel paths that eliminated a significant amount of the difference between the standard 370 implementation and the XA real-time processor implementation (i.e. XA was more than just moving the existing MVS 370 implementation into an additional processor).

This ran into a new problem (eliminate one bottleneck and run into another). The 3880 disk controller had very significant additional processor overhead when switching active paths. Really optimal channel balancing involving a 3880 controller could actually degrade overall performance because of the increased controller overhead. As a result, something akin to path affinity for the 3880 had to be done, with sub-optimal channel balancing, in order to achieve optimal system thruput. This wasn't a real-time vis-a-vis non-real-time issue; this was some intelligence that, rather than starting a new operation down an available path ... would sometimes slightly delay in the hope that an existing busy path would become available.

another characteristic was channel command processing latency. a typical sequence of disk commands was: position arm, select head, select record, read/write. it was possible to do position arm, select head, select record N, read/write, select record N+1, read/write ... and so forth in a single revolution.

however, if you wanted to read record N+1 on a different platter (but at the same arm position) the sequence would be: position arm, select head X, select record N, read/write, select head not-X, select record N+1, read/write. The channel command latency for processing select head not-X was sufficient that record N+1 had already rotated past the head, and two revolutions were required instead of one.

For some applications this was a common enuf throughput issue that they would format the tracks with a dummy "filler" record between standard data records. This allowed an additional rotational delay between the end of data record N and the start of data record N+1, so that the select head operation could finish before record N+1 had rotated past the head.

So to calibrate, start with a dummy filler record of 1 byte, run the test, and gradually increase the dummy filler record size until the test completes in a single revolution instead of two (actually do a couple thousand in sequence; tests taking two revolutions per read take twice as long as tests taking a single revolution).
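
the calibration loop, as a python sketch ... time_chained_reads() here is hypothetical, standing in for actually running the channel program a couple thousand times and timing it:

def calibrate_filler(time_chained_reads, revolution_ms, reads=2000):
    one_rev_total = reads * revolution_ms        # target: ~1 revolution per read
    filler = 1
    while time_chained_reads(filler, reads) > 1.5 * one_rev_total:
        filler += 1                              # still taking ~2 revolutions/read
    return filler                                # smallest filler that makes it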

Turns out that 148s, 4341s, & 168s all calibrated at the same sized filler record. 158s calibrated with a filler record nearly three times larger (indicating a significantly longer command processing latency in the 158 channels). As might be expected, all 303xs calibrated at the same sized filler record as the 158 (basically since all 303xs and the 158 shared the same channel processing). Interesting was that 3081s calibrated with the same sized filler record (or larger) as the 158/303x (indicating the 3081 actually had slightly longer channel command processing latency than the 158/303x).

random refs:
https://www.garlic.com/~lynn/99.html#7
https://www.garlic.com/~lynn/2000.html#78
https://www.garlic.com/~lynn/2000d.html#7
https://www.garlic.com/~lynn/2000d.html#21
https://www.garlic.com/~lynn/2000d.html#23
https://www.garlic.com/~lynn/2000c.html#74
https://www.garlic.com/~lynn/2000d.html#82
https://www.garlic.com/~lynn/93.html#14
https://www.garlic.com/~lynn/2000e.html#57
https://www.garlic.com/~lynn/2001b.html#35
https://www.garlic.com/~lynn/94.html#00
https://www.garlic.com/~lynn/2000d.html#61
https://www.garlic.com/~lynn/2000d.html#82
https://www.garlic.com/~lynn/99.html#190

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Digital signature w/o original document

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Digital signature w/o original document
Newsgroups: sci.crypt
Date: Sat, 17 Feb 2001 23:24:11 GMT
"David Sowinski" writes:
I am interested in generating a digital signature that can later be verified without the original document. I recall coming across a homomorphic encryption/signature scheme awhile back, but cannot find much information on it now. Does anybody know if this is possible?

possibly not what you are thinking of ... but there are a number of financial transactions that have been defined where a document is generated and digitally signed, the document is then dissolved, and the digital signature is appended to a standard existing financial transaction.

the recipient is expected to be able to exactly reconstruct the original document ... from information in a regular part of the transaction and/or data known to be in the possession of the recipient ... in order to verify the digital signature.

a flavor of this has been done for a mapping of the recently passed X9.59 payment standard (for all electronic retail payments) to existing ISO8583 payment-card-based networks.
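
a minimal sketch of the sign/dissolve/reconstruct pattern (python, using the pyca/cryptography ed25519 calls; this is illustrative only, not the actual X9.59-to-ISO8583 mapping, and the transaction fields are made up):

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def canonical_doc(txn):
    # deterministic reconstruction from fields both parties already have
    return "|".join(f"{k}={txn[k]}" for k in sorted(txn)).encode()

txn = {"amount": "42.00", "currency": "840", "account": "1234", "seq": "789"}

key = Ed25519PrivateKey.generate()
signature = key.sign(canonical_doc(txn))     # the document then "dissolves" ...
                                             # only the signature travels with the txn

# recipient rebuilds the identical document and verifies
key.public_key().verify(signature, canonical_doc(txn))   # raises on mismatch
print("signature verified against the reconstructed document")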

random ref:
https://www.garlic.com/~lynn/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Z/90, S/390, 370/ESA (slightly off topic)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Mon, 19 Feb 2001 04:25:44 GMT
"The Bakers" writes:
I know this probably belongs more in alt.comp.folklore but I was hoping that there was some 1130-s/360-s/370 history here too :-)

I never programmed a 1130. CSC had a 2250m4/1130 (i.e. an 1130 as the 2250 controller; the 2250m1 had its own hardware controller and was 360 channel attached) ... which I did do some programming for ... i.e. a hacked version of CMS edit as an early full-screen editor.

spacewar (from pdp1) had been ported to the csc 2250m4/1130. keyboard was split left/right for player1/player2. my kids sometimes played it on the weekends.

cpremote was done for the CSC 1130/360 interconnect ... it grew into vnet and the internal network (as well as bitnet). the internal network was larger than the internet until about '85 (when the number of internet-connected workstations started to overtake minis & mainframes in numbers).

an early version of the 5100 did have a 1130 emulator to run apl\1130

http://www.brouhaha.com/~eric/retrocomputing/ibm/5100/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Z/90, S/390, 370/ESA (slightly off topic)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Mon, 19 Feb 2001 04:39:38 GMT
nospam@nowhere.com (Steve Myers) writes:
In addition, there was some capability to run a DOS type system under MVS, I think it was, on some of the 370 machines.

standard on several machines ... a Boeing SE did a port of CP/67 to 370 using the facility ... before virtual memory and paging were available on 370. It used base/limit relocation addressing (i.e. a contiguous region of storage). Vague recollection ... the instruction/feature mnemonic was DIL(?). Might consider it an early ancestor of LPARs.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

7090 vs. 7094 etc.

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 7090 vs. 7094 etc.
Newsgroups: bit.listserv.ibm-main
Date: Mon, 19 Feb 2001 16:57:53 GMT
"Larry" writes:
Wasn't it HASP that Simpson and Crabtree, and a couple of others, did instead of ASP?

yep, random HASP ref.

https://www.garlic.com/~lynn/94.html#18

Simpson went to white plains (made ibm fellow about 76) and had a project called RASP (note there were a couple different RASP's in that time frame) before going to dallas w/Amdahl (made Amdahl fellow in fall of '79). His RASP had somewhat the flavor of the current incarnation of gnosis (keykos).

My wife was the "catcher" in g'burg for JES3 ... before going to POK to be in charge of loosely-coupled architecture. she originated Peer-Coupled Shared Data in pok ... the original basis for IMS hot-standby and then parallel sysplex. it also had some effect on trotter/3088.

last time we ran into crabtree, he was in atlanta in charge of mid-range system application software.

random ref:
https://www.garlic.com/~lynn/2000f.html#68
https://www.garlic.com/~lynn/2000f.html#69
https://www.garlic.com/~lynn/2000f.html#70
https://www.garlic.com/~lynn/2000f.html#71

my first share was spring '68 in houston ... & of course HASP people were there.

hasp started a tradition of sing-along nights at share. I have an old Share Songbook from the mid-70s.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Z/90, S/390, 370/ESA (slightly off topic)

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Mon, 19 Feb 2001 19:03:50 GMT
Bill Becker writes:
Steve, how's your VM trivia memory? Have you ever heard of "The Wheeler Scheduler?"

I always thot that ibm would remember me more for having originated the ibm pcm controller business than for the dynamic adaptive control algorithms

random refs:

https://www.garlic.com/~lynn/96.html#30
https://www.garlic.com/~lynn/96.html#37
https://www.garlic.com/~lynn/99.html#67
https://www.garlic.com/~lynn/99.html#70
https://www.garlic.com/~lynn/2000c.html#36
https://www.garlic.com/~lynn/2000c.html#37
https://www.garlic.com/~lynn/2000f.html#6
https://www.garlic.com/~lynn/2001.html#5

misc. other tidbits about the resource manager.

it was the first "charged-for" SCP product (i.e. a software component that was part of the system control program that had a price-tag). I got to spend something like 6 months working with the business people developing the methodology and process for SCP charging.

another characteristic was that initially the resource manager contained a lot of kernel restructuring code ... including the basis for SMP support. In the next release of VM, all the restructuring code was removed from the resource manager and dropped into the base kernel ... as part of delivering SMP support in the base product (although I had the HONE system running SMP support on the previous release of VM).

random refs:

https://www.garlic.com/~lynn/2000c.html#68
https://www.garlic.com/~lynn/2000d.html#10
https://www.garlic.com/~lynn/2000e.html#6
https://www.garlic.com/~lynn/2000e.html#7
https://www.garlic.com/~lynn/2000e.html#8
https://www.garlic.com/~lynn/95.html#5

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Z/90, S/390, 370/ESA (slightly off topic)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Mon, 19 Feb 2001 19:22:49 GMT
vrenios@enuxsa.eas.asu.edu (Alex Vrenios) writes:
Wasn't the IBM 1800 the scientific equivalent of the 1130? I seem to recall that it was. There was one at the U of I (at the Chicago Circle Campus) in 1972. I saw a demo that an SE brought in, named TILT. We turned off the lights, started the program, and watched the register lights panel put on a show: A single lit bulb would shoot across the panel, then another (in the opposite direction) and another until it was a chaos of simulated motion. I can't remember all the antics he put it through, but it ended with a flurry of activity followed by the word TILT displayed on the panel's lights at the end. Pretty nifty for those days.

also System/7 and the series/1.

there was recent thread on system/7 and series/1 in alt.folklore.computers recently; random refs

https://www.garlic.com/~lynn/2001.html#62
https://www.garlic.com/~lynn/2001.html#65
https://www.garlic.com/~lynn/2001.html#68
https://www.garlic.com/~lynn/2001.html#72

only 1800 i remember seeing was at amoco research in tulsa in the early '70s ... they were transferring a number of oil-related 1800 applications to cms.

cambridge had a system/7 that was programmed to do some communication application ... somewhat akin to earlier stuff I had done with Interdata/3 (originating the 370 controller pcm business).

the cambridge system/7 contingent .... before either the 3705 or the series/1 was announced, lobbied strenuously that peachtree (the processor used in the series/1) should also be used for the 3705 instead of the uc.5.

the uc.5 was extremely limited but was used in a number of low-function products ... it was also used in the 8100 (my wife once did a technical audit of the 8100 & uc.5 for Bob Evans).

random refs:

https://www.garlic.com/~lynn/99.html#63
https://www.garlic.com/~lynn/99.html#108
https://www.garlic.com/~lynn/99.html#239
https://www.garlic.com/~lynn/2000b.html#66
https://www.garlic.com/~lynn/2000b.html#67
https://www.garlic.com/~lynn/2000b.html#69
https://www.garlic.com/~lynn/2000b.html#79

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Disks size growing while disk count shrinking = bad performance

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disks size growing while disk count shrinking = bad performance
Newsgroups: comp.arch.storage
Date: Mon, 19 Feb 2001 19:28:08 GMT
dsiebert@excisethis.khamsin.net (Douglas Siebert) writes:
Second, I think you'd probably dump the robot. The robot came about because tape drives are expensive, while slots to hold a tape are not. But all the disk needs to run is an IDE slot and small bit of power. I'll bet its cheaper to have a rack that is all IDE slots as compared to having a robot to move drives around. The upcoming serial ATA spec would be perfect for this, since it provides for hot swap.

i would claim that robots came about because operators and manual mounting were expensive, inefficient, and error prone (I once lost some really important data that had been backed up in triplicate on three different tapes ... because some operator had mounted all three tapes as scratch).

i've seen large commercial operations with tens of terabytes of spinning disk that have possibly 10-50 times as much data in tape robot silos.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Inserting autom. random signature

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Inserting autom. random signature
Newsgroups: gnu.emacs.gnus
Date: Mon, 19 Feb 2001 20:31:29 GMT
Marcus Raitner writes:
Not quite what your are looking for, but you should have a look at "random-sig" from Matt Simmons page

http://home.netcom.com/~simmonmt/

HTH,

Marcus

P.S. The following signature was produced with the package mentioned above.


there was a package done for unix some time ago called "yow" (the emacs "zippy" feature) that randomly selected stuff from a zippy file. the caveat was that the software (yow) used a 16bit (unix) random function. it worked fine if your file of random quotations was less than 64kbytes ... but if it was larger you only got stuff from the first 64kbytes of the file.

see zippy/yow in the emacs documentation ... or try m-x yow.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Inserting autom. random signature

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Inserting autom. random signature
Newsgroups: gnu.emacs.gnus
Date: Mon, 19 Feb 2001 20:48:24 GMT
Anne & Lynn Wheeler writes:
there was a package done for unix some time ago called "yow" (the emacs "zippy" feature) that randomly selected stuff from a zippy file. the caveat was that the software (yow) used a 16bit (unix) random function. it worked fine if your file of random quotations was less than 64kbytes ... but if it was larger you only got stuff from the first 64kbytes of the file.

see zippy/yow in the emacs documentation ... or try m-x yow.


here is defining signature hook:

(setq mail-signature-hook 'mail-signature-hook-fn)

here is a signature hook with my own yow file

(defun mail-signature-hook-fn ()
  (interactive)
  (goto-char (point-max))                 ; append at the end of the message
  ;; -f points yow at my own (larger) file of sayings
  (progn (shell-command "yow -f ~/myfile.yow" t) t))

for the above, I modified the yow source because my file of sayings was much larger than the zippy file.

this was the original zippy implementation

(progn (insert "Words of wisdom from Zippy: \n") (shell-command "yow" t) t))

example post (from the past)

https://www.garlic.com/~lynn/93.html#5

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Z/90, S/390, 370/ESA (slightly off topic)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Mon, 19 Feb 2001 21:37:36 GMT
Bill Becker writes:
Steve, how's your VM trivia memory? Have you ever heard of "The Wheeler Scheduler?"

other resource manager tidbits

the month before the resource manager was announced and released, the cambridge site was in the category of applications/software where employees that developed products that were licensed to customers got a reward of the first two months' licensing fees (for the first two years of the product). two weeks prior to the announce and availability of the resource manager, cambridge was reclassified so that for any further products that came out of cambridge, the responsible employees were no longer eligible for the first two months' licensing fees.

i was the person responsible for the resource manager and was the person that did the design, development, product test, business cases, pricing, documentation, and field support (a carry-over from the previous category: i was 1st, 2nd, 3rd, & only product support for the first six months of the product's life, which was customary under the previous program guidelines, where the employee(s) also got the first two months' license fee of every install for the first two years).

in any case, the monthly license fee was $999 and well before the product had been available for six months, it had passed the 1000 license level.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Disks size growing while disk count shrinking = bad performance

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disks size growing while disk count shrinking = bad performance
Newsgroups: comp.arch.storage
Date: Tue, 20 Feb 2001 02:38:57 GMT
Paul Rubin <phr-n2001@nightsong.com> writes:
But it's different. Most disk drives will work fine till they eventually fail catastrophically. Tapes tend to fail more gracefully. You can clone them once they start getting more soft errors. Sometimes they do fail catastrophically but not as often as disks do. With your data on non-redundant tapes, you're taking a chance, but less of a chance than with discs.

tape helical scan technology especially uses techniques similar to CDROMs ... low-level reed-solomon ECC encoding with very wide interleaving. helical scan also tends to have redundant recording.

basically they handle long sequences of contiguous bits all in error, as well as half (or more) of the total bits in a region being in error. part of the interleaving strategy addresses the various failure modes involving sequences of contiguous error bits.

the traditional cdrom example is to take a nail and scratch it across the recording surface of the CD.
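
a toy python sketch of why wide interleaving helps ... codeword symbols are written row-by-row and recorded column-by-column, so a long contiguous burst on the medium lands as only a few symbol errors in each codeword, which the reed-solomon layer can then correct (sizes here are toy values):

def interleave(codewords):
    # write codewords as rows, read the medium column-by-column
    out = []
    for col in range(len(codewords[0])):
        for cw in codewords:
            out.append(cw[col])
    return out

codewords = [list(range(i * 10, (i + 1) * 10)) for i in range(4)]   # 4 codewords x 10 symbols
stream = interleave(codewords)              # order as recorded on the medium
burst = set(stream[8:16])                   # 8 consecutive symbols destroyed
print([sum(1 for s in cw if s in burst) for cw in codewords])       # [2, 2, 2, 2]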

my biggest problem with tapes has been operator error.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

36-bit MIME types, PDP-10 FTP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 36-bit MIME types, PDP-10 FTP
Newsgroups: alt.sys.pdp10,alt.folklore.computers
Date: Tue, 20 Feb 2001 04:52:19 GMT
Ric Werme writes:
I have a copy of the conversion docs people used to implement the first TCP/IPs at work. I think the date is 1978 or 1980. I don't think there was ever a "switchover"; for a while I think both the protocols and hardware were supported. However, I was completely uninvolved with all that.

The NCP addresses were 8 bits - 6 bits for IMP node number and 2 bits for the host hanging off the IMP. E.g. at C-MU we had one IMP and two PDP-10's named CMU-10A and CMU-10B. The other two ports were unconnected, though C.mmp (multi-mini processor) may have been connected after I left.


cut-over was jan 1st, 1983 ...

random refs ... including some announcements before & after cut-over:
https://www.garlic.com/~lynn/2000e.html#18
https://www.garlic.com/~lynn/internet.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Disks size growing while disk count shrinking = bad performance

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disks size growing while disk count shrinking = bad performance
Newsgroups: comp.arch.storage
Date: Tue, 20 Feb 2001 04:55:19 GMT
Malcolm Weir writes:
I need to store 26MB/sec, 3600 seconds per hour, 24 hours per day, for 30 days. On day 31, I delete it. The chance of needing to replay that data is slim, but obviously critical when I do need to replay it.

we were once called in to look at an application that did continuous data acquisition for years at a time ... although after a little processing it was reduced to about 8mbytes/sec (not the 26mbytes/sec you are looking at) ... continuous, sustained, effectively for possibly years at a time, that had to be stored. it started with striping across 10 disks (excess capacity because several applications would be accessing the data after it had been acquired ... and before it was aged out to some sort of secondary storage; there was some relief if the applications weren't processing until after acquisition had moved on to the next set of ten drives).

this was late '80s and for some space station iteration. problem may have eased over the years as it is taking longer to get a space station online; although data acquisition requirements may have kept pace with disk technology. nasa continues to be active in mass storage conferences and drive RFPs in the area.

say using ten existing 40gbyte drives (ten-drive set: 400gbytes), 8mbytes per second works out to about 50,000 seconds to fill a set, or almost 14 hrs ... say 12hrs with various filesystem and operational (overhead) characteristics. a month of data then involves 60 ten-drive sets ... or 600 drives. keeping six months online for various application activities, etc., is 360 ten-drive sets, or 3600 drives.
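
the arithmetic above, as a quick python sketch (the numbers are just the ones from the text, nothing vendor- or implementation-specific):

# back-of-the-envelope for the scenario above
drive_gbytes = 40            # one existing drive
set_drives   = 10            # drives striped per set
rate_mbytes  = 8             # sustained acquisition rate, mbytes/sec

set_gbytes   = drive_gbytes * set_drives        # 400 gbytes per ten-drive set
fill_secs    = set_gbytes * 1000 / rate_mbytes  # ~50,000 seconds to fill a set
print(fill_secs / 3600)                         # ~13.9 hrs raw; call it 12 hrs usable

usable_hrs     = 12
sets_per_month = 30 * 24 / usable_hrs           # 60 ten-drive sets per month
print(sets_per_month * set_drives)              # 600 drives for a month of data
print(6 * sets_per_month * set_drives)          # 3600 drives for six months online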

"parity" drive for each ten-drive set adds 10% to the total number of disks (say 4000 drives for six months of data). Mirroring each 10-drive set doubles the number of drives ... six months of data then involves 7200 drives

Even at drives with 800,000 MTBF ... with 7200 drives ... & simple uniform distribution then say half dozen or so drives fail each month (out of 7200).
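
same style of sketch for the failure-rate comment (assuming the 800,000 figure is in hours, per the usual drive-MTBF convention, and a constant failure rate ... which is a simplification):

# expected drive failures per month with a constant-failure-rate assumption
drives      = 7200           # mirrored six-month configuration above
mtbf_hours  = 800000         # per-drive MTBF, assumed to be in hours
month_hours = 30 * 24

drive_hours = drives * month_hours       # total drive-hours of exposure per month
print(drive_hours / mtbf_hours)          # ~6.5 ... i.e. half a dozen or so per month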

Maybe a ten-drive set per tower or rank drawer. Say six drawers per rack ... then there are 1200 racks for 7200 drives ... good-sized room.

an implementation could reasonably use a 6+1 strategy where data was kept online for six months but there was an extra month of disks to handle archiving of data in parallel with new data acquisition. I think the hope is that secondary storage becomes available with some really dense holographic or atomic recording.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Z/90, S/390, 370/ESA (slightly off topic)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Tue, 20 Feb 2001 14:40:56 GMT
vrenios@enuxsa.eas.asu.edu (Alex Vrenios) writes:
I know that the differences between an emulator and a simulator are fuel for a holy war so I won't argue beyond this post. It might help me, however, if you quote your source. (You might also consider the above definition in light of the terminal emulation software that IBM sells to make your PC look like a 3270.)

note that 360s & 370s themselves were mostly 360/370 simulators, i.e. the native computer engine had "microcode" that simulated 360/370s. the various 360 hardware "emulators" for non-360 architectures were frequently just a different "microcode" simulating that architecture.

as to 3270 emulation, there is frequently a DCA adapter card which handles the 3270 coax protocol.

note the recent posting to this thread describing the 158 "horizontal microcode" engine with both 370 microcode and channel hardware microcode ... and that the 303x channel directors were 158 engines w/o the 370 microcode and the 3031 was a 158 engine w/o the channel hardware microcode.

refs:
https://www.garlic.com/~lynn/2001b.html#69

simulators for 370-on-370 (i.e. simulating the architecture on itself) tend to be in the 10:1 thruput range ... full instruction simulation of a 360 running on a 360 (and various vertical microcode engines) takes roughly ten instructions for every simulated instruction. vs/repack (a product from CSC) did 360 full instruction simulation in order to trace instruction & storage refs ... and then used the information for optimizing the application. it ran at a 10:1 ratio.
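
a toy sketch of what full instruction simulation looks like (not real 360/370 code; the point is just that every simulated instruction costs a fetch, decode, dispatch, trace, and pc update on the host ... which is roughly where 10:1 ratios come from):

# toy instruction simulator with vs/repack-style instruction & storage ref tracing
def simulate(memory):
    trace, pc, acc = [], 0, 0
    while True:
        op, arg = memory[pc]                 # fetch
        trace.append(("iref", pc))           # record instruction reference
        if op == "LOAD":                     # decode & dispatch
            trace.append(("sref", arg))      # record storage reference
            acc = memory[arg]
        elif op == "ADD":
            trace.append(("sref", arg))
            acc += memory[arg]
        elif op == "HALT":
            return acc, trace
        pc += 1                              # update simulated PC

# hypothetical three-instruction program with data at locations 10 and 11
mem = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", 0), 10: 5, 11: 7}
print(simulate(mem))                         # (12, [... ref trace used for optimizing ...])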

the 370 145/148 were among the most frequently microcoded engines. the "APL" microcode ran APL about 10 times faster than native(?) 370 (APL on a 145 w/microcode assist ran at about the same thruput as APL on a 168 w/o microcode assist).

ECPS for the 138/148 got a 10:1 performance boost for 370 instruction sequences dropped into "microcode".

refs:
https://www.garlic.com/~lynn/94.html#21
https://www.garlic.com/~lynn/94.html#27
https://www.garlic.com/~lynn/94.html#28

random other refs:
https://www.garlic.com/~lynn/94.html#7
https://www.garlic.com/~lynn/2000.html#12
https://www.garlic.com/~lynn/2000c.html#50
https://www.garlic.com/~lynn/2000c.html#76

there were other kinds of "microcode" stuff. for the boston programming center's conversational PL/I there was a microcode assist for the 360/50.

things got a bit more interesting when you got to 3081 and there was support for paging microcode from the service processor's disk drive.

total aside, then for 3090 the service processor was a pair of 4361s running a highly modified version of VM/370 release 6 ... using CMS & IOS3270 for the service panels.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Disks size growing while disk count shrinking = bad performance

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disks size growing while disk count shrinking = bad performance
Newsgroups: comp.arch.storage
Date: Tue, 20 Feb 2001 15:01:07 GMT
Anne & Lynn Wheeler writes:
Even at drives with 800,000 MTBF ... with 7200 drives ... & simple uniform distribution then say half dozen or so drives fail each month (out of 7200).

Maybe a ten-drive set per tower or rank drawer. Say six drawers per rack ... then there are 1200 racks for 7200 drives ... good-sized room.


oops ... six drawers @10drives ... is 60 drives per rack ... that should be 120 racks not 1200 racks. also it reads more like six or so drives could fail in a month ... not necessarily that there would be six drives failing every month.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

what makes a cpu fast

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what makes a cpu fast
Newsgroups: comp.arch
Date: Tue, 20 Feb 2001 22:13:51 GMT
cecchi@signa.rchland.ibm.com (Del Cecchi) writes:
And since each plant has a local copy of the mask, before the wafers are exposed the operator has to check if it is the current copy or if there is an updated copy available. This is done at IBM by an internal network called VNET. --

Del Cecchi cecchi@rchland


note that DNS works that way also .... with a rather large hierarchical organization for caching local copies ... and if there isn't a local copy ... contacting some higher authority ... possibly until you arrive at the root domain name system authorities for the internet. the DNS rules about use of local copies are much fuzzier, with time-out parameters (TTLs) on the locally cached information.
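
a minimal sketch of the time-out/local-copy idea (just the general caching pattern, not a real dns resolver; the names here are made up):

import time

cache = {}   # name -> (value, expires_at)

def lookup(name, authority):
    """use the locally cached copy if it is still fresh, else ask the higher authority."""
    entry = cache.get(name)
    if entry and entry[1] > time.time():
        return entry[0]                        # fresh local copy
    value, ttl = authority(name)               # contact some higher authority
    cache[name] = (value, time.time() + ttl)   # keep a local copy until the time-out runs out
    return value

def toy_authority(name):                       # stand-in for a parent name server
    return "10.0.0.1", 300                     # address plus a 300-second time-out

print(lookup("example.test", toy_authority))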

various distributed file systems tended to keep track of who had local copies, and invalidation signals were sent out when the data changed.

directory-based memory consistency ... i have one of gustavson's early copies of the SCI directory-based cache management from when I was going to HiPPI, FCS, and SCI meetings.
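
a toy sketch of the directory/invalidation idea common to the distributed-file-system and directory-based-consistency cases (not SCI's actual coherence protocol):

# the directory remembers which nodes hold a local copy; a write invalidates them
class Directory:
    def __init__(self):
        self.value = 0
        self.holders = set()          # nodes currently holding a local copy

    def read(self, node):
        self.holders.add(node)        # track who has a cached copy
        return self.value

    def write(self, node, value):
        for other in self.holders - {node}:
            print("invalidate copy at", other)   # send invalidation signals
        self.holders = {node}
        self.value = value

d = Directory()
d.read("A"); d.read("B")
d.write("A", 42)                      # prints: invalidate copy at B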

convex, DG, and sequent all used SCI for their NUMA implementations.

http://www.SCIzzL.com/

updated reference on SCI for NUMA


http://www.SCIzzL.com/ccNUMA_tutorial.html
https://web.archive.org/web/20020208072800/http://www.scizzl.com/ccNUMA_tutorial.html

random ref:
https://www.garlic.com/~lynn/96.html#25
https://www.garlic.com/~lynn/96.html#8
https://www.garlic.com/~lynn/98.html#40

from long ago and far away


From: DBG@SLACVM.SLAC.Stanford.EDU
Subject: Some online SCI documents

-
What is SCI?
The Scalable Coherent Interface, IEEE Std 1596-1992.
-
a STANDARD to enable smooth system growth with
MODULAR COMPONENTS from many vendors
1 GigaByte/second/processor system flux,
DISTRIBUTED SHARED MEMORY, & optional
CACHE COHERENCE, directory-based, &
MESSAGE PASSING mechanisms too.
SCALABLE from 1 through 64K processors.
16-bit 2ns unidirectional data links, for building
SWITCH NETWORKS of finite VLSI, and cheap
REGISTER INSERTION RINGS for PCs, workstations, etc.
Serial FIBER OPTIC links, Co-axial links too.
INTERFACE MECHANISMS to other buses (P1596.1 VME Bridge in progress),
an I/O and CSR ARCHITECTURE (IEEE Std 1212-1991) shared with
Futurebus+ (IEEE Std 896.x-1991) and SerialBus (P1394).
-
Current status:  the base standard is now approved by the IEEE as
IEEE Std 1596-1992. It originally went out for official ballot in
late January 91. Voters approved it by a 92% affirmative vote that
ended April 15. Final corrections and polishing were done, and
the revised draft was recirculated to the voters again and passed.
Draft 2.00 was approved by the IEEE Standards board on
18 March 1992. Pre-publication copies of the standard are
available from the IEEE Service Center, Piscataway, NJ, (800)678-4333.


Commercial products to support and use SCI are already in final design and simulation, so the support chips should be available soon, 3Q92.
-
SCI-related documents are available electronically via anonymous FTP from HPLSCI.HPL.HP.COM, except for a few documents which are paper only. Online formats are Macintosh Word 4 (Compacted, self expg) and PostScript. The PostScript includes Unix compressed and uncompressed forms. Paper documents can be ordered from Kinko's 24hr copy Service, Palo Alto, California, (415)328-3381. Various payment forms can be arranged. Newcomers should order the latest mailing plus the package NEW, which contains the most essential documents from previous mailings. SCI depends on the IEEE 1212 CSR Architecture as well, so you will also need a copy of that, which is available from the IEEE Service Ctr.
-
Send your name, mailing address, phone number, fax number, email address, to me and I will put you on a list of people to be notified when new mailings are available; you will also be listed in an occasional directory of people who are participating in or observing SCI development.
-
Contact:
David B. Gustavson
IEEE P1596 Chairman
Stanford Linear Accelerator Center
Computation Research Group
P.O.Box 4349, Bin 88
Stanford, CA 94309
415-926-2863 or dbg@slacvm.slac.stanford.edu

An SCI Extensions Study Group has been formed to consider what SCI-related extensions to pursue and how to organize them into standards. Related standards projects:

1212: Control and Status Register Architecture. This specification defines the I/O architecture for SCI, Futurebus+ (896.x) and SerialBus (P1394). Chaired by David V. James, Apple Computer, dvj@apple.com, 408-974-1321, fax 408-974-0781. An approved standard as of December 1991. Being published by the IEEE.

P1596.1: SCI/VME Bridge. This specification defines a bridge architecture for interfacing VME buses to an SCI node. This will provide early I/O support for SCI systems via VME. Products are likely to be available in 1992. Chaired by Bjorn Solberg, CERN, CH-1211 Geneva 23, Switzerland. bsolberg@dsy-srv3.cern.ch, ++41-22-767-2677, fax ++41-22-782-1820.

P1596.2: Cache Optimizations for Large Numbers of Processors using the Scalable Coherent Interface. Develop request combining, tree-structured coherence directories and fast data distribution mechanisms that may be important for systems with thousands of processors, compatible with the base SCI coherence mechanism. Chaired by Ross Johnson, U of Wisconsin, ross@cs.wisc.edu, 608-262-6617, fax 608-262-9777.

P1596.3: Low-Voltage Differential Interface for the Scalable Coherent Interface. Specify low-voltage (less than 1 volt) differential signals suitable for high speed communication between CMOS, GaAs and BiCMOS logic arrays used to implement SCI. The object is to enable low-cost CMOS chips to be used for SCI implementations in workstations and PCs, at speeds of at least 200 MBytes/sec. This work seems to have converged on a signal swing of 0.25 V centered on +1 V. Chairman is Stephen Kempainen, National Semiconductor, 408-721-2836, fax 408-721-7218. asdksc@tevm2.nsc.com

P1596.4: High-Bandwidth Memory Interface, based on SCI Signalling Technology. Define a high-bandwidth interface that will permit access to the large internal bandwidth already available in dynamic memory chips. The goal is to increase the performance and reduce the complexity of memory systems by using a subset of the SCI protocols. Started by Hans Wiggers of Hewlett Packard, current chairman is David Gustavson, Stanford Linear Accelerator Center, 415-961-3539, fax 415-961-3530.

P1596.5: Data Transfer Formats Optimized for SCI. This working group has defined a set of data types and formats that will work efficiently on SCI for transferring data among heterogeneous processors in a multiprocessor SCI system. The working group has finished, voting to send the draft out for sponsor ballot. Chairman is David V. James, Apple Computer, dvj@apple.com, 408-974-1321, fax 408-974-0781.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

