List of Archived Posts

2006 Newsgroup Postings (05/01 - 05/13)

The Pankian Metaphor
Sarbanes-Oxley
The Pankian Metaphor
Spoofing fingerprint scanners - NEWBIE()
Mainframe vs. xSeries
Transition of platforms in british education
The Pankian Metaphor
Transition of platforms in british education
Heating
Hadware Support for Protection Bits: what does it really mean?
Hadware Support for Protection Bits: what does it really mean?
Google is full
Mainframe near history (IBM 3380 and 3880 docs)
Multi-layered PKI implementation
Value of an old IBM PS/2 CL57 SX Laptop
rexx or other macro processor on z/os?
blast from the past on reliable communication
blast from the past on reliable communication
blast from the past on reliable communication
blast from the past on reliable communication
blast from the past on reliable communication
blast from the past on reliable communication
virtual memory
Virtual memory implementation in S/370
Virtual memory implementation in S/370
Benefits of PKI - 5,000 nodes organization
11may76, 30 years, (re-)release of resource manager
Really BIG disk platters?
virtual memory
Which entry of the routing table was selected?
virtual memory
virtual memory
virtual memory
virtual memory
TOD clock discussion
TOD clock discussion
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Mon, 01 May 2006 08:49:50 -0600
jmfbahciv writes:
How was that done? Was there a preprocessor that pretended to execute the control file and logged all devices needed? How did this determine core usage?

a trivial preprocessor would open some number of default files with some amount of default allocation ... and then hope that most applications would never exceed the default ... somewhat like a lot of desktop applications. lots of the "system monitors" from the period implemented that sort of paradigm for one reason or another. a fairly representative case that is still around is CICS, which provides a "light-weight" transaction subsystem environment. most CICS light-weight transactions require only a trivial amount of system resources (much more akin to many interactive commands in other kinds of interactive systems). CICS acquires a large block of system resources up front and then manages them internally for light-weight transaction operation (w/o having to go thru the normal system resource management processes for every light-weight transaction).
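a rough sketch of that pattern (python, purely illustrative ... the names and numbers are made up, this isn't anything like actual CICS code): the subsystem grabs a block of resources from the system once, and light-weight transactions are then satisfied out of the internal pool w/o going back thru the system allocator.

# Illustrative sketch (not CICS code): a subsystem pre-allocates a block of
# "system" resources at startup and parcels them out to light-weight
# transactions internally, avoiding per-transaction system allocation.

class SubsystemPool:
    def __init__(self, system_alloc, block_size):
        # one expensive call to the system allocator at startup
        self._free = [system_alloc() for _ in range(block_size)]
        self._in_use = set()

    def acquire(self):
        # light-weight path: no system call, just pop from the internal pool
        if not self._free:
            raise RuntimeError("pool exhausted -- would need another system request")
        res = self._free.pop()
        self._in_use.add(res)
        return res

    def release(self, res):
        self._in_use.discard(res)
        self._free.append(res)

# hypothetical "system" allocator standing in for an expensive OS request
def system_alloc(counter=[0]):
    counter[0] += 1
    return f"buffer-{counter[0]}"

pool = SubsystemPool(system_alloc, block_size=8)
r = pool.acquire()      # fast path, handled entirely inside the subsystem
pool.release(r)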

part of all this started way back in the days of the real storage paradigm and much scarcer resources. major applications could require both tapes as well as disks to be mounted on drives ... potentially all available drives (nearly all available resources). with PCP (no multiprogramming) ... there was whatever real storage wasn't required by the system; the application either fit or it didn't.

later MFT & MVT supported multiprogramming. regions with a max amount of allocated real storage were defined. "jobs" had a max real storage requirement which then mapped to a job "class" that would feed to a processing "region" that allowed at least that amount of real storage.
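a rough sketch of that class/region mapping (python, purely illustrative ... the class names, region names and storage sizes are made up, not actual MFT/MVT definitions):

# Illustrative sketch (assumed names/sizes, not actual MFT/MVT internals):
# a job's declared max real-storage requirement maps to a job class, and
# each class feeds a processing region with at least that much storage.

REGIONS = {          # region name -> real storage allocated (KB)
    "A": 64,
    "B": 128,
    "C": 256,
}

CLASS_TO_REGION = {"small": "A", "medium": "B", "large": "C"}

def classify(job_storage_kb):
    if job_storage_kb <= 64:
        return "small"
    if job_storage_kb <= 128:
        return "medium"
    if job_storage_kb <= 256:
        return "large"
    raise ValueError("job exceeds any defined region")

def route(job_storage_kb):
    cls = classify(job_storage_kb)
    region = CLASS_TO_REGION[cls]
    assert REGIONS[region] >= job_storage_kb   # region always large enough
    return region

print(route(100))   # -> "B"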

the "x37" processing aborts (abends) were when resources ran out. default was to take default system termination. however, applications could set up traps for nearly every kind of abort and attempt remedial action.

this was more typical for service oriented applications that provided online support ... and had application specific programming for recovery from numerous kinds of anomalous operation ... as part of providing availability. other events were things that had specific deadlines and requirements to be run ... say like periodic payroll for a large organization ... where tens or hundreds of thousands (or even millions) of people were expecting to be paid on a specific date.

a typical payroll application might have various kinds of processing plus a sort phase before printing checks. a mission critical application might have a B37 abort trap provided (all available disk space exhausted). the B37 trap might do things like invoke system backup/archive/removal of non-critical files (i.e. the files would still be listed in the system directory but have been moved to some other media). then the B37 trap would restart processing in an attempt to complete.
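a rough sketch of that recovery pattern (python, purely illustrative ... OutOfSpace and archive_non_critical_files are stand-ins, not actual OS/360 abend handling): trap the out-of-space failure, free up space, and restart the step.

# Illustrative sketch of the recovery pattern described above: trap the
# disk-space-exhausted failure, remediate, then restart the step instead
# of taking the default system termination.

class OutOfSpace(Exception):
    """stands in for a B37 abend (disk space exhausted)"""

def archive_non_critical_files():
    # hypothetical remediation: move non-critical files to other media,
    # leaving only catalog entries behind
    print("archiving non-critical files to free space")

def run_with_b37_trap(step, max_retries=3):
    for attempt in range(max_retries):
        try:
            return step()
        except OutOfSpace:
            # the "trap": remediate and restart instead of terminating
            archive_non_critical_files()
    raise RuntimeError("still out of space after remediation")

attempts = {"n": 0}
def sort_step():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise OutOfSpace()        # first try hits the disk-full condition
    return "sort complete"

print(run_with_b37_trap(sort_step))   # archives, retries, then completes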

ten years ago, i did a comparison of moving such an application to a platform that had much less mission-critical derived infrastructure. it was put into regular use ... but one run happened to exhaust available disk space (ran into a disk full condition) during sort processing. the system file processing reflected end-of-file to sort ... which then continued processing using an abbreviated set of entries. the final result had a file listing only about ten percent of the total people ... but completed w/o any obvious error condition.

... a little drift about the CICS subsystem monitor. it had been developed at a customer site in the midwest (possibly a utility company or some such). IBM decided to turn it into a standard product offering. A few customers were selected to "beta-test" the product before general availability. One was the university library that had an ONR grant to do an online "card" catalog. As a result, one of the other things I got to work on as an undergraduate was debugging various early CICS problems (primarily related to the library attempting to define CICS configuration & use that hadn't been done before). misc. past posts mentioning CICS:
https://www.garlic.com/~lynn/94.html#33 short CICS story
https://www.garlic.com/~lynn/97.html#30 How is CICS pronounced?
https://www.garlic.com/~lynn/98.html#33 ... cics ... from posting from another list
https://www.garlic.com/~lynn/99.html#58 When did IBM go object only
https://www.garlic.com/~lynn/99.html#130 early hardware
https://www.garlic.com/~lynn/99.html#218 Mainframe acronyms: how do you pronounce them?
https://www.garlic.com/~lynn/2000b.html#41 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000c.html#35 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2000c.html#45 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#52 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#54 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001.html#51 Competitors to SABRE?
https://www.garlic.com/~lynn/2001.html#62 California DMV
https://www.garlic.com/~lynn/2001d.html#56 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001g.html#24 XML: No More CICS?
https://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#49 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001j.html#16 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#24 Parity - why even or odd (was Re: Load Locked
https://www.garlic.com/~lynn/2001j.html#25 Parity - why even or odd (was Re: Load Locked
https://www.garlic.com/~lynn/2001k.html#51 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#20 mainframe question
https://www.garlic.com/~lynn/2001l.html#43 QTAM (was: MVS History)
https://www.garlic.com/~lynn/2001m.html#43 FA: Early IBM Software and Reference Manuals
https://www.garlic.com/~lynn/2001n.html#0 TSS/360
https://www.garlic.com/~lynn/2001n.html#11 OCO
https://www.garlic.com/~lynn/2001n.html#36 Movies with source code (was Re: Movies with DEC minis)
https://www.garlic.com/~lynn/2001n.html#62 The demise of compaq
https://www.garlic.com/~lynn/2002.html#1 The demise of compaq
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#45 VM and/or Linux under OS/390?????
https://www.garlic.com/~lynn/2002d.html#19 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002e.html#62 Computers in Science Fiction
https://www.garlic.com/~lynn/2002g.html#78 Is it safe to use social securty number as intranet username?
https://www.garlic.com/~lynn/2002h.html#63 Sizing the application
https://www.garlic.com/~lynn/2002i.html#9 More about SUN and CICS
https://www.garlic.com/~lynn/2002i.html#11 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#50 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#50 SHARE Planning
https://www.garlic.com/~lynn/2002j.html#53 SHARE Planning
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#68 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003.html#13 FlexEs and IVSK instruction
https://www.garlic.com/~lynn/2003e.html#69 Gartner Office Information Systems 6/2/89
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
https://www.garlic.com/~lynn/2003i.html#5 Name for this early transistor package?
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003j.html#24 Red Phosphor Terminal?
https://www.garlic.com/~lynn/2003j.html#68 Transactions for Industrial Strength Programming
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003l.html#9 how long does (or did) it take to boot a timesharing system?
https://www.garlic.com/~lynn/2003n.html#29 Architect Mainframe system - books/guidenance
https://www.garlic.com/~lynn/2003p.html#23 1960s images of IBM 360 mainframes
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004.html#51 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004c.html#10 XDS Sigma vs IBM 370 was Re: I/O Selectric on eBay: How to use?
https://www.garlic.com/~lynn/2004c.html#53 defination of terms: "Application Server" vs. "Transaction Server"
https://www.garlic.com/~lynn/2004g.html#54 effeciently resetting a block of memory
https://www.garlic.com/~lynn/2004m.html#40 Result of STCK instruction - GMT or local?
https://www.garlic.com/~lynn/2004m.html#61 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#0 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#5 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#9 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#16 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#4 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#5 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004p.html#15 Amusing acronym
https://www.garlic.com/~lynn/2005d.html#22 Latest news about mainframe
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005m.html#9 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#44 TLAs - was summit else entirely
https://www.garlic.com/~lynn/2005o.html#45 Article: The True Value of Mainframe Security
https://www.garlic.com/~lynn/2005o.html#46 Article: The True Value of Mainframe Security
https://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold
https://www.garlic.com/~lynn/2005s.html#26 IEH/IEB/... names?
https://www.garlic.com/~lynn/2006c.html#0 Multiple address spaces
https://www.garlic.com/~lynn/2006f.html#22 A very basic question

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Sarbanes-Oxley

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Sarbanes-Oxley
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 01 May 2006 09:49:27 -0600
Chris Mason wrote:
How about passing laws which allow scrutiny of an enterprise's operation much more easily than is currently allowed? That could so increase the deterrent effect that less law-breaking resulted. So by passing such laws some of "them" will be stopped. Historically over the twentieth century, isn't this what has happened?

my claim
https://www.garlic.com/~lynn/2006.html#12a sox, auditing, finding improprieties
https://www.garlic.com/~lynn/2006h.html#33 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#58 Sarvanes-Oxley

was that much of the audit paradigm is looking for inconsistencies ... and that if the auditor is only looking at information coming from a single source .... say the corporate IT operation ... then a reasonably intelligent fraud operation can leverage the IT operation to guarantee that all the information looked at by the auditors is consistent.

frequently, a basic security premise is multiple independent operations. Corporate IT operation can collapse everything into a single operation/source ... which can be leveraged to invalidate basic auditing assumptions. MORE of the same stuff from the same source ... isn't going to create multiple, independent information sources (corporate IT operation can be leveraged to generate as much consistent stuff as needed).

there have been some claims that this inherent flaw in the current auditing operations is somewhat implicitly recognized ... and that is why sarbanes-oxley also has the section on informants (hoping that other sources of information will come forward that can highlight inconsistencies via other means).

any implicit assumption about independent information sources as part of the auditing paradigm (looking for inconsistencies) ... might also imply that auditing would have to find other methods of coming up with independent information sources. for instance, if one corporation lists operations with another corporation ... then the information from all corporations involved in such interactions might need to be validated for consistency. however, that is somewhat of a change to the current auditing paradigm.

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Mon, 01 May 2006 12:55:54 -0600
jmfbahciv writes:
This was always an incorrect assumption. We considered operators to be human.

ref:
https://www.garlic.com/~lynn/2006h.html#56 The Pankian Metaphor

operators were humans that did a certain set of operations with respect to application execution .... but it was a very structured subset ... since they weren't actually the humans directly responsible and/or necessarily really familiar with the application. with limited storage space ... the operators definitely were used to mount tapes and disks on the limited number of drives (i.e. there were significantly larger numbers of application tape reels and disk packs than there were drives).

i've often commented that one of the transitions of time-sharing services to 7x24 operations in the late 60s and early 70s was automating numerous operator functions ... and running the machine room "lights out" for 3rd and 4th shifts.
https://www.garlic.com/~lynn/submain.html#timeshare

also, several years ago, the person running a major financial processing operation commented that the two major things responsible for 100 percent availability over the previous several years were
• ims hot-standby
• automated operator

as hardware and software reliability improved ... the major remaining sources of faults were human (operator) mistakes, physical infrastructure failures, and natural disasters. the ims hot-standby allowed them to have replicated operation at three geographically separated sites.

when my wife had been con'ed into doing a stint in POK in charge of loosely-coupled architecture ... she had developed peer-coupled shared data architecture. there wasn't a lot of initial uptake until the ims hot-standby work and then later parallel sysplex stuff.
https://www.garlic.com/~lynn/submain.html#shareddata

later when we were doing the high availability product (ha/cmp)
https://www.garlic.com/~lynn/subtopic.html#hacmp

we had coined the terms disaster survivability and geographic survivability to differentiate from disaster/recovery.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Spoofing fingerprint scanners - NEWBIE()

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Spoofing fingerprint scanners  - NEWBIE()
Newsgroups: comp.security.misc
Date: Mon, 01 May 2006 13:25:48 -0600
Sebastian Gottschalk writes:
Not actually. It's neither a reliable nor an efficient improvement over one factor authentication and clearly doesn't reach two factor ~. Especially due to error rates.

But "eye" is a good keyword. Iris scanning actually fulfills the something you are factor mantra.


some number of atm operators have been looking at both fingerprint scanning and iris scanning, in place of PIN for two-factor authentication.

from three-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


PIN is a shared-secret something you know in conjunction with the card something you have.
https://www.garlic.com/~lynn/subintegrity.html#secret

the issue is that the shared-secret something you know paradigm has been grossly overworked/over-stressed ... as a result there are some statistics that at least 1/3rd of debit cards have PINs written on them. there is an assumption with multi-factor authentication that the factors are subject to independent vulnerabilities and exploits. obviously writing the PIN on the card defeats any assumption about multi-factor independent vulnerability related to a lost/stolen card.

the argument for allowing a user to choose fingerprint (something you are) in lieu of PIN (something you know) authentication ... is whether it is easier for a crook with a lost/stolen card to "lift" a PIN written on the card and replay the PIN at a terminal ... vis-a-vis "lifting" some possible fingerprint on the card and replaying the fingerprint at a terminal (even allowing a customer to choose a finger that is least likely to have been used in handling their card).

misc. past posts mentioning fingerprint vulnerability vis-a-vis debit cards that have PIN written on them:
https://www.garlic.com/~lynn/99.html#165 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#167 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#172 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/aadsm10.htm#biometrics biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio2 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio3 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#bio6 biometrics
https://www.garlic.com/~lynn/aadsm15.htm#36 VS: On-line signature standards
https://www.garlic.com/~lynn/aadsm19.htm#5 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#47 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#41 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/2002g.html#72 Biometrics not yet good enough?
https://www.garlic.com/~lynn/2002h.html#6 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#8 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#41 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002o.html#62 Certificate Authority: Industry vs. Government
https://www.garlic.com/~lynn/2002o.html#63 Certificate Authority: Industry vs. Government
https://www.garlic.com/~lynn/2002o.html#64 smartcard+fingerprint
https://www.garlic.com/~lynn/2002o.html#65 smartcard+fingerprint
https://www.garlic.com/~lynn/2002o.html#67 smartcard+fingerprint
https://www.garlic.com/~lynn/2003o.html#44 Biometrics
https://www.garlic.com/~lynn/2005g.html#54 Security via hardware?
https://www.garlic.com/~lynn/2005i.html#22 technical question about fingerprint usbkey
https://www.garlic.com/~lynn/2005i.html#25 technical question about fingerprint usbkey
https://www.garlic.com/~lynn/2005m.html#37 public key authentication
https://www.garlic.com/~lynn/2005o.html#1 The Chinese MD5 attack
https://www.garlic.com/~lynn/2005p.html#2 Innovative password security
https://www.garlic.com/~lynn/2005p.html#25 Hi-tech no panacea for ID theft woes
https://www.garlic.com/~lynn/2006d.html#31 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006e.html#21 Debit Cards HACKED now
https://www.garlic.com/~lynn/2006e.html#30 Debit Cards HACKED now
https://www.garlic.com/~lynn/2006e.html#44 Does the Data Protection Act of 2005 Make Sense

other past posts about skimming exploits of magstripe plus PIN (or any other relatively static authentication data that can be subject to replay attack) ... also invalidating any assumptions about multi-factor authentication independent vulnerabilities/exploits/threats
https://www.garlic.com/~lynn/aadsm17.htm#13 A combined EMV and ID card
https://www.garlic.com/~lynn/aadsm17.htm#25 Single Identity. Was: PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#42 Article on passwords in Wired News
https://www.garlic.com/~lynn/aadsm18.htm#20 RPOW - Reusable Proofs of Work
https://www.garlic.com/~lynn/aadsm19.htm#5 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm20.htm#41 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm22.htm#20 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#23 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#29 Meccano Trojans coming to a desktop near you
https://www.garlic.com/~lynn/aadsm22.htm#33 Meccano Trojans coming to a desktop near you
https://www.garlic.com/~lynn/aadsm22.htm#34 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#39 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#40 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#45 Court rules email addresses are not signatures, and signs death warrant for Digital Signatures
https://www.garlic.com/~lynn/aadsm22.htm#47 Court rules email addresses are not signatures, and signs death warrant for Digital Signatures
https://www.garlic.com/~lynn/aadsm23.htm#2 News and Views - Mozo, Elliptics, eBay + fraud, naive use of TLS and/or tokens
https://www.garlic.com/~lynn/2003o.html#37 Security of Oyster Cards
https://www.garlic.com/~lynn/2004g.html#45 command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
https://www.garlic.com/~lynn/2004j.html#12 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004j.html#13 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004j.html#14 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004j.html#35 A quote from Crypto-Gram
https://www.garlic.com/~lynn/2004j.html#39 Methods of payment
https://www.garlic.com/~lynn/2004j.html#44 Methods of payment
https://www.garlic.com/~lynn/2005o.html#17 Smart Cards?
https://www.garlic.com/~lynn/2005p.html#2 Innovative password security
https://www.garlic.com/~lynn/2005p.html#25 Hi-tech no panacea for ID theft woes
https://www.garlic.com/~lynn/2005q.html#11 Securing Private Key
https://www.garlic.com/~lynn/2005t.html#28 RSA SecurID product
https://www.garlic.com/~lynn/2005u.html#13 AMD to leave x86 behind?
https://www.garlic.com/~lynn/2006d.html#31 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006d.html#41 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006e.html#3 When *not* to sign an e-mail message?
https://www.garlic.com/~lynn/2006e.html#4 When *not* to sign an e-mail message?
https://www.garlic.com/~lynn/2006e.html#10 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006e.html#21 Debit Cards HACKED now
https://www.garlic.com/~lynn/2006e.html#24 Debit Cards HACKED now
https://www.garlic.com/~lynn/2006e.html#30 Debit Cards HACKED now
https://www.garlic.com/~lynn/2006e.html#44 Does the Data Protection Act of 2005 Make Sense
https://www.garlic.com/~lynn/2006g.html#38 Why are smart cards so dumb?
https://www.garlic.com/~lynn/2006h.html#13 Security
https://www.garlic.com/~lynn/2006h.html#15 Security
https://www.garlic.com/~lynn/2006h.html#33 The Pankian Metaphor

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframe vs. xSeries

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe vs. xSeries
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 May 2006 11:42:11 -0600
Steve O'Hara-Smith writes:
Perhaps an effect similar to the one that happened at Cambridge, the Titan was heavily secured and attempting to crack it was encouraged to the extent that success was something to openly boast about (and probably got you a job with a first task to close the hole you used and the entire family of related holes). Later the Titan was replaced by a 370 and cracking it was laughably easy - word went out that cracking the 370 was not considered clever (look in these places for well known techniques) and anyone caught trying it was apt to lose all access privileges.

Result very few people attempted to crack the 370 mainly because it was too easy and documented and therefore no challenge (although pretty much every user had some of the useful libraries of tools that depended on walking through the security mechanisms as though they weren't there - but these tools were reliable and didn't risk crashing the system).


misc. historical references of titan & cambridge univ.
http://www.cl.cam.ac.uk/Relics/chron.html
http://www.cl.cam.ac.uk/UoCCL/misc/EDSAC99/history.html
http://www.cs.man.ac.uk/CCS/res/res22.htm

it was a 370/165 installed in 1971 (pre 370 virtual memory).

following has early 70s time-line (at the end of the post)
https://www.garlic.com/~lynn/2001.html#63 Are the L1 and L2 caches flused on a page fault?

retrofitting virtual memory hardware to a 165 ... to turn it into a 165-II, cost a couple hundred thousand.

the cambridge references somewhat mention the difficulty in using (card-punch) JCL and/or "TSO" ... which appears to have been a motivating factor in the university developing the "phoenix command language".

something similar might be said of Stanford Univ. with Orvyl and Wylbur ... minor reference:
http://www-db.stanford.edu/pub/voy/museum/pictures/IBM.html
http://www.stanford.edu/dept/its/communications/history/mainframe/timeline.html

os/360 (or its 370 descendants) weren't particularly noted for integrity infrastructures. in fact, hasp for much of its early life
https://www.garlic.com/~lynn/submain.html#hasp

was started as a normal application, "took over" the interrupt vector and intercepted all the interrupts to filter the stuff it was emulating. part of the issue was that most batch operations were closed operations and the "online" operations tended to be applications that provided only limited and very controlled feature/function.

by comparison, Univ. of Mich. had done MTS (michigan terminal system) utilizing virtual memory on the 360/67 ... and then ported it to 370 when the 370 virtual memory feature became available.

also, there were the commercial time-sharing services using cp67/cms (on 360/67) later moved to virtual memory 370s
https://www.garlic.com/~lynn/submain.html#timeshare

which required a very high level of privilege isolation and protection. similarly, the "cambridge" science center
https://www.garlic.com/~lynn/subtopic.html#545tech

provided online access to various internal corporate operations (including corporate hdqtrs business planning that used the system to analyze the most sensitive of corporate information) as well as providing online access to various professors and students in the boston area (aka the other cambridge).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Transition of platforms in british education

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Transition of platforms in british education
Newsgroups: alt.folklore.computers
Date: Tue, 02 May 2006 21:48:20 -0600
CBFalconer writes:
No need for an antique house for that. Just be a child of the depression, and be used to using blankets, cats, etc. in winter. The first step in the morning is to hop from chair to chair to the kitchen, and start a fire in the wood stove. Prop up the oven door and sit on that until things take hold. You also learn to leave the water running. Windows are opaque from December through March.

wasn't the depression, just rural ... late 50s. the house was heated by a wood stove and one of my jobs was to keep the wood bin filled and also get up at 5:30am and restart the fire (i had an after school job ... so i would typically chop/saw wood after dinner for an hour or two).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Wed, 03 May 2006 10:30:59 -0600
jmfbahciv writes:
Did you use any kind of programming to study this stuff? Note: I know I know nothing about this and have no idea what questions to ask and probably won't understand the answers.

re:
https://www.garlic.com/~lynn/2006i.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/subtopic.html#hacmp

lots of hard work and experience. one of the things done was a detailed vulnerability analysis of tcp/ip ... looking at both the standards documents (RFCs) and the code itself.

for a little drift ... see my IETF RFC index
https://www.garlic.com/~lynn/rfcietff.htm

identified several operational things ... i.e. not coding bugs ... design/implementation problems that could result in live operational failures.

having studied both the standards and the code from the standpoint of detailed vulnerability analysis helped later ... story about a problem that the largest (at the time) online service provider was experiencing:
https://www.garlic.com/~lynn/2005c.html#51 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2006e.html#11 Caller ID "spoofing"

part of the experience was having developed an online failure diagnostic tool in the early 80s
https://www.garlic.com/~lynn/submain.html#dumprx

that attempted to also build a library of failure signatures that could be automatically scanned for (as part of the diagnostic process). this also evolved into looking for common failure features and/or characteristics for grouping types of failures. this was widely deployed thru-out the corporation ... both for internal datacenters and people responsible for shooting customer problems.

also, part of the experience was based on having redone the I/O supervisor for the disk engineering and product test labs in bldg. 14 & 15
https://www.garlic.com/~lynn/subtopic.html#disk

they had development hardware that operated in very strange ways and potentially generated more errors in a few minutes than normal production devices would generate in a year. it was a very operating-system-hostile environment. attempts at using a standard MVS system in that environment resulted in a 15 minute MTBF when testing just a single development device. I had to completely rethink the whole operating system approach with the assumption that it was operating in an extremely hostile environment ... and eventually produced a bullet-proof operating system where they could concurrently test a half-dozen or more development devices.

also in that time-frame we were enhancing the HONE time-sharing service (internal time-sharing services that provided world-wide support for all corporate marketing, sales, and field people). all the US hone datacenters had been consolidated in northern cal. in the mid-70s (and there were also a growing number of cloned datacenters world-wide).
https://www.garlic.com/~lynn/subtopic.html#hone

there was a lot of work that went into attempting to make the HONE service continuous. after a couple years, because of environmental concerns (earthquakes), the US HONE center was replicated first in Dallas (and then a 3rd in boulder) with fall-over and load-balancing across the distributed centers.

in the same period, there was similar work on high availability by various other corporations providing commercial time-sharing services using the same platform
https://www.garlic.com/~lynn/submain.html#timeshare

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Transition of platforms in british education

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Transition of platforms in british education
Newsgroups: alt.folklore.computers
Date: Wed, 03 May 2006 14:39:47 -0600
Anne & Lynn Wheeler writes:
wasn't the depression, just rural ... late 50s. the house was heated by a wood stove and one of my jobs was to keep the wood bin filled and also get up at 5:30am and restart the fire (i had an after school job ... so i would typically chop/saw wood after dinner for an hour or two).

this post somewhat jogged my memory ... i remember the wood stove brand was siegler. slightly smaller than a present day kitchen range. the exterior was lightweight brown tin metal with an interior air space between the outer covering and the actual stove ... which reduced the chance of being burned by touching the stove. it was fed/loaded from the side.

i have some vague recollection of siegler stove advertisements from the 50s with some jingle.

i tried a search engine on siegler ... came up with antique siegler wood stoves, as well as some number of kitchen stoves and oil burning stoves. so far i haven't found anything that looks like the siegler wood stove from the 40s or 50s.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Heating

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Heating
Newsgroups: alt.folklore.computers
Date: Wed, 03 May 2006 16:57:57 -0600
CBFalconer writes:
Having moved recently I now have a well insulated home. I heated it this winter (unusually mild) on about 2 cords [1] of wood in a small airtight stove. When I wished I could let the furnace and thermostat take over. That used about 75 gallons [2] of kerosene.

i remember as a kid that we got a lot of stuff that was effectively tree trunks cut to stove length ... but not split (although old enuf so that it was dry ... no longer green). it was delivered by dump truck in a big pile. I had to use a sledge and wedges to split the trunk pieces into (typically) thirds or quarters. then i could use an axe to split those into even smaller pieces for the stove. I also got some number of smaller tree limbs that I cut to length and then split. mostly pine, which left a lot of chimney crud (some amount of chimney cleaning, and had to be really careful about chimney fires).

1950s stuff ... the house was hardly insulated and, as the previous post mentioned, heated with a siegler wood stove. went through several cords in a winter (although at the moment I don't recollect the typical number).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Hadware Support for Protection Bits: what does it really mean?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hadware Support for Protection Bits: what does it really mean?
Newsgroups: comp.arch,comp.sys.ibm.pc.hardware.chips,alt.os.development,fa.linux.kernel
Date: Wed, 03 May 2006 19:18:00 -0600
"Maxim S. Shatskih" writes:
Setting the dirty bits on DMA is usually the OS's task.

the changed and reference bits are properties of the physical instance of the page.

in the original 360 "key" structure ... cp67 emulated cms protected shared pages by fiddling the storage protect keys (part of the original 360 architecture). the current executing state is carried by the PSW (program status word), which includes a "key" value, and each storage area could have an associated key value. the PSW (application execution) key value had to match the storage key value in order for a store operation to complete (aka not only did storage areas have reference and change state ... but each storage area could also have store and optionally fetch protection ... which also had to be checked on each instruction operation). the supervisor could disable store (& optional fetch) protection by setting the PSW key value to zero (for privileged kernel/supervisor code).

so certain CMS virtual memory pages were defined as shared. the cp67 kernel ... behind the scenes fiddled both the non-shared and shared page protection keys as well as fiddling the PSW for CMS execution ... so that all stores to protected shared pages would fail.
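a rough sketch of the key check being fiddled (python, purely illustrative ... the key values are made up, not the actual cp67 choices): a store completes only if the PSW key is zero (supervisor) or matches the storage key of the target area, so giving protected shared pages a key that the CMS user PSW key never matches causes all stores into them to fail.

# Conceptual sketch of the 360 key check described above (values and names
# are illustrative, not the actual hardware/cp67 implementation).

def store_permitted(psw_key, storage_key):
    # store allowed if running with the supervisor key (0) or matching key
    return psw_key == 0 or psw_key == storage_key

# cp67-style fiddling: protected shared pages get a storage key that the
# key placed in the PSW for CMS user execution will never match, so any
# store into a shared page fails.
USER_PSW_KEY = 14          # assumed value for the virtual machine's user state
SHARED_PAGE_KEY = 15       # assumed value assigned to protected shared pages
PRIVATE_PAGE_KEY = 14      # non-shared pages match the user key

assert store_permitted(USER_PSW_KEY, PRIVATE_PAGE_KEY)        # store allowed
assert not store_permitted(USER_PSW_KEY, SHARED_PAGE_KEY)     # store fails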

along comes 370 ... and in the original 370 virtual memory architecture, a shared segment protection feature was defined. for the morph of cp67/cms (from the 360/67) to vm370/cms (370), the cms layout structure was re-organized so all "shared" pages were located in a 370 segment that could be defined as shared across multiple address spaces ... and in each virtual address space table entry ... the segment protect bit was turned on (preventing all instructions executing in those virtual memories from storing into that segment range of virtual addresses).

at that time, there was still going to be the "key" based storage protection (inherited from 360), the page change and reference state bits, as well as the new 370 virtual memory storage protect mechanism.

some amount of past posts mentioning storage protect operations:
https://www.garlic.com/~lynn/93.html#18 location 50
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/99.html#94 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2000c.html#18 IBM 1460
https://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003m.html#15 IEFBR14 Problems
https://www.garlic.com/~lynn/2004c.html#33 separate MMU chips
https://www.garlic.com/~lynn/2004h.html#0 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004q.html#82 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#84 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#18 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005h.html#17 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries

so several models are chugging along with their hardware implementations. the 370/165 engineers then raise an issue ... they are something like six months behind schedule designing and building the 165 virtual memory hardware retrofit (370/165s were initially shipped w/o virtual memory capability). we have an escalation meeting with architecture and various groups in POK. the 370/165 engineers claim that they can make up the six months if they can drop several of the 370 virtual memory features from the implementation (which also means that all of the other 370 models that have already implemented the features will need to remove them from their implementations). it was eventually decided to go with dropping the features so the 370/165 engineers could gain back the six months.

one of the features that had to be dropped from the original 370 virtual memory architecture was the original virtual memory segment protection. that created a problem for CMS ... since the whole CMS shared page memory protection had been rebuilt around having segment protection (i.e. lots of different applications could share an exact duplicate physical image of a page w/o fear that one application would trounce on it and impact applications running in other virtual memories ... i.e. virtual memory was being used for partitioning and isolation).

so, the vm370/cms group was then forced to retrofit the key-based storage protection hack that had been used in cp67/cms into vm370/cms, in place of shared segment protection.

we go forward a couple years. several of the 370 models come out with instruction microcode performance assists for vm370/cms operation. among the instructions assisted are the PSW and storage key management instructions. however, the assist rules don't have provisions for the fiddling done to the PSW and storage keys by the original hack from CP67 (i.e. if cms applications were to be run with the hardware performance assists they would lose protection of their shared pages).

somebody comes up with a bright idea. at that moment, vm370/cms environments were only single processor machines and cms had only defined 16 shared pages. the idea was that cms applications would be run with the hardware performance assist (with storage protection actually disabled). then every time before the underlying kernel did a task switch from one virtual address space to a different virtual address space, the dispatcher would scan the shared cms pages (that previously had been storage protected) for the change bit. any time such a "protected" shared page was found to be dirty/changed ... the physical copy was flushed and the PTE was marked invalid. the switched-to address space would never see any changes made by an application running in a different address space. the pages were no longer actually physically protected from stores ... however, the scope of any such stores was very limited.

so about the time they were ready to ship this bright new idea ... cms added support for greatly increasing the number of shared pages. the original idea was that the overhead of scanning the dirty bits on 16 shared pages on every task switch was less than the performance improvement gained by using the microcode hardware assist. the problem was that by the time the support shipped, there was always a minimum of 32 shared pages to scan (and frequently a large number more) and the trade-off was no longer valid.

the other problem was adding support for multiprocessing. the original bright idea was based on the fact that while the application was running, it basically had exclusive control and use of the pages. with multiprocessor support, that was no longer true; there was potentially concurrent access to shared pages by the equivalent of one application for each processor. so the bright idea had to be fiddled: as part of multiprocessor support, a unique set of shared pages was defined for each processor. now, as part of doing a task switch, the dispatcher had to scan the shared pages from the previous task looking for modifications (and then flushing and invalidating as needed). in addition, the dispatcher now had to fiddle the virtual memory tables of the new, switched-to task ... so its virtual memory tables pointed to the set of shared pages specific to the real processor that the task was being dispatched on.
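a rough sketch of that dispatcher-side scan (python, purely illustrative ... not the actual vm370 dispatcher code): before switching address spaces, scan the shared pages the outgoing task could have dirtied; any changed page is invalidated so it gets refreshed from disk, and with multiprocessor support each real processor gets its own set of shared pages.

# Conceptual sketch of the task-switch scan described above.

class SharedPage:
    def __init__(self, name):
        self.name = name
        self.changed = False     # hardware change bit (simulated)
        self.valid = True        # page table entry valid bit

def task_switch_scan(shared_pages):
    for page in shared_pages:
        if page.changed:
            # discard the dirty copy; it will be refreshed from disk on next use
            page.valid = False
            page.changed = False

# with multiprocessor support, each real processor gets its own set of
# shared pages; the dispatcher repoints the incoming task's tables at the
# set belonging to the processor it is being dispatched on.
shared_per_cpu = {cpu: [SharedPage(f"cpu{cpu}-page{i}") for i in range(16)]
                  for cpu in (0, 1)}

def dispatch(task, cpu):
    task_switch_scan(shared_per_cpu[cpu])   # scan pages the previous task used
    task["shared_pages"] = shared_per_cpu[cpu]

dispatch({"name": "cms-user"}, cpu=0)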

of course, eventually sharing protection re-appeared as part of the architecture implementation shipped to customers.

misc. past posts on the whole genre of the shared page fiddling (and emulating storage protection by scanning for changed pages and discarding the changed image ... forcing the page to have to be refreshed from disk)
https://www.garlic.com/~lynn/2000.html#59 Multithreading underlies new development paradigm
https://www.garlic.com/~lynn/2003d.html#53 Reviving Multics
https://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?
https://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004p.html#9 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004p.html#10 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004p.html#14 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#20 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#61 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005e.html#53 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#46 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#9 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005j.html#39 A second look at memory access alignment
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005o.html#10 Virtual memory and memory protection
https://www.garlic.com/~lynn/2006.html#13 VM maclib reference
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Hadware Support for Protection Bits: what does it really mean?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hadware Support for Protection Bits: what does it really mean?
Newsgroups: comp.arch,comp.sys.ibm.pc.hardware.chips,alt.os.development,fa.linux.kernel
Date: Wed, 03 May 2006 19:37:16 -0600
"Maxim S. Shatskih" writes:
Setting the dirty bits on DMA is usually the OS's task.

so there is also a separate issue with cp67 and vm370 providing emulated virtual machines.

there are the real storage changed and reference bits associated with the real physical page.

the real kernel uses the real storage changed bits to determine whether the physical instance in real memory is the same or different than the image on disk. when the virtual copy is brought in from disk, the real change bits are set to zero indicating that the copy hasn't been changed since reading from disk. subsequently the application or i/o operations may alter the virtual copy in real memory ... which means that it is no longer the same as the copy on disk.

now there is a problem with simulating a virtual machine environment. the pages in the virtual machine address space have changed and referenced bits managed by the kernel running in the virtual machine. the virtual machine kernel may read a virtual page from its disk into a page ... which changes the real instance of the page with respect to the real kernel. the kernel in the virtual machine will then clear its version of the change (& reference) bit to zero (indicating that the copy in storage is the same as the copy on its disk).

similarly, the real kernel may remove a virtual machine page from real memory to disk. when it brings that virtual machine page back into real memory, it will reset the changed (and reference) bits indicating that the virtual page in storage is still the same as the copy on the real kernel's page disk.

we have one set of real changed and reference bits ... but two different kernels attempting to use them for tracking state about two different things (whether there has been a change from the copy on the virtual kernel's paging disk as well as whether there has been a change from the copy on the real kernel's paging disk).

so the real kernel maintains two sets of "shadow" changed and reference bits, one set for the virtual machine kernel and one set for the real machine kernel. whenever the real kernel changes the real changed and reference bits ... the values for the real page are OR'ed with the value in the virtual machine kernel shadow bits, the real changed and reference bits are cleared to zero ... and the desired value is assigned to the real kernel's shadow bits.

whenever the virtual machine kernel changes a page's reference and change bits ... the values are OR'ed with the value in the real machine kernel shadow bits, the real change and reference bits are cleared to zero ... and the desired value is assigned to the virtual machine kernel's shadow bits.

whenever either kernel interrogates changed & reference bits ... it does it first by first OR'ing the values for the real page with its shadow bits maintained for that particular kernel.

In effect, only a virtual application instruction's explicit store alteration event is used to turn on the real dirty/changed bits (occurring when the instruction modifies something in the range of storage). administrative management of the bits never turns on the bits ... it ONLY zeros the real bits (after first interrogating and OR'ing the bits as appropriate with the appropriate software maintained shadow bits).
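a rough sketch of that shadow-bit bookkeeping (python, purely illustrative ... not actual cp67/vm370 code): each kernel gets its own shadow pair, the real bits are OR'ed into the other kernel's shadow before being cleared, and a kernel interrogating the bits sees the real bits OR'ed with its own shadow.

# Conceptual sketch: one pair of real change/reference bits shared between
# two kernels, with a software-maintained shadow pair per kernel.

class PageBits:
    def __init__(self):
        self.real = {"changed": False, "referenced": False}   # hardware bits
        self.shadow = {
            "virtual": {"changed": False, "referenced": False},
            "real_kernel": {"changed": False, "referenced": False},
        }

    def reset_by(self, kernel, new_value=False):
        other = "real_kernel" if kernel == "virtual" else "virtual"
        # preserve what the other kernel hasn't seen yet, then clear the real bits
        for bit in self.real:
            self.shadow[other][bit] |= self.real[bit]
            self.real[bit] = False
        # the resetting kernel's own shadow takes the value it asked for
        for bit in self.shadow[kernel]:
            self.shadow[kernel][bit] = new_value

    def interrogate_by(self, kernel):
        # a kernel sees the real bits OR'ed with its own shadow bits
        return {bit: self.real[bit] | self.shadow[kernel][bit]
                for bit in self.real}

p = PageBits()
p.real["changed"] = True                 # an application store set the real bit
p.reset_by("virtual")                    # guest kernel clears "its" change bit
print(p.interrogate_by("real_kernel"))   # real kernel still sees the change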

some past posts discussing management of shadow change & reference bits as part of virtual machine emulation:
https://www.garlic.com/~lynn/95.html#2 Why is there only VM/370?
https://www.garlic.com/~lynn/2005h.html#17 Exceptions at basic block boundaries

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Google is full

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google is full
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 04 May 2006 08:51:21 -0600
Phil Payne wrote:
OK - here's another link -
http://www.theregister.co.uk/2006/05/04/google_bigdaddy_chaos/

It seems that everyone insists on reinventing wheels. Apart from anything else, this is a classic capacity planning failure. I wonder what would happen to any zSeries capacity planner whose work was so bad that the CEO had to apologise in the New York Times?

Google might have gotten lots of cheap MIPS on its distributed platforms, but it obviously didn't get the scalability it requires and now it has a computing system that can't keep up with its business plan. When was the last time this happened to a zSeries user? Scalability? I think that's one of the boxes zSeries can tick.


i was at a presentation several months ago about some of the google activity. one claim was that they had reduced the cost of a supercomputer by at least 2/3rds ... a supercomputer here being defined as a GRID-type operation with lots and lots of MIPS and DISK packed into a small space. they had fine tuned the packaging and construction and were able to do it for 1/3rd the cost that you would normally pay for racks & racks of densely packed processors and disk drives (tens of thousands of each).

i've always seen a fairly high percentage of hits on our garlic web pages from search engines ... as well as other sources ... eserver magazine even did an article on our garlic web pages last year
https://web.archive.org/web/20190524015712/http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/

more recently somebody in comp.arch commented on the subject of "garlic" coming up in a dog&pony show at almaden
https://www.garlic.com/~lynn/2006h.html#35 64-bit architectures & 32-bit insturctions

I've even noticed that the rate of google hits seemed to have doubled over the past several months. I've somewhat suspected that the search engines were using our garlic pages as test cases because of the extremely high proportion of hrefs. the ietf rfc index
https://www.garlic.com/~lynn/rfcietff.htm

and the merged taxonomies and glossaries html files
https://www.garlic.com/~lynn/index.html#glosnote

are extremely dense with hrefs (primarily because the information is maintained in a very complex knowledge base and the html files are generated by an application).

Mainframe near history (IBM 3380 and 3880 docs)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe near history (IBM 3380 and 3880 docs)
Newsgroups: bit.listserv.ibm-main,comp.sys.unisys,comp.lang.asm370,comp.sys.ibm.sys3x.misc,alt.folklore.computers
Date: Thu, 04 May 2006 20:07:35 -0600
Chuck Stevens wrote:
The US Veterans Administration Data Processing Center in Austin, Texas had *at least* three -- maybe at one point five -- of these beasts (one on a 360/40, two on a 360/65, maybe two more at one point on a 360/50)..

the university i was at had a 2321 attached to a 360/67 (that ran as a 360/65 most of the time). the univ. library had gotten an ONR grant to work on stuff like an online library card catalog. the project also got selected to be a beta-test site for cics. i remember getting to shoot some early cics bugs ... including a bdam open bug (the site where cics had been developed appeared to have been using bdam in a very specific way ... and the library chose to use a different set of bdam features).

the sound of the 2321 at boot/ipl was something like whirl, kerchunk, kerchunk, whirl, ... as it went thru reading volsers.

remember that in the BBCCHHR DASD addressing ... the "BB" were for the 2321 bin number.
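a rough sketch of that seek address layout (python, purely illustrative): the classic 7-byte BBCCHHR ... 2-byte bin (only non-zero for the 2321 data cell), 2-byte cylinder, 2-byte head, 1-byte record.

# Illustrative sketch of the BBCCHHR seek-address layout mentioned above.
import struct

def pack_bbcchhr(bin_no, cylinder, head, record):
    # big-endian: 2-byte bin, 2-byte cylinder, 2-byte head, 1-byte record
    return struct.pack(">HHHB", bin_no, cylinder, head, record)

def unpack_bbcchhr(raw):
    bin_no, cylinder, head, record = struct.unpack(">HHHB", raw)
    return {"bin": bin_no, "cylinder": cylinder, "head": head, "record": record}

addr = pack_bbcchhr(bin_no=3, cylinder=10, head=5, record=1)   # 7 bytes
print(unpack_bbcchhr(addr))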

Multi-layered PKI implementation

From: lynn@garlic.com
Newsgroups: linux.debian.user
Subject: Re: Multi-layered PKI implementation
Date: Sun, 07 May 2006 11:45:04 -0700
James Westby wrote:
In a slighty simplified view of X.509 each party has a certificate stating who they are, and they have a key that ties them to it. They then have a Certificate Authority sign this certificate after a process of verifying the information. They can then present this certificate to anybody, no matter whether they have ever had any contact with them before, and that person can verify the identity of the first person by checking the signature of the CA on the certificate. This then moves the trust from the person presenting the certificate to the CA.

verifying a digital signature with a public key is a form of something you have authentication ... aka it assumes that the entity has access and use of the respective private key.

in the straight-forward implementations, the public key is registered with the relying party in lieu of a password (i.e. the public key is used to verify a digital signature for something you have authentication, instead of matching a supplied password for something you know authentication). this could be kerberos where public key is registered instead of a password
https://www.garlic.com/~lynn/subpubkey.html#kerberos
or radius
https://www.garlic.com/~lynn/subpubkey.html#radius
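
a rough sketch of that certificate-less, registered-public-key pattern (python, using the pyca/cryptography package ... the account name and challenge are made up): the relying party keeps a table of account -> public key and authenticates by verifying a digital signature instead of matching a password.

# Illustrative sketch: public key registered in lieu of a password.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# registration: the account's public key is stored where a password (or
# password hash) would otherwise go
client_key = ed25519.Ed25519PrivateKey.generate()
registered_keys = {"lynn": client_key.public_key()}

def authenticate(account, challenge, signature):
    pub = registered_keys.get(account)
    if pub is None:
        return False
    try:
        pub.verify(signature, challenge)     # "something you have" check
        return True
    except InvalidSignature:
        return False

challenge = b"server-nonce-1234"
print(authenticate("lynn", challenge, client_key.sign(challenge)))   # True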

credentials, certificates, licenses, diplomas, letters of credit, letters of introduction, etc. have been used for centuries in the offline world for relying parties who had no other mechanism for determining the validity of some information.

digital certificates (and PKI) are the electronic analogy, providing an equivalent mechanism for an electronic, offline world ... where the relying party has no prior information and/or no means for determining information about some new stranger they will be dealing with.

the relying party has a local table of public keys belonging to certification authorities in order to authenticate communication from these entities called certification authorities. the certification authorities provide something called digital certificates (which the relying party authenticates by validating the digital signatures using a public key from the relying party's table of trusted public keys). these digital certificates typically contain information about some other party "bound" to that party's public key. the relying party, when dealing with total strangers (where the relying party has no prior knowledge about the stranger and/or no online capability for obtaining information about the stranger) will use the information in attached digital certificates to obtain information about the stranger.

a stranger sends you some digitally signed communication with an attached digital certificate. you authenticate the digital certificate by verifying the certification authority's digital signature (using a public key from your table of trusted public keys). you then take the stranger's public key from the appended digital certificate to authenticate the communication (by verifying the digital signature with the supplied public key from the appended digital certificate).

the relying party then can take the remaining certified information from the appended digital certificate for making decisions about authorization, permissions, etc. i.e. the purpose of PKI, certification authorities and digital certificates is so a relying party can make decisions about how to treat first-time communication from a total stranger, purely based on information from the appended digital certificate.
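a rough sketch of that flow (python, using the pyca/cryptography package ... the "certificate" here is just a signed byte string, not real x.509 encoding): verify the certification authority's signature over the stranger's certificate using a public key from the local trusted table, then use the public key carried in the certificate to verify the stranger's signed message.

# Illustrative sketch: relying-party verification of a stranger's message
# via a certificate signed by a trusted certification authority.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# the relying party's local table of trusted certification-authority keys
ca_key = ed25519.Ed25519PrivateKey.generate()
trusted_ca_keys = {"example-ca": ca_key.public_key()}

# the "certificate": the stranger's name and raw public key, signed by the CA
stranger_key = ed25519.Ed25519PrivateKey.generate()
stranger_pub_raw = stranger_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
cert_body = b"CN=stranger|" + stranger_pub_raw
certificate = {"issuer": "example-ca", "body": cert_body,
               "ca_signature": ca_key.sign(cert_body)}

def verify_stranger_message(cert, message, signature):
    # step 1: authenticate the certificate itself against the trusted table
    ca_pub = trusted_ca_keys[cert["issuer"]]
    ca_pub.verify(cert["ca_signature"], cert["body"])          # raises if forged
    # step 2: take the stranger's public key out of the certificate and use
    # it to verify the actual communication
    raw_key = cert["body"].split(b"|", 1)[1]
    stranger_pub = ed25519.Ed25519PublicKey.from_public_bytes(raw_key)
    stranger_pub.verify(signature, message)                    # raises if forged

msg = b"first-time communication from a total stranger"
verify_stranger_message(certificate, msg, stranger_key.sign(msg))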

A simple certification authority PKI is already providing a single level of trust indirection: the relying party has the public keys of the certification authorities it already trusts (in the relying party's repository of trusted public keys), and the relying party uses those public keys to validate & authenticate digital certificates from the certification authority. then the relying party uses the certified information from the digital certificate to decide how to deal with a complete stranger (decide on permissions and/or authorizations).

It is then a fairly simple step to go from one-level trust indirection to a hierarchy of certification authorities ... where the certification authority "trusted" by the relying party digitally certifies a digital certificate telling the relying party to accept digital certificates issued by another "stranger" certification authority (the trusted certification authority effectively instructs the relying party to treat the "stranger" certification authority as authorized ... i.e. accept its digital certificates).

basically the premise is that the relying party makes their decisions about permissions, privileges, authorizations, etc ... based on the information in the digital certificate. i.e. the primary purpose of the digital certificate isn't to authenticate ... but to convey information about strangers that allows a relying party to make decisions about permissions and privileges w/o resorting to, or requiring, any other resources. A simple PKI is a one-level hierarchy of digital certificate permissions. An N-level PKI hierarchy is basically an N+1 level hierarchy of digital certificate permissions.
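
extending the earlier sketch to an N-level hierarchy (same illustrative assumptions as above; no expiration or revocation checking, just the chain of digital signatures): each certificate is verified with the public key certified one level above it, bottoming out in the relying party's own table of trusted certification-authority public keys.

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def verify_chain(chain_pems, trusted_root_keys):
    # chain_pems: the stranger's certificate first, then each "stranger"
    # certification authority, up to one certified by a trusted root
    certs = [x509.load_pem_x509_certificate(p) for p in chain_pems]

    # the top of the chain must be signed by a key already in the trusted table
    issuer_key = trusted_root_keys[certs[-1].issuer.rfc4514_string()]
    for cert in reversed(certs):        # walk from the root back down
        issuer_key.verify(cert.signature, cert.tbs_certificate_bytes,
                          padding.PKCS1v15(), cert.signature_hash_algorithm)
        issuer_key = cert.public_key()  # this CA's key certifies the next level

    return certs[0]                     # the stranger's certificate, now trusted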

so some early 90s x.509 identity certificates were based on the premise that a certificate authority would be able to package enough information about a party so that a relying party (dealing with the entity for the first time) could make decisions about permissions, privileges, authorizations and/or other decisions with regard to how they treated a stranger. This goes back to the digital certificate emulation of the physical credentials, certificates, licenses, etc. that have served the world for centuries ... allowing relying parties to decide how to deal with entities that were otherwise complete strangers.

Some of the 3rd party certification authorities then were looking at increasing the perceived value of the identity certificates by grossly overloading them with personal information ... hoping to appeal to broader relying party market segments.

In the mid-90s, several institutions began realizing that identity certificates, grossly overloaded with personal information, represented significant privacy and liability issues. Also some of the permission information specification could represent security vulnerabilities. As a result you saw them retrenching to relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

where the basic, unique digital certificate information was some form of account number or database record location (along with the public key). after the relying party verified the digital signature, the relying party would then look up all the necessary information about the entity in their repository. However, this is a basic violation of the principles behind the centuries-old credential/certificate/license paradigm, where information was being provided to relying parties that had no other means of obtaining the information.

in this scenario it was trivial to demonstrate that if the relying party was going to access their own information about the party (or possibly make some online query for the same information), then the digital certificate was redundant and superfluous. this was the certificate-less paradigm, where a relying party already has information about the other entity
https://www.garlic.com/~lynn/subpubkey.html#certless

a somewhat facetious example (of much of the current PKI use) is where spouses are required to have their marriage certificates tattooed on their respective foreheads in order to determine who their respective spouse is. nominal certificates (including marriage) are for presenting to strangers to establish certain privileges and/or permissions. it nominally wouldn't be necessary to repeatedly present your copy of the marriage certificate to your spouse in order to establish your position as their spouse.

the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

and it was shown that it could be done using digital signatures for (something you have) authentication w/o appending digital certificates (by having the public key stored in much the same way that mothers' maiden names, PINs, passwords and other information used for authentication is stored).

part of this issue was that some of the attempts at certificate-based financial transactions had digital certificate sizes in the 4k-12k byte range (even for the abbreviated relying-party-only certificates). However, the typical payment transaction size is in the 60-80 byte range. Appending 4k-12k byte redundant and superfluous digital certificates was creating a payload bloat of 100 times (i.e. the transaction payload was being increased by two orders of magnitude).

Somewhat in response, there was a financial industry effort to define a "compressed" certificate format ... hoping to get redundant and superfluous digital certificates into the 300 byte range (payload bloat of only five times instead of 100 times) by eliminating non-unique fields in the certificates. I pointed out that you could go even further, you could eliminate all fields in the digital certificates that you knew to already be in possession of the relying parties. Since it was trivial to show that the relying party could have already stored ALL information that might be found in a digital certificate ... it was possible to demonstrate compressing a digital certificate to zero bytes. rather than talking about eliminating the overhead of appending redundant and superfluous digital certificates, it was possible to demonstrate the efficiency of appending zero byte digital certificates.
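
the payload-bloat arithmetic, using mid-range figures from the numbers quoted above (the specific values chosen here are just for illustration):

transaction_bytes = 70        # typical retail payment transaction, 60-80 bytes
full_cert_bytes   = 8 * 1024  # relying-party-only certificates ran 4k-12k bytes
small_cert_bytes  = 300       # the proposed "compressed" certificate format

print(full_cert_bytes / transaction_bytes)    # ~117x ... roughly two orders of magnitude
print(small_cert_bytes / transaction_bytes)   # ~4-5x with the "compressed" format
print(0 / transaction_bytes)                  # the zero-byte certificate: no bloat at all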

Basically, as the world has gone more online and the network has become more and more ubiquitous, the market segment where a relying party doesn't already have information about another entity (and/or can't obtain it via online facilities) has rapidly shrunk. As a result you have seen PKIs moving into more and more no-value market segments ... where the business processes can't cost-justify the relying party having their own information (about the parties they are dealing with) and/or can't justify the cost of doing online transactions to obtain the information.

Value of an old IBM PS/2 CL57 SX Laptop

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Value of an old IBM PS/2 CL57 SX Laptop
Newsgroups: alt.folklore.computers
Date: Sun, 07 May 2006 14:50:59 -0600
greymaus writes:
Further thought, eBay may be a trap waiting for the unwary, but Paypal is getting more and more popular, and being banned from there (for technical reasons) is the equivelent of an alcoholic being banned from his local bar, embarrassing but good for ones finances.

somewhat related blog
https://www.financialcryptography.com/mt/archives/000711.html

to some extent paypal was providing online accounts where they then turned around and did a real payment operation thru the real payment infrastructure ... and charged a premium over and above what they had to pay for doing the real payment transaction.

part of this was that financial institutions doing real payments are fairly heavily regulated and have lots of reporting and operational requirements. a separate operation for the online stuff may be able to operate with much less overhead and fewer restrictions. however, they are vulnerable to real financial institutions moving into the market space and eliminating the middle man (and the associated extra intermediary overhead charges) ... aka offering the same function directly for approx. the same amount they are currently charging the middle man.

you see part of this characteristic in some manufacturing operations that offer consumer goods and also provide financing for the purchases. in several cases, the financial arm is making nearly all of the profit and the actual manufacturing is operating close to a break-even or even in the red. however, the two different organizations are frequently kept at arms length in order to avoid subjecting the manufacturing to the regulations and reporting overhead that the financial organization is subject to (there have been jokes that financial services are the only thing keeping some amount of the rust belt afloat).

there were also some number of digital cash operations floated in the 90s (attempting to address portions of the online payments market segment) ... however, doing detailed business process analysis turned up that some number of them were just excuses for acquiring the float.

some digital payment infrastructure was set up where the consumer had to transfer money into the online payment infrastructure ("stored-value" and various other kinds of similar mechanisms). the money in this infrastructure wasn't earning the consumers any interest ... it was all going to the sponsoring infrastructure (several stored-values, gift cards, etc, work that way).

Several central banks took a look at many of these operations and eventually made a statement that they would allow the institutions to retain the float during the inception period (long enough until the institutions had covered the cost of the initial deployment and other startup costs). However, after a couple of years, these institutions would be expected to start paying the consumer interest for the money on deposit in these accounts.

The ruling that there would no longer be the financial bonanza (from the float on the money in these accounts) significantly reduced the interest in developing and deploying these types of operations.

a couple past posts mentioning the float incentive behind many of the digital cash operations:
https://www.garlic.com/~lynn/aadsmore.htm#eleccash re:The Law of Digital Cash
https://www.garlic.com/~lynn/aadsm6.htm#digcash IP: Re: Why we don't use digital cash
https://www.garlic.com/~lynn/aadsm7.htm#idcard2 AGAINST ID CARDS
https://www.garlic.com/~lynn/aadsm21.htm#1 Is there any future for smartcards?
https://www.garlic.com/~lynn/2004j.html#12 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

rexx or other macro processor on z/os?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: rexx or other macro processor on z/os?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 07 May 2006 15:19:24 -0600
Paul Gilmartin wrote:
Was this, then, even before passed data sets existed? There was no other way to pass data from one job step to another? (Did jobs even have multiple steps?)

this was standard operation for "sysgen". you would punch up 40-100 cards which specified a lot of parameters for the system configuration (specified as assembler macros). this was "stage-1" sysgen.

based on the parameters and a bunch of other stuff, the assembler macros would "PUNCH" a couple thousand cards (frequently around a box). This became the "stage-2" sysgen. This was a job card, a whole series of job step EXEC cards and all the necessary control & specification statements for each EXEC step. it wasn't that it was passing data ... it was a whole new job stream that had been generated automagically by the assembly of the stage-1 sysgen deck.

this was done in dedicated scheduled time (typically at least a shift or two), booting/ipl a special "starter" os/360 (PCP) system that handled first the stage-1 sysgen followed by the stage-2 sysgen deck. A single stage-2 sysgen deck (couple thousand cards) would be a single job with several score job/EXEC steps.

as i've mentioned before, when I was an undergraduate in the late 60s ... starting with mft sysgen for os/360 release 11, I manipulated stuff so that I could do stage-1 sysgen in standard production job stream, get the resulting output for stage-2 sysgen and completely re-organize it. I worked it so I could submit it to hasp/mft production system with most EXEC steps organized as individual JOBs ... but carefully controlling their sequence of execution. The sequence of JOBs (and things like the order of move/copy statements and other stuff ... within job/EXEC steps) were organized to carefully control the physical placement of datasets and PDS members for optimal seek ordering on the newly generated system disks.

recent post commenting that they finally offered being able to explicitly place VTOC in release 15/16 ... something that I had been asking for since I started careful disk layout with release 11 system generation.
https://www.garlic.com/~lynn/2006h.html#57 PDS Directory Question

for other topic drift, several collected posts about early use of rex (long before it was renamed rexx and released to customers)
https://www.garlic.com/~lynn/submain.html#dumprx

blast from the past, reliable communication

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: blast from the past, reliable communication
Newsgroups: alt.folklore.computers
Date: Sun, 07 May 2006 16:07:10 -0600
a recent blog entry on standard tcp/ip not being adequately reliable for financial operations:
https://www.financialcryptography.com/mt/archives/000714.htm

and some of my comments
https://www.garlic.com/~lynn/aadsm23.htm#21 Reliable Connections are not

as mentioned in the above, we were also on the xtp technical advisory board
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

and one of the other participants was naval surface warfare center.

for a blast from the past, a short summary presented on NSWC requirements from mar89; implicit in this are very high availability and integrity requirements:


Several objectives from the Navy Surface Warfare Center (NSWC)
requirement study, considering transport service requirements for
naval platforms.

Several scenarios require low latency, high level of concurrency,
response to operator input, and movement of very large files to
graphic displays. Transport service throughput requirements are
estimated as shown:

Button & Trackball actions      50 bytes @ 2/sec           100 ms max
Menu selections                 100 bytes @ 1/sec          500 ms max
File service requests           200 bytes @ 3/min

Track file updates              1mbyte @ 1/2sec
Indicator & cursor control      50 bytes per action        100 ms max
Pixel images                    1 mbyte per action         500 ms max
File server accesses            up to 100 mbyte per action

... snip ...

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

blast from the past on reliable communication

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: blast from the past on reliable communication
Newsgroups: alt.folklore.computers
Date: Sun, 07 May 2006 23:14:35 -0600
another historical item from the period; part of old corporate XTP position memo

Date: 1 September 89, 12:08:14 EDT
To: distribution

Communications Standards Development:
------------------------------------

Position - Oppose for OSI standardization

In reviewing the drafts of the XTP protocol it was noted that it is very narrow in scope and does not fit into the structure of the OSI Reference Model.

Other concerns include the possibility of a 6th Transport Class to have to handle (Classes 0 - 4 are outlined in ISO 8073). It is also very doubtful that we would be able to push a protocol into ISO that spans multiple layers of the OSI Reference Model.

The high-bandwidth requirements of emerging communications technologies will require a close examination of the Basic Reference Model to determine what changes are required to keep it in pace with developments.

To this end a study project has been initiated in the ANSI committee X3S3.3 (OSI Network and Transport) to examine the requirements placed on Network and Transport Layer protocols in support of very high-speed networking. The existing OSI protocols will be analyzed with respect to these requirements. The output will be specific proposals for modifications to existing OSI standards and/or new OSI protocols. XTP techniques, along with those of other protocols, would be candidates for study.

Research:
--------

Position - Oppose for standardization

The key concern is that existing OSI Network/Transport functions should meet high-speed requirements with the proper optimizations. Effort should be spent in this area before jumping into development of new protocols.

XTP is really more of a OSI Layer 2 protocol rather than a Layer 3/4 protocol.

xxxxxx has been attending the ANSI X3S3.3 meetings to understand the XTP direction and offer guidance on high-speed protocol development.


... snip ... top of post, old email index, HSDT email

past posts mentioning x3s3.3 & OSI:
https://www.garlic.com/~lynn/99.html#114 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/2001e.html#24 Pre ARPAnet email?
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#26 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#46 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#49 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#50 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002q.html#4 Vector display systems
https://www.garlic.com/~lynn/2003d.html#27 [urgent] which OSI layer is SSL located?
https://www.garlic.com/~lynn/2003g.html#45 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003l.html#47 OSI not quite dead yet
https://www.garlic.com/~lynn/2003n.html#36 Cray to commercialize Red Storm
https://www.garlic.com/~lynn/2004c.html#52 Detecting when FIN has arrived
https://www.garlic.com/~lynn/2004e.html#13 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2004f.html#18 layered approach
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2004q.html#34 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#44 How many layers does TCP/IP architecture really have ?
https://www.garlic.com/~lynn/2005.html#9 OSI - TCP/IP Relationships
https://www.garlic.com/~lynn/2005.html#29 Network databases
https://www.garlic.com/~lynn/2005.html#45 OSI model and SSH, TCP, etc
https://www.garlic.com/~lynn/2005d.html#11 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2005e.html#39 xml-security vs. native security
https://www.garlic.com/~lynn/2005j.html#33 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005n.html#25 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005n.html#52 ARP routing
https://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005s.html#1 Type A ,Type B
https://www.garlic.com/~lynn/2005t.html#0 TTP and KCM
https://www.garlic.com/~lynn/2005u.html#18 XBOX 360
https://www.garlic.com/~lynn/2005u.html#52 OSI model and an interview

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

blast from the past on reliable communication

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: blast from the past on reliable communication
Newsgroups: alt.folklore.computers
Date: Tue, 09 May 2006 10:22:05 -0600
... some more, this time from oct89, also mentions NSWC (naval surface warfare center) and NOSC (naval ocean systems center):
An intense effort is centered on the evaluation of multicast strategies. In coordination with NSWC, UVA researchers have collected a library of multicast information, based on mechanisms, applications, group management, metrics, and multicasting with XTP. A report, "Strategies for Multicasting", is being written.

In parallel, another study is evaluating aspects of protocols that allow latency control for real-time control systems operating over LANs. A survey of priority mechanisms has led to the concept of "importance" of dynamic message delivery. One comment is that "importance" doesn't carry the baggage of "priority". The Sort Field in XTP is under careful study, with a UVA report resulting, "Making XTP Responsive to Real-time Needs".

An XTP Tutorial is being written, and will be available to the TAB soon.

A contract from Sperry Marine calls for development of SEAnet, a real-time LAN for commercial ships.

Sperry Marine is also sponsoring a SAFENET I test system, with XTP running over an 802.5 token ring.

NOSC is sponsoring a SAFENET II effort, attaching VME systems with FDDI.

NSWC is sponsoring performance measurements, LAN recommendations, and prototyping.

NASA Johnson is sponsoring an evaluation of Space Station networking, prototyping, and analysing tradeoffs between ISO protocols and real-time systems.


... snip ...

ref:
https://www.garlic.com/~lynn/2006i.html#16 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006i.html#17 blast from the past on reliable communication

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

blast from the past on reliable communication

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: blast from the past on reliable communication
Newsgroups: alt.folklore.computers
Date: Tue, 09 May 2006 11:32:37 -0600
Brian Inglis writes:
AFAIR they weren't very usable even in non-real-time systems, unless you had the same vendor's hardware and software at both ends, and not always even then!

remember this was the era of gosip and numerous gov. mandates to convert everything to iso/osi as well as gov. mandates to eliminate all the internetworking and tcp/ip stuff.

misc. past posts mentioning gosip (GOVERNMENT OPEN SYSTEMS INTERCONNECTION PROFILE)
https://www.garlic.com/~lynn/99.html#114 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/99.html#115 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/2000b.html#0 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#59 7 layers to a program
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000d.html#16 The author Ronda Hauben fights for our freedom.
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
https://www.garlic.com/~lynn/2001e.html#17 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#32 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001i.html#5 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#6 YKYGOW...
https://www.garlic.com/~lynn/2002g.html#21 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002i.html#15 Al Gore and the Internet
https://www.garlic.com/~lynn/2002m.html#59 The next big things that weren't
https://www.garlic.com/~lynn/2002n.html#42 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2003e.html#71 GOSIP
https://www.garlic.com/~lynn/2003e.html#72 GOSIP
https://www.garlic.com/~lynn/2003o.html#68 History of Computer Network Industry
https://www.garlic.com/~lynn/2004c.html#52 Detecting when FIN has arrived
https://www.garlic.com/~lynn/2004e.html#13 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2004f.html#32 Usenet invented 30 years ago by a Swede?
https://www.garlic.com/~lynn/2004q.html#44 How many layers does TCP/IP architecture really have ?
https://www.garlic.com/~lynn/2004q.html#57 high speed network, cross-over from sci.crypt
https://www.garlic.com/~lynn/2005.html#29 Network databases
https://www.garlic.com/~lynn/2005.html#45 OSI model and SSH, TCP, etc
https://www.garlic.com/~lynn/2005d.html#11 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2005e.html#39 xml-security vs. native security
https://www.garlic.com/~lynn/2005u.html#52 OSI model and an interview
https://www.garlic.com/~lynn/2005u.html#53 OSI model and an interview
https://www.garlic.com/~lynn/2006f.html#26 Old PCs--environmental hazard

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

blast from the past on reliable communication

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: blast from the past on reliable communication
Newsgroups: alt.folklore.computers
Date: Tue, 09 May 2006 13:22:22 -0600
Anne & Lynn Wheeler writes:
remember this was the era of gosip and numerous gov. mandates to convert everything to iso/osi as well as gov. mandates to eliminate all the internetworking and tcp/ip stuff.

re:
https://www.garlic.com/~lynn/2006i.html#19 blast from the past on reliable communication

while there have been some issues with internet and tcp/ip
https://www.garlic.com/~lynn/internet.htm
https://www.garlic.com/~lynn/subnetwork.html#internet

at least ietf rfc
https://www.garlic.com/~lynn/rfcietff.htm

requires at least two interoperable implementations before progressing in the standards process.

osi could pass as an iso standard w/o requiring proof that it was feasible or could even be implemented.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

blast from the past on reliable communication

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: blast from the past on reliable communication
Newsgroups: alt.folklore.computers
Date: Tue, 09 May 2006 13:33:03 -0600
Anne & Lynn Wheeler writes:
Communications Standards Development:
------------------------------------

Position - Oppose for OSI standardization

In reviewing the drafts of the XTP protocol it was noted that it is very narrow in scope and does not fit into the structure of the OSI Reference Model.


ref:
https://www.garlic.com/~lynn/2006i.html#17 blast from the past on reliable communication

a few past tellings of the tale about the communications group's idea of "high-speed" (during at least the mid-80s to early 90s):
https://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2003m.html#59 SR 15,15
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005n.html#25 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005r.html#9 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor

in that period we were already operating our high-speed backbone
https://www.garlic.com/~lynn/subnetwork.html#hsdt

had been told we couldn't bid on nsfnet1 & nsfnet2 (precursor to the modern internet), although an audit by NSF stated that what we had running was at least five years ahead of all bid submissions to build something new
https://www.garlic.com/~lynn/internet.htm#nsfnet
https://www.garlic.com/~lynn/internet.htm#0

had come up with 3-tier architecture and were out making presentations to customer IT executives
https://www.garlic.com/~lynn/subnetwork.html#3tier

and had started work on our ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

the communications group was out pushing SAA ... at least some of which could be construed as attempting to put the client/server genie back into the bottle and return to the days of PCs accessing servers via terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation

we were taking all sorts of heat from various factions in the communications group over at least HSDT and 3-tier architecture.

other posts in this thread
https://www.garlic.com/~lynn/aadsm23.htm#21 Reliable Connections are not
https://www.garlic.com/~lynn/aadsm23.htm#24 Reliable Connections are not
https://www.garlic.com/~lynn/2006i.html#16 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006i.html#18 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006i.html#19 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006i.html#20 blast from the past on reliable communication

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 10 May 2006 10:14:52 -0600
"Doug MacKay" writes:
The swap storage you're thinking of is only a small piece of a virtual memory system. If it was removed from a virtual memory system you'd still have a virtual memory system ;)

see melinda's paper on the development of virtual memory and virtual machines in the 60s
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

part of it was that mit had chosen ge for the ctss follow-on, multics.

the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

had added custom virtual memory hardware to a 360/40 and built cp/40 supporting virtual machines and paging/swapping.

the 360/67 was the official product with virtual memory support, built for tss/360. however, the science center ported cp/40 from the 360/40 to the 360/67, renaming it cp/67.

univ. of michigan also developed MTS (michigan terminal system), which supported virtual memory on the 360/67 (also paging/swapping).

however, there was at least one effort that modified os/360 to utilize the 360/67 virtual memory hardware w/o providing paging/swapping support. os/360 was nominally a "real address" operating system where applications were allocated contiguous areas of real memory. long running applications frequently resulted in storage fragmentation; there would be enough real memory to run an application, but not located contiguously. the use of virtual memory hardware allowed discontiguous real storage to appear to be contiguous.
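
a minimal sketch of that trick (python, with made-up frame numbers): a page table maps a contiguous range of virtual pages onto whatever scattered real frames fragmentation has left free, so a "real address" application still sees contiguous storage.

PAGE = 4096

# three free real frames left scattered around by fragmentation
free_frames = [0x013, 0x0a7, 0x1f2]

# back virtual pages 0, 1, 2 (contiguous) with whatever frames are free
page_table = {vpage: frame for vpage, frame in enumerate(free_frames)}

def to_real(vaddr):
    return page_table[vaddr // PAGE] * PAGE + (vaddr % PAGE)

# the application sees one contiguous 12kbyte region starting at virtual 0
print([hex(to_real(a)) for a in (0x0000, 0x1000, 0x2fff)])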

for some drift ... misc posts about doing page replacement algorithms in the 60s (i.e. deciding which pages should be moved between real memory and disk).
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Virtual memory implementation in S/370

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Virtual memory implementation in S/370
Newsgroups: alt.folklore.computers
Date: Wed, 10 May 2006 16:33:18 -0600
Marten Kemp writes:
The recent thread about virtual memory sparked a (kind of) idle question: why did the implementation in the S/370 have a two-level scheme (segment and page)? My original thought was that it facilitated definition of discontiguous parts of an address space.

ref:
https://www.garlic.com/~lynn/2006i.html#22 virtual memory

the 370 virtual memory architecture had segment & page tables. the 360/67 provided for both 24-bit and 32-bit virtual addressing (in its segment/page virtual memory structure).

the introduction of virtual memory on 370 was just 24-bit virtual addressing with segment and page tables, originally with 1mbyte and 64kbyte segment options as well as 4kbyte and 2kbyte page options.
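
a minimal sketch of the two-level lookup (python; assuming the 64kbyte-segment/4kbyte-page option on a 24-bit virtual address, so 8 bits of segment index, 4 bits of page index and a 12-bit byte offset; the table contents are made up for illustration):

def translate(vaddr, segment_table):
    seg_index  = (vaddr >> 16) & 0xff   # 256 possible 64kbyte segments
    page_index = (vaddr >> 12) & 0xf    # 16 4kbyte pages per segment
    offset     =  vaddr        & 0xfff  # byte within the 4kbyte page

    page_table = segment_table[seg_index]
    if page_table is None:              # segment marked invalid
        raise MemoryError("segment translation exception")
    frame = page_table[page_index]
    if frame is None:                   # page not in real storage
        raise MemoryError("page translation exception (page fault)")
    return (frame << 12) | offset       # real address

# example: segment 1, page 2, offset 0x034 backed by real frame 0x0a5
segtab = [None] * 256
segtab[1] = [None] * 16
segtab[1][2] = 0x0a5
assert translate(0x012034, segtab) == 0x0a5034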

the original 370 virtual memory architecture also had segment protection and some selective invalidate instructions. these additional features were dropped to help get the virtual memory hardware retrofit to 370/165 back on schedule.

in the morph from cp67/cms to vm370/cms, cms had been restructured to take advantage of "shared segments" (single copy of the same virtual memory page in physical storage for multiple different virtual address spaces) with the segment protect feature. when segment protect was dropped from 370 virtual memory architecture, the protection of these shared pages had to be reworked.

more than 24-bit virtual addressing wasn't re-introduced until 370-xa with the 3081.

I had done a page-mapped filesystem for CMS, which included compatibility for existing CMS filesystem API
https://www.garlic.com/~lynn/submain.html#mmap

It also included a number of extended features allowing page-mapped objects (shared & non-shared) to appear at arbitrary virtual address locations. the same shared (segment) object could even appear at different virtual addresses in different virtual address spaces. this required a lot of fiddling with the prevalent 360/370 software use of address constants. Lots of posts mentioning the fiddling with address constants
https://www.garlic.com/~lynn/submain.html#adcon

Only a small subset of the virtual memory object support was ever released called DisContiguous Shared Segment (DCSS). The generalized virtual memory object support and the page mapped filesystem support was never released.

various posts mentioning the segment protect issue:
https://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004p.html#9 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004p.html#10 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004p.html#14 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2005c.html#20 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#61 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005e.html#53 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#46 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#9 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005j.html#39 A second look at memory access alignment
https://www.garlic.com/~lynn/2005o.html#10 Virtual memory and memory protection
https://www.garlic.com/~lynn/2006.html#13 VM maclib reference
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past
https://www.garlic.com/~lynn/2006i.html#9 Hadware Support for Protection Bits: what does it really mean?

...

and misc. past posts mentioning DCSS:
https://www.garlic.com/~lynn/2001c.html#2 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2003f.html#32 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#27 SYSPROF and the 190 disk
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2003o.html#42 misc. dmksnt
https://www.garlic.com/~lynn/2004d.html#5 IBM 360 memory
https://www.garlic.com/~lynn/2004f.html#23 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004l.html#6 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004m.html#11 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#11 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
https://www.garlic.com/~lynn/2005b.html#8 Relocating application architecture and compiler support
https://www.garlic.com/~lynn/2005e.html#53 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#30 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005t.html#39 FULIST
https://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on z/VM V5.1
https://www.garlic.com/~lynn/2006.html#13 VM maclib reference
https://www.garlic.com/~lynn/2006.html#17 {SPAM?} DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#18 DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#19 DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#28 DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#31 Is VIO mandatory?
https://www.garlic.com/~lynn/2006.html#35 Charging Time
https://www.garlic.com/~lynn/2006b.html#4 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006b.html#7 Mount a tape
https://www.garlic.com/~lynn/2006f.html#2 using 3390 mod-9s

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Virtual memory implementation in S/370

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Virtual memory implementation in S/370
Newsgroups: alt.folklore.computers
Date: Wed, 10 May 2006 17:16:10 -0600
Marten Kemp writes:
The recent thread about virtual memory sparked a (kind of) idle question: why did the implementation in the S/370 have a two-level scheme (segment and page)? My original thought was that it facilitated definition of discontiguous parts of an address space.

re:
https://www.garlic.com/~lynn/2006i.html#22 virtual memory
https://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370

i had also done page migration as well as "table" migration ... which were released in my resource manager product ... the blue letter product announcement gives product release as 11may76 ... 30 years ago tomorrow. partial reproduction of the resource manager blue letter:
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

page migration would make judgments about different-speed paging devices ... and if it found "high" speed paging devices filling up, it would start looking for idle virtual pages (resident on the "high" speed devices) that it could migrate to slower-speed devices.

when real storage started getting constrained ... it would also look for idle portions of virtual address spaces. each 4kbyte virtual page consumed ten bytes of real storage: 2 bytes for the page table entry and 8 bytes of administrative stuff (shadow storage protect keys, and the location of the virtual page on the paging device). for "idle" segments, it would turn on the invalid bit in the segment table entry, write the administrative stuff to special disk locations, and then discard the real memory for the page table and administrative stuff ... typically picking up 160 bytes of real storage per 64kbytes of idle virtual address space, or 2560 bytes of real storage per 1mbyte of idle virtual address space ... a little over 4kbytes of real storage per 2mbytes of idle virtual address space.
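
the arithmetic behind those numbers (per 4kbyte page: a 2-byte page table entry plus 8 bytes of administrative/swaptable data):

pages_per_segment = 64 // 4      # sixteen 4kbyte pages per 64kbyte segment
bytes_per_page    = 2 + 8        # page table entry + administrative (swaptable) entry

per_segment = pages_per_segment * bytes_per_page
print(per_segment)           # 160 bytes recovered per idle 64kbyte segment
print(per_segment * 16)      # 2560 bytes per idle 1mbyte
print(per_segment * 32)      # 5120 bytes (a little over 4kbytes) per idle 2mbytes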

the defined virtual address space might or might not be contiguous ... but the segment table could have large gaps in the pointers to page tables ... potentially because the space wasn't defined in that particular virtual address space ... or because the virtual address space area was deemed to be idle at the moment and the associated tables had been removed from real storage.

the administrative table containing the disk backing store address for virtual pages (and the shadow storage protect keys) was called a SWAPTABLE ... so the feature allowing the SWAPTABLE to be removed from real storage was called SWAPTABLE migration or paging SWAPTABLEs.

a few other posts mentioning SWAPTABLE migration:
https://www.garlic.com/~lynn/2006.html#19 DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#35 Charging Time
https://www.garlic.com/~lynn/2006.html#36 Charging Time

...

and a few posts mentioning shadowing process:
https://www.garlic.com/~lynn/2005h.html#17 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2006i.html#10 Hadware Support for Protection Bits: what does it really mean?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Benefits of PKI - 5,000 nodes organization

From: lynn@garlic.com
Newsgroups: microsoft.public.security
Subject: Re: Benefits of PKI - 5,000 nodes organization
Date: Thu, 11 May 2006 08:48:25 -0700
S. Pidgorny <MVP> wrote:
PKI mostly gives intangible benefits by reducing risks associated with weak authentication systems and data integrity. Does that count as practical?

validating digital signatures with public keys provides checks against data modification and impersonation, compared with secret-based authentication mechanisms.

originator computes hash of some message/document and encodes the hash with their private key; the message/document then can be transmitted along with the appended digital signature

the recipient/relying party recomputes the hash on the message, decodes the digital signature with the originator's public key and compares the two hashes. if they are the same, the recipient/relying party can assume that the message/document hasn't been modified since it was digitally signed and that the originator has access to and use of the corresponding private key ... aka something you have authentication ... from 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are
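
a minimal sketch of the sign/verify flow described above (python with the third-party "cryptography" package; RSA-PSS is an illustrative choice, and the library's sign/verify calls do the hashing internally):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"some message/document"

# originator: hash the message and "encode" (sign) the hash with the private key
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# recipient/relying party: recompute the hash and check it against the digital
# signature using the originator's public key; raises InvalidSignature if the
# message was modified or a different private key was used
private_key.public_key().verify(signature, message, pss, hashes.SHA256())
# success implies the message is unmodified and the originator has access to
# and use of the corresponding private key ... something you have authentication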


the shared-secret authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#secret

has the same value used for both origination and verification in the authentication paradigm. that means that access to the value used for verification also enables impersonation. furthermore, since it is a "static value", it is subject to skimming/harvesting/eavesdropping vulnerabilities and replay attacks
https://www.garlic.com/~lynn/subintegrity.html#harvest

since public/private key uses a different value for verification and origination, the exposure for impersonation is significantly reduced. since the digital signature can be different on every use, it is a countermeasure to skimming/harvesting/eavesdropping and replay attacks.

the original kerberos pk-init draft
https://www.garlic.com/~lynn/subpubkey.html#kerberos

called for registering public keys in lieu of passwords and using digital signature verification for authentication. this was a purely non-PKI, certificate-less operation
https://www.garlic.com/~lynn/subpubkey.html#certless

it was only later that there was pressure to add a PKI-mode of operation with certificates to the kerberos pk-init draft ... creating a duplicate and parallel administrative infrastructure. now there has to be both the kerberos administration and registration infrastructure for the allowed entities, in addition to the PKI/certificate administration and registration infrastructure for the allowed entities.

the same can be said of RADIUS ... the other major authentication infrastructure in use for the internet and distributed environments. the straight-forward solution is to register the entity with RADIUS, including their public key in lieu of a password.
https://www.garlic.com/~lynn/subpubkey.html#radius

the other is to have duplicate entity registration and administration infrastructures, one for RADIUS ... and a separate, duplicate registration and administration infrastructure for the PKI/certificate operation.

a few recent postings discussing some of the issues in more detail
https://www.garlic.com/~lynn/aadsm22.htm#8 Kama Sutra Spoofs Digital Certificates
https://www.garlic.com/~lynn/aadsm22.htm#17 Major Browsers and CAS announce balkanisation of Internet Security
https://www.garlic.com/~lynn/aadsm22.htm#18 "doing the CA statement shuffle" and other dances
https://www.garlic.com/~lynn/aadsm22.htm#19 "doing the CA statement shuffle" and other dances
https://www.garlic.com/~lynn/aadsm23.htm#3 News and Views - Mozo, Elliptics, eBay + fraud, naive use of TLS and/or tokens
https://www.garlic.com/~lynn/aadsm23.htm#5 History and definition of the term 'principal'?
https://www.garlic.com/~lynn/aadsm23.htm#13 Court rules email addresses are not signatures, and signs death warrant for Digital Signatures
https://www.garlic.com/~lynn/aadsm23.htm#14 Shifting the Burden - legal tactics from the contracts world
https://www.garlic.com/~lynn/aadsm23.htm#15 Security Soap Opera - (Central) banks don't (want to) know, MS prefers Brand X, airlines selling your identity, first transaction trojan
https://www.garlic.com/~lynn/aadsm23.htm#29 JIBC April 2006 - "Security Revisionism"
https://www.garlic.com/~lynn/2006h.html#46 blast from the past, tcp/ip, project athena and kerberos
https://www.garlic.com/~lynn/2006i.html#13 Multi-layered PKI implementation

11may76, 30 years, (re-)release of resource manager

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: 11may76, 30 years, (re-)release of resource manager
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Thu, 11 May 2006 10:55:33 -0600
30 years since the (re-)release of the resource manager; it included some amount of the stuff i had done as an undergraduate in the 60s for cp67 ... which had been dropped in the morph from cp67 to vm370.

collected past posts mentioning the scheduler
https://www.garlic.com/~lynn/subtopic.html#fairshare
and page replacement
https://www.garlic.com/~lynn/subtopic.html#wsclock

misc. past posts mentioning the date for resource manager:
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2001f.html#56 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2006d.html#27 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006g.html#1 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#7 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#22 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#25 The Pankian Metaphor
https://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Really BIG disk platters?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Really BIG disk platters?
Newsgroups: alt.folklore.computers
Date: Thu, 11 May 2006 19:22:44 -0600
"Charlie Gibbs" writes:
A friend once worked in a Burroughs 1700 shop. They added a head-per-track disk to the system, to speed up swapping or whatever. I don't remember the specs of the drive, but it was in a cabinet about 4 feet high, 4 feet wide, and about a foot deep. The axis was horizontal and the platter(s) must have been about 3 feet in diameter.

the 2301 "paging drum" was cylinder with head per track ... actually there was two models the 2303 and the 2301. the 2303 read/wrote single head at a time. the 2301 read/wrote 4 heads in parallel, a "track" was four times larger and there was 1/4th as many "tracks".

picture of 2301:
http://www.columbia.edu/cu/computinghistory/drum.html

the 2305 was also a fixed-head device ... but was several spinning disks with a head per track. there were two models, one with twice as many addressable heads and tracks. the other had the same number of physical heads, but they were paired heads on the same track ... offset 180 degrees. whichever head the data came under first was selected ... as a result, the one with half as many tracks and half the "logical" heads had half the rotational delay of the standard one.

the standard 2305 required on avg a half revolution to get the targeted data under a head. the other 2305 required only a quarter revolution to get the targeted data under a head.
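
a rough illustration of the difference (the rotation rate is a made-up placeholder; only the half-revolution vs quarter-revolution relationship comes from the description above):

rpm = 6000                      # hypothetical rotation rate
ms_per_revolution = 60_000 / rpm

print(ms_per_revolution / 2)    # standard 2305: avg half revolution of delay
print(ms_per_revolution / 4)    # paired-head 2305: avg quarter revolution of delay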

picture of 2305
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_2305.html

... and just for the heck of it

old 650 drum
http://www-03.ibm.com/ibm/history/exhibits/650/650_ph09.html

old 355 disk
http://www-03.ibm.com/ibm/history/exhibits/650/650_ph07.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers,bit.listserv.vmesa-l
Date: Thu, 11 May 2006 22:00:00 -0600
"kim kubik" writes:
There are (at least) two overlapping meanings of the phrase "virtual memory" here: a virtual (i.e., non-real) memory address and a virtual eXtention ("X" as in VAX) of memory out to disk. Most people seem to use the latter meaning.

The first, virtual memory addressing (dividing up of RAM into fixed sized pages) is in most cases a big win: drastic decrease in memory fragmentation.

Extending RAM out to disk pages adds all the cute PhD thesis project benchmarkable optimizations: page size, replacement algorithms, thrash minimization, etc. In an early attempt a compiling TeX in SUN the person finally shut down the machine after 36 hours, noting the paging daemon was taking up more and more of cpu time and eventually might take all of it.


the post did describe a case where virtual memory was used to address a fragmentation problem
https://www.garlic.com/~lynn/2006i.html#22 virtual memory

some subsequent drift on this thread in a.f.c.
https://www.garlic.com/~lynn/2006i.html#23 virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#24 virtual memory implementation in S/370

i had done a bunch of the paging stuff as an undergraduate in the 60s ... which was picked up and shipped in cp67 product.
https://www.garlic.com/~lynn/subtopic.html#wsclock

a decade plus later there was some conflict around somebody's stanford phd on global LRU replacement (work that i had done as an undergraduate in the 60s) vis-a-vis local LRU replacement. some past posts mentioning the conflict:
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2003k.html#8 z VM 4.3
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s

i had also done this stuff with dynamic adaptive scheduling (scheduling policies included fair share) ... and scheduling to the bottleneck
https://www.garlic.com/~lynn/subtopic.html#fairshare

much of it was dropped in the morph from cp67 to vm370 ... but i was allowed to reintroduce it as the "resource manager" which became available 11may76
https://www.garlic.com/~lynn/2006i.html#26 11may76, 30 years, (re-)release of resource manager

around this time, i was starting to notice the decline in relative disk system performance ... and the significant increase in the amount of available real storage ... and the ability to start using real storage caching to compensate for the decline in relative disk system performance

i started making some comments about it and the disk division eventually assigned their performance group to refute the comments. the performance group came back and observed that i had slightly understated the problem.

misc. past posts on the subject:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005n.html#29 Data communications over telegraph circuits

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Which entry of the routing table was selected?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Which entry of the routing table was selected?
Newsgroups: comp.protocols.tcp-ip,microsoft.public.win32.programmer.networks
Date: Thu, 11 May 2006 22:42:53 -0600
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
Only a circuit switching advocate would think I'm being pessimistic or saying anything negative about packet switching. A packet switching advocate thinks tossing packets in the ocean of routers and hoping they'll be passed in the right direction is a good thing and better than circuit switching. Given the popularity of VoIP among the telephants, most of the old "Bell Heads" have either died or become packet switching collaborators.

we were called in to consult with this small client/server startup that wanted to do payments on their server
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

they had this technology they called SSL and were looking at taking payment message definitions from a circuit-based infrastructure and mapping it into a purely packet environment with this thing called a payment gateway (i've periodically claimed it may be the first SOA ... service oriented architecture).

very early testing, there was a problem called into the trouble desk. the trouble desk had been used to doing first level problem determination in a telco provisioned circuit-based infrastructure. after 3hrs of concerted effort, they finally closed the trouble ticket as NTF (no trouble found).

we had to go back and do some detailed analysis of all kinds of possible failure modes and then develop some amount of software and diagnostic procedures to even marginally compensate for not having a telco provisioned operation. at least for the server to payment gateway, we had a fair amount of sign-off and authority and could see that it got done.

part of it had been based on having done detailed vulnerability analysis of tcp/ip when we were starting our ha/cmp product in the late 80s
https://www.garlic.com/~lynn/subtopic.html#hacmp

minor topic drift on recent thread re: reliable connections/communication
https://www.garlic.com/~lynn/aadsm23.htm#21 Reliable Connections Are Not
https://www.garlic.com/~lynn/2006i.html#16 blast from the past, reliable communication
https://www.garlic.com/~lynn/2006i.html#17 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006i.html#18 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006i.html#19 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006i.html#20 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006i.html#21 blast from the past on reliable communication

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 12 May 2006 08:31:10 -0600
robert.thorpe writes:
The above gives much of the IBM (and to some extent US) view of virtual memory. For a little balance here are some articles on virtual memory elsewhere, if anyone's interested.

https://en.wikipedia.org/wiki/Virtual_memory
https://en.wikipedia.org/wiki/B5000
https://en.wikipedia.org/wiki/Atlas_Computer


a quote from Melinda's paper
https://www.leeandmelindavarian.com/Melinda#VMHist

about some of the commitment to 360/67 and TSS/360:
What was most significant was that the commitment to virtual memory was backed with no successful experience. A system of that period that had implemented virtual memory was the Ferranti Atlas computer, and that was known not to be working well. What was frightening is that nobody who was setting this virtual memory direction at IBM knew why Atlas didn't work. 23

... snip ...

taken from:
23 L.W. Comeau, ''CP-40, the Origin of VM/370'', Proceedings of SEAS AM82, September, 1982, p. 40.

also more of the above quotes found in old post
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history

some amount of Melinda's paper explores the early CTSS work at MIT and derivatives ... aka multics on the 5th floor 545 tech sq. and cp40, cp67, vm370, virtual machines, etc at the science center on 4th floor 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

aka ... not so much ibm, but more mit area.

the science center also did a lot of the stuff leading up to capacity planning
https://www.garlic.com/~lynn/submain.html#bench

and gml (precursor to sgml, html, xml, etc) was invented in 69 at the science center by "G", "M", and "L"
https://www.garlic.com/~lynn/submain.html#sgml

posts in this/related thread:
https://www.garlic.com/~lynn/2006i.html#22 virtual memory
https://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#28 virtual memory

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 12 May 2006 08:49:41 -0600
re:
https://www.garlic.com/~lynn/2006i.html#30 virtual memory

as an aside .... there was a lot of close work with europe ... both grenoble science center, cern, etc.

the original cp67 (and cp40) provided only "vanilla" 360 virtual machines, and lacked virtualization support for a virtual machine with virtual memory capability of its own. one of the people came over on assignment from the grenoble science center and did much of the implementation to support virtualizing virtual memory, aka all the shadow table stuff to support a virtual 360/67. recent post with some shadow table discussion
https://www.garlic.com/~lynn/2006i.html#10 Hardware Support for Protection Bits: what does it really mean?
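
the basic shadow-table idea is that the hardware can only use one level of translation, so the hypervisor maintains tables that are the composition of the guest's own (guest-virtual to guest-"real") tables with its own (guest-"real" to host-real) tables. a minimal python sketch of that composition, with made-up names and structures (i.e. not CP67 code), might look like:

# toy illustration of the shadow page table idea used to virtualize a
# virtual-memory guest (e.g. a virtual 360/67 under cp67).  all names and
# structures are hypothetical -- the point is just the composition of the
# two translations.

HOST_PAGE_INVALID = None

def compose_shadow_entry(guest_virtual_page, guest_page_table, host_page_table):
    """shadow PTE = host translation applied to the guest's own translation."""
    guest_real = guest_page_table.get(guest_virtual_page)
    if guest_real is None:
        return HOST_PAGE_INVALID      # guest would take its own page fault
    host_real = host_page_table.get(guest_real)
    if host_real is None:
        return HOST_PAGE_INVALID      # hypervisor must first page in the guest page
    return host_real                  # entry the real hardware can actually use

def build_shadow_table(guest_page_table, host_page_table):
    # the shadow table is just the cached composition; it has to be
    # invalidated/rebuilt whenever the guest changes its own tables or the
    # hypervisor steals one of the guest's pages
    shadow = {}
    for gv in guest_page_table:
        entry = compose_shadow_entry(gv, guest_page_table, host_page_table)
        if entry is not HOST_PAGE_INVALID:
            shadow[gv] = entry
    return shadow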

some number of years later he shows up running earn, minor reference
https://www.garlic.com/~lynn/2001h.html#65

collected posts mentioning bitnet &/or earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

later the grenoble science center had modified cp67 to have a "working set dispatcher" with local lru page replacement, and published a paper in ACM on the work.

in the later contention over awarding a stanford phd thesis for clock and global LRU page replacement ... mentioned in prior post
https://www.garlic.com/~lynn/2006i.html#28 virtual memory

i was able to supply performance comparison numbers of cp67 running at the cambridge science center with global LRU page replacement and cp67 at the grenoble science center with local LRU page replacement ... which possibly contributed to helping resolve the issue.

misc. past posts mentioning the cambridge/grenoble comparison
https://www.garlic.com/~lynn/93.html#7 HELP: Algorithm for Working Sets (Virtual Memory)
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2001l.html#6 mainframe question
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
https://www.garlic.com/~lynn/2002q.html#24 Vector display systems
https://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?
https://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 12 May 2006 09:32:04 -0600
Anne & Lynn Wheeler writes:
23 L.W. Comeau, ''CP-40, the Origin of VM/370'', Proceedings of SEAS AM82, September, 1982, p. 40.

random drift, during FS days, Les owned part of the FS architecture related to advanced interconnect. my wife reported to him.
https://www.garlic.com/~lynn/submain.html#futuresys

she then went on to work in the g'burg jes group and then was con'ed into going to POK to be responsible for loosely-coupled (i.e. cluster) architecture.
https://www.garlic.com/~lynn/submain.html#shareddata

later when we started work on the ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

minor reference for some additional drift
https://www.garlic.com/~lynn/95.html#13

Les had retired and was director of computing for one of the medical schools in the boston area. He was also the "C" in CLaM ... a three person dataprocessing consulting/services company in the Boston area. We needed a lot of work done on hacmp ... and subcontracted a lot of it out to CLaM (which contributed to their rapid growth).

The science center had moved from 545 tech sq, down the street to 101 Main. After they closed down the science center, CLaM took over their space in 101 Main (for a time).

past posts in this thread:
https://www.garlic.com/~lynn/2006i.html#22 virtual memory
https://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#28 virtual memory
https://www.garlic.com/~lynn/2006i.html#30 virtual memory
https://www.garlic.com/~lynn/2006i.html#31 virtual memory

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers,bit.listserv.vmesa-l
Date: Fri, 12 May 2006 16:04:44 -0600
"robertwessel2@yahoo.com" writes:
Don't forget the single-address-space systems, of which the iSeries (nee AS/400) is perhaps the most commercially successful example. Just because process-per-address space is fashionable, doesn't mean it's the only viable approach.

precursor to the as/400 was the s/38 ... which folklore has as having been done by a bunch of future system people taking refuge in rochester after FS was killed. reference to the future system effort earlier in this thread.
https://www.garlic.com/~lynn/2006i.html#32 virtual memory

misc. collected postings referencing FS. I didn't make myself particularly popular with the FS crowd at the time, drawing some parallel between their effort and a cult film that had been playing non-stop for several years down in central sq.
https://www.garlic.com/~lynn/submain.html#futuresys

the transition of as/400 from cisc architecture to power/pc ... involved a lot of haggling during the 620 chip architecture days ... with rochester demanding a 65th bit be added to the 64bit virtual addressing architecture (they eventually went their own way).

a few past posts mentioning 620, 630 and some of the other power/pc activities:
https://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2001i.html#24 Proper ISA lifespan?
https://www.garlic.com/~lynn/2001i.html#28 Proper ISA lifespan?
https://www.garlic.com/~lynn/2001j.html#36 Proper ISA lifespan?
https://www.garlic.com/~lynn/2004q.html#40 Tru64 and the DECSYSTEM 20
https://www.garlic.com/~lynn/2005q.html#40 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#11 Intel strikes back with a parallel x86 design

i've periodically mused that the migration of as/400 to power/pc was somewhat fort knox reborn. circa 1980 there was an effort to migrate a large number of the internal microprocessors to 801 chips. one of these was to have been the 370 4341 follow-on. I managed to contribute to getting that effort killed ... at least so far as the 4381 was concerned. misc. collected posts mentioning 801, fort knox, romp, rios, somerset, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

for misc. other lore, the executive we had been reporting to when we started the ha/cmp product effort ... moved over to head up somerset ... when somerset was started (i.e. the apple, motorola, ibm, et al effort for power/pc).
https://www.garlic.com/~lynn/subtopic.html#hacmp

the initial port of os/360 (real memory) mvt to 370 virtual memory was referred to as os/vs2 SVS (single virtual storage).

the original implementation was an mvt kernel laid out in a 16mbyte virtual memory (somewhat akin to mvt running in 16mbyte virtual machine) with virtual memory and page handler crafted onto the side ... and CCWTRANS borrowed from cp67.

the os/360 genre had a real memory orientation with heavy dependency on pointer passing in the majority of the APIs ... being able to process any kind of service request required directly addressing the parameter list pointed to by the passed pointer. this was, in part, a big factor in how address spaces came to be used for os/360 operation. The application paradigm involving I/O was largely dependent on direct transfer from/to application allocated storage. Application and library code would build I/O programs with the "real" address locations that were assigned to the application. In the transition to a virtual memory environment, the majority of application I/O still involved passing address pointers to these application-built I/O programs containing "application" allocated storage addresses. In the real address world, the kernel would apply some I/O permission restrictions and then transfer control directly to the application I/O program. In the virtual address space world ... all of these application I/O programs were now specifying virtual addresses ... not real addresses. CP67's kernel "CCWTRANS" handled the building of "shadow" I/O program copies ... fixing the required virtual pages in real storage and translating all of the supplied virtual addresses into real addresses for execution of the "shadow" I/O programs.
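
a rough sketch of the kind of processing a channel program translator like CCWTRANS has to do ... copy the application/guest channel program, pin each referenced virtual page in real storage, and substitute real addresses into the copy. the data structures, names and page-boundary handling below are invented for illustration (python); this is not the CP67 implementation:

PAGE_SIZE = 4096

class TranslationError(Exception):
    pass

def pin_and_translate(virtual_addr, page_table, pin_page):
    """fix the page containing virtual_addr in real storage, return the real address."""
    vpage, offset = divmod(virtual_addr, PAGE_SIZE)
    real_page = page_table.get(vpage)
    if real_page is None:
        real_page = pin_page(vpage)          # bring the page in and lock it
        page_table[vpage] = real_page
    return real_page * PAGE_SIZE + offset

def build_shadow_channel_program(ccws, page_table, pin_page):
    """return a "shadow" copy of the channel program with real addresses."""
    shadow = []
    for ccw in ccws:                          # ccw: dict with command, addr, count
        end = ccw["addr"] + ccw["count"] - 1
        if ccw["addr"] // PAGE_SIZE != end // PAGE_SIZE:
            # the real translator would split such a transfer (data chaining)
            # at the page boundary; that detail is omitted from this sketch
            raise TranslationError("transfer crosses a page boundary")
        shadow.append({
            "command": ccw["command"],
            "addr": pin_and_translate(ccw["addr"], page_table, pin_page),
            "count": ccw["count"],
        })
    return shadow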

recent post about CCWTRAN and shadow I/O programs
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT

SVS evolved into MVS ... there was a separate address space for every application. However, because of the heavy pointer passing paradigm, the "MVS" kernel actually occupied 8mbytes of every application 16mbyte virtual address space. There were some additional hacks required. There were some number of things called subsystems that were part of most operational environments. They existed outside of the kernel (in their own virtual address space) ... but in the MVT & SVS worlds, other applications were in the habit of directly calling these subsystem functions using the pointer passing paradigm ... which required the subsystems (now in separate address spaces) to directly access the calling application's parameters in the application's virtual address space.

The initial solution was something called a "COMMON" segment, a (again initially) 1mbyte area of every virtual address space where applications could stuff parameter values that they needed to be accessed by called subsystems, resident in other address spaces. Over time, as customer installations added a large variety of subsystems, it wasn't unusual to find the COMMON segment taking up five megabytes. While these were MVS systems, with a unique 16mbyte virtual address space for every application, the kernel image was taking 8mbytes out of every virtual address space, and with a five megabyte COMMON area, that would leave a maximum of 3mbytes for application use (out of a 16mbyte virtual address space).

Dual-address space mode was introduced in the late 70s with the 3033 processor (to start to alleviate this problem caused by the extensive use of the pointer passing paradigm). This provided primary and secondary virtual address space modes ... a subsystem (in its own virtual address space) could be called with a pointer to parameters in the application address space. The subsystem had facilities that allowed it to "reach" into other virtual address spaces. A call to one of these subsystems still required passing through the kernel to swap virtual address space pointers ... and some other gorp.

recent mention of some connection between dual-address space and itanium
https://www.garlic.com/~lynn/2006.html#39 What happens if CR's are directly changed?
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces

Along the way there was a desire to move more of the operating system library stuff, which had resided as part of the application code, out into its own address spaces. So dual-address space was generalized to multiple address spaces and a new hardware facility was created called "program call". It was attempting to achieve the efficiency of a branch-and-link instruction calling some library code, while preserving the structured protection mechanisms that otherwise required switching virtual address spaces by passing through privileged kernel code. the privileged "program call" hardware table had a bunch of permission specification controls ... including which collection of virtual address space pointers could be moved into the various access registers. 31-bit virtual addressing was also introduced.
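
to make that a little more concrete, here is a toy model of the idea ... a kernel-maintained table of callable entry points, each specifying the target address space and what the caller's access registers are allowed to designate. the fields, names and checks are hypothetical simplifications (python), not the actual program call / access register definitions; see the Principles of Operation references below for those:

pc_table = {
    # pc_number: (target address space, entry point, address spaces the caller
    #             may have designated in access registers on entry)
    0x100: ("SUBSYS_A", "service_entry", {"CALLER", "SUBSYS_A"}),
}

def program_call(pc_number, caller_space, access_registers):
    entry = pc_table.get(pc_number)
    if entry is None:
        raise PermissionError("no such program-call entry")
    target_space, entry_point, allowed = entry
    for ar_space in access_registers.values():
        if ar_space != caller_space and ar_space not in allowed:
            raise PermissionError("access register designates a disallowed space")
    # control transfers to entry_point executing in target_space; the access
    # registers still let the callee reach the caller's parameter areas
    # without a separate kernel call for every access
    return target_space, entry_point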

today there are all sorts of combinations of 24-bit virtual addressing, 31-bit virtual addressing, 64-bit virtual addressing ... as well as the possibility of several such virtual address spaces being accessed concurrently.

3.8 Address spaces ... some overview including discussion about multiple virtual address spaces:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.8?SHELF=EZ2HW125&DT=19970613131822

2.3.5 Access Registers ... discussion of how access registers 1-15 can designate any address space
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/2.3.5?SHELF=EZ2HW125&DT=19970613131822

10.26 Program Call
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.26?SHELF=EZ2HW125&DT=19970613131822

10.27 Program Return
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.27?SHELF=EZ2HW125&DT=19970613131822

10.28 Program Transfer
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.28?SHELF=EZ2HW125&DT=19970613131822

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

TOD clock discussion

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TOD clock discussion.
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 12 May 2006 16:39:30 -0600
DASDBill2@ibm-main.lst wrote:
Some other "users" (aka Fortune 100 companies) are used to reliability, availability, and serviceability of such magnitude that an unscheduled reIPL (also pronounced "reboot") on their mainframes happens perhaps twice a year. Memory leaks are not a valid reason for restarting these systems, but amazingly enough these users are intolerant enough to demand that the memory leaks be fixed. And, even more amazingly, the software vendors involved even fix the leaks.

circa 1980, STL wanted to relocate something like 300 of the IMS people to an offsite building about 10 miles away. They looked at remote 3270 support into the STL datacenter but found it totally unacceptable.

NSC had HYPERchannel with A22x channel adapters (looked like control unit and directly attached to channels), A7xx telco adapters, and A51x remote device adapters (emulated ibm mainframe channel and allowed connection of control units). These could be configured to effectively provide channel extension over telco links. This was used to provide remote service for the 300 relocated people from the IMS group. Turned out as a side-effect of getting the local 3274 controllers directly off the IBM channels ... things actually improved about 10-15 percent compared to the direct attached operation (which more than compensated for the slight increase in latency over T1-telco link in the channel path).
https://www.garlic.com/~lynn/subnetwork.html#hsdt

So I tried to make the software generally available ... but both the communication group and the fiber group in pok non-concurred (what eventually was released as escon had been lying around pok since the late 70s). However, the vendor took my design ... and re-implemented it from scratch.

so we roll forward, the 3090 had been in customer shops for a year ... and the product manager for the 3090 tracks me down. a lot of customers provide erep data to an industry operation that collected and provided report summaries about different vendor RAS (reliability, availability, serviceability). the 3090 product manager was very concerned about the 3090 data.

the 3090 channels had been designed to have 3-5 channel errors per year (all channels for all customers, aka not 3-5 channel errors per channel per customer per year ... but 3-5 channel errors aggregate per year across all channels and all customers). the industry source reported that there had been something like a total of 16 reported channel errors for the year. they were deeply concerned about the additional 11 reported channel errors.

well, it turned out to be a number of customers using HYPERchannel for channel extension. in the original design, if i got an unrecoverable i/o error from the telco transmission or the remote end ... i would eventually simulate a channel check error on the operation ... which got it into the standard operating system erep retry, recovery, and recording processing. this accounted for the additional 11 reported 3090 channel errors for the year.

so i went back and looked at the stuff in detail and decided that for all intents and purposes ... a simulated IFCC (interface control check) resulted in nearly identical operating system retry and recovery. I got the vendor to change the channel check simulation to IFCC simulation.

can you imagine any other market segment where they would even know whether the total aggregate i/o bus errors across all machines installed at all customers totaled 15 errors for the whole year?

can you imagine any other market segment where there was an industry service that collected all such information across the installed customer base and provided regular industry reports?

can you imagine any other market segment where the customers would regularly buy and read such industry reports?

misc. past posts mentioning the 3090 channel incident:
https://www.garlic.com/~lynn/94.html#24 CP spooling & programming technology
https://www.garlic.com/~lynn/96.html#27 Mainframes & Unix
https://www.garlic.com/~lynn/2004j.html#19 Wars against bad things
https://www.garlic.com/~lynn/2004q.html#51 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#28 Adversarial Testing, was Re: Thou shalt have no
https://www.garlic.com/~lynn/2005e.html#13 Device and channel
https://www.garlic.com/~lynn/2005u.html#22 Channel Distances
https://www.garlic.com/~lynn/2006b.html#21 IBM 3090/VM Humor

TOD clock discussion

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TOD clock discussion.
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 12 May 2006 16:57:41 -0600
Anne & Lynn Wheeler wrote:
can you imagine any other market segment where they would even know if the total aggregate i/o bus errors across all machines installed at all customers totaled 15 errors for the whole year.

can you imagine any other market segment where there was an industry service that collected all such information across the installed customer base and provided regular industry reports?

can you imagine any other market segment where the customers would reguarly buy and read such industry reports.


ref:
https://www.garlic.com/~lynn/2006i.html#34 TOD clock discussion

i had given one of the keynotes at the 2001 nasa high dependability computing consortium workshop
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/index.html

and told an abbreviated version of the 3090 story and asked if any of the other vendors had real, published RAS data for their hardware. Some of the vendors said that they had some approximation/estimate for some of the information, but none of the vendors at the conference were willing to publicly publish any of their numbers.

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 12 May 2006 20:25:57 -0600
Bill Todd writes:
I took a quick look and saw nothing specific, so you'll need to provide a pointer or two.

ref:
https://www.garlic.com/~lynn/2006i.html#31 virtual memory

I posted it numerous times before ... so there were URLs to where it had been repeatedly posted. basically i had done global LRU. cambridge had a 768kbyte 360/67, with 104 "pageable pages" left after fixed kernel requirements, running cp67.

grenoble modified the same cp67 for a "working set" dispatcher and local LRU, running on a 1mbyte 360/67 with 154 "pageable pages" after fixed kernel requirements (basically 50 percent more real storage for application paging). they published a paper on the work in the early 70s in cacm
J. Rodriquez-Rosell, The design, implementation, and evaluation of a working set dispatcher, cacm16, apr73

basically running the same kind of workload, cambridge ran approx. 80 users with the same interactive response and thruput as grenoble did with 35 users ... i.e. the cambridge system, with 1/3rd less real storage for application paging, was able to support more than twice as many users with comparable thruput and no worse interactive response ... typically much better. the interactive response was somewhat more sensitive to the latency associated with the "working set dispatcher" attempting to avoid page thrashing and to how effective local LRU was in selecting pages for replacement ... so the cambridge system, even with more than twice as many users, typically had better interactive response and lower latency delays.

specific reference from the earlier listed "grenoble" references
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement

after all the fuss had settled, the stanford phd on clock and global LRU was finally awarded ... despite the best efforts by some of the backers of local LRU.

consistent with the stuff I had invented as an undergraduate in the 60s ... also, about the time the whole uproar was going on over whether somebody would get a stanford phd thesis on global LRU ... we had a project that was recording all disk record references from a variety of operational production systems.

there was also a fairly sophisticated cache simulator built which was looking at various disk i/o caching strategies. simulation was done for a broad range of different configurations: disk arm caching, controller caching, channel caching, system caching, etc. across the broad range of different environments and workloads, it was found that for a given amount of electronic storage ... system level caching always provided better throughput (modulo some issues with use of some disk arm store for rotational delay compensation ... i.e. being able to start i/o transfer as soon as the head had settled as opposed to waiting for the records to arrive under the head in a specified sequence). if there was a fixed amount of electronic cache ... say 20mbytes ... using that 20mbytes for a system level cache always provided better thruput than breaking the electronic store up and partitioning it out to any level of sub-components.

partitioning the electronic store and parceling it out to subcomponents is analogous to doing local LRU replacement strategy (i.e. partitioning real memory and only doing replacement within the individual partitions).

aggregating the electronic store into a single larger cache is analogous to doing global LRU replacement strategy.
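
the analogy is easy to demonstrate with a toy simulator ... run the same reference trace against one pooled LRU cache and against the same total storage split into fixed per-device partitions. the trace and sizes below are made up for illustration (python); this is not the simulator described above:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = 0
    def reference(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)
            self.hits += 1
        else:
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)     # evict least recently used
            self.entries[key] = True

def run_global(trace, total_size):
    cache = LRUCache(total_size)
    for dev, block in trace:
        cache.reference((dev, block))
    return cache.hits

def run_partitioned(trace, total_size, devices):
    per_dev = {d: LRUCache(total_size // len(devices)) for d in devices}
    for dev, block in trace:
        per_dev[dev].reference(block)
    return sum(c.hits for c in per_dev.values())

trace = []
for i in range(3000):
    trace.append(("A", i % 150))     # device A cycles over 150 distinct blocks
    trace.append(("B", i % 20))      # device B only ever touches 20 blocks

print("pooled cache hits:     ", run_global(trace, 200))
print("partitioned cache hits:", run_partitioned(trace, 200, ["A", "B"]))

with a skewed trace like that, the pooled cache can give the busy device the storage the idle device isn't using, while the fixed partitions can't ... which is the same reason global LRU tends to beat partitioned/local replacement for a given amount of real storage.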

some past posts mentioning the global versus partitioned cache work:
https://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
https://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/99.html#105 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005.html#2 Athlon cache question
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s

and for some drift and urban legend
http://susandennis.livejournal.com/2003/02/04/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 12 May 2006 20:45:34 -0600
Anne & Lynn Wheeler writes:
specific reference from the earlier listed "grenoble" references
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement


more of the references listed in the above:
L. Belady, A Study of Replacement Algorithms for a Virtual Storage Computer, IBM Systems Journal, v5n2, 1966

L. Belady, The IBM History of Memory Management Technology, IBM Journal of R&D, v25n5

R. Carr, Virtual Memory Management, Stanford University, STAN-CS-81-873 (1981)

R. Carr and J. Hennessy, WSClock, A Simple and Effective Algorithm for Virtual Memory Management, ACM SIGOPS, v15n5, 1981

P. Denning, Working sets past and present, IEEE Trans Softw Eng, SE6, jan80

J. Rodriquez-Rosell, The design, implementation, and evaluation of a working set dispatcher, cacm16, apr73

D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971


....

for a little more drift, the Hatfield & Gerald reference in the above eventually resulted in a product called VS/REPACK ... another product of the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

i wasn't involved directly in any of the analysis part of the product ... but i wrote a lot of the data collection facilities that were used to collect the data that vs/repack analysed.

for additional random drift, various past posts mentioning vs/repack
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005.html#4 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005o.html#5 Code density and performance?
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006e.html#20 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 12 May 2006 22:09:02 -0600
Anne & Lynn Wheeler writes:
L. Belady, A Study of Replacement Algorithms for a Virtual Storage Computer, IBM Systems Journal, v5n2, 1966

L. Belady, The IBM History of Memory Management Technology, IBM Journal of R&D, v25n5

R. Carr, Virtual Memory Management, Stanford University, STAN-CS-81-873 (1981)

R. Carr and J. Hennessy, WSClock, A Simple and Effective Algorithm for Virtual Memory Management, ACM SIGOPS, v15n5, 1981

P. Denning, Working sets past and present, IEEE Trans Softw Eng, SE6, jan80

J. Rodriquez-Rosell, The design, implementation, and evaluation of a working set dispatcher, cacm16, apr73

D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971


a couple other specific postings about the above:
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 12 May 2006 22:19:55 -0600
another posting going over this
https://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 13 May 2006 01:18:52 -0600
note this same discussion from comp.arch thread in 7aug2004
https://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than inner cylinders?

a little more in that same thread
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004i.html#8 Hard disk architecture: are outer cylinders still faster than inner cylinders?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 13 May 2006 08:08:57 -0600
Anne & Lynn Wheeler writes:
consistent with the stuff I had invented as an undergraduate in the 60s ... also, about the time the whole uproar was going on over whether somebody would get a stanford phd thesis on global LRU ... we had a project that was recording all disk record references from a variety of operational production systems.

there was also a fairly sophisticated cache simulator built which was looking at various disk i/o caching strategies. simulation was done for a broad range of different configurations: disk arm caching, controller caching, channel caching, system caching, etc. across the broad range of different environments and workloads, it was found that for a given amount of electronic storage ... system level caching always provided better throughput (modulo some issues with use of some disk arm store for rotational delay compensation ... i.e. being able to start i/o transfer as soon as the head had settled as opposed to waiting for the records to arrive under the head in a specified sequence). if there was a fixed amount of electronic cache ... say 20mbytes ... using that 20mbytes for a system level cache always provided better thruput than breaking the electronic store up and partitioning it out to any level of sub-components.

partitioning the electronic store and parceling it out to subcomponents is analogous to doing local LRU replacement strategy (i.e. partitioning real memory and only doing replacement within the individual partitions).

aggregating the electronic store into a single larger cache is analogous to doing global LRU replacement strategy.


in this previous incarnation of the thread/subject from 7aug2004
https://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004i.html#8 Hard disk architecture: are outer cylinders still faster than inner cylinders?

there was the issue of paging virtual memory being a flavor of cache managed by LRU-type replacement algorithms. besides the similarity between global/local LRU replacement and the configuration decision about having a fixed amount of electronic store as a single "system" global cache versus partitioned and divided up into smaller caches dedicated to specific sub-devices ... you can also get into the subject of cache hierarchies, as well as technology limitations where you have lots of electronic store but for various reasons it can't all be applied at the highest level in the hierarchy (in which case being able to use the available electronic storage at other levels in the infrastructure might still provide some benefit).

an example of the limitation issues was the 4.5mip 3033 processor versus a cluster of six 1mip 4341 processors. initially, all the real memory you could configure with either the 3033 or the 4341 was 16mbytes. six 1mip 4341 processors with 16mbytes of electronic store each (96mbytes total) cost about the same as a 4.5mip 3033 processor with 16mbytes (16mbytes total) .... assuming the workload was partitionable in a cluster, the six 4341 cluster got much higher thruput than the single 3033 at about the same price.

the cache hierarchy could have other issues that I've referred to as the dup/no-dup problem. circa 1981, you could get a 3880 disk controller with cache ... limited to 8mbytes. it came as either a 3880-11/ironwood, where the cache line was a 4k record, or a 3880-13/sheriff, where the cache line was a full track.

a 3033 or 3081 with 16mbytes real storage might have two such disk controllers in the configuration for a total of 16mbytes of disk controller cache. the issue in the "duplicate" strategy was that if every page fault by the processor pulled in a page from disk thru the disk controller cache, a copy of the page would be resident in both the disk controller cache and the processor memory (i.e. there was a duplicate in both places). then the question becomes: when could there be a page that wasn't in the processor's 16mbyte memory that happened to be in the disk cache? i.e. if the processor faulted on a virtual page (because it wasn't resident and needed to be brought in from disk), was there a reasonable probability of that page being found in the disk cache? if the total disk cache held about the same number of 4k records as the main processor memory and every record in processor memory was duplicated in disk cache ... what was the probability that there was a 4k record in the disk cache that wasn't in processor memory?

I had actually started working on a resource management duplicate/no-duplicate strategy with page migration in configurations with a hierarchy of different-performance paging devices (fixed-head 2305s and lots of moveable arm 3330 disks) in the 70s ... which was released as part of the resource manager on 11may1976 ... a couple recent references:
https://www.garlic.com/~lynn/2006h.html#25 The Pankian Metaphor
https://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#26 11may76, 30 years, (re-)release of resource manager
https://www.garlic.com/~lynn/2006i.html#28 virtual memory

in the 3880-11/ironwood scenario the processor page i/o could do reads that were "normal" (if the record wasn't in the disk cache, it would be read into both the disk cache and processor memory; if it was in the disk cache, it would be read directly from the cache) or "destructive" (if it was in the disk cache, it would be read from the disk cache and deallocated from the cache ... aka destructive read; if it wasn't in the disk cache, it would be read directly into processor memory w/o involving the cache).
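
a toy model of the no-duplicate idea ... a controller-level paging cache where the host can ask for a "destructive" read, so the same 4k record isn't held in both the controller cache and processor memory. the interface and the write-out path below are invented for illustration (python); they are not the 3880-11 command set:

class ControllerCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}                          # block id -> data

    def read(self, block, read_from_disk, destructive=False):
        if block in self.blocks:
            data = self.blocks[block]
            if destructive:
                del self.blocks[block]            # no-dup: drop the cache copy
            return data, "cache hit"
        data = read_from_disk(block)
        if not destructive:
            self._make_room()
            self.blocks[block] = data             # normal read also populates cache
        return data, "cache miss"                 # destructive miss bypasses the cache

    def write_out(self, block, data):
        # a page being replaced in processor memory can be deposited in the
        # controller cache on its way out, where it may still get a later hit
        self._make_room()
        self.blocks[block] = data

    def _make_room(self):
        if len(self.blocks) >= self.capacity:
            self.blocks.pop(next(iter(self.blocks)))   # crude eviction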

misc. past posts mentioning duplicate/no-duplicate scenarios for hierarchical caches
https://www.garlic.com/~lynn/93.html#12 managing large amounts of vm
https://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
https://www.garlic.com/~lynn/94.html#9 talk to your I/O cache
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001i.html#42 Question re: Size of Swap File
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001n.html#78 Swap partition no bigger than 128MB?????
https://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#16 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#19 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002f.html#26 Blade architectures
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#19 fast check for binary zeroes in memory
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2005c.html#27 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#18 how much swap size did you take?

misc. past posts mentioning the 3033/4341 partitioned comparison (implementation issues)
https://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
https://www.garlic.com/~lynn/99.html#7 IBM S/360
https://www.garlic.com/~lynn/99.html#110 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#82 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000e.html#57 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002b.html#2 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
https://www.garlic.com/~lynn/2002i.html#22 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#23 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#59 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002p.html#59 AMP vs SMP
https://www.garlic.com/~lynn/2002q.html#27 Beyond 8+3
https://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#67 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003c.html#77 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003d.html#33 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
https://www.garlic.com/~lynn/2003i.html#5 Name for this early transistor package?
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003l.html#31 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004.html#8 virtual-machine theory
https://www.garlic.com/~lynn/2004d.html#12 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004j.html#57 Monster(ous) sig (was Re: Vintage computers are better
https://www.garlic.com/~lynn/2004l.html#10 Complex Instructions
https://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor
https://www.garlic.com/~lynn/2004n.html#14 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#50 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#34 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005d.html#62 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005m.html#25 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#11 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#29 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2005p.html#19 address space
https://www.garlic.com/~lynn/2005q.html#30 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#38 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005s.html#22 MVCIN instruction
https://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#44 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#48 POWER6 on zSeries?
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#34 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past
https://www.garlic.com/~lynn/2006i.html#33 virtual memory

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 13 May 2006 09:16:13 -0600
"Eric P." writes:
After our previous discussions, I looked high and low for any details on the grenoble system, and fifo or local working sets on cp67. Unfortunately I found nothing.

My suspicion is that Grenoble used strict fifo local working sets. These are now known (possibly as a result of that very project) to not perform well.

VMS and WNT use Second Chance Fifo, which has very different behavior to strict Fifo, and is reputed to have the same behavior as WSClock. VMS also has an option for a third chance - I don't know if WNT also has that. This gives them all the control advantages that local working sets allow with the same paging statistics as global.

In second chance fifo, pages removed from a local working set are tossed into a global Valid list to become a candidate for recycling. If referenced again quickly the page is pulled page into the local working set for almost no cost. This is essentially the same as the WSClock and its referenced bits.

In 3rd chance, VMS allows a page to make 2 trips through the working set list. After the first trip a flag is set on the working set entry it goes to the tail of the list and the PTE's valid flag is cleared. If it gets touched again then the handler just enables the PTE. When it gets to the head of the list again the PTE is checked to see if it was referenced. If is was, it cycles again, otherwise it goes into the global Valid list. [1]

The working set size is not fixed but can vary if memory is available. If a process pages too much the size is increased until it stops. [2]

The combination of all these features makes the VMS/WNT local working sets completely different from the early strict fifo models.

[1] My informal experiments on VMS found that enabling the 3rd chance mechanism had no effect on performance. The 2nd chance mechanism appears to be quite sufficient.

[2] A valid criticism of VMS, and possibly WNT, is they use boot time sized arrays to track the working set entries rather than linked lists. Because of this there is an absolute maximum working set size and processes which reach that maximum will page fault even when there is gobs of free memory on a system. However that is a deficiency in a particular implementation and not due to the local working sets.

There is no technical reason for these to not allow process working sets to be arbitrarily large, up to all of virtual space is memory is available, and use dynamic software controls to limit their size.

J. Rodriquez-Rosell, The design, implementation, and evaluation of a working set dispatcher, cacm16, apr73

Thanks, I didn't see this reference before. Unfortunately it is imprisoned inside the ACM.


some posts in previous incarnation of this thread from
https://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004i.html#8 Hard disk architecture: are outer cylinders still faster than inner cylinders?

the original cp67 that was installed at the univ. the last week of jan68 used effectively a variation on fifo; it scanned storage looking for pages in a state somewhat like your second chance ... and if it couldn't find one ... it took a standard page.

within several months, i had changed that and added testing and resetting of the reference bits in the cycle ... basically a clock-like operation.
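
a minimal sketch of that clock-style mechanism ... cycle through the real page frames testing and resetting the reference bit, and pick the first frame whose bit is already off as the replacement candidate. this is just the textbook mechanism in python, not the cp67 code:

class Frame:
    def __init__(self, page):
        self.page = page
        self.referenced = True       # set by the "hardware" when the page is touched

def clock_select(frames, hand):
    """return (victim frame index, new hand position)."""
    n = len(frames)
    while True:
        frame = frames[hand % n]
        if frame.referenced:
            frame.referenced = False         # give it another trip around
            hand += 1
        else:
            return hand % n, (hand + 1) % n

frames = [Frame(p) for p in range(4)]        # all recently referenced
victim, hand = clock_select(frames, 0)       # one sweep resets the bits, frame 0 selected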

grenoble used resetting of reference bits in local LRU.

so multics did a multi-bit reference cycle operation ... 5th floor, 545 tech. sq; science center was on 4th floor 545 tech sq.
https://www.garlic.com/~lynn/subtopic.html#545tech

the original cp67 didn't have any thrashing controls. early in '68, lincoln labs made a modification that limited the total number of processes in the active/dispatch set ... as a means of limiting page contention ... however, it didn't take into account the amount of real storage needed by the different virtual address spaces.

i added some calculations that measured the real storage requirements of a virtual address space. the real storage requirement calculations were dynamically adjusted, taking into account some measure of the proportion of time spent waiting for page fault servicing ... aka if total page fault service time was low, the estimate of the virtual address space's requirement for real storage was lowered; if total page fault service time was high, the estimate was increased ... basically dynamically adapting the estimate of real storage requirements to configuration, workload, and contention (i.e. high paging rates that also introduced long queues and high service times would increase the estimates of real storage requirements; the same high paging rates with higher speed devices, no queues, and much lower service times would produce lower estimates).
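
a hedged sketch of that dynamic-adaptive idea ... a task's estimated real-storage requirement is nudged up when it spends a large fraction of its time waiting on page-fault service and nudged down when that fraction is small, so the estimate tracks configuration, workload and device speed rather than a fixed window. the thresholds and update rule below are invented for illustration (python), not the cp67 values:

HIGH_WAIT = 0.20      # hypothetical thresholds on the fraction of elapsed time
LOW_WAIT  = 0.05      # a task spends waiting for page-fault service

def adjust_estimate(estimate, page_wait_time, elapsed_time, minimum=4, maximum=1024):
    wait_fraction = page_wait_time / max(elapsed_time, 1e-9)
    if wait_fraction > HIGH_WAIT:
        estimate = min(maximum, int(estimate * 1.25) + 1)   # needs more real storage
    elif wait_fraction < LOW_WAIT:
        estimate = max(minimum, int(estimate * 0.9))        # can get by with less
    return estimate

def admit(tasks, pageable_frames):
    # the dispatcher only admits address spaces to the active set while the
    # sum of their estimates fits in the available pageable frames
    active, used = [], 0
    for name, estimate in tasks:
        if used + estimate <= pageable_frames:
            active.append(name)
            used += estimate
    return active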

in the early 70s, we ran a huge number of experiments with lots of different code implementations in live operation, as well as having various kinds of detailed simulators that also tried a wide variety of different implementation details.

one of the scenarios that i had come up with (in the early 70s) was something that would beat true LRU. in part, LRU is based on the assumption that pages used in the recent past are likely to be used in the near future. this is only a rough guess across a wide range of different operational characteristics. sometimes the LRU-related assumption about page use is very accurate and sometimes it has no relationship to what is actually happening. in some cases, LRU replacement degenerates to FIFO and is not the optimal strategy.

the simulators could mimic the actual implementations, which all tended to be approximations of true LRU. the simulators could also implement actual LRU (ordering page references on every instruction executed). In general, the simulators found that "true" LRU did 10-15 percent better than the real-world LRU approximation implementations. so there had been lots and lots of work attempting to come closer and closer to "true" LRU.

however, the gimmick that I had come up with looked, tasted, and operated as standard clock ... a flavor of LRU approximation. The referenced multics experiment with multiple reference bits basically kept longer and longer history ... attempting to come closer and closer to a "true" LRU implementation.

I did a sleight-of-hand coding trick in an implementation ... where all the standard LRU varieties degenerated to FIFO under various conditions ... this sleight-of-hand coding gimmick degenerated to RANDOM under the same conditions. as a result, when LRU was applicable, both "true" LRU and my LRU approximation gimmick operated nearly the same. however, in operational regions where LRU would degenerate to FIFO, my coding sleight-of-hand degenerated to RANDOM automagically (aka there was no explicit code to switch from LRU to RANDOM ... it just turned out that the coding sleight-of-hand resulted in the implementation degenerating to RANDOM rather than FIFO).

previously mentioned references:
L. Belady, A Study of Replacement Algorithms for a Virtual Storage Computer, IBM Systems Journal, v5n2, 1966

L. Belady, The IBM History of Memory Management Technology, IBM Journal of R&D, v25n5

R. Carr, Virtual Memory Management, Stanford University, STAN-CS-81-873 (1981)

R. Carr and J. Hennessy, WSClock, A Simple and Effective Algorithm for Virtual Memory Management, ACM SIGOPS, v15n5, 1981

P. Denning, Working sets past and present, IEEE Trans Softw Eng, SE6, jan80

J. Rodriguez-Rosell, The design, implementation, and evaluation of a working set dispatcher, CACM, v16, apr73

D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971


past posts mentioning lru, fifo, random:
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#10 lru, clock, random & dynamic adaptive
https://www.garlic.com/~lynn/94.html#14 lru, clock, random & dynamic adaptive ... addenda
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes
https://www.garlic.com/~lynn/98.html#17 S/360 operating systems geneaology
https://www.garlic.com/~lynn/2000f.html#9 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#32 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#34 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2001f.html#55 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2002j.html#31 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002j.html#32 Latency benchmark (was HP Itanium2 benchmarks)
https://www.garlic.com/~lynn/2002j.html#35 Latency benchmark (was HP Itanium2 benchmarks)
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#0 Alpha performance, why?
https://www.garlic.com/~lynn/2003i.html#72 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2004l.html#66 Lock-free algorithms
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#77 Athlon cache question
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006d.html#21 IBM 610 workstation computer

past posts in this thread
https://www.garlic.com/~lynn/2006i.html#22 virtual memory
https://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#28 virtual memory
https://www.garlic.com/~lynn/2006i.html#30 virtual memory
https://www.garlic.com/~lynn/2006i.html#31 virtual memory
https://www.garlic.com/~lynn/2006i.html#32 virtual memory
https://www.garlic.com/~lynn/2006i.html#33 virtual memory
https://www.garlic.com/~lynn/2006i.html#36 virtual memory
https://www.garlic.com/~lynn/2006i.html#37 virtual memory
https://www.garlic.com/~lynn/2006i.html#38 virtual memory
https://www.garlic.com/~lynn/2006i.html#39 virtual memory
https://www.garlic.com/~lynn/2006i.html#40 virtual memory
https://www.garlic.com/~lynn/2006i.html#41 virtual memory

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers,bit.listserv.vmesa-l
Date: Sat, 13 May 2006 10:15:09 -0600
"Eric P." writes:
VMS and WNT use Second Chance Fifo, which has very different behavior to strict Fifo, and is reputed to have the same behavior as WSClock. VMS also has an option for a third chance - I don't know if WNT also has that. This gives them all the control advantages that local working sets allow with the same paging statistics as global.

In second chance fifo, pages removed from a local working set are tossed into a global Valid list to become a candidate for recycling. If referenced again quickly the page is pulled back into the local working set for almost no cost. This is essentially the same as the WSClock and its referenced bits.

In 3rd chance, VMS allows a page to make 2 trips through the working set list. After the first trip a flag is set on the working set entry, it goes to the tail of the list, and the PTE's valid flag is cleared. If it gets touched again then the handler just enables the PTE. When it gets to the head of the list again the PTE is checked to see if it was referenced. If it was, it cycles again, otherwise it goes into the global Valid list. [1]
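
(a rough sketch of the two-list mechanism described above ... a per-process fifo working set list plus a global standby/"valid" list; all the names are made up and this is not the actual VMS or NT code:)

/* sketch: pages trimmed fifo-fashion from a per-process working set go
   onto a global "valid"/standby list; if touched again while there,
   they are pulled back into the working set for almost no cost */
#include <stdbool.h>
#include <stddef.h>

struct page {
    struct page *next;
    bool         valid;       /* pte valid bit                    */
    bool         on_standby;  /* sitting on the global valid list */
};

struct queue { struct page *head, *tail; };

static struct queue working_set;   /* per process, fifo order */
static struct queue standby;       /* global valid list       */

static void enqueue(struct queue *q, struct page *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static struct page *dequeue(struct queue *q)
{
    struct page *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

/* working set over its limit: trim the oldest page onto the standby
   list -- its contents stay in real memory, only the pte valid bit drops */
void trim_one(void)
{
    struct page *p = dequeue(&working_set);
    if (!p) return;
    p->valid = false;
    p->on_standby = true;
    enqueue(&standby, p);
}

/* soft fault: page touched again while still on standby, so pull it
   back into the working set (removal from the standby queue omitted) */
void soft_fault(struct page *p)
{
    p->on_standby = false;
    p->valid = true;
    enqueue(&working_set, p);
}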


as mentioned in the previous post
https://www.garlic.com/~lynn/2006i.html#22 virtual memory
https://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#28 virtual memory
https://www.garlic.com/~lynn/2006i.html#30 virtual memory
https://www.garlic.com/~lynn/2006i.html#31 virtual memory
https://www.garlic.com/~lynn/2006i.html#32 virtual memory
https://www.garlic.com/~lynn/2006i.html#33 virtual memory
https://www.garlic.com/~lynn/2006i.html#36 virtual memory
https://www.garlic.com/~lynn/2006i.html#37 virtual memory
https://www.garlic.com/~lynn/2006i.html#38 virtual memory
https://www.garlic.com/~lynn/2006i.html#39 virtual memory
https://www.garlic.com/~lynn/2006i.html#40 virtual memory
https://www.garlic.com/~lynn/2006i.html#41 virtual memory
https://www.garlic.com/~lynn/2006i.html#42 virtual memory

some of the work that i had done in the 60s as an undergraduate for cp67 ... had been dropped in the morph from cp67 to vm370. i was able to rectify that with the resource manager released 11may1976.

the cp67 "clock" scanned pages by their real storage address. basically the idea behind a "clock" reset and testing the reference bit is that the time it takes to cycle completely around all virtual pages represents approximately the same interval between the resetting and testing for all pages.

one of the advantages of the clock type implementation that i had done in the 60s was that it had some interesting dynamic adaptive stuff. if there weren't enuf pages, the replacement algorithm would be called more frequently ... causing it to cycle through more pages faster. as it did the cycle quicker ... there was a shortened time between when a page had its reference bit reset and when it was tested again. with the shortened cycle time, there tended to be more pages that hadn't had a chance to be used and therefore have their reference bit turned on again. as a result, each time the selection was called on a page fault, fewer pages had to be examined before finding one w/o its reference bit set. if the avg. number of pages examined per page fault was reduced ... then that increased the total time to cycle through all pages (allowing more pages to have a chance to be used and have their reference bit set).
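
a sketch of that style of clock ... scanning frames in real storage address order, resetting and testing reference bits (illustration only, not the original cp67 code):

/* sketch of a clock scan over frames in real storage address order;
   the avg. number of frames examined per fault is the quantity that
   shrinks when the hand is cycling fast and grows when it is slow */
#define NFRAMES 256

struct frame {
    int page;         /* -1 if free */
    int referenced;   /* reference bit (simplified) */
};

static struct frame real_storage[NFRAMES];   /* indexed by real address */
static int  hand;                            /* current scan position   */
static long total_faults, total_examined;    /* for the per-fault avg.  */

int select_replacement(void)
{
    total_faults++;
    for (;;) {
        struct frame *f = &real_storage[hand];
        int victim = hand;

        hand = (hand + 1) % NFRAMES;
        total_examined++;

        if (!f->referenced)
            return victim;    /* not used since the last pass: take it */
        f->referenced = 0;    /* used: reset and test again next cycle */
    }
}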

part of the vm370 morph was that it changed the page scanning from real storage address order (which basically distributed virtual pages fairly randomly) to a linked list. one of the side-effects of the linked list management was that it drastically disturbed the basic premise under which clock operated. with the position of virtual pages in the list constantly being perturbed ... it was no longer possible to assert that the time between when a page had its reference bit reset (and wasn't taken) and when it was examined again ... was in any way related to the avg. time it took clock to cycle through all pages.

basically a single reference bit represents some amount of activity history related to the use of that specific page. in clock the avg. amount of activity history that a reference bit represents is the interval between the time the bit was reset and the time it was examined again. on the avg. that is the interval that it takes clock to cycle through all pages ... and is approximately the same for all pages. if pages were being constantly re-ordered on the list (that is being used by clock to examine pages) ... there is much less assurance that the interval between times that a specific page was examined in any way relates to the avg. elapsed time it takes clock to make one complete cycle through all pages. this perturbs and biases how pages are selected in ways totally unrelated to the overall system avg. of the interval between one reset/examination and the next ... basically violating any claim to approximating a least recently used replacement strategy.

because of the constant list re-ordering in the initial vm370 implementation ... it was no longer possible to claim that it actually was a real approximation of a least recently used replacement strategy. on the "micro" level ... they claimed that the code made complete cycles through the list ... just like the implementation that cycled through real storage. however, at the "macro" level, they didn't see that the constant list re-ordering invalidated basic assumptions about approximating a least recently used replacement strategy.

the other thing about the initial morph to vm370 was that "shared" virtual memory pages were not included in the standard list for selection ... so they were not subject to the same examine/reset/examine replacement cycle as non-shared pages. this was downplayed by saying that it only amounted to, at most, 16 shared pages.

well a couple releases came and went ... and they then decided to release a small subset of my memory mapping stuff as something called discontiguous shared segments. recent post on the subject in this thread
https://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370

basically the support in vm370 for having more than a single shared segment ... and some number of my changes to cms code to make it r/o (i.e. be able to operate in a r/o protected shared segment) ... various collected posts
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon

in any case, this drastically increased the possible amount of shared virtual pages ... that were being treated specially by the list-based replacement algorithm ... and not subject to the same least recently used replacement strategy as normal virtual pages. some shared virtual page at any specific moment might only be relatively lightly used by a single virtual address space ... even tho it appeared in multiple different virtual address spaces; aka its "sharing" characteristic might have little to do with its "used" characteristic (but the "sharing" characteristic was somewhat being used in place of its actual "use" characteristic for determining replacement selection).

however, i was able to rectify that when they allowed me to ship resource manager several months later on 11may76 ... and put the replacement strategy back to the way I had implemented it for cp67 as an undergraduate in the 60s.
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

so i had a similar but different argument with the group doing os/vs2 ... the morph of real-memory os/360 mvt with support for virtual memory. recent post in this thread about other aspects of that effort
https://www.garlic.com/~lynn/2006i.html#33 virtual memory

they were also claiming that they were doing a least recently used replacement strategy. however, their performance group did some simple modeling and found that if they chose non-changed least recently used pages ... before choosing changed least recently used pages ... the service time to handle the replacement was drastically reduced. a non-changed page already had an exact duplicate out on disk ... and therefore replacement processing could simply discard the in-memory copy and make the real memory location available. a "changed" page selected for replacement first had to be written to disk before the real memory location was available. first attempting to select non-changed pages for replacement significantly reduced the service time and processing. I argued that such an approach basically perturbed and violated any claim to approximating a least recently used replacement strategy. they did it anyway.
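
a sketch of that kind of "non-changed pages first" selection (illustration only, not the actual os/vs2 code):

/* sketch: a two-pass scan that prefers an unreferenced, unchanged page
   (an exact copy already exists on disk, so the frame can be stolen
   immediately) and only falls back to an unreferenced changed page
   (which has to be written out first) */
#define NFRAMES 256

struct frame {
    int page;
    int referenced;   /* reference bit      */
    int changed;      /* change (dirty) bit */
};

static struct frame frames[NFRAMES];

int select_clean_first(void)
{
    int i;

    /* pass 1: unreferenced and unchanged */
    for (i = 0; i < NFRAMES; i++)
        if (!frames[i].referenced && !frames[i].changed)
            return i;

    /* pass 2: unreferenced but changed */
    for (i = 0; i < NFRAMES; i++)
        if (!frames[i].referenced)
            return i;

    return -1;   /* nothing replaceable right now */
}

/* the bias: read-only pages (e.g. shared executable code) are never
   "changed", so whenever both kinds are candidates the code pages get
   stolen first ... even when they are used far more heavily than the
   constantly-changing application data pages */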

so os/vs2 svs eventually morphed into os/vs2 mvs ... and then they shortened the name to just calling it mvs. customers had been using it for some number of years ... it was coming up on 1980 ... and somebody discovered that high-usage, shared executable images (i.e. the same executable image appearing in lots of different virtual address spaces and being executed by lots of different applications) were being selected for replacement before low-usage application data pages. The high-usage, shared executable images were "read-only" ... aka they were never modified and/or changed. The low-usage application data areas were constantly being changed. As a result of the non-changed-first selection, the high-usage (shared, executable) pages were being chosen for replacement before the low-usage (but changed) application data pages.

in much the same way that the vm370 page list management was constantly and significantly changing the order in which pages were examined for replacement ... invalidating the basic premise of least recently used replacement strategies ... os/vs2 (svs and mvs) was also creating an ordering based on something other than pure use ... also invalidating the basic premise of least recently used replacement strategies.

some past posts mentioning the os/vs2 early foray into least recently used replacement strategy:
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
https://www.garlic.com/~lynn/2000c.html#35 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2001b.html#61 Disks size growing while disk count shrinking = bad performance
https://www.garlic.com/~lynn/2002.html#6 index searching
https://www.garlic.com/~lynn/2002c.html#52 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005n.html#19 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#21 Code density and performance?
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

