List of Archived Posts

2000 Newsgroup Postings (01/04 - 03/05)

2000 = millennium?
Computer of the century
Computer of the century
Computer of the century
Computer of the century
IBM XT/370 and AT/370
Computer of the century
"OEM"?
Computer of the century
Computer of the century
Taligent
I'm overwhelmed
I'm overwhelmed
Computer of the century
Computer of the century
Computer of the century
Computer of the century
I'm overwhelmed
Computer of the century
Computer of the century
Computer of the century
Computer of the century
Computer of the century
System Activity vs IND LOAD
Why is EDI dead? Is S/MIME 'safe'? Who and why?
Computer of the century
Why is EDI dead? Is S/MIME 'safe'? Who and why?
Homework: Negative side of MVS?
Operating systems, guest and actual
Computer of the century
Computer of the century
Homework: Negative side of MVS?
SmartCard with ECC crypto
IBM 360 Manuals on line ?
SmartCard with ECC crypto
"Trusted" CA - Oxymoron?
"Trusted" CA - Oxymoron?
Vanishing Posts...
"Trusted" CA - Oxymoron?
"Trusted" CA - Oxymoron?
"Trusted" CA - Oxymoron?
"Trusted" CA - Oxymoron?
Historically important UNIX or computer things.....
Historically important UNIX or computer things.....
TLS: What is the purpose of the client certificate request?
question about PKI...
TLS: What is the purpose of the client certificate request?
TLS: What is the purpose of the client certificate request?
IBM RT PC (was Re: What does AT stand for ?)
APPC vs TCP/IP
APPC vs TCP/IP
Correct usage of "Image" ???
APPC vs TCP/IP
Hotmail question
OS/360 JCL: The DD statement and DCBs
OS/360 JCL: The DD statement and DCBs
RealNames hacked. Firewall issues.
Multithreading underlies new development paradigm
Multithreading underlies new development paradigm
RealNames hacked. Firewall issues.
64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))
64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))
Mainframe operating systems
distributed locking patents
Cybersafe & Certicom Team in Join Venture (x9.59/aads press release at smartcard forum)
CRC-16 Reverse Algorithm ?
Difference between NCP and TCP/IP protocols
Mainframe operating systems
APL on PalmOS ???
APL on PalmOS ???
Mainframe operating systems
Difference between NCP and TCP/IP protocols
Difference between NCP and TCP/IP protocols
Difference between NCP and TCP/IP protocols
Mainframe operating systems
Mainframe operating systems
Mainframe operating systems
Mainframe operating systems
Mainframe operating systems
Atomic operations ?
Ux's good points.
Ux's good points.
Ux's good points.
Ux's good points.
Difference between NCP and TCP/IP protocols
Ux's good points.
Ux's good points.
ASP (was: mainframe operating systems)
Ux's good points.
Ux's good points.
Ux's good points.
Ux's good points.
Predictions and reality: the I/O Bottleneck
Those who do not learn from history...
Those who do not learn from history...

2000 = millennium?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 2000 = millennium?
Newsgroups: bit.listserv.ibm-main
Date: Tue, 04 Jan 2000 02:11:59 GMT
posted to a similar discussion in seattle.general newsgroup:

isn't there something about zero not being introduced until sometime around the 10th century ... possibly someplace in africa? prior to that, things like roman numerals ran I, II, III, IV, V, VI, VII, VIII, IX, X ... i.e. it wasn't decimal as we know it today since there was no zero (various existing conventions & concepts that predate the introduction of zero can appear to be out of sync with our current concept of arithmetic).

slightly related posting I found in an archive & posted early last year:

Subject: Re: BA Solves Y2K (Was: Re: Chinese Solve Y2K)
Newsgroups: sci.skeptic,alt.folklore.urban,alt.folklore.computers
Date: 12 Feb 1999 14:20:17 -0800

date problems somebody posted to a newsgroup in 1984:

1. In 1969, Continental Airlines was the first (insisted on being the first) customer to install PARS. Rushed things a bit, or so I hear. On February 29, 1972, ALL of the PARS systems canceled certain reservations automatically, but unintentionally. There were (and still are) creatures called "coverage programmers" who deal with such situations.

2. A bit of "cute" code I saw once operated on a year by loading a byte of packed data into a register (using INSERT CHAR), then used LA R,1(R) to bump the year. Got into a bit of trouble when the year 196A followed 1969. I guess the problem is not everyone is aware of the odd math in calendars. People even set up new religions when they discover new calendars (sometimes).
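
a small C sketch of that failure mode (illustrative only ... not the original 360 code; the packed year byte and the hand-rolled decimal-carry fix are assumptions for the demo):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t year  = 0x69;        /* packed decimal "69", as in 1969 */
    uint8_t buggy = year + 1;    /* binary bump (the LA R,1(R) trick): 0x6A -> "196A" */
    uint8_t fixed;               /* propagate the decimal carry by hand */
    if ((year & 0x0F) == 9)
        fixed = (year & 0xF0) + 0x10;
    else
        fixed = year + 1;
    printf("buggy: 19%02X   fixed: 19%02X\n", buggy, fixed);
    return 0;
}

running it prints "buggy: 196A fixed: 1970" ... binary arithmetic on packed-decimal digits works right up until a digit has to carry.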


--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch
Date: Tue, 04 Jan 2000 22:02:03 GMT
handleym@ricochet.net (Maynard Handley) writes:
- the OS/360 debacle and the way IBM was very late to the party in supplying interactive (non-batch) computing

TSS & TSO were the officially sanctioned interactive, non-batch computing offerings ... but IBM did have the CP/67 and VM/370 offerings ... and while CP67/VM370 had a comparatively small install base (compared to OS/360), it actually had a larger customer install base than most corporations' total customer install base ... in fact, the internal corporate CP67/VM370 install base (smaller than the customer install base) was by itself larger than many corporations' total customer install base.

There was also APL/360 interactive and later things like VS/PC.

At one time, I was creating highly modified VM/370 production systems and shipping them to some internal sites, where they ran on a small number of the internal computers ... although I believe that number was still larger than the total lifetime install base for Multics.

various IBM & interactive references:

https://www.garlic.com/~lynn/99.html#126
https://www.garlic.com/~lynn/99.html#127
https://www.garlic.com/~lynn/99.html#142
https://www.garlic.com/~lynn/99.html#177
https://www.garlic.com/~lynn/99.html#237

&

https://www.leeandmelindavarian.com/Melinda#VMHist

misc. other references:

https://www.garlic.com/~lynn/95.html#14
https://www.garlic.com/~lynn/96.html#24
https://www.garlic.com/~lynn/96.html#35
https://www.garlic.com/~lynn/97.html#15
https://www.garlic.com/~lynn/99.html#7
https://www.garlic.com/~lynn/99.html#33
https://www.garlic.com/~lynn/99.html#76
https://www.garlic.com/~lynn/99.html#100
https://www.garlic.com/~lynn/99.html#109
https://www.garlic.com/~lynn/99.html#112

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.unisys
Date: Tue, 04 Jan 2000 22:21:17 GMT
hack@watson.ibm.com (hack) writes:
In article <slrn8722v4.4s.paul@shippo.virgin.net>,
The S/390 TOD is defined to keep track of atomic time, with an epoch of
1900-01-01 00:00:00 GMT (synchronised to UTC in 1972, not 1958 as for TAI),


I remember working on the redbook TOD definition for what was coming out as s/370, spending about 3 months elapsed time on it with a couple of other people ... that was where I got indoctrinated that the first of the century (per the architecture definition) was 1901-01-01 00:00:00 GMT ... and into the issue of leap seconds ... which could be positive or negative, and zero, one, or more of them.

Initially, most 370 operating systems (incorrectly) initialized the TOD clock to 1970-01-01 00:00:00 GMT (not 1901) ... and then some "fixed" it by (incorrectly) initializing it to 1900-01-01 00:00:00.

The s/370 TOD clock is 64 bits, with bit 51 defined as microsecond resolution (although the architecture didn't require clocks to actually tick at microsecond or even sub-microsecond resolution). That makes the low-order bit of the high-order word (bit 31) equal to 2**20 microseconds ... 1.048576 seconds, slightly longer than a second ... so multiplying the high word by 1.048576 converts it to actual seconds.
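
a minimal C sketch of that conversion (assuming the standard 1900 epoch and ignoring leap seconds; the sample STCK value is made up):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t tod  = 0xb1962f9305180000ULL;   /* hypothetical STCK value */
    uint64_t usec = tod >> 12;               /* bit 51 = 1 usec, so drop the 12 sub-microsecond bits */
    uint64_t sec  = usec / 1000000;          /* seconds since 1900-01-01 00:00 UTC */
    uint32_t high = (uint32_t)(tod >> 32);   /* high word: units of 2**20 usec = 1.048576 sec */
    printf("seconds since 1900: %llu\n", (unsigned long long)sec);
    printf("from high word alone: %.0f\n", high * 1.048576);
    return 0;
}

the two printed values should agree to within a second or so.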

it was in a hardware timer register ... better than the 360, where the clock was a location in memory and had to steal memory cycles for the timer tic.

misc. references:
https://www.garlic.com/~lynn/93.html#16
https://www.garlic.com/~lynn/96.html#30
https://www.garlic.com/~lynn/96.html#37
https://www.garlic.com/~lynn/97.html#7
https://www.garlic.com/~lynn/99.html#102

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch
Date: Tue, 04 Jan 2000 22:53:05 GMT
Tim Bradshaw writes:
either). It was to do with IBM claiming that FS was life-or-death for them when evidently it wasn't. And there was some monopolistic consequence of that, or so the book claims.

i think that the life-or-death statements were more akin to internal politics trying to justify a project

it is on par with a presentation once made claiming that if the internal network wasn't converted to SNA, the internal network would start failing.

misc. ref:
https://www.garlic.com/~lynn/96.html#24
https://www.garlic.com/~lynn/99.html#33
https://www.garlic.com/~lynn/99.html#100
https://www.garlic.com/~lynn/99.html#112

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.unisys
Date: Wed, 05 Jan 2000 19:16:03 GMT
I counted bits from the opposite end compared to the diagram in the following reference. The low-order (smallest-resolution) 12 bits (bits 0-11 counting from the right) are "sub-microsecond", with bit 12 from the right ... i.e. bit 51 in the architecture's left-to-right numbering ... being 1 microsecond.

And as noted in the programming notes ... TOD epoch zero is January 1, 1900 ... not January 1, 1901 (the first day of the century) as per my prior note.


4.6.1.1 Format

The TOD clock is a binary counter with the format shown in the
following illustration. The bit positions of the clock are numbered 0
to 63, corresponding to the bit positions of a 64-bit unsigned binary
integer.

                1 microsecond___
                                |
                                v
 ______________________________________
|                             | |      |
|_____________________________|_|______|
0                             51      63

In the basic form, the TOD clock is incremented by adding a one in bit position 51 every microsecond. In models having a higher or lower resolution, a different bit position is incremented at such a frequency that the rate of advancing the clock is the same as if a one were added in bit position 51 every microsecond. The resolution of the TOD clock is such that the incrementing rate is comparable to the instruction-execution rate of the model.

Programming Notes:

1. Bit position 31 of the clock is incremented every 1.048576
seconds; for some applications, reference to the leftmost 32 bits of
the clock may provide sufficient resolution.

2. Communication between systems is facilitated by establishing a
standard time origin, or standard epoch, which is the calendar date
and time to which a clock value of zero corresponds.  January 1, 1900,
0 a.m. Coordinated Universal Time (UTC) is recommended as the standard
epoch for the clock. This is also the epoch used when the TOD clock is
synchronized to the external time reference (ETR). Note that the
former term, Greenwich Mean Time (GMT), is now obsolete and has been
replaced with the more precise UTC.

3. A program using the clock value as a time-of-day and calendar
indication must be consistent with the programming support under which
the program is to be executed. If the programming support uses the
standard epoch, bit 0 of the clock remains one through the years
1972-2041. (Bit 0 turned on at 11:56:53.685248 (UTC) May 11, 1971.)
Ordinarily, testing bit 0 for a one is sufficient to determine if the
clock value is in the standard epoch.

4. In converting to or from the current date or time, the
programming support must take into account that "leap seconds" have
been inserted or deleted because of time-correction standards.

5. Because of the limited accuracy of manually setting the clock
value, the rightmost bit positions of the clock, expressing fractions
of a second, are normally not valid as indications of the time of
day. However, they permit elapsed-time measurements of high
resolution.

6. The following chart shows the time interval between instants
at which various bit positions of the TOD clock are stepped. This time
value may also be considered as the weighted time value that the bit,
when one, represents.

=========================================================================

for a more detailed description of 370 tod clock see:
http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/4%2e6%2e1


4.6.1 Time-of-Day Clock

The time-of-day (TOD) clock provides a high-resolution measure of real
time suitable for the indication of date and time of day. The cycle of
the clock is approximately 143 years.


In an installation with more than one CPU, each CPU may have a separate TOD clock, or more than one CPU may share a clock, depending on the model. In all cases, each CPU has access to a single clock.

Subtopics:
4.6.1.1 Format
4.6.1.2 States
4.6.1.3 Changes in Clock State
4.6.1.4 Setting and Inspecting the Clock
============================================================

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

IBM XT/370 and AT/370 (was Re: Computer of the century)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM XT/370 and AT/370 (was Re: Computer of the century)
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 06 Jan 2000 02:13:31 GMT
The XT/370 and AT/370 didn't support all of the 370 instruction set ... there were omissions among the supervisor instructions. They ran a modified version of VM/370 that took the supervisor differences into account; for instance, I/O was done by communicating with a monitor program running on the 8088/80286 ... which then did the actual disk accesses, keyboard operation, display, etc.

It also had a modified version of one of my page-replacement algorithms and my CMS page-mapped file support (which, other than internal use ... and possibly AT&T Longlines ... never shipped in the standard product to customers) ... the biggest problem was that it operated in a severely memory-constrained environment (by most 370 operating system & application standards).

The initial version ... before first customer ship ... was going to go out with only 384 kbytes of "370" memory ... which held the resident kernel as well as all paged application code & files. I did some early benchmarks which showed severe page thrashing ... and noted that the page file was mapped onto an XT hard disk with 100ms/access (on a good day, the disk saturated at 10 accesses/second). By the time the first box shipped to customers they had gotten it up to 512 kbytes of "370" memory, which slightly mitigated the problem.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.unisys
Date: Fri, 07 Jan 2000 08:00:43 GMT
I have a slightly different view of the IBM/PC.

The combination of the IBM name with a 3270 emulator plus a spreadsheet meant a corporation could upgrade its 3270 terminals for only slightly more than the cost of a straight 3270 ... and a person could get, on their desk in one display/keyboard, both their traditional business computer terminal and a machine that could do local business processing (like a pc spreadsheet).

It gave the ibm/pc a million+ customer base (the internal IBM market segment alone was several hundred thousand). I don't remember any specific numbers, but I don't believe any other vendor had a million+ PC install base prior to the IBM/PC.

Some misc. history: in the late '70s we were trying to get 3270s on every desk in the company and were being met with various kinds of resistance, including 3270 terminal orders requiring VP-level sign-off (and having to be included in the fall budget planning cycle). We presented the business case that the monthly cost of a 3270 (using 3-year amortized capital expenditure) was about the same as a business phone ... and business phones were a standard feature on everybody's desk, not requiring VP-level authorization for each individual phone. That broke things loose, and we started to see a 3270 terminal on every desk ... as well as a significant home terminal program.

In any case, with any market the size of the ibm/pc, there are bound to be clones ... just as there was already a thriving 3270 terminal clone business prior to the PC.

Whatever problems and shortcomings came later were, I would contend, in part because many of the people had too much history of successful mainframe business practices ... which didn't translate well into the PC marketplace once it started maturing (lots of companies have had problems dealing with too much success).

In contrast, I remember having dinner a couple of times with the MAC developers early in the MAC development cycle. Somewhat simplifying ... their position was that a MAC would never, ever be found any place but on a kitchen table; I contended that the MAC would never (ever) be a profitable business unless some business/commerce application was supported ... that the least they could do was provide mainframe 3270 terminal emulation attachment so they could sell into the corporate market. They contended that would never, ever happen: no MACs in offices and/or supporting business applications. Desktop publishing was somewhat of a compromise ... it theoretically could be done on a kitchen table as a cottage industry ... even tho most of the desktop publishing MACs appeared to be in commercial establishments for commercial purposes.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

"OEM"?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "OEM"?
Newsgroups: alt.folklore.computers
Date: Fri, 07 Jan 2000 08:13:37 GMT
OEM in the '70s stood for "other equipment manufacturer" ... also plug-compatible manufacturer (PCM) ... companies that built equipment plug-compatible with the IBM devices that attached to IBM mainframes (frequently referred to as clones these days).

the only thing I remember that might have been in the 60s for IBM mainframes was Ampex memory. Supposedly a project that I worked on as an undergraduate in the late '60s is credited with originating the (ibm) PCM/OEM business.

misc. reference:
https://www.garlic.com/~lynn/96.html#30

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers
Date: Fri, 07 Jan 2000 08:39:13 GMT
Eric Chomko writes:
Is this the same VM of JCL fame?

JCL ... job control language ... basically for batch operation when the programmer isn't around.

VM had extensive development for interactive ... although the basic monitor provided virtual machine function ... which could also be used to operate other batch operating systems (of JCL fame, dos/360, pcp/360, mft/360, mvt/360, vs1, vs2, mvs, os/390, etc).

VM continues to exist ... and as well, IBM has migrated many of the VM features into the microcode of the current generation of mainframes ... referred to as LPARS (logical partitions) ... and probably nearly all installed machines are configured with logical partitions.

for recent posting on vm-related interactive:
https://www.garlic.com/~lynn/2000.html#1

misc. other references:
https://www.garlic.com/~lynn/99.html#237

there might also be some interest some of the gml->SGML->HTML evolution
https://www.garlic.com/~lynn/99.html#197
https://web.archive.org/web/20231001185033/http://www.sgmlsource.com/history/roots.htm
https://web.archive.org/web/19981206171107/http://www.sgmlsource.com/history/roots.htm

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch
Date: Fri, 07 Jan 2000 15:50:00 GMT
Tim Shoppa writes:
I do realize that you believe that I'm completely wrong in that IBM had 8" floppies before Shugart Associates ever existed.

8" floppies were common microload mechanism in IBM controllers

I had been told by some san jose folks that most of the non-ibm disk efforts formed in the 70s ... were by gpd san jose people who left and started independent efforts.

for instance:
http://www.alshugart.com/milestones.html

One of the other people that worked on the 2321 (data noodle?) went on to do the memorex disk effort and then help form one of the early relational database companies.

with regard to the following reference:
https://www.garlic.com/~lynn/96.html#18

in the late '70s, I was told that so many senior people had left that they were short on channel-architecture expertise. In a couple of cases where the engineering labs were claiming that the operating system software was wrong ... it turned out they were doing something in the disk controller that violated channel architecture ... and I would spend time being the channel-architecture arbitrator between them and POK. Having the test cells running under an operating system (rather than stand-alone) tended to flush out hardware problems (especially architecture violations) earlier. A specific one that I remember was when they tried to do an unsolicited unit check ... it took 3 months of arguing to get that fixed.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Taligent

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Taligent
Newsgroups: alt.folklore.computers
Date: Sat, 08 Jan 2000 18:13:48 GMT
taligent was mainly targeted at the client portion of a client/server paradigm ... with most of the support being for drawing menus/screens for the client. In an intensive JAD with their people, it looked like a 30%-50% hit to taligent in terms of new features &/or rewritten code to support industrial-strength computing applications.

Even in its targeted domain ... a medium-sized project resulted in something like 3500 classes, compared to 600-700 implemented in some other products (i.e. it still needed some maturing). It seemed like a number of the products from the period went for a low learning curve for turning out graphical demos, but turned out to be much more people-intensive for real live apps (compared to some other environments with a higher learning curve ... somewhat analogous to the recent C++/Ada implication in the perl-related posting in this same ng).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

I'm overwhelmed

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I'm overwhelmed
Newsgroups: alt.folklore.computers
Date: Sun, 09 Jan 2000 05:00:14 GMT
"Jack Peacock" writes:
set, something simple in the 360/20 range. I bet a 360 emulator running on a K7 at 1Ghz would compare favorably to a 370/125 at least, maybe even a 168. And imagine how fast Autocoder would run on a 1401 emulator!

see
https://web.archive.org/web/20240130182226/https://www.funsoft.com/

for os/390 running on intel processor.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

I'm overwhelmed

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I'm overwhelmed
Newsgroups: alt.folklore.computers
Date: Sun, 09 Jan 2000 18:27:36 GMT
"Jack Peacock" writes:
set, something simple in the 360/20 range. I bet a 360 emulator running on a K7 at 1Ghz would compare favorably to a 370/125 at least, maybe even a 168. And imagine how fast Autocoder would run on a 1401 emulator!

oh yes ... the 370/125 was about a 100 KIPS 370 machine ... microcoded, with the microcode running about 10:1 (i.e. 10 "native" machine instructions per 370 instruction, on roughly a 1 MIP native-instruction engine; the other engine in the 115/125, the IOP, was only about an 800 KIP native machine ... so the 115 was only about an 80 KIP 370 machine).

also, for the ECPS assist on the 138/148 machines ... we got about a 10:1 performance improvement by dropping straight-line 370 kernel code into native microcode.

assuming 370 simulation code that comes anywhere close to the efficiency of the microcode in the low-end 370 processors (115, 125, 135, 145) ... a 1Ghz K7 would provide significantly more 370 MIPs than a 168.
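
a back-of-the-envelope C version of those ratios (a sketch; the 10:1 expansion and the native KIPS figures are the ones quoted above):

#include <stdio.h>

int main(void)
{
    double expansion = 10.0;    /* native instructions per 370 instruction */
    printf("125: %.0f KIPS 370\n", 1000.0 / expansion);   /* ~1 MIP native engine  */
    printf("115: %.0f KIPS 370\n",  800.0 / expansion);   /* ~800 KIP native engine */
    return 0;
}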

misc. references:
https://www.garlic.com/~lynn/94.html#21
https://www.garlic.com/~lynn/94.html#27
https://www.garlic.com/~lynn/95.html#3

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.unisys
Date: Sun, 09 Jan 2000 18:43:08 GMT
amolitor-at writes:
Just a for-instance that's recent. Sysplex. A *really* nice chunk of software that IBM did, that's spawned a great deal of

my wife worked for the person that headed up the inter-system coupling part of FS. After that she worked on JES2/JES3 and then was conned into going to POK to be responsible for loosely-coupled architecture, where she developed the Peer-Coupled Shared Data architecture ... which was the basis for IMS hot standby and then parallel sysplex.

misc. reference:
https://www.garlic.com/~lynn/98.html#30
https://www.garlic.com/~lynn/98.html#37
https://www.garlic.com/~lynn/98.html#40
https://www.garlic.com/~lynn/99.html#71

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers
Date: Sun, 09 Jan 2000 22:20:25 GMT
glass2 writes:
name of Runtime Analyzer for MVS and OS/390. Additionally, I remember some work that the IBM scientific centers did for certain national archives (Was it the Spanish National Archives) which involved cataloging and computer imaging some of their documents.

i believe this was (at least) the madrid science center, during much of the '80s, as part of getting ready for the 500th anniversary of 1492/columbus. There was a bunch of imaging and indexing of loads of documents (and preparation of cdrom(s) of the material).

I had stopped by there in the mid-80s for a visit on the project.

misc. reference
https://www.garlic.com/~lynn/99.html#9
https://www.garlic.com/~lynn/99.html#112

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers
Date: Sun, 09 Jan 2000 22:44:48 GMT
In article <853ols$d6m@nnrp4.farm.idt.net>, Eric Chomko writes:
Funny how IBM has failed to inspire any serious SW project of any success since the 360. And wasn't Java a Sun thing in the early days? How could IBM have been in it and missed the boat?

I thought java had roots back in dolphin ... the project to do a new OO operating system. I had been called in to look at doing the industrial-strength areas of dolphin ... and then i didn't hear anything for a while ... then the next thing I know, I'm in a meeting with the General Manager of the java organization ... whom I had known 20 years previously as one of the two people responsible for pascal/370 & vs/pascal.

random references:
https://www.garlic.com/~lynn/99.html#36
https://www.garlic.com/~lynn/99.html#63
https://www.garlic.com/~lynn/99.html#222

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch
Date: Sun, 09 Jan 2000 23:02:28 GMT
bill@wjv.com.REMOVEME (Bill Vermillion) writes:
At least you'll find some interseting picture from electon microscopes in Switzerland, to head design, chips, etc., at other sites around the globe. It's an interesting and amazing company.

i was told that at one time the los gatos vlsi lab was the first place to use a scanning electron microscope for diagnosing a running chip ... that may have been with the JIB' (jib-prime) microprocessor ... used in the original 3880 disk controllers. los gatos also did blue iliad ... the first 32bit 801 RISC processor.

random references:
https://www.garlic.com/~lynn/94.html#22
https://www.garlic.com/~lynn/94.html#47
https://www.garlic.com/~lynn/95.html#5
https://www.garlic.com/~lynn/95.html#6
https://www.garlic.com/~lynn/95.html#11
https://www.garlic.com/~lynn/98.html#25
https://www.garlic.com/~lynn/98.html#26
https://www.garlic.com/~lynn/98.html#27
https://www.garlic.com/~lynn/99.html#64

there have also been various statements about 801 heavily influencing both AMD 29k and HP snake.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

I'm overwhelmed

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I'm overwhelmed
Newsgroups: alt.folklore.computers
Date: Sun, 09 Jan 2000 23:25:45 GMT
kragen@dnaco.net (Kragen Sitaker) writes:
Suppose you had a machine that handled
- 1000 interactive terminals at 9600 bps -- total 9.6 megabits
- 50 tape drives at 6250 bpi and 100 feet per second -- that's 100 * 6250 * 50 = 31 250 000 bits per second, total 31.25 megabits
- 20 high-speed printers at 17,600 bits per second -- 352 000 bits per second, 0.352 megabits
- 20 disks at 500 kilobytes per second -- that's 10 megabytes per second, or 80 megabits


i believe MINIs have tried to move into this space in the past ... and some have used ethernet as a terminal concentrator.

old-time disk/tape would pose a bit of a problem since it wasn't only the bit rate ... they also had no intermediate buffering. memory was scarce and they all used the mainframe memory, so there was a lot of direct memory access with associated latency and overrun issues.

The interesting thing about various of these configurations in the 60s is they talked about E/B ratios where the numbers were in terms of MIPs and bytes ... most E/B ratios these days are in terms of MIPs and bits (i.e. bits transferred per instruction executed has dropped by at least an order of magnitude).

again, emulation of 390 on intel hardware:
https://web.archive.org/web/20240130182226/https://www.funsoft.com/

random reference:
https://www.garlic.com/~lynn/93.html#31

a problem you get when that much depends on a single box is that failure modes become an issue. Lots of PCs today are configured with fake parity memory. I think the best PC memory is possibly 8+2 ECC (correct 1-bit errors and detect 2-bit errors). Mainframe memory may be 64+16 (correct 15-bit errors, detect 16-bit errors) or better.

misc. other references:
https://www.garlic.com/~lynn/97.html#15
https://www.garlic.com/~lynn/99.html#71
https://www.garlic.com/~lynn/99.html#137

some discussion of support for tens of thousands of interactive terminals
https://www.garlic.com/~lynn/99.html#67

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers
Date: Tue, 11 Jan 2000 01:51:43 GMT
Eric Chomko writes:
Hey, listen I don't hate IBM. In fact, Ted Codd, happens to be sort of a hero of mine. The guy really pioneered databases while working at IBM. Still does, if I'm not mistaken.

Eric


for discussion of System_R (and various other early relational efforts), see:
http://www.mcjones.org/System_R/

I played a part working with Woody Garnett developing DWSS ... which allowed multiple instances/threads of System_R to share common virtual storage (we worked with Vera Watson and Jim Gray). We also helped with some of the technology transfer from San Jose to Endicott for SQL/DS.

In the above & the following references, Baker made some reference about doing technology transfer from Endicott back to STL for DB2 (before going to Oracle).

random references:
https://www.garlic.com/~lynn/aadsmore.htm#dctriv
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers
Date: Wed, 12 Jan 2000 03:37:41 GMT
lwinson@bbs.cpcn.com (lwin) writes:
> The accounting machine I remember was the 407.

This was IBM's flagship of its EAM systems. People today would be amazed at the programming sophistication that machine could do. I wish some of today's programmers knew how to write software that could do everything that machine did.


i spent some amount of time using 407 & playing with the plugboard

url I found courtesy of alta-vista
http://vmdev.gpl.ibm.com/devpages/eheman/dad407.html

in the following photo: card reader on the left, printer in the middle top ... and the right side was where the plug board slid down (pull-out handle ... I remember the "door" was hinged at the bottom, so it was a little like opening an oven door).
http://vmdev.gpl.ibm.com/devpages/eheman/407reg2.html

& above author's home page
http://vmdev.gpl.ibm.com/devpages/eheman/

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers
Date: Wed, 12 Jan 2000 04:06:02 GMT
misc. other URLs from alta-vista that happen to mention 407
http://www-5.ibm.com/uk/about/history2.html
http://www.isu.edu/comcom/news_letter/early.html
https://web.archive.org/web/20000823135031/http://www.isu.edu/comcom/news_letter/early.html
http://www.mta.ca/~amiller/cs3711/ibm650/ibm650.htm
http://www.world.std.com/~reinhold/early.computers.html
http://www.users.nwark.com/~rcmahq/jclark/adoc.htm

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.unisys
Date: Wed, 12 Jan 2000 22:07:24 GMT
Brian Inglis writes:
Their goal is for the customer to experience zero failures before a part is upgraded or changed by the customer. They build redundant parts with automatic failover inside the boxes they sell. At PM time the local FE CSR checks if any parts have died and orders replacements for installation at the next PM. I no longer work with IBM gear but the total approach is a revelation to those exposed only to mini or micro systems where some piece of hardware is always dying and causing outages.

Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
--
Brian_Inglis@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
use address above to reply


how 'bout an objective of no more than five channel checks (something like a SCSI bus error) in a 12-month period across all machines in all customer shops (this is not an avg of five errors per machine per year ... this is five errors total across all machines in a year).

random reference:
https://www.garlic.com/~lynn/94.html#24
https://www.garlic.com/~lynn/96.html#27

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.unisys
Date: Wed, 12 Jan 2000 22:26:11 GMT
an even more interesting concept in the previous reference about five channel checks (which are recoverable/retryable data errors) is the reporting & data-gathering infrastructure needed so that the number can even be accurately reported (and an alarm raised if there are 15 errors instead of <5 errors across all machines in a period of a year).

oh yes ... 100% availability for six years (automated operator & ims hot standby) ... ref:
https://www.garlic.com/~lynn/99.html#71

other references regarding availability thread
https://www.garlic.com/~lynn/99.html#182
https://www.garlic.com/~lynn/99.html#184
https://www.garlic.com/~lynn/99.html#185
https://www.garlic.com/~lynn/99.html#186

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

System Activity vs IND LOAD

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System Activity vs IND LOAD
Newsgroups: bit.listserv.vmesa-l
Date:    Wed, 12 Jan 2000 09:45:15 -0800
oops, resend from correct userid

the way I wrote the original smoothed-value stuff, including for IND LOAD and other smoothed values, was that it calculated the incremental activity for the most recent interval. If the most recent interval was one minute, the smoothed value was scaled by 7/8ths and the activity for the most recent interval added in. If the most recent interval was 1 second, the smoothed value was scaled by 479/480 and the activity for the most recent interval added in (the smoothing was taken over an 8-minute base).

Given a 1-minute update interval ... a transition from 100% cpu to zero cpu would have a long tail ... since the smoothed cpu value would be scaled by 7/8ths every minute (with nothing added back in). If the transition from 100% cpu to 0% cpu happened exactly on an interval boundary ... then eight 1-minute intervals of 0% cpu would each scale the smoothed value by 7/8ths and add zero back in. After eight intervals, (7/8)**8 isn't converging to zero real fast (still about .34).

There were later changes that looked at making some of the smoothed feedback numbers adapt more quickly to large changes ... with things like the current smoothed value scaled by 3/4 and twice the most recent activity added in (in the previous example, (3/4)**8 converges to zero faster ... about .1 instead of .34).
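
a small C sketch of the two variants (the slow/fast names and the 100%-to-0% test case are just for illustration, not the original code):

#include <stdio.h>

int main(void)
{
    double slow = 100.0, fast = 100.0;          /* smoothed cpu, starting at 100% */
    for (int minute = 1; minute <= 8; minute++) {
        double recent = 0.0;                    /* load has dropped to 0% */
        slow = slow * 7.0/8.0 + recent/8.0;     /* original: scale by 7/8, add 1/8 of recent    */
        fast = fast * 3.0/4.0 + recent/4.0;     /* later: scale by 3/4, add twice the weight    */
        printf("minute %d: slow=%5.1f%%  fast=%5.1f%%\n", minute, slow, fast);
    }
    return 0;
}

after eight minutes the 7/8 version still shows about 34% and the 3/4 version about 10% ... matching the (7/8)**8 and (3/4)**8 figures above.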

Disclaimer: I actually haven't seen the code in quite some time.

Anne & Lynn Wheeler lynn@adcomsys.net, lynn@garlic.com https://www.garlic.com/~lynn/ http://www.adcomsys.net/lynn/

Why is EDI dead? Is S/MIME 'safe'? Who and why?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why is EDI dead?  Is S/MIME 'safe'?  Who and why?
Newsgroups: comp.security.misc,sci.crypt,alt.security.pgp
Date: Thu, 13 Jan 2000 20:07:54 GMT
EDI has been both the translators and the private value-added networks that provide secure, reliable connectivity between trading partners (s/mime addresses some of the security issues ... but existing internet mail transport isn't exactly known for 100% reliability).

translators are complex because they operate between commercial legacy systems and a normalized transport format that has tended to be industry-specific.

it has been in extensive use for lots of b-to-b ... which you wouldn't normally hear about in non-b-to-b circles (the internet is currently a small percentage of retail commerce, and retail commerce is a small percentage of b-to-b; the aggregate value of EDI transactions can easily be several orders of magnitude larger than existing internet retail commerce).

another body playing here is OMG
http://www.omg.org/

in part because it isn't just the syntax of the information exchange but also the business operations performed on the information exchanged (i.e. you need to normalize not only the information format ... but also the semantics of the business operations).

XML/internet-related EDI can reasonably move it into higher-volume & lower-value transactions (a lot of the existing, more expensive EDI is wrapped around high-value activities).

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch
Date: Thu, 13 Jan 2000 21:38:50 GMT
kilgallen@eisner.decus.org (Larry Kilgallen) writes:
Somebody from Microsoft wrote a book on how to avoid coding errors. It was written, however, with a C-bias. Ada programmers who read that book judged it a waste of time because about 90% of the common coding bugs discussed were impossible in Ada.

slightly related ref ... not so much language syntax but surely language usage
https://www.garlic.com/~lynn/99.html#163
https://www.garlic.com/~lynn/99.html#219

aargh ... i've asserted for quite some time that over half (maybe 90%) of the internet, c-based operating system, & c-based application exploits are systemic consequences of C's implicit string-length architecture (i.e. buffer overruns can be exploited because the programmer has relied on implicit string lengths ... something that is relatively less frequent on systems that rely on explicit string/buffer-length paradigms).
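
a short C illustration of the two paradigms (copy_explicit is a made-up helper for the sketch, not any particular mainframe API):

#include <stdio.h>
#include <string.h>

/* explicit-length copy: the destination size travels with the buffer */
static void copy_explicit(char *dst, size_t dstlen, const char *src, size_t srclen)
{
    size_t n = (srclen < dstlen - 1) ? srclen : dstlen - 1;
    memcpy(dst, src, n);
    dst[n] = '\0';
}

int main(void)
{
    char buf[8];
    const char *input = "much longer than eight bytes";
    /* strcpy(buf, input);  <- implicit-length version: copies until input's NUL, overrunning buf */
    copy_explicit(buf, sizeof buf, input, strlen(input));
    printf("%s\n", buf);
    return 0;
}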

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Why is EDI dead? Is S/MIME 'safe'? Who and why?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why is EDI dead?  Is S/MIME 'safe'?  Who and why?
Newsgroups: comp.security.misc,sci.crypt,alt.security.pgp
Date: Thu, 13 Jan 2000 22:18:16 GMT
On Fri, 14 Jan 2000 00:43:36 +0800, sb5309 wrote:
What is "remote document processing business - invoices, price-lists, technical drawings etc." ?

I am curious. Thanks


there are printer outsourcers with locations around the country ... that are used by lots of industries ... including computer software vendors; somebody can call their vendor of choice and order a manual; the request is routed to the printing plant closest to the requester ... and the manual is printed "just in time" and shipped directly to the requester.

these print outsourcers also handle big batch jobs like monthly bills & statements. A large percentage of bills, statements, etc. are outsourced to these 3rd-party printers (in some locales, nearly all utility bills/statements are outsourced; i.e. water, sewage, garbage, power, etc).

Batch bill/statement input tends to arrive on tape ... frequently in some sort of 1403 or AFP format ... which is then processed for printing (the print house may even need specialized per-client data extraction to get the name/address out of the bill/statement data for envelope printing).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Homework: Negative side of MVS?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Homework: Negative side of MVS?
Newsgroups: bit.listserv.ibm-main
Date: Fri, 14 Jan 2000 19:28:07 GMT
I am writing a paper for a class on MVS. I am suppose to do a "pro and con" type paper. I can find info on the positive side of MVS, yet I am having a hard time finding info on the negative side of the OS. Any help would be appreciated.

I've talked to a number of large organizations that retired MVS and converted to something else not as good. Reason given was that their MVS staff was retiring and they weren't able to backfill the slots.

In many cases, they found they weren't able to compete with financial organizations for a scarce resource. Even some financial organizations list among their top five business risks critical MVS staff with over 30 years (nearing retirement age, kids all thru school, house paid for, and getting difficult to motivate, even with significant salary raises).

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Operating systems, guest and actual

From:    Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Operating systems, guest and actual
Newsgroups: bit.listserv.vmesa-l
Date:    Thu, 13 Jan 2000 22:15:28 -0800
The XT/370 ran a custom version of CP ... the hardware was a custom-modified 68k that provided 370 except for things like I/O, which CP handled by communicating with a monitor program running on DOS.

misc. ref:
https://www.garlic.com/~lynn/94.html#42
https://www.garlic.com/~lynn/2000.html#5

the XT/370 (& AT/370) was card that fit inside PC case.

there was faster one that was a separate box and had cable that hooked into the PC.

the following is possibly one of those machines in pok ... even tho it was listed as a 4341:
https://www.garlic.com/~lynn/99.html#110

--
Anne & Lynn Wheeler lynn@adcomsys.net, lynn@garlic.com
https://www.garlic.com/~lynn/ http://www.adcomsys.net/lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch
Date: Sat, 15 Jan 2000 18:14:24 GMT
some mainframe paradigms have both data structures (strings) and buffer/io operations with explicit lengths ... & have had fewer instances of buffer overflow ... in part because an explicit-length paradigm keeps programmers somewhat more cognizant of the issues (they have to do more work ... but then again, they have to do more work).

although when i was an undergraduate and did teletype support, i used one-byte arithmetic ... which worked until somebody defined a pseudo teletype device that violated the assumed implicit/max teletype transfer lengths
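
a C sketch of that one-byte arithmetic failure (the numbers are illustrative, not the actual driver values):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t maxline = 255;          /* assumed implicit maximum teletype transfer */
    uint8_t count = 200;
    uint8_t extra = 100;            /* a pseudo teletype exceeds the assumed maximum */
    uint8_t total = count + extra;  /* 300 silently wraps to 44 in one byte */
    printf("total=%u (expected 300, assumed max %u)\n", total, maxline);
    return 0;
}

everything is fine while every real device honors the assumed maximum; the wrap only shows up when something violates it.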

misc. buffer overflow ref:
http://www.lilli.com/360-67 (corrected)
https://www.multicians.org/thvv/360-67.html
https://www.garlic.com/~lynn/99.html#44

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Computer of the century

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer of the century
Newsgroups: alt.folklore.computers,comp.arch
Date: Thu, 20 Jan 2000 02:39:07 GMT
random activities that predate parallel sysplex include the numerous ibm shared-disk cluster environments from the late '60s and early 70s ... like the PARS/ACP cluster (airline control program, which became TPF), and numerous shared-disk configurations from the early to mid 70s with two-way, four-way, and eight-way disk sharing ... later enhanced to 16-way. Besides reserve/release (device-level locking) from the mid-60s, PARS added fine-grain locking to the 3830 disk controller in the early 70s.

random references:
https://www.garlic.com/~lynn/99.html#71
https://www.garlic.com/~lynn/96.html#15

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

Homework: Negative side of MVS?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Homework: Negative side of MVS?
Newsgroups: bit.listserv.ibm-main
Date: Thu, 20 Jan 2000 02:45:20 GMT
actually we were working on a project with one of the labs ... and in a joint meeting with the management and staff ... one of the staff brought up why they were doing all this conversion off of mvs to another platform ... and the management said that they had had numerous reqs out for mvs support people for the past year and not one had been answered (a subsequent look showed the existing gov. scale was in fact less than the going scale in the financial industry; in some extreme cases gov. scale is 1/3rd of general industry) ... the existing people were all there was, they were unable to get any more ... and they had been losing people from the group. The MVS people were really unhappy.

We have good friends at another lab ... including the support staff (an outsourced contract ... the contract typically ran for five years, and whenever a different company won the contract, the existing people were moved from the old company to the new company). The shutoff of the last MVS system was scheduled for the same day the last senior MVS support person retired.

In both cases, the locations went to something that provided much less service ... and numerous people were unhappy. There have been a half dozen other cases where we have had less in-depth contact ... but the comments were similar.

A possible pending one is a large national database that was developed by two people in the mid-to-late 60s, which they continue to support ... but the individuals are due to retire at any moment. Their use of the MVS-based infrastructure can't be duplicated on other available platforms; porting the applications and database would result in a significantly less efficient implementation at increased cost. There has been activity since the early 90s to migrate some of the applications to a 3-layer architecture ... to possibly mitigate the intensity of the future conversion effort (when both people retire).

We've also had conversations with various gov. CIOs commenting on the number of studies by outside consulting companies brought in with an agenda to show MVS is less efficient, more costly and less agile than other platforms. The issue that has been brought forward to the associated executives is that the retirement age of their senior (MVS) data processing staff is on the top-ten list of risks faced by the organization (not necessarily a cost issue ... their experience can't be backfilled at any cost). This is similar to projections that it might take 20 years to do another saturn/apollo moon effort ... compared to less than 10 years the first time ... even assuming, in theory, that the documentation and experience of the first effort could be drawn on (a lot of which people claim has been lost).

another part of the backfill issue is talking to younger people who realize that with a couple months of HTML experience ... they can get a better-paying job building web sites than they could get doing MVS support with 5-10 years of experience.

random references:
https://www.garlic.com/~lynn/99.html#71
https://www.garlic.com/~lynn/99.html#123
https://www.garlic.com/~lynn/99.html#201
https://www.garlic.com/~lynn/99.html#202

At 11:33 AM 1/18/00 -0600, Tom Schmidt wrote:
I don't doubt that you were told this as the reason but I don't believe it to be a valid reason. Consider that they knowingly replaced something that worked well with something else not as good because they couldn't hire replacements. Why wouldn't they attempt to grow the talent in-house? If they *did* attempt it and failed, why did they fail? My bet is that they were attempting to keep the salaries artificially low - much lower than the local or regional competition.

I'm entertained by the suggestion that local financial organizations were paying better, since most financial organizations pay lower in my experience. (Banks think they understand money so they don't give it away, manufacturing companies think they understand their product so they don't give it away, etc.)

<rant>
I've only met one decent HR person in my 25+ year career. The rest were hacks who did more damage to their company than anyone in IT ever could.
</rant>

Tom Schmidt
Madison, WI

On Fri, 14 Jan 2000 19:28:07 GMT, in bit.listserv.ibm-main Anne & Lynn Wheeler wrote:
>I've talked to a number of large organizations that retired MVS and
>converted to something else not as good. Reason given was that their
>MVS staff was retiring and they weren't able to backfill the slots.
>
>In many cases, they found they weren't able to compete with financial
>organizations for scarce resource. Even in some financial
>organizations, they identify in the top five business risks critical
>MVS staff with over 30 years (nearing retirement age, kids are all
>thru school, house paid for, and it is getting difficult to motivate,
>even with significant salary raises).


--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

SmartCard with ECC crypto

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SmartCard with ECC crypto
Newsgroups: alt.technology.smartcards
Date: Mon, 24 Jan 2000 17:44:25 GMT
we announced/demo'ed X9.59 along with AADS age/address & some AADS RADIUS (i.e. authenticate using AADS when connecting to your ISP, instead of a password) at the worldwide retail banking show in Miami last month ... and it was demo'ed in at least three booths at the RSA conference last week. The card used for the AADS signature transactions was an ECC smartcard.

for reference see
https://www.garlic.com/~lynn/99.html#217
https://www.garlic.com/~lynn/99.html#224

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

IBM 360 Manuals on line ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 360 Manuals on line ?
Newsgroups: alt.folklore.computers
Date: Tue, 25 Jan 2000 04:35:36 GMT
also, the principles of operation manual was one of the first to be done with script & then GML (i.e. the precursors to SGML & HTML).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

SmartCard with ECC crypto

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SmartCard with ECC crypto
Newsgroups: alt.technology.smartcards
Date: Wed, 26 Jan 2000 04:30:31 GMT
"J Hartmann" <jhartmann@bigfoot.com_NOSPAM> writes:
But where could I get this ECC card?

jh


certicom was part of the announcement referenced in the prior postings and mentioned in the press release pointed to by the URLs.

their website is
http://www.certicom.com/

list of their smartcard products is at
http://www.certicom.com/smartcards/index.htm
https://web.archive.org/web/19990428144441/http://www.certicom.com/smartcards/index.htm

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

"Trusted" CA - Oxymoron?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Trusted" CA - Oxymoron?
Newsgroups: alt.privacy,alt.security.pgp,comp.security.pgp,comp.security.pgp.discuss,sci.crypt
Date: Wed, 26 Jan 2000 05:04:57 GMT
Lots of times, especially where privacy issues are involved ... the only thing that needs to be authenticated is whether the entity is authorized to perform the requested function ... in which case a generalized "identity" certificate (certifying some binding between a public key and misc. personal information) can be orthogonal to the objective at hand ... and possibly may represent a compromise, unnecessarily divulging personal information.

In a typical retail scenario ... the merchant doesn't actually need to know who you are when you present a credit card ... the merchant really wants to know whether they will be paid or not.

There has also been misc. discussion of EU privacy guidelines about making retail electronic financial transactions as anonymous as cash ... i.e. a credit/debit card presented to a merchant would contain no name & wouldn't require any other identification information. It would work similarly in non-face-to-face retail electronic transactions (aka internet e-commerce), with no identity information exchanged in the transaction.

In the PKI world for financial institutions, this has been translated into relying-party-only certificates ... i.e. a certificate that only carries the public key and the account number for financial transactions (in order to avoid unnecessarily divulging privacy information). However, for financial transactions it is easily shown that since the original of the certificate resides in the account record ... it is redundant and superfluous for the consumer to return their copy of the certificate as part of every financial transaction to their financial institution (and doing so can even unnecessarily increase the infrastructure's systemic risk).

misc. references:
https://www.garlic.com/~lynn/ansiepay.htm#aadsnwi2
https://www.garlic.com/~lynn/aadsm3.htm#cstech13
https://www.garlic.com/~lynn/aadsm3.htm#cstech8
https://www.garlic.com/~lynn/aadsm2.htm#scale
https://www.garlic.com/~lynn/aadsm2.htm#inetpki
https://www.garlic.com/~lynn/aadsm2.htm#integrity
https://www.garlic.com/~lynn/aadsm2.htm#account
https://www.garlic.com/~lynn/aadsm2.htm#privacy
https://www.garlic.com/~lynn/aadsm2.htm#stall
https://www.garlic.com/~lynn/aadsmore.htm#hcrl3
https://www.garlic.com/~lynn/aadsmore.htm#schips
https://www.garlic.com/~lynn/aadsmore.htm#vpki
https://www.garlic.com/~lynn/aadsmore.htm#killer0
https://www.garlic.com/~lynn/aepay3.htm#aadsrel2
https://www.garlic.com/~lynn/aepay3.htm#x959discus

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

"Trusted" CA - Oxymoron?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Trusted" CA - Oxymoron?
Newsgroups: alt.privacy,alt.security.pgp,comp.security.pgp,comp.security.pgp.discuss,sci.crypt
Date: Wed, 26 Jan 2000 20:07:15 GMT
the other issue is what does a certificate convey?

standard certification represents some vetting of information that occurred at the time the certificate was manufactured. It does a poor job when timely information &/or information aggregation is involved.

OCSP goes a little way towards providing timely indication of whether the stale information is still valid or not (it is a direct analogue of numerous distributed caching algorithms developed in the '70s and '80s for things like files &/or pieces of files; the difference is that most of those caching infrastructures not only had a timely invalidation protocol ... but also timely cached-information refresh semantics).

One of the possible certificate targets was something akin to the semantics of a check with a limited signing limit (i.e. a check carrying printing that said it was limited to $5000). However, as was discovered in the 60s & 70s ... this lacked timely information & information aggregation capability ... i.e. somebody did a one million dollar order by signing two hundred $5000 checks. The 60s & 70s started to see emerging online transaction operations which provided support for both the timely information & the information aggregation paradigm.

The certificate paradigm is targeted at offline, atomic operations (i.e. infrastructures not requiring timely information, online information, and/or information aggregation). Attempting to actually translate certificates into something like an offline, electronic check transaction scenario ... would be equivalent to reversing the online direction back to an offline, pre-60s paradigm.

In some cases (as per prior note), CAs may be authenticating the wrong information (i.e. leading to things like privacy compromises). In other cases, a CA can absolutely authenticate some piece of information at some point in time ... but having manufactured a certificate at some point in the past with stale information, its application is irrelevant to an online, timely information, and/or information aggregation paradigm. The trust isn't in question; infrastructure failures occur because the paradigms didn't intersect (aka making sure that nobody could fudge the "$5000" on a signing-limit check to read "one million" ... when the problem was the use of two hundred $5000 checks).

similar thread:
https://www.garlic.com/~lynn/99.html#228

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Vanishing Posts...

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vanishing Posts...
Newsgroups: wash.politics,seattle.general,seattle.politics
Date: Wed, 26 Jan 2000 23:01:52 GMT
for instance, here is the path information I see with respect to the most recent post (that I've seen):

Path: news-west.eli.net!news-chi-1.sprintlink.net!
      news-central.sprintlink.net!news-peer1.sprintlink.net!
      news.sprintlink.net!router1.news.adelphia.net!
      news.hyperioncom.net!cyclone.news.idirect.com.MISMATCH!
      newsfeed.direct.ca!howland.erols.net!newsfeed.skycache.com!
      news.eskimo.com!aaronbego

I had worked on some precursor technology to listserv in the late '70s. A listserv/majordomo server might be running on a machine with no other function ... and it may only be supporting a single mailing list. The listserv mail daemon accepts a direct TCP session for incoming mail and then opens direct TCP sessions back out to deliver the outgoing mail. The most likely bottleneck is the number of people that may have subscribed to a specific mailing list ... which translates into how many pieces of mail the server has to send out. For small mailing lists this isn't a consideration ... but it becomes quite a problem if you start dealing with thousands of people on a mailing list.

The newsgroup server infrastructure compensates for a large number of readers/subscribers by having a store&forward, distributed server infrastructure (instead of one mailer sending everything to everybody ... each server only has to forward stuff to a few select subscribers which in turn forward to their subscribers). The above path gives the forwarding information from the point the latest posting contacted the usenet infrastructure to the point it arrived in my part of the usenet infrastructure. While the infrastructure is significantly more efficient and handles significantly larger volume than the listserv/majordomo operation ... there are latencies associated with the efficiency improvement.

Typically, any specific forwarding host may operate in batch mode ... i.e. connecting for usenet post forwarding once every 15 minutes and acquiring all new posts for all newsgroups (say collecting all new postings in support of 40,000 different newsgroups ... not just a simple, single mailing list like many listserv/majordomo servers operate).

In the above path for the most recent posting ... each "bang" (i.e. "!") separates a store&forward node and can possibly represent anywhere from seconds to 15-30 minutes.
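to make the hop/latency arithmetic concrete, a small sketch (python; the 30 minute batch cycle is the illustrative figure from above, not a measurement) that counts the store&forward hops in the path header quoted earlier:

# count the store&forward hops ("bangs") in the Path header quoted above
# and bound the worst-case propagation delay, assuming every hop batches
# on a 30 minute cycle (illustrative only; real feeds vary widely).
path = ("news-west.eli.net!news-chi-1.sprintlink.net!"
        "news-central.sprintlink.net!news-peer1.sprintlink.net!"
        "news.sprintlink.net!router1.news.adelphia.net!"
        "news.hyperioncom.net!cyclone.news.idirect.com.MISMATCH!"
        "newsfeed.direct.ca!howland.erols.net!newsfeed.skycache.com!"
        "news.eskimo.com!aaronbego")

hosts = path.split("!")
hops = len(hosts) - 1   # the final element is the poster, not a relay
print(hops, "store&forward hops")
print("worst case ~", hops * 30, "minutes at a 30 minute batch cycle per hop")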

Many of the "local" newsgroups tend to be area related (for instance the seattle hierarchy) ... and have relatively few hops between most of the people participating. As people from a wider area attempt to participate in local newsgroups exchanges ... they will tend to be much further away in terms of "hops" and "latencies". I would expect that most of the participants in the seattle hierachy are relatively few hops away from each other and experience low latencies ... people participating from a much wider area will (with more hops) , of course, experience much larger latencies.

Some of the broadcast technologies (satellite, tv blanking interval) have been used to distribute newsgroup postings (something that isn't really feasible with the mailing list paradigm) to shorten the latency of propagating newsgroup information. I had helped with one such implementation and co-authored an article in the summer of '93 that appeared in boardwatch magazine (at the time targeted primarily at the BBS market; it used to be online ... but their website now only has online issues going back to '95).

In any event, the mailing list paradigm can have low latencies when serving small populations ... moving into the newsgroup paradigm to obtain a significant increase in efficiency will introduce additional latencies (having nothing to do with server efficiency).

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

"Trusted" CA - Oxymoron?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Trusted" CA - Oxymoron?
Newsgroups: alt.privacy,alt.security.pgp,comp.security.pgp,comp.security.pgp.discuss,sci.crypt
Date: Fri, 28 Jan 2000 05:15:14 GMT
john@nisus.com (John G. Otto) writes:
Digicash's scheme provides both anonymity and assurance of the transfer. The only draw-backs WRT privacy are that they have a way to report the transactions and it is subject to traffic monitoring.

for the most part, the bank doesn't care what your DNA is ... they just care that only the person that opened/owns the account & is authorized to use the account ... is the person that uses that account. If a CA binds a person's DNA identity in a certificate to the public key ... it doesn't mean anything to the bank, unless the bank has also used DNA for opening the account.

The current bank business process is to record a shared-secret ... typically the person's mother's maiden name ... when the account is opened, and ask for that in non-face-to-face transactions.

That same business process can be upgraded to record the person's public key. The bank then verifies digital signatures on non-face-to-face electronic transfers/transactions with the recorded public key ... effectively the same business process that uses the mother's maiden name to verify transactions.

No CA is required ... and no identity verification is required by 3rd parties ... and no new business processes are required ... and no trusted 3rd party liability is created ... and no CA policies & practices ... current business processes just have technology upgrades.
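a rough sketch of what such a technology upgrade might look like (python, with the 'cryptography' package standing in for whatever a bank would actually deploy ... the account number and transaction format here are made up):

# sketch: the bank records a public key in the account record the same
# way it once recorded a shared-secret, then verifies the digital
# signature on each transaction against that record.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

accounts = {}                            # account number -> public key

def open_account(acct, public_key):      # done once, at account opening
    accounts[acct] = public_key

def verify_transaction(acct, message, signature):
    try:
        accounts[acct].verify(signature, message, ec.ECDSA(hashes.SHA256()))
        return True
    except (KeyError, InvalidSignature):
        return False

# consumer side: key pair generated once; the private key never leaves the owner
priv = ec.generate_private_key(ec.SECP256R1())
open_account("12345678", priv.public_key())
txn = b"transfer $100 from account 12345678"
sig = priv.sign(txn, ec.ECDSA(hashes.SHA256()))
assert verify_transaction("12345678", txn, sig)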

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

"Trusted" CA - Oxymoron?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Trusted" CA - Oxymoron?
Newsgroups: alt.privacy,alt.security.pgp,comp.security.pgp,comp.security.pgp.discuss,sci.crypt
Date: Fri, 28 Jan 2000 17:55:18 GMT
let's put identity certificates another way ...

the transfer request comes in via smime/email; the body of the message contains the account number and the directions. this is signed and has a certificate attached. the bank takes the certificate ... and authenticates the certificate with the CA's public key ... which comes from some certificate that it has laying around ... which may also need authenticating (the trust chain not only has all of their certificates ... but may also require OCSP transactions).

in any case, eventually after the attached certificate is authenticated, the message gets authenticated with the public key. The account number from the body of the message is then used to look up the account record. The "name" of the account owner is retrieved from the account record ... and needs to be compared against some information in the body of the identity certificate. Supposedly, if the character string in the body of the certificate matches the character string on record in the account record ... then the transaction is valid.

The CA infrastructure not only has to absolutely bind some set of identity information to the public key going into the certificate ... but that set of information also has to have some exact match with the owner "string" in the bank record. The CA infrastructure can absolutely be sure that they have correctly bound the identity information ... and the transaction still won't work because the identity strings don't exactly match (the one in the certificate and the one in the account record).

So, some number of banks have looked at issuing relying-party-only certificates ... for privacy and liability issues (as well as relevancy ... will the strings match). The "identity" bound in the certificate is just the bank account number. That slightly simplifies the process ... since there are no strings to mismatch ... after verifying the certificates in the trust hierarchy ... the account number is taken directly from the transaction and used to read the account record (and only the account numbers have to match).

The next step is realizing that the whole CA infrastructure allows that if the relying party is known to have the trust hierarchy CA certificates (and/or can be expected to easily obtain them) ... the certificate(s) don't have to be transmitted on every transaction (i.e. the CA certificates needed to validate an individual's certificate are assumed to already be at the relying party ... and/or the relying party can easily obtain them). In the case of relying-party-only certificates, the bank has stored the original certificate in the account record and sent a copy to the individual. Then, under current CA infrastructure policies, if the relying party can be assumed to have a copy of the certificate(s) (&/or can easily obtain them), they don't have to be retransmitted ... so the individual signing the transaction doesn't have to retransmit copies. In this case, the relying party is known to have all certificates, including the original of the individual's certificate.

So, the bank just pulls the account number from the transaction, reads the account record, pulls the public key out of the account record, and uses it to validate the digital signature on the transaction.

For efficiency purposes the public key will be stored in unencoded form in the account record. Certificates are defined in encoded form for interoperable network transmission, but have to be decoded in order to access the fields; the bank can store the unencoded certificate fields in the account record since the ASN.1/encoded form is only needed for interoperable transmission & revalidation of the CA's signature. Since they create the original of the certificate, they have signed it ... just for redundancy they could reverify their own signature at certificate manufacture time, before decoding and loading it into the account record.
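a minimal sketch of that decode-once idea (python 'cryptography' package purely for illustration; which fields get stored is an arbitrary choice):

# decode the manufactured certificate once and file the already-decoded
# fields in the account record; the ASN.1 encoding is only needed for
# interoperable transmission & revalidating the CA's signature.
from cryptography import x509

def file_certificate(account_record, cert_der):
    cert = x509.load_der_x509_certificate(cert_der)   # decode once ...
    account_record["public_key"] = cert.public_key()  # ... keep the fields
    account_record["serial"] = cert.serial_number
    account_record["expires"] = cert.not_valid_after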

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

"Trusted" CA - Oxymoron?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Trusted" CA - Oxymoron?
Newsgroups: alt.privacy,alt.security.pgp,comp.security.pgp,comp.security.pgp.discuss,sci.crypt
Date: Fri, 28 Jan 2000 18:42:10 GMT
... and an option ... the relying-party-only bank may determine that it isn't even necessary to transmit a copy of the certificate back to the individual. The individual goes thru the public key RA process with their bank. The bank does the RA bit, then manufactures a certificate by encoding the fields and signing them. The bank then verifies the signature on the newly minted certificate, decodes it and stores the decoded fields in the account record. If the bank decides there is never a business scenario requiring the individual to transmit the relying-party-only certificate on a relying-party-only transaction, the bank won't bother to transmit a copy of the certificate to the individual. The bank just keeps the original on file (in its unencoded form).

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

"Trusted" CA - Oxymoron?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Trusted" CA - Oxymoron?
Newsgroups: sci.crypt
Date: Sun, 30 Jan 2000 22:33:36 GMT
tmetzinger@aol.comnospam (Timothy M. Metzinger) writes:
as many people have indicated, for a transfer of wealth, the identities of the individuals aren't important, only the identities of the two containers ("from" and "to") matter.

But for signed documents, legal briefs, title transfers, and other applications, it IS important to verify individual identities.

While many commercial CA's don't verify identity, some do... Digital Signature Trust Corp, Verisign, Baltimore (formerly GTE CyberTrust), and the DOD CA's do have certificate policies where they do a face-to-face registration of individuals, and stand behind the certificate's binding.


note that a manufactured certificate done at some time in the past relies on some consensus/domain about the information being certified.

(not having) identity is an issue in many circumstances. also, to some extent the business process would come down to doing a string compare between some arbitrary, generalized "identity" representation carried in some sort of string embedded in a manufactured certificate and some other arbitrary identity representation string (presumably if the string compare matches ... then some business process can proceed). There have been recent SSN privacy postings citing both same-SSN and same first/last name cases (i.e. the same SSN number given out to two different people with the same first/last name).

financial transfers need near-time certification associated with a specific domain (and circumstances like current account balance and/or outstanding debt) by institutions carrying at least some liability associated with the process (as opposed to random 3rd parties).

simplified certificates have been a transition from offline/paper to offline/electronic. however, in a growing number of even document transactions (not limited to purely financial transactions) there is more & more of a move to online/electronic (pretty much anything involving value exchange and/or liability).

Effectively, we are starting to see a transition to real time certification within the domain/context of the specific operation taking place ... especially when the certifying agency carries some liability & risk and handles filing (the transaction is registered in some central filing place ... so the repository also does a form of real time certification).

An example is a title insurance company handling title transfers; they will certify individual(s) within the context of the title transfer & just for the specific title transfer (in part because they carry liability on each specific title transfer ... so the idea of a "blank check" certificate for an arbitrary & unknown number of title transfers involving arbitrary & unknown value is not likely).

Stale certificates represent much more of a transition from offline/paper to offline/electronic ... but I would expect to see much more real-time certification in the transition to online/electronic, even for document signing, especially where there is value/liability involved and especially when some sort of filing is required. Each one becomes a specific, known, & likely time-stamped certification ... as opposed to a blank-check certificate that might be used an arbitrary & unknown number of times involving arbitrary & unknown value (businesses tend not to like unknown/unbounded liability/risk).

The play for offline/electronic identity certificates would tend to be where the value proposition seems to involve negligible risk/liability (say government employees within a legislated context) &/or where an online+filing infrastructure represents a significant percentage of the value proposition.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Historically important UNIX or computer things.....

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Historically important UNIX or computer things.....
Newsgroups: alt.folklore.computers
Date: Fri, 04 Feb 2000 18:18:49 GMT
i developed a system build process that would dump the kernel to tape ... and since there was so much empty space still on the tape ... dump everything needed to rebuild the kernel: procedures, source, execution environment, etc.

Periodically I would "archive" a tape and possibly make copies in triplicate (and did full tape copies from 800bpi to 6250bpi ... and then from 6250bpi to 3480 cartridges). The problem I had was that we had a data center where the operators would randomly take tapes out of the library and use them for scratch. Lost a bunch of early PLI stuff (even when replicated on three different tapes) and only had a trivial amount of the early build tools (again done in triplicate) when Melinda Varian was looking for early examples.
https://www.leeandmelindavarian.com/Melinda#VMHist

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Historically important UNIX or computer things.....

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Historically important UNIX or computer things.....
Newsgroups: alt.folklore.computers
Date: Fri, 04 Feb 2000 20:45:46 GMT
oh, and I had done some early stuff as an undergraduate on operating system monitoring supporting dynamic adaptive performance optimization. The archived stuff was referenced in the early 70s in a legal matter involving some sort of system monitoring patent. When a similar legal matter reared its head again in the mid-80s ... all three tapes containing replicated copies of the original source had been trashed by data center operations.

to the extent stuff survived ... it was on tapes that I had copied, checked out of the library, and kept under lock & key in my office.

random reference:
https://www.garlic.com/~lynn/99.html#18

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

TLS: What is the purpose of the client certificate request?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TLS: What is the purpose of the client certificate request?
Newsgroups: sci.crypt
Date: Mon, 07 Feb 2000 16:23:29 GMT
Anuj Seth writes:
Hi,

> Yes, the server certificate authenticates the server and the client
> certificate authenticates the client. I don't understand your final
> question about authenticating the user.

That's what I thought as well! About the question you didn't understand, let me rephrase it -- The server requests the certificate from the client. The client sends the certificate across to the server. Now, how is the server supposed to authenticate the client? Basically, the server should be able to access the CA to get the CA's public key. The CA's public key will be used to authenticate the user (if I'm correct). I've read the X.509 standard but couldn't figure out how to connect to the CA to get the CA's public key. Could someone let me know how it is to be done?


at least some of the targets are controlled environments ... where there is only one (or a few) known CA(s) for client certificates and the CA's public key is preloaded into the server.

This is somewhat analogous to the reverse ... where the server CA public keys have been preloaded into the client browsers when the browsers were manufactured.

The server may only be using the client certificate to validate membership in the allowed community (the client demonstrates that it owns a valid certificate for access purposes). Validating client membership just involves validating the certificate and validating that something the client signs verifies with the public key in the certificate.

Actually validating the client typically requires verifying something more than a membership assertion (like something from the contents of the signed information) against some local infrastructure (like an account record).
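a hedged sketch of the two checks just described (python 'cryptography' package, with RSA and a chain of length one assumed ... not any particular server's implementation):

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def membership_check(ca_public_key, client_cert_der, challenge, signature):
    cert = x509.load_der_x509_certificate(client_cert_der)
    # (1) the preloaded CA public key verifies the CA's signature over
    #     the certificate body (chain of length one assumed here)
    ca_public_key.verify(cert.signature, cert.tbs_certificate_bytes,
                         padding.PKCS1v15(), cert.signature_hash_algorithm)
    # (2) the cert's public key verifies the client's signature over a
    #     server-chosen challenge (proves possession of the private key)
    cert.public_key().verify(signature, challenge,
                             padding.PKCS1v15(), hashes.SHA256())
    return True          # both verify() calls raise on failure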

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

question about PKI...

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: question about PKI...
Newsgroups: sci.crypt
Date: Mon, 07 Feb 2000 17:13:16 GMT
palmpalmpalm@aol.com (Palmpalmpalm) writes:
Hi, does anybody kindly answer my question?

What method does the PKI product provide for mobile users? When users move to another computer, do they have to bring their own private key and certificate always?

Thanks in advance.


mobile software versions can be as simple as a floppy disk that copies private keys from one PC to another.

some forms of digital signatures strive for showing that only a single person has access to the use of the private key. demonstrating this can require showing that nobody else had access &/or could have copied the private key. One way of achieving this goal is an environment where the private key is encapsulated inside a hardware token and it is extremely difficult for anybody (even the hardware token owner) to directly access the private key (the hardware token uses the private key for digital signing ... but there is no function for reading the private key; showing that nobody has direct access to the private key is a superset of showing that nobody else has access).

Having shown that nobody can directly access the private key ... it then reduces to trying to show that all digital signatures can only be performed when the hardware token is in the owner's possession (modulo tricking people into using their hardware token).
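a toy illustration of the sign-only interface (software obviously can't enforce the property ... the point of the hardware token is that no read function exists at all; python 'cryptography' package used purely for illustration):

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

class Token:
    """sign-only interface: the key pair is generated inside the token and
    there is deliberately no method that returns the private key."""
    def __init__(self):
        self.__priv = ec.generate_private_key(ec.SECP256R1())

    def public_key(self):                 # safe to hand out / register
        return self.__priv.public_key()

    def sign(self, message: bytes) -> bytes:
        return self.__priv.sign(message, ec.ECDSA(hashes.SHA256()))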

A hardware token can also be mobile & interchangeable (like a card) that is moveable from one device to another (like between different PCs, PDAs, & cellphones). A hardware token could also be physically housed in a PDA or cellphone ... and when used in conjunction with a PC ... the PC and the hardware token communicate via the PDA/cellphone (using proximity contactless protocol, infrared, something like "bluetooth", more traditional wireless, or even some physical adapter connection).

One objective of the AADS (sometimes referred to as PKNI ... or public key, no certificates) parameterised risk management is establishing the integrity of the digital signing mechanism (minimizing some of the risk unknowns when performing authorization functions ... and/or being able to scale the integrity to be comparable to the authorization requirements). Rather than just saying that the digital signature could have come from a hardware token ... having a high level of confidence that the digital signature could have only originated from a specific hardware token with verifiable integrity characteristics (like an audit trail between the time the hardware token chip was manufactured and the time the public/private keys were generated & the public key was recorded).

misc. bluetooth reference:
http://www.gartner.com/public/static/hotc/hc00082960.html
http://www.bluetooth.com/

misc. aads reference:
https://www.garlic.com/~lynn/

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

TLS: What is the purpose of the client certificate request?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TLS: What is the purpose of the client certificate request?
Newsgroups: comp.security.misc
Date: Mon, 07 Feb 2000 20:05:06 GMT
Anuj Seth writes:
Hi,

> > If this is the reason, how does one authenticate the user??

The server requests the certificate from the client. The client sends the certificate across to the server. Now, how is the server supposed to authenticate the client? Basically, the server should be able to access the CA to get the CA's public key. The CA's public key will be used to authenticate the user (if I'm correct). I've read the X.509 standard but couldn't figure out how to connect to the CA to get the CA's public key. Could someone let me know how it is to be done?


the server doesn't request a copy of the certificate for authentication; the server requests a digitally signed message for authentication purposes. One method the server has for obtaining the public key to authenticate a digital signature is for the client to attach the corresponding public key certificate. But then, in order to verify the public key certificate, the CA's public key needs to be obtained. That can be done by preregistering the CA's public key.

It is possible to also preregister the client/individual's public key (in much the same way that CA's public keys can be preregistered).

the membership & client specific authentication can be done with digital signature and no certificates being transmitted ... where the public key is registered in place of a password in something like a radius database for the client/user. then instead of using userid/password for authentication (for either PPP &/or webserver authentiation), the client digitally signs something and the server does a radius transaction to authentication the client's signature. This would handle digital signature authentication for membership authentication, client-specific authentication, as well as timely operation authentication (is the client still authorized for the connection as of this moment) and does it within the dominate currently deployed client authentication process (radius).

For just determining client membership (as opposed to client authentication) ... giving the client a membership certificate and recording the CA's public key at the server is pretty straight-forward. Getting into actual client authentication with timely information about current status gets into some sort of account record. As soon as an account record becomes involved, it is more straightforward to record the client's public key directly and do away with the certificate transmission portion of the operation.
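a minimal challenge/response sketch of the registered-public-key idea (this is not the radius wire protocol; names are made up, python 'cryptography' package used purely for illustration):

import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

registry = {}                    # userid -> registered public key

def login(userid, sign_fn):
    nonce = os.urandom(32)       # fresh per attempt, so signatures can't replay
    signature = sign_fn(nonce)   # performed on the client's side
    try:
        registry[userid].verify(signature, nonce, ec.ECDSA(hashes.SHA256()))
        return True
    except (KeyError, InvalidSignature):
        return False

priv = ec.generate_private_key(ec.SECP256R1())
registry["lynn"] = priv.public_key()
assert login("lynn", lambda n: priv.sign(n, ec.ECDSA(hashes.SHA256())))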

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

TLS: What is the purpose of the client certificate request?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TLS: What is the purpose of the client certificate request?
Newsgroups: comp.security.misc
Date: Mon, 07 Feb 2000 20:24:18 GMT
misc. reference ...

RFC 2246              The TLS Protocol Version 1.0          January 1999


Structure of this message:

    opaque ASN.1Cert<1..2^24-1>;

    struct {
        ASN.1Cert certificate_list<0..2^24-1>;
    } Certificate;

certificate_list
    This is a sequence (chain) of X.509v3 certificates. The sender's
    certificate must come first in the list. Each following certificate
    must directly certify the one preceding it. Because certificate
    validation requires that root keys be distributed independently, the
    self-signed certificate which specifies the root certificate
    authority may optionally be omitted from the chain, under the
    assumption that the remote end must already possess it in order to
    validate it in any case.

The same message type and structure will be used for the client's response to a certificate request message. Note that a client may send no certificates if it does not have an appropriate certificate to send in response to the server's authentication request.
--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

IBM RT PC (was Re: What does AT stand for ?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM RT PC (was Re: What does AT stand for ?)
Newsgroups: alt.folklore.computers
Date: Tue, 08 Feb 2000 17:57:57 GMT
note also that PC/RTs were the NSFNET1 backbone routers. A PC/RT had 440kbit/sec adapter cards ... three RT/440kbit streams were channeled into an IDNX (basically a private phone switch) T1. Each NSFNET1 backbone site had racks & racks of PC/RTs handling 440kbit/sec streams connected to IDNX boxes running T1 connections between backbone sites.

The PC/RT ran a virtual machine supervisor (written in pl.8 & somewhat adapted from the display writer follow-on project) under which ran a highly modified version of AT&T system 5.2. One of the porting problems for AIX on the PC/RT was device drivers, since they weren't actually interfacing to the bare hardware.

AOS started out as a project to port BSD to the 370 mainframe and was then retargeted to the PC/RT.

misc. romp, pc/rt & aos references:
https://www.garlic.com/~lynn/98.html#25
https://www.garlic.com/~lynn/98.html#26
https://www.garlic.com/~lynn/98.html#27
https://www.garlic.com/~lynn/99.html#2
https://www.garlic.com/~lynn/99.html#36
https://www.garlic.com/~lynn/99.html#64
https://www.garlic.com/~lynn/99.html#65
https://www.garlic.com/~lynn/99.html#129
https://www.garlic.com/~lynn/99.html#146

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

APPC vs TCP/IP

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: APPC vs TCP/IP
Newsgroups: bit.listserv.ibm-main
Date: Wed, 09 Feb 2000 00:08:06 GMT
minor mainframe tcp/ip reference:
https://www.garlic.com/~lynn/99.html#36

in the early history, I've talked to people who claimed that after having done some MVS tcp/ip implementation they were instructed to recode it so that it was much slower than sna/appc.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

APPC vs TCP/IP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: APPC vs TCP/IP
Newsgroups: bit.listserv.ibm-main
Date: Wed, 09 Feb 2000 08:10:40 GMT
the 60s/70s packet protocol was the arpanet protocol that the IMPs talked. Work on "internet protocol" started sometime in the 77/78 timeframe. TCP appeared earlier but was layered on top of the IMP/arpanet host-to-host stuff.

internet protocol (as opposed to the IMP-based arpanet stuff) first appears as a standard document in RFC760, published 1/1/1980

many TCP/IP products were based on bsd4.3 tahoe that was done in 87-89

by comparison, one could make the claim that no "networking" occurs at all in SNA prior to APPN (circa 10/86) since that is the first appearance of a "network layer" being implemented (ala OSI).

misc. references
https://www.garlic.com/~lynn/99.html#39
https://www.garlic.com/~lynn/99.html#44
https://www.garlic.com/~lynn/internet.htm
https://www.garlic.com/~lynn/99.html#33
https://www.garlic.com/~lynn/99.html#36
https://www.garlic.com/~lynn/99.html#40
https://www.garlic.com/~lynn/99.html#48
https://www.garlic.com/~lynn/99.html#112

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Correct usage of "Image" ???

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Correct usage of "Image" ???
Newsgroups: alt.folklore.computers
Date: Fri, 11 Feb 2000 14:42:08 GMT
misc. on page replacement ... circa 1968 ... of course there was atlas and others before.
https://www.garlic.com/~lynn/93.html#0
https://www.garlic.com/~lynn/93.html#4
https://www.garlic.com/~lynn/93.html#5
https://www.garlic.com/~lynn/94.html#1
https://www.garlic.com/~lynn/96.html#0a
https://www.garlic.com/~lynn/98.html#54
https://www.garlic.com/~lynn/99.html#18

this was on cp/67, which was a port from cp/40 circa 65 or 66. some information on virtual memory is in the history paper at:
https://www.leeandmelindavarian.com/Melinda#VMHist

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

APPC vs TCP/IP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: APPC vs TCP/IP
Newsgroups: bit.listserv.ibm-main
Date: Thu, 10 Feb 2000 03:26:08 GMT
we used to rib some of the people working on appn that instead of trying to force-fit networking into SNA they should work on real networking in tcp/ip; that the SNA crowd wasn't going to appreciate their efforts. when it came time to announce APPN ... the SNA group, in fact, non-concurred ... and there was a 6 week delay while the announcement letter was re-written so that APPN was quite clearly differentiated from SNA (i.e. nobody should be confused about SNA supporting networking).

The announcement letter did make it out in oct of 1986 ... the same month I was giving a presentation at the SNA ARB (architecture review board) meeting in raleigh.

misc. references:
https://www.garlic.com/~lynn/99.html#67
https://www.garlic.com/~lynn/99.html#70
https://www.garlic.com/~lynn/99.html#71
https://www.garlic.com/~lynn/96.html#15
https://www.garlic.com/~lynn/99.html#201
https://www.garlic.com/~lynn/99.html#12
https://www.garlic.com/~lynn/99.html#36

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

Hotmail question

Refed: ** -, ** -, ** -, ** -, ** -, ** -, ** -, ** -, ** -, ** -, ** -, ** -, **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hotmail question
Newsgroups: comp.security.misc
Date: Sun, 13 Feb 2000 17:56:29 GMT
dialins are typically dynamically assigned IP addresses (and/or slaved to the incoming modem) ... one possibility is that the packet is left over from a previous assignee of that IP address who had dropped their connection to the ISP.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

OS/360 JCL: The DD statement and DCBs

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS/360 JCL: The DD statement and DCBs
Newsgroups: alt.folklore.computers
Date: Mon, 14 Feb 2000 17:11:08 GMT
Lars Poulsen writes:
Some of the utility programs did some really bizarre things with their DCBs, including stuffing parameters they had read from their SYSIN DD's into the DCB before opening the file. But they still had to have a SYSUT1 DD card there, even if there was no information on it.

the RDJFCB (read job file control block) macro could pull up the internal representation before the open.

mostly I remember it being used by "monitors" ... i.e. programs that simulated a stripped down job scheduler for compile, load & go. Our shop converted a 709 ibsys doing student fortran jobs measured in jobs per second (it was tape-to-tape, with a 1401 acting as unit record front end) to OS/360 PCP 9.5 doing the same workload measured in minutes per job.

adding hasp got it down to jobs per minute, but a 30 second 3-step fortran compile, lked, & go was still almost all job scheduler time. a monitor was tested that got it down to a couple seconds per student job ... but it was about the same time we installed watfor ... which finally got the 360/65 thruput on student fortran jobs back to better than the 709 ibsys.

random references:
https://www.garlic.com/~lynn/93.html#1
https://www.garlic.com/~lynn/94.html#18

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

OS/360 JCL: The DD statement and DCBs

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS/360 JCL: The DD statement and DCBs
Newsgroups: alt.folklore.computers
Date: Mon, 14 Feb 2000 17:17:10 GMT
also, another interesting reference:


http://jmaynard.home.texas.net/hercos360/

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

RealNames hacked. Firewall issues.

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RealNames hacked. Firewall issues.
Newsgroups: comp.security.firewalls,comp.security.misc,comp.security.unix
Date: Mon, 14 Feb 2000 23:12:35 GMT
Wally Whacker writes:
You guys missed the point. I didn't say credit cards were perfect. I said a verifiable payment scheme gives the consumer much less ability to undo a bad transaction. Why not argue that point? It's a pretty scary scenario. Why have digital signatures if their purpose isn't to verify the actual source of a transaction?

Plaintiff: "Mr Jones bought this car from us. We have his digital signature on the transaction."

Defendant: "Your Honor, I didn't make this charge".

Judge: "You signed it. Your digital signature is on it. Judgement in favor of plaintiff. Next case."

Is that your preferred scenario?

If your private key becomes equivalent to your signature, what happens when your private key gets ripped off? Ooops. Now you are really f**ked. It's like someone having your fingerprints on their fingers.

At some point the human factor always enters in. How are you going to buy these "phone cards"? Cash? Carrying a lot of cash around is even more dangerous than using a credit card. At some point there is always a human involved and you better thank your stars for that. If machines were able to 100% rule the financial transactions universe without human ability to intervene then hacking would truly become the most profitable profession.


X9.59 is an industry draft standard for end-to-end digital signature authentication for all electronic retail payments (credit, debit, internet, non-internet, etc). it operates within the existing payment business processes for authentication and extends various forms of parameterised risk management to the digital signature paradigm.

Specifically, it doesn't say that your card can't be ripped off ... &/or your private key can't be stolen ... but tries to establish a parameterised infrastructure as to the risk associated with such things happening given the known integrity parameters surrounding the private key and digital signing environment.

PC-based software private key & signing environments have a lot higher risk than a hardware token based environment. A hardware token with no activation process has higher risk than a PIN-activated hardware token. A PIN-activated hardware token has higher risk than a biometric-activated hardware token. Kind of cryptography, length of keys, kind of chip, whether there is an audit trail on the specific chip back to the foundry, etc., are also all risk & integrity issues.

misc. X9.59 references at:
https://www.garlic.com/~lynn/

specific references ... multiple posting thread at:
https://www.garlic.com/~lynn/aadsm2.htm#straw
https://www.garlic.com/~lynn/aadsm2.htm#strawm1

other threads.
http://lists.commerce.net/archives/ansi-epay/199901/msg00014.html

misc identity vis-a-vis authentication issues:
https://www.garlic.com/~lynn/aadsmail.htm#vbank

part of the issue might be characterized as the problem of creating a trusted third party infrastructure that provides some meaningful information binding (in manufactured certificates) that would carry some meaning to relying parties.

Businesses & financial infrastructures have used account records as information binding paradigm for eons. X9.59/AADS can bind a public key in an account record for doing digital signature authentication ... and it isn't necessary to step outside of the existing business processes and/or legal infrastructure to determine what that means (i.e. integrity is improved and risk is lowered compared to some currently deployed technology ... but that doesn't change the business relationships and/or the business processes).

It is a much harder problem to try and establish an implied business meaning between an arbitrary 3rd party and relying parties which have no direct business relationship, especially if somebody has to be motivated to exchange some money with the arbitrary 3rd party in return for a manufactured certificate.

In general with retail transaction disputes, the burden of proof has been on the entities with the most resources (i.e. merchants & banks). There is discussion not only about making a digital signature equivalent to a real signature ... but also about creating a sense of non-repudiation and shifting the burden of proof from the business entity with the most resources (merchants & banks) to the entity with the least (typically the consumer). It would seem that such a shift also changes some of the retail value proposition ... possibly creating a value opportunity justifying payment of money to arbitrary 3rd parties in exchange for certificates (although it would seem somewhat ironic that consumers would be the ones to pay for the privilege of giving up some of their rights).

The AADS digital authentication framework has also been applied to non-financial things ... like radius ... i.e. register a public key in the radius authentication database and instead of doing "radius" userid/password authentication for internet/isp connectivity ... do radius userid/digital signature authentication for internet/isp connectivity. It is also straight-forward to add the same radius authentication process to webserver client authentication (i.e. instead of doing RYO client authentication stubs in the webserver, invoke radius).

This AADS framework approach was, rather than treating digital signatures as a new business and authentication paradigm ... to integrate digital signatures into existing business processes as an authentication technology upgrade (rather than forcing a business paradigm shift, just treat it as a technology enhancement ... on par with prior events like forcing passwords to be 8+ characters & resistant to various known password pattern exploits & guessing).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Multithreading underlies new development paradigm

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multithreading underlies new development paradigm
Newsgroups: comp.arch,comp.programming.threads
Date: Tue, 15 Feb 2000 02:42:33 GMT
a task switch can involve a different address space table ... in which case a processor may use completely different TLB entries ... even if they resolve to the same physical locations. Furthermore, TLBs may only support a limited depth of address space tables (i.e. each TLB entry is tagged with an address space index ... and there are only a limited number of active address space indexes). If the address space index table is full then a table entry has to be scavenged, which also involves first flushing all the TLB entries tagged/associated with that table entry.

It is possible to have virtual page thrashing (constantly replacing virtual pages in real memory). It is also possible to have cache thrashing and TLB thrashing. And it is possible to have address space index table thrashing ... which not only affects the entry in the address space index table ... but also results in a global TLB purge for all entries associated/tagged with that table entry.
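a toy model of the address space index table behavior (python; the table depth and the FIFO scavenge policy are made up for illustration):

class TaggedTLB:
    def __init__(self, max_asids=4):
        self.asid_table = []     # active address space indexes (FIFO order)
        self.max_asids = max_asids
        self.entries = set()     # (asid, virtual page) pairs

    def switch_to(self, space):
        if space not in self.asid_table:
            if len(self.asid_table) == self.max_asids:
                victim = self.asid_table.pop(0)     # scavenge a table entry ...
                self.entries = {e for e in self.entries
                                if e[0] != victim}  # ... purging its TLB entries
            self.asid_table.append(space)

    def touch(self, space, vpage):
        hit = (space, vpage) in self.entries
        self.entries.add((space, vpage))            # load on miss
        return hit

tlb = TaggedTLB(max_asids=2)
for space in ("a", "b", "c", "a"):     # three spaces thrash a two-entry table
    tlb.switch_to(space)
    print(space, tlb.touch(space, 0))  # "a" misses again after being purged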

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Multithreading underlies new development paradigm

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multithreading underlies new development paradigm
Newsgroups: comp.arch,comp.programming.threads
Date: Tue, 15 Feb 2000 15:47:42 GMT
brucehoult@pobox.com (Bruce Hoult) writes:
All that might be true, on certain machines, at certain times. Or it might never be true on certain machines. For example, there might be no such thing at the hardware as an "address space table". I have in front of me MPC750UM/AD, the "MPC750 RISC Microprocessor User's Manual" and I see no signs of any such thing as an address space index in either the segment registers (or BATs), the TLB, or the cache.

On a task switch, you need to load 16 32-bit segment register with values appropriate to the incoming task. These control, for each 256 MB address range, the mapping to a 52-bit virtual memory space, plus the access protections for that address range for that task. You can't run out of address space (or process) indexes because there aren't any. If you want to share TLB entries between tasks then that is easily arranged. You don't need to flush either the TLB or the cache on a task switch.


yep, typically if there is virtual memory there is some sort of TLB as a high speed cache of virtual->real mappings, either hardware loaded (more likely with page tables) or software managed (more likely with inverted tables). In either case the TLB entries have some sort of address space affinity. In the case of inverted tables w/801, address space affinity is at the segment register level ... and the "tag/association" is the value loaded into the segment register (a 40-28 = 12 bit value in the case of ROMP & a 52-28 = 24 bit value in the case of RIOS/power)

The sharing issue that I had with the design back in the mid '70s was that it had so few concurrently sharable objects (i.e. segment registers). The original design point had the segment register instructions inline w/o having to cross a protection domain (i.e. as easy as base registers). Moving to more open software systems required that any segment register chatter be moved into a protection domain with kernel calls. In the original design point of the architecture it was straightforward to have an application with dozens of (almost) concurrent shared objects. A partial solution was to go to composite shared objects (within a single 256mbyte object) ... but then it became an administrative packaging issue with other limitations.
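the parenthetical arithmetic spelled out (python; the 16 registers & 256mbyte segments are the numbers from the discussion above):

# 16 segment registers cover a 32-bit effective address in 256-mbyte (2^28)
# chunks; the value loaded into a segment register widens the address to
# 40 bits on ROMP or 52 bits on RIOS/power, giving the tag widths quoted.
SEG_OFFSET_BITS = 28                       # 256 mbytes per segment
for name, vaddr_bits in (("ROMP", 40), ("RIOS/power", 52)):
    tag_bits = vaddr_bits - SEG_OFFSET_BITS
    print(f"{name}: {tag_bits}-bit segment ids "
          f"({2 ** tag_bits} distinct sharable objects, "
          f"but only 16 addressable at any instant)")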

misc. references
https://www.garlic.com/~lynn/98.html#25
https://www.garlic.com/~lynn/98.html#26
https://www.garlic.com/~lynn/98.html#27

misc. other 801/romp/rios/power references:
https://www.garlic.com/~lynn/94.html#47
https://www.garlic.com/~lynn/95.html#5
https://www.garlic.com/~lynn/95.html#9
https://www.garlic.com/~lynn/95.html#11
https://www.garlic.com/~lynn/96.html#15
https://www.garlic.com/~lynn/97.html#5
https://www.garlic.com/~lynn/99.html#64
https://www.garlic.com/~lynn/99.html#67
https://www.garlic.com/~lynn/99.html#129

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

RealNames hacked. Firewall issues.

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RealNames hacked. Firewall issues.
Newsgroups: comp.security.firewalls,comp.security.misc,comp.security.unix
Date: Wed, 16 Feb 2000 16:59:58 GMT
it is possible to use biometrics for device activation ... where the device then does digital signature authentication processes ... instead of directly using biometrics for authentication. If the device is owned by the person ... the person is doing biometric authentication to a device that they "own" ... which then performs authentication operations to the rest of the world. This approach eliminates biometric information flowing around the infrastructure. At the simplest consumer level ... a "card" works when they have it and doesn't work when they don't have it ... and the card is needed to perform some operation.

misc. references:
https://www.garlic.com/~lynn/aadsm2.htm#privacy
https://www.garlic.com/~lynn/aadsm3.htm#cstech4
https://www.garlic.com/~lynn/aadsm3.htm#cstech5
https://www.garlic.com/~lynn/aadsm3.htm#cstech12
https://www.garlic.com/~lynn/aadsm3.htm#kiss2
https://www.garlic.com/~lynn/aadsm3.htm#kiss9
https://www.garlic.com/~lynn/aadsmore.htm#bioinfo1
https://www.garlic.com/~lynn/aadsmore.htm#bioinfo2
https://www.garlic.com/~lynn/aadsmore.htm#bioinfo3
https://www.garlic.com/~lynn/aepay3.htm#x959risk3
https://www.garlic.com/~lynn/99.html#157
https://www.garlic.com/~lynn/99.html#160
https://www.garlic.com/~lynn/99.html#165
https://www.garlic.com/~lynn/99.html#166
https://www.garlic.com/~lynn/99.html#170
https://www.garlic.com/~lynn/99.html#172
https://www.garlic.com/~lynn/99.html#189
https://www.garlic.com/~lynn/99.html#235

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))
Newsgroups: comp.arch,comp.sys.intel
Date: Wed, 23 Feb 2000 01:28:11 GMT
the last time I solved this problem (it's been seven years), there were about 480k commercial flight segments on the OAG master ... and about 4080 airports worldwide with commercial flights. The 480k or so commercial flight segments combined into something like 650k possible "flights" (a multihop flight number with three flight segments would have 3 destinations from the starting point, two destinations from the first stop, and one destination from the 2nd stop ... 3+2+1 = 6 flights). The "worst" was a flight number with 16 flight segments that took off from a regional airport and flew the rounds of some lesser known places ... eventually ending back at the original airport.
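the segment-to-"flights" expansion as a small sketch (python; airport codes made up):

# every (board, deplane) pair along a single flight number's route is a
# sellable "flight"; combinations() preserves the boarding order.
from itertools import combinations

def flights(stops):          # airports visited, in order, by one flight number
    return list(combinations(stops, 2))

legs = ["SFO", "DEN", "ORD", "BOS"]     # three segments, four airports
print(flights(legs))                    # 3+2+1 = 6 origin/destination pairs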

Another pet peeve of mine has been flight numbers that had intermediate stops with a "change of equipment" but not a connection (this had certain advantages because the normal page &/or screen listing shows "directs" before connects). A plane could take off from someplace with at least two flight numbers ... at some intermediate airport one of the flight numbers would be magically associated with a different plane ... which involved changing "equipment" ... but it wasn't listed as a connection.

connections were where things started getting tricky for routes (a much larger number of permutations). database implementations giving all possible routes for all possible from/dest pairs tended to handle directs and one-connects ... but started to break down as soon as you tried to do two-connects.

then come fares ... there can be a dozen different fares for each flight segment as well as at least a dozen different fares between end-points

... and so the various flight selection criteria can be (once all possible routes have been determined between the origin/destination; a small sketch follows the list)

"least expensive" ... based on either the end-to-end fare and/or the aggregation of possible segment fares (with some of the fares have varying seat availability on per segment basis or day of the week basis)

least travel time

arrival before a specific time

departure after a specific time

maximum travel points (longest distance travelled)
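a sketch of applying those criteria once candidate routes have been enumerated and priced (python; every number fabricated for illustration, times as decimal hours):

routes = [
    {"fare": 420, "minutes": 310, "depart": 9.0, "arrive": 14.2, "miles": 2700},
    {"fare": 515, "minutes": 255, "depart": 7.5, "arrive": 11.8, "miles": 2580},
    {"fare": 380, "minutes": 540, "depart": 6.0, "arrive": 15.0, "miles": 3900},
]

least_expensive   = min(routes, key=lambda r: r["fare"])
least_travel_time = min(routes, key=lambda r: r["minutes"])
arrive_before     = [r for r in routes if r["arrive"] <= 12.0]  # e.g. by noon
depart_after      = [r for r in routes if r["depart"] >= 7.0]   # e.g. after 7am
max_travel_points = max(routes, key=lambda r: r["miles"])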

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))
Newsgroups: comp.arch,comp.sys.intel
Date: Wed, 23 Feb 2000 01:33:56 GMT
oh yes, and there was lots of fare tuning ... implementations that kept fares in the database on both a per-flight-segment basis as well as an origin/destination basis ... could see a million updates to the fare database in a single day.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Mainframe operating systems

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe operating systems
Newsgroups: bit.listserv.ibm-main
Date: Wed, 23 Feb 2000 04:45:27 GMT
one of the things that VM provided was a clear separation between resource management (CP) and various system services (CMS) that tended to be jumbled all together in other operating systems. As a result, there was a constant focus on efficiency in the CP path lengths (a focus that has not been there in other implementations).

In part this was because customers could constantly point out ... at least in the guest cases ... the with and w/o numbers, i.e. what was the thruput of running MVS with and w/o VM; when was the last time you heard somebody complain about the benchmarked performance difference of running an MVS application with & w/o MVS?

There has been some effort claiming a difference in thruput between applications run under VSE & MVS ... but MVS has tended to cloud the thruput issues as having lots more function ... as opposed to any lack of attention to thruput (although there was a situation where it was possible to demonstrate higher VS1 thruput under VM with handshaking than VS1 run standalone ... i.e. handshaking allowed VS1 to take advantage of certain VM resource management facilities rather than using its own).

Part of the thruput and pathlength focus resulted in VM/CP optimizations that have rarely been demonstrated by other implementations. In part, it was also a significant factor giving rise to LPARs (which could be viewed as a microcode implementation of a CP subset).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

distributed locking patents

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: distributed locking patents
Newsgroups: comp.arch
Date: Thu, 24 Feb 2000 17:20:41 GMT
ibm provided funding for athena at MIT ... equally with dec. ibm also provided funding to cmu for the mach, andrew (toolkit & filesystem), camelot, etc. work ... to the tune of about 50% more than the combined ibm/dec funding at athena. I believe ibm also provided substantial seed money for the transarc spin-off ... and then paid again substantially when they bought the spin-off outright.

IBM PASC started working with Locus in the early '80s, doing ports to S/1 and a couple of other boxes in Palo Alto ... including process migration and fractional file caching (in addition to distributed access and full file caching).

Early DCE meetings included key people from Locus, Transarc, and several IBM locations.

there was also misc. other unix work like the AT&T/ibm port of UNIX to TSS ... running as a TSS subsystem (I believe it saw a large deployment inside AT&T).

prior to that ... in the early '70s, at least one of the commercial service bureaus did a cluster version of vm/cp ... on the 360/67. This included process migration ... they had data centers on both the east coast and the west coast ... and provided 7x24 access to clients around the world. The hardware required regularly (weekly) scheduled preventive maintenance (PM), and so one driver was both intra-datacenter and inter-datacenter process migration (there was no time-slot that could be scheduled to take down a box for PM where there weren't users from someplace in the world expecting uninterrupted service).

random references:
https://www.garlic.com/~lynn/99.html#63
https://www.garlic.com/~lynn/99.html#64
https://www.garlic.com/~lynn/99.html#66
https://www.garlic.com/~lynn/99.html#67
https://www.garlic.com/~lynn/99.html#36
https://www.garlic.com/~lynn/93.html#0
https://www.garlic.com/~lynn/93.html#19
https://www.garlic.com/~lynn/97.html#14
https://www.garlic.com/~lynn/99.html#237

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Cybersafe & Certicom Team in Join Venture (x9.59/aads press release at smartcard forum)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Cybersafe & Certicom Team in Join Venture (x9.59/aads press release at smartcard forum)
Newsgroups: alt.technology.smartcards
Date: Fri, 25 Feb 2000 17:04:11 GMT
X9.59/AADS press release at smartcard forum, at:
http://www.cybersafe.com/news/pr20000223.html
https://web.archive.org/web/20000612043237/http://www.cybersafe.com/news/pr20000223.html
http://www.certicom.com/press/2000/feb2300.htm
https://web.archive.org/web/20000819093535/http://www2.certicom.com/press/2000/feb2300.htm
http://lists.commerce.net/archives/ansi-epay/200002/msg00003.html
https://web.archive.org/web/20010222203108/http://lists.commerce.net/archives/ansi-epay/200002/msg00003.html

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

CRC-16 Reverse Algorithm ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CRC-16 Reverse Algorithm ?
Newsgroups: sci.crypt
Date: Fri, 25 Feb 2000 21:33:19 GMT
one of the interesting things about CRC16 chips in the '70s was that computer boxes used them for bit error detection while RF modem boxes used the same chip for eliminating runs of zeros. There was a strange situation in some <college campus> RF/broadband systems which were getting undetected transmission errors because of the unfortunate interaction of using the same CRC16 recursively.
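
for reference, a minimal sketch of the bitwise CRC-16 computation (assuming the common x^16 + x^15 + x^2 + 1 polynomial of that era's chips, in its reflected 0xA001 form ... the actual chip logic may have differed):

#include <stddef.h>
#include <stdint.h>

/* shift the CRC one bit at a time through each data byte; running the
   identical polynomial twice in series (once for scrambling, once for
   error detection) is what set up the undetected-error interaction */
uint16_t crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : (crc >> 1);
    }
    return crc;
}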

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Difference between NCP and TCP/IP protocols

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Difference between NCP and TCP/IP protocols
Newsgroups: alt.folklore.computers,comp.protocols.tcp-ip,alt.culture.internet
Date: Sat, 26 Feb 2000 15:20:16 GMT
as to similar discussion last spring
https://www.garlic.com/~lynn/internet.htm

Much of the internet protocol (aka IP) activity went on in the IENs in the 77/78 time-frame (references to TCP as part of HOST/HOST show up earlier)

IEN-11 has an interesting title ... but the file contents are just the following note:

IEN-11

Section 2.3.3.4 (IEN # 11) is titled "Internetting or Beyond NCP" by Danny Cohen of ISI. Please obtain copies from the author. This memo is also assigned the numbers PRTN 213 and NSC 106, and is dated 21 March 1977.

some number of the early (offline) RFCs just went online this week but nothing new for NCP.

misc online


rfc60 ... A simplified NCP Protocol
rfc215 .. NCP, ICP, and TELNET:
rfc381 .. TWO PROPOSED CHANGES TO THE IMP-HOST PROTOCOL
rfc394 .. TWO PROPOSED CHANGES TO THE IMP-HOST PROTOCOL
rfc550 .. NIC NCP Experiment
rfc618 .. A Few Observations on NCP Statistics
rfc660 .. SOME CHANGES TO THE IMP AND THE IMP/HOST INTERFACE
rfc687 .. IMP/Host and Host/IMP Protocol Change
rfc704 .. IMP/Host and Host/IMP Protocol Change
rfc773 .. COMMENTS ON NCP/TCP MAIL SERVICE TRANSITION STRATEGY
rfc801 .. NCP/TCP TRANSITION PLAN

from rfc801 (nov, 81; catenet shows up in ien111, august, 1979) ...

It was clear from the start of this research on other networks that the base host-to-host protocol used in the ARPANET was inadequate for use in these networks. In 1973 work was initiated on a host-to-host protocol for use across all these networks. The result of this long effort is the Internet Protocol (IP) and the Transmission Control Protocol (TCP).

These protocols allow all hosts in the interconnected set of these networks to share a common interprocess communication environment. The collection of interconnected networks is called the ARPA Internet (sometimes called the "Catenet").

The Department of Defense has recently adopted the internet concept and the IP and TCP protocols in particular as DoD wide standards for all DoD packet networks, and will be transitioning to this architecture over the next several years. All new DoD packet networks will be using these protocols exclusively.


--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Mainframe operating systems

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe operating systems
Newsgroups: bit.listserv.ibm-main
Date: Sat, 26 Feb 2000 23:58:05 GMT
MVT .... SVS ... i.e. MVT laid out in a single 16mbyte "virtual address space" doing its own paging. Initial implementation (i.e. AOS2) was effectively MVT with some tweaks, a page fault handler and the CCWTRANS module from the CP/67 "I" system (i.e. the internal version of CP/67 modified to run on 370 relocate architecture instead of 360/67 relocate architecture). I seem to remember some night shift testing in the 705(?) machine room (i may even remember some names).

MVS then involved lots more rewrite ... with the workload spread out in multiple address spaces. Lots of issues during the 24bit days because the kernel resided in the same address space as the applications ... and attempting to preserve the 8mbyte/8mbyte split got to be real interesting as system/kernel requirements began to exceed the 8mbyte limit.

MFT ... VS1

DOS ... DOS/VS ... DOS/VSE ... VSE

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

APL on PalmOS ???

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: APL on PalmOS ???
Newsgroups: comp.lang.apl,alt.comp.sys.palmtops.pilot
Date: Sun, 27 Feb 2000 03:10:35 GMT
in 71 or so, i was somewhat associated with the conversion of apl/360 to cms/apl (among other things it went from 32kbyte workspaces to having up to nearly 16mbyte workspaces)

random refs:
https://www.garlic.com/~lynn/93.html#5
https://www.garlic.com/~lynn/94.html#7
https://www.garlic.com/~lynn/94.html#55
https://www.garlic.com/~lynn/97.html#4
https://www.garlic.com/~lynn/99.html#20
https://www.garlic.com/~lynn/99.html#36
https://www.garlic.com/~lynn/99.html#38
https://www.garlic.com/~lynn/99.html#90
https://www.garlic.com/~lynn/99.html#149

5100 was doing some 360 sleight of hand for apl

ref:
http://www.brouhaha.com/~eric/retrocomputing/ibm/5100/

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

APL on PalmOS ???

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: APL on PalmOS ???
Newsgroups: comp.lang.apl,alt.comp.sys.palmtops.pilot
Date: Sun, 27 Feb 2000 03:17:16 GMT
as an aside ... at the 5100 ref
http://www.brouhaha.com/~eric/retrocomputing/ibm/5100/

notice the mention of the "PALM" processor ... Put All Logic in Microcode

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Mainframe operating systems

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe operating systems
Newsgroups: bit.listserv.ibm-main
Date: Sun, 27 Feb 2000 17:52:26 GMT
oh yes, some claim part of the IBM Kingston MFT team escaped to Boca and were responsible for

S/1 RPS and then initial OS2

... so MFT history could also read:


os/mft -+---- VS1
        |
        +---- RPS ---- OS2

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Difference between NCP and TCP/IP protocols

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Difference between NCP and TCP/IP protocols
Newsgroups: alt.folklore.computers,comp.protocols.tcp-ip,alt.culture.internet
Date: Mon, 28 Feb 2000 05:35:30 GMT
some other related information

from ... rfc1251 ... Who's Who in the Internet
Braden came to ISI from UCLA, where he had worked 16 of the preceding 18 years for the campus computing center. There he had technical responsibility for attaching the first supercomputer (IBM 360/91) to the ARPAnet, beginning in 1970. Braden was active in the ARPAnet Network Working Group, contributing to the design of the FTP protocol in particular. In 1975, he began to receive direct DARPA funding for installing the 360/91 as a "tool-bearing host" in the National Software Works. In 1978, he became a member of the TCP Internet Working Group and began developing a TCP/IP implementation for the IBM system. As a result, UCLA's 360/91 was one of the ARPAnet host systems that replaced NCP by TCP/IP in the big changeover of January 1983. The UCLA package of ARPAnet host software, including Braden's TCP/IP code, was distributed to other OS/MVS sites and was later sold commercially.

from ... ien-166 Design of TCP/IP for the TAC


                            +---------+
                            + Message +
   +----------------------- + Buffers + <--------------------+
   |+---------------------- +         + <-------------------+ \
   ||                       +---------+                     \ \
   VV                                                        \ \
  +---+                                                       \ \
  +   + Tumble                                                \ \
  +   + Table             +-----+    +----+                   \ \
  +---+                   |     |    |    |                    \ \
   ||                   +>| TCP |<-->| IP |<-+                  \ \
   VV       +--------+  | |     |    |    |  |   +------+       \ \
+++++++     |        |  | +-----+    +----+  |   |      |      +++++++
| MLC |<--->| Telnet |<-+                    +-->| 1822 |<---->| IMP |
+++++++     |        |  | +-----+            |   |      |      +++++++
   ||       +--------+  | |     |            |   +------+       /\/\
   ||                   +>| NCP |<-----------+                   / /
   ||  +---+              |     |                                / /
   |+->+   + Tumble       +-----+                               / /
   ||  +   + Table                                              / /
   ||  +---+                                                    / /
   ||         +----------+                                      / /
   ||         +          +                                     / /
   |+-------> + Message  + ------------------------------------+ /
   +--------> + Buffers  + -------------------------------------+
              +          +
              +----------+

Figure 1.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/ http://www.adcomsys.net/lynn/

Difference between NCP and TCP/IP protocols

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Difference between NCP and TCP/IP protocols
Newsgroups: alt.folklore.computers,comp.protocols.tcp-ip,alt.culture.internet
Date: Mon, 28 Feb 2000 05:44:16 GMT
also ... rfc721 Out-of-Band Control Signals in a Host-to-Host Protocol

This note addresses the problem of implementing a reliable out-of-band signal for use in a host-to-host protocol. It is motivated by the fact that such a satisfactory mechanism does not exist in the Transmission Control Protocol (TCP) of Cerf et. al. [reference 4, 6] In addition to discussing some requirements for such an out-of-band signal (interrupts) and the implications for the implementation of the requirements, a discussion of the problem for the TCP case will be presented.

While the ARPANET host-to-host protocol does not support reliable transmission of either data or controls, it does meet the other requirements we have for an out-of-band control signal and will be drawn upon to provide a solution for the TCP case.

The TCP currently handles all data and controls on the same logical channel. To achieve reliable transmission, it provides positive acknowledgment and retransmission of all data and most controls. Since interrupts are on the same channel as data, the TCP must flush data whenever an interrupt is sent so as not to be subject to flow control.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Difference between NCP and TCP/IP protocols

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Difference between NCP and TCP/IP protocols
Newsgroups: alt.folklore.computers,comp.protocols.tcp-ip,alt.culture.internet
Date: Mon, 28 Feb 2000 06:47:35 GMT
as mentioned before ... similar discussion from last spring
https://www.garlic.com/~lynn/internet.htm

as mentioned, the NCP -> TCP/IP transition included a generic gateway for heterogeneous networking ... prior to that NCP/IMP was a homogeneous network infrastructure ... which somewhat limited its extensibility. The internal corporate network essentially included a gateway function in every node from the beginning ... which also contributed to it being larger than the Arpanet from essentially the beginning until sometime in the mid-80s (when the transition to tcp/ip w/gateways and the advent of workstations as nodes helped the arpanet/internet overtake the internal corporate network in number of nodes).

random refs.
https://www.garlic.com/~lynn/99.html#7
https://www.garlic.com/~lynn/99.html#33
https://www.garlic.com/~lynn/99.html#39
https://www.garlic.com/~lynn/99.html#112
https://www.garlic.com/~lynn/99.html#124

& from rfc1705 ... Six Virtual Inches to the Left: IPng Problems

4.5 Compatibility Issues

The Internet community has a large installed base of IP users. The resources required to operate this network, both people and machine, is enormous. These resources will need to be preserved. The last time a change like this took place, moving from NCP to TCP, there were a few 100 nodes to deal with [Postel, 1981c]. A small close knit group of engineers managed the network and mandated a one year migration strategy. Today there are millions of nodes and thousands of administrators. It will be impossible to convert any portion of the Internet to a new protocol without effecting the rest of the community.


--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/ http://www.adcomsys.net/lynn/

Mainframe operating systems

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe operating systems
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 28 Feb 2000 17:14:27 GMT
smetz@NSF.GOV (Metz, Seymour) writes:
No, CP used page-formatted volumes for paging and SPOOL, but CMS formatted its own minidisks; the only simulation was relocating the addresses and checking the bounds. However, VM/SP added a shared file system, and it may have been page formatted.

I built PAM for CMS ... paged access method originally on CP/67 ... and some other virtual memory functions ... which were then ported to VM/370 release 2 (there was a virtual memory management CSC tech. report, describing dynamically loadable shared segments, as well as dynamically relocatable shared segments ... aka the same shared segment could appear at multiple different virtual addresses in different address spaces, paged page tables, PAM, misc. other stuff).

This was deployed extensively inside IBM on the release 2 base. The HONE system (what all the branch & field people used) had a release 2 PLC15 system available around the world that included all the features. Prior to the dynamically loadable shared segments ... VM/370 only supported mapping of shared segments with the simulated IPL-by-name function. HONE primarily delivered its service in APL ... which included a "padded-cell environment" for salesmen and other field people. To achieve performance, the APL interpreter had most of its pages defined as a shared segment using an IPL-by-name hack. However, there were several HONE applications written in Fortran that required exiting from APL, executing the Fortran application and returning to APL. The HONE system built on VM/370 release 2 PLC15 made extensive use of dynamically loadable shared segments for switching back & forth between APL & Fortran applications ... somewhat transparently to the end-user.

A small subset of the dynamically loadable shared segments showed up in VM/370 Release 3 in the guise of the LOADSYS command (the CMS changes for putting several CMS applications in a shared segment, a subset of the CP function but w/o being able to have the same shared segment show up at different virtual addresses in different address spaces, no PAM ... and therefore no ability to specify shared segments as part of loading a CMS executable from PAM-formatted disk).

The pageable page tables and some misc. other pieces (like the base for the multiprocessing infrastructure) did go out in the resource manager later on in VM/370 release 3.

The PAM stuff did ship somewhat in the VM release for XT/AT/370.

random refs:
https://www.garlic.com/~lynn/93.html#31
https://www.garlic.com/~lynn/95.html#3
https://www.garlic.com/~lynn/97.html#4
https://www.garlic.com/~lynn/98.html#23
https://www.garlic.com/~lynn/99.html#38
https://www.garlic.com/~lynn/99.html#149
https://www.garlic.com/~lynn/2000.html#5

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Mainframe operating systems

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe operating systems
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 28 Feb 2000 18:37:34 GMT
I'm not sure who wrote the job entry subsystem for vs1 ....

but I first saw HASP at mft OS/11 (houston spooler, out of SEs at nasa houston). it continued thru mft & mvt. Along the way it became an official product, the group moved to g'burg, and it was renamed JES2.

There was also "ASP", done by SEs out of lockheed ... which eventually was also picked up by the g'burg group, became a product and renamed JES3. My wife was in the g'burg group at the time and she got to be one of the ASP catchers ... starting out reading the ASP listings and generating a PLM.

random references
https://www.garlic.com/~lynn/93.html#2
https://www.garlic.com/~lynn/93.html#15
https://www.garlic.com/~lynn/94.html#2
https://www.garlic.com/~lynn/94.html#18
https://www.garlic.com/~lynn/98.html#35a
https://www.garlic.com/~lynn/99.html#33
https://www.garlic.com/~lynn/99.html#58
https://www.garlic.com/~lynn/99.html#77
https://www.garlic.com/~lynn/99.html#92
https://www.garlic.com/~lynn/99.html#93
https://www.garlic.com/~lynn/99.html#94
https://www.garlic.com/~lynn/99.html#111
https://www.garlic.com/~lynn/99.html#117
https://www.garlic.com/~lynn/99.html#212
https://www.garlic.com/~lynn/2000.html#13

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Mainframe operating systems

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe operating systems
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 28 Feb 2000 22:54:06 GMT
oops, somehow I was under the impression that the ASP involved Lockheed SEs. I believe Rick Haeckel also transferred out of the ASP group to STL.

Bob@CPUPERFORM.COM (Bob Halpern) writes:
The ASP group was the same IBM development team that made the Direct Couple into a commercial product. Direct Couple was an attached support processor. This was the IBM Los Angeles Scientific Center staff based in the Kirkeby Building in Westwood and at the UCLA Western Data Processing Center (WDPC). WDPC was also the UCLA home of ARPAnet, but by UCLA staff. The Direct Couple project also had UCLA staff (like me) on it. We got to program some unusual systems, including some IBM stuff that never saw the light of day. WDCOM was a communication system that serviced universities all over the western United States into the Direct Couple system. It supported some interactive products (e.g. 1050) and STR batch devices like 7701, 7702, 1974, etc. I programmed the communications for the 7740 front end, which was the precursor to the 2701, 2702, and 2703 on the 360.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Mainframe operating systems

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe operating systems
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 29 Feb 2000 17:08:05 GMT
KeHall@EXCHANGE.ML.COM (Hall, Ken , ECSS) writes:
CUNY ran ASP in 1973, and I'm almost certain the 'A' in ASP stood for "Asymmetric".

Something like "Asymmetric Spool Processor", or similar. I never understood why it was called that until I got here and found out how JES3 works, with the GLOBAL/LOCAL concept.

There WAS an "Attached Processor", but it was a hardware upgrade to the 3031/3033 and other boxes. An extra CPU that had no channels.


IBM attached processors showed up with 158s & 168s ... two processor shared-memory configurations with one of the processors not having any channels/IO.

303x were repackaged 370s with a channel director in place of channels. The channel director was a stripped down 158 processor with the 370 microcode replaced with the channel director microcode, and it supported six channels.

3031 was a 158 with a channel director (a 3031ap could be 3-5 158 engines: two 158 engines with 370 microcode, with only one of the "370s" connected to the channel director(s) ... each channel director aka a 158 running channel director code).

3032 was 168 with a channel director.

3033 was in some sense the 168 logic mapped to newer & faster technology.

a sixteen channel configuration had three channel directors.

then there was 3081 with a pair of processors sharing a channel subsystem.

random references:
https://www.garlic.com/~lynn/93.html#14
https://www.garlic.com/~lynn/95.html#3
https://www.garlic.com/~lynn/99.html#7

there are descriptions of tightly-coupled, symmetric multiprocessing (all the processors in a shared-memory configuration having the same capabilities) and tightly-coupled, asymmetric multiprocessing (not all the processors in a shared-memory configuration having the same capabilities). AP (attached processor) configurations would be an example of tightly-coupled asymmetric multiprocessing.

Then there is loosely-coupled multiprocessing (processors operating in concert w/o sharing memory) but sharing various kinds of I/O. These days these kinds of configurations are also talked about as "clusters".

However, there are also the genre of "shared-nothing" clusters ... machines operating in concert but only coordinated to the extent of communication and various forms of message passing.

random references:
https://www.garlic.com/~lynn/94.html#7
https://www.garlic.com/~lynn/96.html#15
https://www.garlic.com/~lynn/97.html#14
https://www.garlic.com/~lynn/99.html#71

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Mainframe operating systems

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe operating systems
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 29 Feb 2000 21:17:51 GMT
smetz@NSF.GOV (Metz, Seymour) writes:
SWAC? Now I don't feel so old; there's someone who worked on a machine that was before my time ;-)

My recollection is that it was the 1401 that killed interest in SPOOL on the 707x and 709x lines; the unit record equipment on the big iron was truly archaic, so the 1402 and 1403 looked like big steps up from the equipment supported by SPOOL.


like 1403N1 printer & 2540 card reader punch

my first programming job in school was porting something called MPIO from the 1401 to the 360/30. The university had a 709 with a 1401 front-end handling unit record. Tapes were moved back & forth between 709 tape drives and 1401 tape drives. The 30 could be used in 1401 hardware emulation mode ... but the university paid me to re-implement all the function running as a 360 program. It still had to handle 709 bcd & 709 binary. bcd was a subset of 360 ... but 709 binary cards used all twelve punch rows. Had to play games with read-no-feed & if it got an error reading the card ... reread it with "column binary" ... which mapped an 80 column card into 160 byte positions.
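
a minimal sketch of the column-binary mapping (my own illustration of the convention; exactly which rows land in which half-byte is an assumption here):

/* each of the 80 columns arrives as 12 punch-row bits; column binary
   delivers it as two consecutive bytes of 6 rows each ... 160 bytes
   per card instead of the usual 80 */
void column_binary(const unsigned short rows[80], unsigned char out[160])
{
    for (int col = 0; col < 80; col++) {
        out[2 * col]     = (rows[col] >> 6) & 0x3F; /* rows 12,11,0-3 */
        out[2 * col + 1] =  rows[col]       & 0x3F; /* rows 4-9       */
    }
}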

random references:
https://www.garlic.com/~lynn/93.html#15
https://www.garlic.com/~lynn/94.html#53
https://www.garlic.com/~lynn/95.html#4
https://www.garlic.com/~lynn/96.html#21
https://www.garlic.com/~lynn/97.html#21
https://www.garlic.com/~lynn/98.html#9
https://www.garlic.com/~lynn/99.html#13
https://www.garlic.com/~lynn/99.html#59
https://www.garlic.com/~lynn/99.html#93
https://www.garlic.com/~lynn/99.html#130

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Atomic operations ?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Atomic operations ?
Newsgroups: comp.programming.threads
Date: Wed, 01 Mar 2000 00:13:20 GMT
compare&swap and Compare Double & Swap

circa early '70s ... the work to come up with it was originally for shared memory multiprocessing ... but the challenge that was presented in order to get the instructions adopted was to come up with uses in uniprocessor configurations, i.e. the stuff in the programming notes at the following reference:
http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/7%2e5%2e23
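
in the spirit of those programming notes, a minimal sketch (using C11 atomics rather than 370 assembler) of the classic compare-and-swap idiom ... atomically updating a shared counter without disabling interrupts or taking a lock, which works for both multiprocessor and interrupt-driven uniprocessor multiprogramming:

#include <stdatomic.h>

void atomic_add(_Atomic long *counter, long delta)
{
    long old = atomic_load(counter);
    /* retry until no other task updated the counter between the load
       and the compare-and-swap; on failure, old is refreshed with the
       counter's current value */
    while (!atomic_compare_exchange_weak(counter, &old, old + delta))
        ;
}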

random references:
https://www.garlic.com/~lynn/99.html#89
https://www.garlic.com/~lynn/99.html#88
https://www.garlic.com/~lynn/99.html#176

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Ux's good points.

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ux's good points.
Newsgroups: alt.folklore.computers
Date: Wed, 01 Mar 2000 14:48:07 GMT
jmfbahciv writes:
Sigh! Just because it's called timesharing doesn't mean it's timesharing. And before you all jump all over me, I don't know about that product, but IBM never really did know how to do timesharing. My hypothesis is that their lore is steeped in the card processing days and can't approach computing other than via record kachunking thinking.

/BAH


much of cp/40, CP/67, CMS and early vm/370 was done in 545 tech. sq ... in the same bldg. that multics was being done in ... and by some of the same people who had done ctss. the company was a large and extremely varied organization with operations all over the world ... although a significant percentage of the company did participate in business data processing (& some of the interactive offerings platformed on batch processing infrastructure had significant shortcomings ... whether they were called timesharing or not).

long reference is melinda's paper that also covers some of common time-sharing heritage:
https://www.leeandmelindavarian.com/Melinda#VMHist

doing some unix work in the late '80s ... it occurred to me that most of the unix "schedulers" (that actually performed the timesharing function) contained what looked like a large bug I had fixed in the late '60s (i.e. a design that cp/67 had possibly inherited from ctss ... and that unix might have also inherited from the same place, possibly by way of multics ... but there it never got fixed).

misc ref:
https://www.garlic.com/~lynn/97.html#12

random other refs:
https://www.garlic.com/~lynn/99.html#76
https://www.garlic.com/~lynn/99.html#109
https://www.garlic.com/~lynn/2000.html#1
https://www.garlic.com/~lynn/2000.html#69
https://www.garlic.com/~lynn/2000.html#70
https://www.garlic.com/~lynn/93.html#0
https://www.garlic.com/~lynn/99.html#112

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Ux's good points.

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ux's good points.
Newsgroups: alt.folklore.computers
Date: Wed, 01 Mar 2000 16:25:58 GMT
writes:
Eh? When did that C change from 'conversational' to 'cambridge'? I'm sure it was 'conversational' some 25 years ago. I found it quite workable then already, even with EXEC. I didn't yet speak PL/I then, or even know about it; I was only ten years old and EXEC was all my father had managed to teach me about computers so far. He worked at IBM and school was quite close to his office. Sometimes I would visit him; he would let me play a bit with some mainframe terminal, an S/370 it must have been.

all during the CP/40 and CP/67 days (60s & early 70s), the "C" in CMS was "Cambridge" (work was done at the cambridge scientific center at 545 tech sq). For VM/370 they changed the "C" to "conversational".

see melinda's paper for more detail
https://www.leeandmelindavarian.com/Melinda#VMHist

note also that GML and the network also originated there:

random refs:
https://www.garlic.com/~lynn/2000.html#75
https://www.garlic.com/~lynn/99.html#197
https://www.garlic.com/~lynn/internet.htm

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Ux's good points.

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ux's good points.
Newsgroups: alt.folklore.computers
Date: Wed, 01 Mar 2000 19:14:23 GMT
hancock4@bbs.cpcn.com (Lisa nor Jeff) writes:
What is it about Unix based programming that makes it so superior to traditional mainframe (ie MVS) systems?

one of the good things about interactive oriented systems (which tend to have various kinds of timesharing flavor) is that when almost anything happens it interacts with the user.

One of the bad things about interactive oriented systems is that when almost anything happens it interacts with the user.

batch oriented systems have tended to have a paradigm that when anything happens ... facilities/options need to be available to handle things automagically (the person responsible for the work isn't likely to be immediately available). interactive oriented &/or interactive derived systems tend to be weak in the automagic handling of events.

automagic handling of things tends to be beneficial for infrastructures deploying 7x24 services ... among other things, 3rd shift on weekends tends not to have the strongest staffing support for covering events that might happen in real time. even some deployed web servers believe they require 7x24 operation (the lead time to get all the automagic in place is longer ... but tends to be offset by the reduction in long term operational costs).

a simple example is some of the mainframe error logging & reporting infrastructure. I got a call in the late '80s about a problem with a new mainframe product that had been out in customer shops for a year. They were concerned about the total number of a kind of I/O error (akin to a scsi bus error) that had occurred that year. There were something like 10-15 total of these errors (in a period of a year across all machines) and they had predicted that there should have been only 3-5 (per year in the whole machine population). Most infrastructures don't even strive for that sort of error rate ... to say nothing of having the infrastructure in place to be aware of the actual numbers (how many vendors know the total, actual number of SCSI bus errors ... or equivalent ... for all machines that they have built and shipped?).

minor reference:
https://www.garlic.com/~lynn/94.html#24

In part this isn't just worrying about things going right ... but assuming from the start that things will go wrong (in hardware, in software, and in operations)

minor reference:
https://www.garlic.com/~lynn/99.html#71

random refs:
https://www.garlic.com/~lynn/96.html#7 ... why do mainframes exist thread
https://www.garlic.com/~lynn/96.html#8
https://www.garlic.com/~lynn/96.html#27 .. mainframes & unix thread
https://www.garlic.com/~lynn/96.html#28
https://www.garlic.com/~lynn/96.html#29
https://www.garlic.com/~lynn/96.html#31
https://www.garlic.com/~lynn/96.html#32
https://www.garlic.com/~lynn/96.html#33
https://www.garlic.com/~lynn/96.html#34
https://www.garlic.com/~lynn/96.html#35
https://www.garlic.com/~lynn/96.html#36
https://www.garlic.com/~lynn/96.html#38
https://www.garlic.com/~lynn/96.html#39
https://www.garlic.com/~lynn/98.html#4 ... misc mainframe
https://www.garlic.com/~lynn/98.html#18
https://www.garlic.com/~lynn/98.html#51
https://www.garlic.com/~lynn/99.html#16 ... misc mainframe & interactive
https://www.garlic.com/~lynn/99.html#31
https://www.garlic.com/~lynn/99.html#47
https://www.garlic.com/~lynn/99.html#197
https://www.garlic.com/~lynn/2000.html#1
https://www.garlic.com/~lynn/2000.html#8

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Ux's good points.

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ux's good points.
Newsgroups: alt.folklore.computers
Date: Wed, 01 Mar 2000 19:22:22 GMT
oh yes, with regard to the 10-15 error events/counts (instead of 3-5) across all machines shipped, the referenced first year mainframe processor-specific part was valued in the low tens of billions (I wasn't told the actual number of machines, but possibly a 12-month revenue in the range of $20b-$40b).

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Difference between NCP and TCP/IP protocols

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Difference between NCP and TCP/IP protocols
Newsgroups: alt.folklore.computers,comp.protocols.tcp-ip,alt.culture.internet
Date: Thu, 02 Mar 2000 02:38:10 GMT
following is an excerpt from a collection of 1980 email looking at some arpanet issues
The best single paper I've seen on internetwork design issues is:

Sunshine, Carl A., "Interconnection of computer networks," Computer Networks 1, North-Holland Publishing Company, 1977, pp. 175-195.

It took some time for me to read and understand it, but I think it was worth the effort and I recommend it. At roughly the same time that paper was published, DARPA and their associates began work on a specific approach for coherent interconnection of the myriad nets surrounding ARPANET. One result of their work is a large volume of documentation recording the design options that were taken and the reasons for taking them. If you're interested I can send you some of the pertinent documentation (most of which I have in soft copy), but there's a lot of it.


the complete archive is at:
https://web.archive.org/web/20161224053106/http://www.edh.net/net_bakeoff.txt

a news article regarding the subject of the archive is at:
http://www.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm NOTE Above URL has gone 404

Full text of articles can now be found here (for fee):
http://www.siliconvalley.com/mld/siliconvalley/archives/

IBM'S MISSED OPPORTUNITY WITH THE INTERNET

Source: DAN GILLMOR column

IN 1980, some engineers at International Business Machines Corp. tried to sell their bosses on a forward-looking project: connecting a large internal IBM computer network to what later became the Internet. The plan was shot down, and IBM's leaders missed an early chance to grasp the revolutionary significance of this emerging medium. This is one of those who-knows-what-might-have-been stories, an intriguing little footnote in Internet history. It's a cautionary tale

Published on September 24, 1999, Page 1C, San Jose Mercury News (CA)


random ref:
https://www.garlic.com/~lynn/99.html#140

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Ux's good points.

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ux's good points.
Newsgroups: alt.folklore.computers
Date: Thu, 02 Mar 2000 17:26:40 GMT
Brian Inglis writes:
That's why IBM's been trying to kill it since 1967 -- makes their other "timesharing" systems look like JCL wrappers by comparison. If you've ever heard of fair share scheduling or clock paging algorithms, they were invented there, IIRC. I understand they have even replaced MVS I/O by VM I/O because the code path lengths were much shorter, and could support higher thruput with lower system and total CPU usage.

definitely it was shown that VS1+handshaking under VM could outperform VS1 running standalone on the same hardware for lots of workloads. And currently, almost all the machines ship with a kind of abbreviated VM embedded in the microcode ... and hardware setup can include specifying multiple logical machines/partitions (aka LPARs) on a single physical machine.

MVS wasn't just pathlength ... it was also an I/O paradigm shift from the very early 360 memory-constrained days and the use of CKD and multitrack searches. Between the time multitrack searches were implemented for numerous OS/360 disk operations and the early '80s, configurations had shifted from being memory constrained to being I/O constrained.

An early '80s prototype to shift the MVS ckd/multitrack-search i/o paradigm from the mid-60s memory-constrained design point to an I/O-constrained design point was turned down on the basis that it would cost $26m to implement and deploy (i.e. too expensive). However, having had the opportunity brought to their attention ... they proceeded to design and deploy a series of kludges that aggregated to a much higher cost than directly addressing the problem.

i.e. CKD multitrack search offloaded finding a specific record to disk hardware scanning (compared to a memory-cached lookup); however, by the late 70s ... disk accesses had become one of the dominant thruput bottlenecks. identifying the opportunity was somewhat inherent in some of the scheduling work ... which was not only fair share & dynamic adaptive scheduling ... but also scheduling to the bottleneck.
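
a minimal sketch of the trade-off (my own illustration; the structure and names are made up): with scarce memory, the mid-60s design let the device scan record keys (multitrack SEARCH), tying up channel and device for the duration; with cheap memory, caching the directory/index in storage turns the same lookup into a memory scan plus one direct read:

#include <string.h>

struct entry { char key[8]; unsigned cchhr; };  /* key -> disk address */

/* memory-cached lookup: scan the in-memory index, then issue a single
   seek+read at the returned address ... instead of keeping the device
   and channel busy searching track after track for a matching key */
const struct entry *lookup(const struct entry *dir, int n, const char *key)
{
    for (int i = 0; i < n; i++)
        if (memcmp(dir[i].key, key, 8) == 0)
            return &dir[i];
    return 0;
}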

random refs:
https://www.garlic.com/~lynn/95.html#10
https://www.garlic.com/~lynn/99.html#4
https://www.garlic.com/~lynn/93.html#31
https://www.garlic.com/~lynn/94.html#43
https://www.garlic.com/~lynn/97.html#16
https://www.garlic.com/~lynn/98.html#46
https://www.garlic.com/~lynn/99.html#6
https://www.garlic.com/~lynn/99.html#54
https://www.garlic.com/~lynn/99.html#74
https://www.garlic.com/~lynn/99.html#75

&
https://www.garlic.com/~lynn/98.html#4
https://www.garlic.com/~lynn/98.html#11
https://www.garlic.com/~lynn/98.html#18
https://www.garlic.com/~lynn/98.html#47

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Ux's good points.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ux's good points.
Newsgroups: alt.folklore.computers
Date: Thu, 02 Mar 2000 17:35:35 GMT
however, once the disk location was found ... os/360s were good at moving data ...

1) for all the batch i/o they could use direct i/o rather than move-mode i/o (i.e. unixes have tended to use move mode, where data is copied to/from kernel buffers as part of the operation, hiding device synchronization ... rather than using application memory directly and surfacing some of the synchronization semantics to the application, allowing a greater degree of application-specific optimization ... a rough contrast is sketched after this list) and

2) normal disk record allocation paradigm was contiguous (with some overflow ... initial contiguous allocation and possibly some number of additional "extents").
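
a rough POSIX-flavored sketch of the two styles (the file name and sizes are made up, error handling omitted): read() into a user buffer is move mode ... the kernel copies out of its own buffers; O_DIRECT asks for transfer straight into suitably aligned application memory:

#define _GNU_SOURCE             /* for O_DIRECT on linux */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* move mode: kernel reads into its buffer cache, then copies here */
    char movebuf[4096];
    int fd = open("datafile", O_RDONLY);
    read(fd, movebuf, sizeof movebuf);
    close(fd);

    /* direct i/o: device transfers into the application's (aligned)
       buffer, surfacing alignment & synchronization requirements to
       the application */
    void *dbuf;
    posix_memalign(&dbuf, 4096, 4096);
    fd = open("datafile", O_RDONLY | O_DIRECT);
    read(fd, dbuf, 4096);
    close(fd);
    free(dbuf);
    return 0;
}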

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

ASP (was: mainframe operating systems)

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ASP (was: mainframe operating systems)
Newsgroups: bit.listserv.ibm-main
Date: Fri, 03 Mar 2000 16:50:55 GMT
"Gerard S." writes:
| IBM    2065    360/65 |  .70 |
| IBMrpq 2067    360/67 |  .98 |
| IBMrpq 2067 mp 360/67 | 1.96 |

I don't understand the difference between the 65 & 67. All my numbers showed the 67 as being a 65 when operating in 65 mode. The 67 added an address translation box. The 65 (& 67 in 65-mode) operated at 750ns, no-cache, etc. When operating in 67-mode with address translation, an extra 150ns was added to each memory reference (to account for processing by the TLB and misc. other relocate hardware functions), i.e. 900ns.

Operating in relocation mode slowed the 67 down by 20% (compared to 65)

The 65 and "simplex" 67 would slow down with heavy i/o workload ... because of cpu & channel contention for the memory bus.

Duplex 67 was a somewhat different beast. Memory was tri-ported between the two 67 CPUs and the channel controller. A "half-duplex" 67 was slightly slower than simplex 65/67 because of the memory bus implementation; however a "half-duplex" 67 tended to have measurably higher thruput than a simplex 65/67 under heavy i/o load because of independent paths to memory.

In any case, the 65/67 tended to look more like a .5mip machine unless you were doing a large number of register-to-register operations (65/67 were noticeably slower than the 75).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Ux's good points.

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ux's good points.
Newsgroups: alt.folklore.computers
Date: Fri, 03 Mar 2000 18:30:19 GMT
MTS ... in a manner similar to CMS, made use of various OS/360 compilers, runtime environments and os/360 applications via OS/360 system services simulation (i.e. while the underlying kernel got replaced ... there were still millions of lines of os/360 code that could be leveraged in various forms, aka more than just the ibm hardware was used).

misc other refs (which have since gone 404):
http://www.itd.umich.edu/~doc/Digest/0596/index.html
https://web.archive.org/web/20030110040516/http://www.itd.umich.edu/~doc/Digest/0596/index.html
http://www.itd.umich.edu/~doc/Digest/0596/feat01.html
https://web.archive.org/web/20020806152744/http://www.itd.umich.edu/~doc/Digest/0596/feat01.html
http://www.itd.umich.edu/~doc/Digest/0596/feat02.html
https://web.archive.org/web/20021216002337/http://www.itd.umich.edu/~doc/Digest/0596/feat02.html
http://www.itd.umich.edu/~doc/Digest/0596/feat03.html
https://web.archive.org/web/20040409103335/http://www.itd.umich.edu/~doc/Digest/0596/feat03.html
pretty much the same pages now appear to be here:
http://www.clock.org/~jss/work/mts/index.html

in the above reference, "LLMPS" came from lincoln ... basically a small standalone multitasker that supported a variety of unit record and tape utilities (card->printer/punch, card->tape, tape<->tape, tape->printer/punch, etc). Somewhere in storage, i still have a LLMPS manual.

for more detail on early timesharing see melinda's paper at princeton:
https://www.leeandmelindavarian.com/Melinda#VMHist

misc. other refs (which are gone also):
http://www.apricot.com/~zed/mts.html
https://web.archive.org/web/20060107214613/http://www.apricot.com/~zed/mts.html
http://www.cc.ubc.ca/ccandc/sep96/machines.html
https://web.archive.org/web/20030105073904/http://www.cc.ubc.ca/ccandc/sep96/machines.html

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Ux's good points.

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ux's good points.
Newsgroups: alt.folklore.computers
Date: Fri, 03 Mar 2000 19:28:11 GMT
jeffj@panix.com (Jeff Jonas) writes:
I was never indoctrinated into the IBM mainframe mindset, but in 1996 I worked at Apertus Technologies which specialized in mainframe connectivity. I never got a good explanation as to why it was still faster for the mainframe to talk SNA to 3270 EMULATORS (xtnte, tn3270, ...) than to talk TCP/IP natively (either by token ring or ethernet).

in the late 80s i worked on the mainframe tcp/ip product. the base implementation could just about saturate a 3090 processor doing 44kbytes/sec thruput.

i crafted in rfc1044 support, and with some tuning at cray research it was able to sustain ibm hardware channel speeds (the limiting factor) between a 4341-clone and a cray ... only using a nominal amount of the 4341 processing (about 1mbyte/sec ... or 20+ times higher thruput using significantly less cpu).

I've heard all sorts of rumors regarding reasons behind tcp/ip thruput being slower than lu6.2 or sna support (i.e. tuned rfc1044 outperformed both sna and base tcp by huge factors).

I believe there was a study in the late 80s ... possibly at CMU ... looking at LU6.2 vis-a-vis TCP.

One of the comparisons was that lu6.2 typically involved 14 data copies between the application and the data leaving the machine, and around 180k instructions ... compared to a typical (unix) TCP implementation that required five data copies and possibly 40k instructions (for relatively modest block sizes the cache misses caused by the data copies could consume more machine cycles than the pathlength).
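
a back-of-envelope sketch of why the copies can dominate (the block size, cache line size, and miss cost are assumed numbers, purely illustrative; only the copy and instruction counts come from the comparison above):

#include <stdio.h>

int main(void)
{
    long block = 16384;  /* assumed block size, bytes            */
    long line  = 32;     /* assumed cache line size, bytes       */
    long miss  = 20;     /* assumed cycles per missed cache line */

    /* copy cost ~ lines touched * miss penalty, per data copy */
    long lu62 = 180000 + 14 * (block / line) * miss;
    long tcp  =  40000 +  5 * (block / line) * miss;

    printf("lu6.2: ~%ld cycles/block (copies ~%ld)\n", lu62, lu62 - 180000L);
    printf("tcp:   ~%ld cycles/block (copies ~%ld)\n", tcp,  tcp - 40000L);
    return 0;
}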

random references:
https://www.garlic.com/~lynn/99.html#36
https://www.garlic.com/~lynn/99.html#119
https://www.garlic.com/~lynn/96.html#23

there was TCP work going on to both eliminate data copies as well as reduce pathlength (possibly to 3 data copies and 5k instruction pathlength down from 5 data copies and 40k instruction pathlength).

random references:
https://www.garlic.com/~lynn/99.html#0
https://www.garlic.com/~lynn/99.html#164
https://www.garlic.com/~lynn/98.html#34
https://www.garlic.com/~lynn/93.html#32
https://www.garlic.com/~lynn/93.html#31
https://www.garlic.com/~lynn/94.html#00

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Ux's good points.

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ux's good points.
Newsgroups: alt.folklore.computers
Date: Fri, 03 Mar 2000 22:10:32 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
As I've noted in past years, TSS was a multi-million dollar write-off in the early 1970s. I've often wondered why, and did heads roll?

there has been a long-standing joke about heads rolling uphill.

there is also some analogy between network vendors in the 90s with regard to security and ibm back then with regard to interactive ...

There would also be some competition between incremental feature business cases: the mainline commercial market segment bringing in several tens of billions per year against the emerging interactive market segment bringing in possibly a couple hundred million per year (if both business cases required a $50million investment and both promised a 10% increase in per annum revenue ... which would you choose ... 10% of 50billion or 10% of 500million?).

One could even make the case during the 60s & early 70s ... that something like MTS was a significant IBM ROI ... it sold IBM hardware w/o requiring any incremental investment.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Ux's good points.

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ux's good points.
Newsgroups: alt.folklore.computers
Date: Fri, 03 Mar 2000 22:49:39 GMT
"George R. Gonzalez" writes:
There was some IBM technical paper that described its architecture. A breathtaking super-duper design job.

part of the problem back then was that system programming, design, etc tended to be very deterministic (rigid?) ... there was little or no sense of dynamic adaptation, fast paths (special-casing the most common sequence), etc.

a trivial example: at entry to "interactive" (i.e. type in a trivial edit command and hit return), TSS would gather up all the necessary pages from the 2311 and copy them to the 2301 (fixed head drum/disk), then dispatch the task ... which could bring in pages from the 2301. At exit from the interactive queue, tss would gather up all the task's pages and copy them off the 2301 back to a 2311.

there is also the folklore about when TSS was decommitted and reduced from thousands of people to tens of people ... the one person now responsible for the scheduler realized that every module in a kernel function call sequence (possibly tens of different modules for a simple function) would make a call to the scheduler. Changing that so that the scheduler was only called when absolutely necessary supposedly pulled a million instructions out of a kernel call pathlength.

On the other hand ... the IBM/ATT effort that ported unix to TSS as a TSS subsystem was a very successful system inside ATT (even tho TSS was decommitted from the standpoint of availability to general customers).

random references:
https://www.garlic.com/~lynn/94.html#46
https://www.garlic.com/~lynn/94.html#53
https://www.garlic.com/~lynn/95.html#1
https://www.garlic.com/~lynn/96.html#26
https://www.garlic.com/~lynn/98.html#11
https://www.garlic.com/~lynn/98.html#12
https://www.garlic.com/~lynn/98.html#17
https://www.garlic.com/~lynn/99.html#142

one of the '70s TSS performance benchmarks showed a two processor duplex 67 system with 2mbytes of memory having something like three times the thruput of a single processor, 1mbyte 67 system. my observation was that TSS must be memory limited in the single processor case (which was contested by some of the TSS people). The two processors should nominally show slightly less than 2 times the thruput of the single processor case (because of things like multiprocessor locking overhead).

However, say the TSS kernel has a 600k fixed memory requirement; on a single processor, 1mbyte system, that would leave about 400kbytes for applications, and on a dual processor, 2mbyte system, that would leave 1400kbytes for applications (i.e. 3.5 times the application memory ... consistent with three times the thruput if the single processor case was memory constrained).

Now, we did have a VM case with a two processor 158AP that showed greater than 2 times the single processor thruput. The 370/158 two processor configuration slowed the cycle down by 10% to allow for cross-cache chatter (invalidation signals) ... so a two processor configuration should have at best 1.8 times the thruput of a one processor system (not taking into account the kernel overhead of multiprocessing software locking protocols). The 158 & 168 "AP" machines were a special asymmetric configuration with only one of the processors having I/O capability.

Typical interactive VM workload was mixed-mode with 100% cpu utilization and very high I/O rates (both paging and file accesses). High I/O interrupt rates tended to have a detrimental effect on 370 cache hit rates. We had a hardware monitor on a VM 158AP configuration where the processor with the I/O capability showed a 15% MIP rate degradation (compared to a single processor environment) and the processor w/o I/O capability showed a 50% MIP rate increase (compared to a single processor environment) ... i.e. .85 + 1.5 = 2.35 times the single processor MIP rate. The MIP rate degradation was in part the multiprocessor-mode slowdown for cross-cache chatter. The MIP rate increase was because of a significantly improved cache hit ratio.

Being able to intelligently manage task switching & asynchronous interrupts on a cache machine can result in significant improvement in the cache hit rate, the processor effective MIP rate and the thruput.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Predictions and reality: the I/O Bottleneck

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Predictions and reality: the I/O Bottleneck
Newsgroups: comp.arch
Date: Fri, 03 Mar 2000 23:20:53 GMT
Bryan O'Sullivan writes:
> Another funny (in retrospect) prediction the paper makes is that
> log-structured file systems on disk arrays will provide a 1000-fold
> I/O performance improvement (a hundred disks, each providing ten
> times the bandwidth due to the log-structured organization).

It would be interesting to compare and contrast the claims made in early-stage academic CS papers with those made in some other fields of endeavour, such as medical research or some of the social sciences.


random refs to 93 "log structure filesystems -- think twice" thread in comp.arch.storage
https://www.garlic.com/~lynn/93.html#28
https://www.garlic.com/~lynn/93.html#29

random I/O bottleneck thread:
https://www.garlic.com/~lynn/93.html#31

part of the problem with some of the predictions could have been lack of data/experience with real live production environments.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Those who do not learn from history...

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Those who do not learn from history...
Newsgroups: alt.folklore.computers
Date: Sun, 05 Mar 2000 19:45:38 GMT
something I found from somebody's posting to a 1984 newsgroup ... previously posted here last year
https://www.garlic.com/~lynn/99.html#24

Date: 7 December 1984, 14:35:02 CST

1.In 1969, Continental Airlines was the first (insisted on being the first) customer to install PARS. Rushed things a bit, or so I hear. On February 29, 1972, ALL of the PARS systems canceled certain reservations automatically, but unintentionally. There were (and still are) creatures called "coverage programmers" who deal with such situations.

2.A bit of "cute" code I saw once operated on a year by loading a byte of packed data into a register (using INSERT CHAR), then used LA R,1(R) to bump the year. Got into a bit of trouble when the year 196A followed 1969. I guess the problem is not everyone is aware of the odd math in calendars. People even set up new religions when they discover new calendars (sometimes).

3.We have an interesting calendar problem in Houston. The Shuttle Orbiter carries a box called an MTU (Master Timing Unit). The MTU gives yyyyddd for the date. That's ok, but it runs out to ddd=400 before it rolls over. Mainly to keep the ongoing orbit calculations smooth. Our simulator (hardware part) handles a date out to ddd=999. Our simulator (software part) handles a date out to ddd=399. What we need to do, I guess, is not ever have any 5-week long missions that start on New Year's Eve. I wrote a requirements change once to try to straighten this out, but chickened out when I started getting odd looks and snickers (and enormous cost estimates).


... snip ... top of post, old email index

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Those who do not learn from history...

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Those who do not learn from history...
Newsgroups: alt.folklore.computers
Date: Sun, 05 Mar 2000 19:49:02 GMT
as an aside ... the 1984 newsgroup was called century ... for discussing the possible issues that were coming up with the year 2000.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

