List of Archived Posts

2000 Newsgroup Postings (10/14 - 11/24)

Why trust root CAs ?
Why trust root CAs ?
Why trust root CAs ?
Why trust root CAs ?
Why trust root CAs ?
IBM Somers NY facility?
History of ASCII (was Re: Why Not! Why not???)
Why trust root CAs ?
Why trust root CAs ?
Optimal replacement Algorithm
Optimal replacement Algorithm
Amdahl Exits Mainframe Market
Amdahl Exits Mainframe Market
Airspeed Semantics, was: not quite an sr-71, was: Re: jet in IBM ad?
Why trust root CAs ?
Why trust root CAs ?
[OT] FS - IBM Future System
[OT] FS - IBM Future System
OT?
OT?
Competitors to SABRE?
OT?
Why trust root CAs ?
Why trust root CAs ?
Why trust root CAs ?
Why trust root CAs ?
OT?
OT?
OT?
OT?
OT?
OT?
Optimal replacement Algorithm
Optimal replacement Algorithm
Optimal replacement Algorithm
Why IBM use 31 bit addressing not 32 bit?
Optimal replacement Algorithm
OT?
Ethernet efficiency (was Re: Ms employees begging for food)
Ethernet efficiency (was Re: Ms employees begging for food)
Famous Machines and Software that didn't
Reason Japanese cars are assembled in the US (was Re: American bigotry)
IBM 3340 help
Reason Japanese cars are assembled in the US (was Re: American bigotry)
Al Gore and the Internet (Part 2 of 2)
Al Gore and the Internet (Part 2 of 2)
Al Gore and the Internet (Part 2 of 2)
Al Gore and the Internet (Part 2 of 2)
Famous Machines and Software that didn't
Al Gore and the Internet (Part 2 of 2)
Al Gore and the Internet (Part 2 of 2)
Al Gore and the Internet (Part 2 of 2)
TSS ancient history, was X86 ultimate CISC? designs)
360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive designs)
TSS ancient history, was X86 ultimate CISC? designs)
X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive designs)
360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
TSS ancient history, was X86 ultimate CISC? designs)
Cryptogram Newsletter is off the wall?
Cryptogram Newsletter is off the wall?
360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
Building so big it generates own weather?
TSS ancient history, was X86 ultimate CISC? designs)
TSS ancient history, was X86 ultimate CISC? designs)
TSS ancient history, was X86 ultimate CISC? designs)
HASP vs. "Straight OS," not vs. ASP
SET; was Re: Why trust root CAs ?
Cryptogram Newsletter is off the wall?
Metric System (was: case sensitivity in file names)
Florida is in a 30 year flashback!
8086 Segmentation (was 360 Architecture, Multics, ...)
Reading wireless (vicinity) smart cards
TSS ancient history, was X86 ultimate CISC? designs)
Cryptogram Newsletter is off the wall?

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Sat, 14 Oct 2000 14:59:59 GMT
David Schwartz writes:
I think you'll find that if you do this, you recreate the PKI. First, you'll want a central repository of whose key is whose. Second, you'll want one place to go to revoke a key should it be compromised. And so on.

you just don't need a certificate based PKI ... you go with an online, real-time PKI.

The paradigm for the certificate model PKI was the offline email case between parties that had no prior relationship and/or knowledge of each other (i.e. connect to the network, download email, disconnect, read email ... and perform various actions as specified by the email, even tho you had no prior knowledge of &/or relationship with the person sending the email).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Sat, 14 Oct 2000 15:04:01 GMT
pbrdhd.nojunk writes:
On Sat, 14 Oct 2000 19:43:28 +1000, "Lyalc" <lyalc@ozemail.com.au> wrote:
>If you replace Public Key with Password, this models works just as well, and
>works today, at zero incremental cost.

Scheme outlined has advantages over passwords which may justify the incremental costs. EG: - a password is inherently less secure since it relies on keeping the password secret, and yet password is known to all entities/devices for which you use that password. A public key can be put on a bill board without lessening security. - using a public key approach allows enables encryption of data unique to the user, increasing security. - the use of a device to handle the registration and authentication simplifies the process from the point of view of the end user and obviates the need to handle, remember and keep secure multiple passwords.


replacing a password registered in an account record with a public key, the public key part of the protocol is the same whether the private key is stored in a password-protected software file, a hardware token w/o any activation, a hardware token with PIN activation, a hardware token with biometric activation, or a hardware token with both PIN & biometric activation.

the public key part of the protocol is the same whether the consumer registers the same public key with one location or the same public key with 15 locations. In the case of multiple public key registration, one might be the bank and another might be the consumer's ISP. Using a common public key at both an ISP and the bank ... doesn't have the downside of somebody at the ISP doing fraudulent transactions at the bank.

deploying a common public key protocol would give the consumer a great deal of freedom and choice as to the level of security and integrity that they would like to use (software files, different tokens at different integrity levels, activation, etc)

The PIN/biometric activation ... as opposed to authorization ... is an issue. In the case of flowing an authorization PIN/password ... which might get compromised, it is relatively easy to get a new PIN/password. Biometric authorization in an open environment is much harder to deal with (effectively biometric authorization is a complex PIN/password that the person doesn't have to remember). In the case of biometric authorization compromise, it is much harder to issue a new body part. It is also harder to make sure that a unique body part is used for each entity that the consumer wishes to authenticate with.
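
(As a rough illustration of the point that the public key part of the protocol is independent of how the private key is protected, here is a minimal sketch in Python, assuming the third-party cryptography package is available; the account fields and message are made up for illustration.)

# Minimal sketch: a public key registered in an account record in place of a
# shared-secret password. The verifier's side is identical regardless of
# whether the private key lives in an encrypted file or a hardware token.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# -- registration (done once, in place of registering a password) --
private_key = Ed25519PrivateKey.generate()       # stays with the consumer
account_record = {
    "account": "12345678",                       # illustrative value
    "public_key": private_key.public_key(),      # recorded by the bank/ISP/etc.
}

# -- authentication (per transaction/session) --
message = b"transfer $100 to account 87654321"
signature = private_key.sign(message)            # done wherever the key is kept

def authenticate(record, message, signature):
    """The verifier only ever handles the public key -- nothing secret is shared."""
    try:
        record["public_key"].verify(signature, message)
        return True
    except InvalidSignature:
        return False

print(authenticate(account_record, message, signature))   # True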

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Sat, 14 Oct 2000 19:32:00 GMT
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
"Verification" or authentication is not a boolean that you either have or do not have. Like all other security related things, authentication is a continuous variable. You can have a little or a lot, although it is hard to have absolutely none and impossible to have absolute confidence.

and a wide variety of different business processes could select different points on that landscape, involving different levels of integrity as well as behavior on the part of individuals.

even within the same business process there could be coverage of a very large portion of the landscape.

banks tend to have more confidence & experience with individuals that they give $10,000 credit card limits to compared to individuals they start out with $300 limits ... which can involve a broad range of different factors and the weight/importance given those factors.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Sun, 15 Oct 2000 00:15:32 GMT
David Schwartz writes:
There is no distinction in this case between a third and a second party. In any arrangement, who does what can be changed. If banks want to be CAs, they can be. It is, however, logical for banks to specialize in banking. On the other hand, it's not necessarily logical for a CA to specialize in certifying _financial_ transactions.

the bank doesn't really care if somebody certifies that you are you, the bank really does care if the person they are dealing with is the person that has rights to the account ... they surely can't use a 3rd party to tell them which person has what rights to which account.

given that a bank can be sure that you are the person that has rights to a particular account and that you own a private key (by self-signing the corresponding public key) ... then the bank can record the associated public key for that account.

the bank recording a public key may or may not involve the bank returning a copy of a manufactured certificate. if there is a case where they would return a copy of a manufactured certificate, the original of that certificate is stored in the bank account record.

if the bank finds that the consumer is always returning a copy of the manufactured certificate attached to a transaction sent to the bank, then the bank can advise the consumer to do a little bit of knowledge compression ... i.e. every field that exists in the copy of the manufactured certificate that is also in the original of the manufactured certificate stored at the bank ... can be compressed out of the copy of the manufactured certificate. Such compression can lead to the efficiency of attaching zero-byte certificates to every transaction.

if the bank finds that the consumer is only attaching the copy of the bank's manufactured certificate to transactions sent to the bank, the bank can save the consumer the extra effort by precompressing the copy of the manufactured certificate returned to the consumer (during registration) ... i.e. by providing the consumer a zero-byte certificate that has been pre-compressed.
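
(a minimal sketch of that knowledge compression in Python; the certificate fields are invented for illustration -- since the bank's original contains every field of the copy, everything compresses out:)

# Sketch of compressing a transaction-appended certificate copy against the
# original already stored in the bank's account record. Field names invented
# for illustration.
original_cert = {              # stored in the account record at registration
    "account": "12345678",
    "public_key": "ab01cd23...",
    "issuer": "example bank",
}

cert_copy = dict(original_cert)   # what the consumer would append to a transaction

def compress(copy, original):
    """Drop every field the recipient already has in its original."""
    return {k: v for k, v in copy.items() if original.get(k) != v}

print(compress(cert_copy, original_cert))   # {} -- a zero-byte certificate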

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Sun, 15 Oct 2000 16:24:56 GMT
"Lyalc" writes:
Yet private keys, that public keys depend upon are protected by the same password you claim are insecure. Weakest link rules apply here!

there is some difference between shared-secret pin/passwords ... and locally owned pin/passwords.

Protecting my own property with pin/passwords .... for access to a device I own myself .... is different than using shared-secret pin/passwords that have to be registered with one or more entities (and also different from dynamically generated per session encryption keys).

The distinction between the different types of pin/passwords (shared-secret versus private ownership) is possibly more clearly illustrated in the case of a biometric shared-secret ... i.e. the electronic representation of the biometric body part is essentially a much more complex version of a pin/password that doesn't have to be remembered. In this kind of use of a shared-secret pin/password in an open network ... the compromise of a traditional pin/password allows for changing the pin/password (although a possibly expensive & time-consuming operation).

The use of a shared-secret biometric body part in an open network exacerbates the problem of shared-secret pin/password compromise because of the difficulty of changing body parts. The other traditional guideline associated with shared-secret use is having a unique value for each entity that you authenticate with ... implying the use of a unique body part ... and hopefully you have fewer authentication partners than you have body parts supported for biometric authentication.

By comparison, owning a personal hardware token that is biometrically activated has different failure and exploit modes. There is no shared-secret biometric value being registered at multiple locations and transmitted over networks on every authentication operation. There are no cases of eavesdropping on an electronic biometric code and being able to reproduce it for fraudulent transactions at random places in the world.

In this scenario, both the (public/private key) hardware token and the biometric value have to be stolen. The consumer is able to obtain a replacement hardware token and register its public key as needed ... not encountering the difficulty associated with having body parts replaced (associated with compromise of shared-secret biometric paradigms).

random refs:
https://www.garlic.com/~lynn/aadsmore.htm#bioinfo1
https://www.garlic.com/~lynn/aadsmore.htm#bioinfo2
https://www.garlic.com/~lynn/aadsmore.htm#bioinfo3
https://www.garlic.com/~lynn/aadsmore.htm#biosigs
https://www.garlic.com/~lynn/aadsmore.htm#biosigs2
https://www.garlic.com/~lynn/99.html#157
https://www.garlic.com/~lynn/99.html#160
https://www.garlic.com/~lynn/99.html#165
https://www.garlic.com/~lynn/99.html#166
https://www.garlic.com/~lynn/99.html#167
https://www.garlic.com/~lynn/2000.html#57

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

IBM Somers NY facility?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Somers NY facility?
Newsgroups: comp.lang.asm370
Date: Tue, 17 Oct 2000 15:26:23 GMT
lwinson@bbs.cpcn.com (lwin) writes:
Would anyone know what kinds of offices IBM has at its Somers NY facility? Anything open to the public, such as demonstration centers or any kind of equipment showroom? (I know the archives are there.)

coming up the turnpike from the south ... it looks like a "pyramid power" building ... lobbies of the different sections have large glass pyramids rising above the rest of the structure. in theory one could stand in one of the lobbies directly under the peak of the pyramid and collect energies.

last time we were there it was hdqtrs for different "groups" (i.e. various divisions had their hdqtrs at various locations around the world, divisions were collected into groups ... and somers had executives & staffs at the group level).

the other interesting building was "purchase" ... which was in the process of being built for a food company. lots of marble and big spaces.

for whatever reason it was picked up on a really good deal. however, i think that two years ago mastercard picked it up on another really good deal for their hdqtrs (they consolidated a number of different locations in Manhattan).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

History of ASCII (was Re: Why Not! Why not???)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of ASCII (was Re: Why Not! Why not???)
Newsgroups: comp.lang.java.programmer,alt.folklore.computers
Date: Wed, 18 Oct 2000 20:58:21 GMT
Tom Almy writes:
Obviously, my memory failed here! I did start out with IBM equipment, and you are right in that the "correspondence" keyboard was on the IBM 2741 terminal I used, which differed from the keyboards on the keypunches. And even there were different layouts.

2741s came in standard and correspondence versions. CP/67 determined which translate table to use by sending the login message using both standard & non-standard bit patterns.

The user was then expected to type login followed by their userid. If the first character translated to "l" (aka login) using the standard translate table ... it was assumed to be a standard 2741. Otherwise it would switch & try the correspondence translate table ... looking for an "l" as the first character.

from some old translate table index:


00           2741
04           correspondence 2741
08           apl 2741
0c           correspondence apl
10           tty
-
18           apl tty

... i.e. the 08 bit was used both as part of the translate table indexing and as a flag indicating that the APL translate command had been issued.
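
(a rough sketch of that recognition logic in Python; the byte values standing in for the 2741 line codes are placeholders, not the real code points:)

# Sketch of the terminal-type recognition: translate the first received
# character with the standard 2741 table; if it comes out as "l" (for
# "login"), assume standard, otherwise retry with the correspondence table.
# The byte values below are placeholders, NOT the real 2741 line codes.
STANDARD_2741 = {0x26: "l"}        # hypothetical line code -> character
CORRESPONDENCE_2741 = {0x15: "l"}  # same character, different line code

def recognize_terminal(first_byte):
    if STANDARD_2741.get(first_byte) == "l":
        return "standard 2741"
    if CORRESPONDENCE_2741.get(first_byte) == "l":
        return "correspondence 2741"
    return "unknown"

print(recognize_terminal(0x26))   # standard 2741
print(recognize_terminal(0x15))   # correspondence 2741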

when i originally added tty/ascii support to cp/67 in 68 ... i had to play around with some of the non-standard ascii key assignments.

I also tried to extend the dynamic terminal type recognition so that TTY, 2741, and 1050 terminals could dial into the same rotary bank. The ibm 2702 line controller had a SAD command that controlled which line scanner was associated with which address.

On initial connect, I would do something like specify the SAD for the tty line scanner and then send a "who are you". If that timed out, I would specify the SAD for the 2741 line scanner and try some things.

It actually worked ... except eventually I talked to an ibm hardware engineer who said that it was out of spec. While ibm supported changing the line scanner with the SAD command, they took a short cut in the implementation and hard-wired the oscillator to each line ... i.e. "tty" lines would have an oscillator hard-wired for 110 baud and "2741" lines would have an oscillator hard-wired for 134.? baud.

Learning that eventually led to a 4-person project that built a plug-compatible controller out of an Interdata3 (and was credited with originating the ibm plug-compatible control unit business). One of the things that was supported was a (relatively) high-frequency strobe of the incoming initial bits to dynamically determine the baud rate.
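
(a toy sketch of the baud-rate determination idea in Python; the strobe frequency and sample counts are invented for illustration, and 134.5 baud is taken as the nominal 2741 rate:)

# Toy illustration of determining line speed by sampling ("strobing") the
# incoming start bit much faster than either candidate baud rate.
SAMPLE_HZ = 9600          # strobe frequency, far above 110 or 134.5 baud

def guess_baud(start_bit_samples):
    """start_bit_samples: consecutive strobe samples spent in the start bit.
    Bit time = samples / SAMPLE_HZ; baud = 1 / bit time."""
    bit_time = start_bit_samples / SAMPLE_HZ
    measured_baud = 1.0 / bit_time
    # pick whichever supported rate is closest
    return min((110, 134.5), key=lambda b: abs(b - measured_baud))

print(guess_baud(87))    # -> 110    (9600/110   ~ 87 samples per bit)
print(guess_baud(71))    # -> 134.5  (9600/134.5 ~ 71 samples per bit)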

random refs:
https://www.garlic.com/~lynn/93.html#2
https://www.garlic.com/~lynn/94.html#2
https://www.garlic.com/~lynn/96.html#9
https://www.garlic.com/~lynn/96.html#12
https://www.garlic.com/~lynn/96.html#30
https://www.garlic.com/~lynn/96.html#37
https://www.garlic.com/~lynn/96.html#39
https://www.garlic.com/~lynn/99.html#44
https://www.garlic.com/~lynn/99.html#76

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Wed, 18 Oct 2000 21:02:39 GMT
pbrdhd.nojunk writes:
Agreed, but I wasn't proposing that biometric be used except on the local device which holds the private key. Ideally this would be a tamper proof print recognition smartcard, so people would have the convenience of just thumbing their card while it is slotted to authorise/authenticate a transaction, but the print recognition data would be not be held or used anywhere off the card. This way the critical authentication info in the wider system is still the key pair for which the private key is held on the card, not the bio-recognition data which is used purely locally to authorise the use of the private key.

if we aren't careful ... we'll find ourselves in violent agreement.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Wed, 18 Oct 2000 23:49:41 GMT
Greggy writes:
If you were a bank and you had these choices before you and your marketing people were telling you that you can either:

A) use this totally insecure CA strategy and provide your customer base with a very simple to use web site that would save you money in teller payrolls
or
B) use a real strategy for security that makes your customers work harder, which in turn would drive your customers away from your new web sites (and you lose the payroll savings as well)

which would you choose?


there have been other situations making such claims regarding things like CAs & certificates in financial infrastructures.

the frequent fallacy is ignoring the fact that customer support is required, and that requires things like answering questions about pieces and components (i.e. a 1-800 number in case something doesn't work ... who does the customer call).

in order for the call center to effectively answer calls ... they need access to the related components ... which leads a financial institution to register the components in databases accessible by the call centers.

the per-screen lay-out costs at the call center and the registration process in support of the call center frequently dominate all the costs associated with any such activity (regardless of the implementation).

of course the above only applies to real, live roll-outs and can be bypassed/ignored in the case of toy pilots, in which case other trade-off decisions can be made regarding the degree of investment in toy pilots (i.e. like punting on the issue of providing customer support).

One of the issues for toy pilots can be the assumption that early adopter participation in toy pilots will self-select ... i.e. everything goes right and the individual participates, or it doesn't go right and they don't participate.

getting out a technology "with-it" press release on a toy pilot, it is possible to cut all sorts of corners, possibly leaving worrying about the real implementation until later, after testing the water.

Furthermore, CA/certificates as being easy for customers and non-CA/certificates as being hard for customers is not a proven generalization (improved integrity doesn't have to be unnecessarily intrusive).

Skirting the requirement of expense for full customer support in association with toy pilots is probably a much better understood generalization.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Optimal replacement Algorithm

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Optimal replacement Algorithm
Newsgroups: comp.arch
Date: Thu, 19 Oct 2000 15:25:32 GMT
Alexis Cousein writes:
agelman1@san.rr.com wrote:

On Sat, 07 Oct 2000 19:13:31 GMT, "Kevin Lai" <kev1997@ms15.hinet.net>
>What is the "Optimal Replacement Algorithm", in a few words?
>Thanks.

Just what it says -- optimal. It has one "slight" drawback in that it isn't causal (i.e. requires you to know not only information about accesses in the past, but also accesses that are going to happen in the future).


I think about the time belady published the paper on optimal at IBM YKT ... IBM CSC was doing simulation work on LRU, random, FIFO, 1-4 bit clock, etc. I believe there was also an ACM paper from project mac on 1-4 reference bits and an ACM paper out of Brown on a timer counter in place of bits (again all about the same period).

The IBM CSC simulation work had found a variation on 2-bit clock that outperformed true LRU (in simulations across a wide variety of live trace data).

Basically the characteristic is that true LRU nominally outperforms FIFO ... except in situations where LRU degrades to FIFO ... and for those situations RANDOM tends to be better ... the issue then becomes recognizing when to switch back and forth between LRU and RANDOM.
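
(for reference, a minimal sketch of a 1-bit clock -- the simplest of the reference-bit LRU approximations being compared in those simulations -- in Python; this is not the CSC 2-bit variation, and the LRU/RANDOM switching isn't shown:)

# Minimal 1-bit clock replacement (an LRU approximation): a "hand" sweeps the
# frames; a set reference bit buys the page one more pass, a clear bit means
# the page gets replaced.
class ClockReplacer:
    def __init__(self, nframes):
        self.frames = [None] * nframes    # page resident in each frame
        self.refbit = [0] * nframes
        self.hand = 0

    def choose_victim(self):
        """Sweep until a frame with a clear reference bit is found."""
        while True:
            if self.frames[self.hand] is None or self.refbit[self.hand] == 0:
                victim = self.hand
                self.hand = (self.hand + 1) % len(self.frames)
                return victim
            self.refbit[self.hand] = 0        # give it a second chance
            self.hand = (self.hand + 1) % len(self.frames)

    def access(self, page):
        if page in self.frames:
            self.refbit[self.frames.index(page)] = 1   # hit: mark referenced
            return False
        slot = self.choose_victim()                    # miss: replace a page
        self.frames[slot] = page
        self.refbit[slot] = 1
        return True

r = ClockReplacer(3)
print(sum(r.access(p) for p in [1, 2, 3, 1, 4, 1, 2]))   # number of misses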

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Optimal replacement Algorithm

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Optimal replacement Algorithm
Newsgroups: comp.arch
Date: Thu, 19 Oct 2000 20:52:07 GMT
toor@y1.jdyson.net (John S. Dyson) writes:
Take a look at my FreeBSD VM page replacement algorithm... It works surpisingly well, and converges very rapidly to a reasonable working set under load. It is deceptively simple, and effectively adapts to the all of the usage and transfer rates. It isn't any of the degenerate clock-type, LRU, random type algorithms, but is a learning hybrid.

part of LRU & the LRU approximations is the assumption that past behavior predicts future behavior ... sometimes that is true and sometimes it isn't (as well as for working set members & working set size). the adaptive transition (work circa '70-'71) between LRU and non-LRU attempted to correspond with different execution characteristics & behavior.

other adaptive characteristics were relative miss latency, relative miss bandwidth and relative cache/storage size.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Amdahl Exits Mainframe Market

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Amdahl Exits Mainframe Market
Newsgroups: bit.listserv.ibm-main
Date: Fri, 20 Oct 2000 22:46:45 GMT
scotth@aspg.com (Scott Harder) writes:
The "dumbing down" has been going on a long time in data centers around the country for a number of years. I saw system automation take what used to be very smart Operations people and turn them into nothing more than babbling idiots - only able to see something that looked abnormal and place a phone call to Technical Support. Actually, Ops. Mgmt where I worked loved it because they no longer had to take any blame.

It took a while to fully dumb down all areas of Operations (I know what you're thinking and you should be ashamed!) and now the sights are set on the majority of Technical Services, leaving one or two gurus in the shop.

It was bound to happen, but me thinks the pains will be a little more intense in getting Technical Services "dumbed down". Then who will the finger be pointed at when something goes wrong?????

My $0.02,


I know of two large gov. labs that had large, heavily invested MVS shops ... and both migrated to Unix because they were unable to obtain MVS technical expertise. One of the labs retired the MVS complex on approx. the same day that their last senior MVS technical support person retired, after nearly 30 years in the position. The other location had several MVS technical openings advertised for over 12 months; existing MVS technical personnel were being lured away to better-paying commercial jobs, the number of MVS openings was growing, and they had been unable to backfill any of them for over a year.

From their standpoint, dumbing down MVS support would be a response to being unable to fill the positions.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Amdahl Exits Mainframe Market

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Amdahl Exits Mainframe Market
Newsgroups: bit.listserv.ibm-main
Date: Fri, 20 Oct 2000 22:53:29 GMT
on the other hand ... part of a large financial infrastructure credited 100% up time (24/7), solid for over six years (no outages), to

1) ims hotstandby
2) automated operator

as various hardware & software failures were eliminated, operator mistakes became one of the largest remaining failure causes.
https://www.garlic.com/~lynn/99.html#71

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Airspeed Semantics, was: not quite an sr-71, was: Re: jet in IBM ad?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Airspeed Semantics, was: not quite an sr-71, was: Re: jet in IBM ad?
Newsgroups: alt.folklore.computers
Date: Sat, 21 Oct 2000 15:30:28 GMT
erg@panix.com (Edward Green) writes:
ObComputerFolklore: Besides design, and I imagine the SR-71 _was_ designed with the aid of supercomputers at _least_ as powerful as a Pentium-II desktop, we may discriminate based on onboard computers, and to what extent the pilot is a pilot vs. a spectator. I think in the case of the X-15 whose irrelevantly stubby wings ultimately prompted your comment, we had a true "piloted rocket" which required the full heroic skill of Chuck Yeager and his pals to operate.

i remember boyd saying that they used supercomputer time for designing the F16 ... using "boyd's law" ... basically stuff about performance envelopes and analysis of trade-offs in the early '70s (but that was even before the cray-1, which wasn't shipped until '76)
http://www.arnold.af.mil/aedc/highmach/stories/f16.htm
https://web.archive.org/web/20010222212740/http://www.arnold.af.mil/aedc/highmach/stories/f16.htm
http://www.defense-and-society.org/FCS_Folder/boyd_thesis.htm
https://web.archive.org/web/20010722090426/http://www.defense-and-society.org/FCS_Folder/boyd_thesis.htm
https://www.garlic.com/~lynn/99.html#120

SR-71 was early 60s ... so even 6600 would not have been available
http://209.1.224.11/CapeCanaveral/Lab/3993/SR71.html
https://web.archive.org/web/20010222210304/http://209.1.224.11/CapeCanaveral/Lab/3993/SR71.html
http://www.airspacemag.com/asm/mag/supp/fm99/oxcart.html
https://web.archive.org/web/20010204132600/http://www.airspacemag.com/asm/mag/supp/fm99/oxcart.html

6600 avail. 9/64
https://web.archive.org/web/20010218005108/http://www.isham-research.freeserve.co.uk/chrono.txt
http://www.cray.com/company/history.html
https://web.archive.org/web/20010203153300/http://www.cray.com/company/history.html
http://ei.cs.vt.edu/~history/Parallel.html

stretch & 7090 would have been available in the sr-71 timeframe, and there were few or no plane-design-related applications available at that time.

in the f-16 timeframe there were the 7600, 91/95/195, & misc. others

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Sun, 22 Oct 2000 21:53:21 GMT
"David Thompson" writes:
Yes, if the merchant is abusive, or hacked, you have a problem in any account-based scheme -- only anonymous "cash" solves that. Or throwaway accounts kept confidential by the issuer; or whose issuance is itself anonymous (but at present only banks can do this and AFAICT they aren't willing to do so). Or non-technical means like data-subject protection laws that are enacted and enforced, but that's getting offtopic.

or authenticated account transactions as in X9.59 (financial industry draft standard for all retail electronic account-based transactions).

x9 (financial industry standards org) overview at: http://www.x9.org

the nominal requirement given the x9a10 working group for the x9.59 work item was to preserve the integrity of the financial infrastructure for all electronic retail payment (account-based) transactions with just a digital signature.

in effect only authenticated transactions are supported for these account (numbers) ... i.e. non-authenticated transactions against x9.59 account numbers may not be authorized (note today a financial institution may map multiple different account numbers with different characteristics to the same account ... so there is nothing precluding having both an authenticated account number and a non-authenticated account number mapping to the same account ... and the authorization risk rules can be different based on the transaction type).
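
(a minimal sketch of that kind of authorization rule in Python; the account numbers and rule structure are invented for illustration:)

# Sketch: multiple account numbers with different characteristics mapped to
# the same underlying account; an authenticated-only number is declined
# unless the transaction carried a verified digital signature.
ACCOUNT_NUMBER_RULES = {
    "4000111122223333": {"account": "A-1", "authenticated_only": True},   # x9.59-style number
    "4000111122224444": {"account": "A-1", "authenticated_only": False},  # legacy number
}

def authorize(account_number, signature_verified):
    rules = ACCOUNT_NUMBER_RULES.get(account_number)
    if rules is None:
        return "decline: unknown account number"
    if rules["authenticated_only"] and not signature_verified:
        return "decline: account number requires an authenticated transaction"
    return "continue normal authorization for account " + rules["account"]

print(authorize("4000111122223333", signature_verified=False))  # declined
print(authorize("4000111122223333", signature_verified=True))   # proceeds
print(authorize("4000111122224444", signature_verified=False))  # proceeds under different risk rules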

misc. details on x9.59 at https://www.garlic.com/~lynn/

there is a mapping of x9.59 to iso8583 (i.e. payment cards, debit, credit, etc)
https://www.garlic.com/~lynn/8583flow.htm

misc. other discussion (authenticated transactions, privacy, name/address issues, etc)
https://www.garlic.com/~lynn/99.html#217
https://www.garlic.com/~lynn/aadsm2.htm#straw
https://www.garlic.com/~lynn/aadsm3.htm#cstech13
https://www.garlic.com/~lynn/ansiepay.htm#privacy
https://www.garlic.com/~lynn/aadsmore.htm#x959demo
https://www.garlic.com/~lynn/aepay3.htm#sslset2
https://www.garlic.com/~lynn/aepay3.htm#x959risk2

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Sun, 22 Oct 2000 22:20:54 GMT
Anne & Lynn Wheeler writes:
there is a mapping of x9.59 to iso8583 (i.e. payment cards, debit, credit, etc)

one of the issues in mapping digital signatures to the existing payment infrastructure and providing end-to-end authentication ... was that the typical 8583 message was 60-100 bytes. It was possible to map the additional x9.59 fields and an ec/dss signature into something like an additional 80 bytes or so (nearly doubling the message size for authentication).

A certificate-based infrastructure represented two approaches ...

1) use certificates in an internet-only mode and truncate them at the boundary between the internet and the financial infrastructure. The downside was giving up on basic security tenets like end-to-end integrity and end-to-end authentication.

2) carry the certificate ... but compress it into a manageable mode.

Now, some things cropped up:

Justifications for the financial institution registering the public key were a) to provide sufficient information for the customer call center to handle trouble calls related to digital signature transactions and b) to support relying-party-only certificates. Relying-party-only certificates basically carried only an account number ... and were designed by financial institutions to address liability and privacy issues.

With transactions carrying only an appended relying-party-only certificate ... the account record has to be read ... containing the original of the certificate.

Therefore flowing any such certificate at all through the infrastructure can be viewed in one of two ways:

1) it is possible to compress out of a transaction-appended certificate every field that the recipient is known to already have. Since the recipient has the original of the appended certificate, every field in a transaction-appended certificate can be removed ... resulting in a zero-byte appended certificate

2) given that the consumer has a copy of the certificate, while the recipient (financial institution) has the original of the certificate that will be read when the transaction is executed, it is redundant and superfluous to bother sending back to the recipient a copy of something the recipient already has (i.e. the original registered public key along with all its bindings).

So the x9.59 mapping to 8583 can be viewed as either a) carrying a zero-byte appended certificate ... or b) not bothering to superfluously transmit to the financial institution a copy of some information (i.e. a public key and the public key bindings, represented by a certificate) appended to every transaction when the financial institution has the original of that same information. In either case it minimizes the transaction payload bloat incurred by adding transaction authentication.

The X9.59 mapping to 8583 with an included digital signature can provide end-to-end transaction authentication ... i.e. a financial transaction instruction from the consumer to the consumer's financial institution ... with the institution responsible for authorizing and executing the financial transaction instruction actually also authenticating the same instruction.

In prior posts, it was also pointed out that it is redundant and superfluous (as well as raising privacy issues) if parties not responsible for executing the transaction are performing extraneous authentication operations on the transaction.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

[OT] FS - IBM Future System

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [OT] FS - IBM Future System
Newsgroups: bit.listserv.ibm-main
Date: Mon, 23 Oct 2000 04:27:54 GMT
wmhblair@INTERTEX.NET (William H. Blair) writes:
Many ideas that were circulating around this time are also discussed in the archives of my former neighbors, Anne & Lynn Wheeler: refer to their web site at https://www.garlic.com/~lynn/

some of the postings i've made on the subject:
https://www.garlic.com/~lynn/96.html#24
https://www.garlic.com/~lynn/99.html#100
https://www.garlic.com/~lynn/99.html#237
https://www.garlic.com/~lynn/2000.html#3

a couple URLs found from alta vista
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Index.html
http://www.cs.clemson.edu/~mark/acs_people.html

from
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible for competitors to follow a compatible niche strategy. However, the project failed because the objectives were too ambitious for the available technology. Many of the ideas that were developed were nevertheless adapted for later generations. Once IBM had acknowledged this failure, it launched its 'box strategy', which called for competitiveness with all the different types of compatible sub-systems. But this proved to be difficult because of IBM's cost structure and its R&D spending, and the strategy only resulted in a partial narrowing of the price gap between IBM and its rivals.

... &
This first quiet warning was taken seriously: 2,500 people were mobilised for the FS project. Those in charge had the right to choose people from any IBM units. I was working in Paris when I was picked out of the blue to be sent to New York. Proof of the faith people had in IBM is that I never heard of anyone refusing to move, nor regretting it. However, other quiet warnings were taken less seriously.

===============================================================

as in prior postings ... my view at the time was "inmates in charge of the asylum", which didn't exactly make me liked in that group ... see ref in previous postings (above) to the central sq. cult film.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

[OT] FS - IBM Future System

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [OT] FS - IBM Future System
Newsgroups: bit.listserv.ibm-main
Date: Mon, 23 Oct 2000 04:38:00 GMT
oh, and on the subject of "compatible" ... recent posting touching on the subject of plug-compatible ...
https://www.garlic.com/~lynn/2000f.html#6

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

OT?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT?
Newsgroups: bit.listserv.ibm-main
Date: Mon, 23 Oct 2000 16:57:37 GMT
Steve Samson writes:
FS was never promised. It was a code name for a new system that started sometime in the late '60s and was shelved in the early '70s. The big innovations were to be object orientation and a one-level store. That is, the "file system" was an extension of memory addressing. ... The big barriers still uncrossed are the one-level store and FBA DASD. I think they've been substantially mooted by other developments.

Most of the stuff was around at the time of FS and "collected" as part of the project.

One-level store ... as tried by tss/360 a couple years prior to FS didn't work ... for one thing, the semantics between file access and virtual memory management were too primitive from at least a performance standpoint (i.e. LRU replacement algorithms didn't work well on virtual memory spaces being accessed sequentially).

The big issue for any of the OSs/VSs supporting FBA was CKD multi-track search for vtoc, pds, etc. CKD multi-track search was a technology trade-off from the mid-60s based on relative memory sizes and i/o bandwidth (trading off decreased memory utilization for increased i/o utilization). By the late-70s, the trade-off had flipped (shifted from being memory constrained to I/O constrained) ... and CKD searches were limiting system thruput. I got something like a $26m price-tag quote from STL to ship vtoc & PDS support for non-CKD operation (even after showing significant performance improvement).

random refs:
https://www.garlic.com/~lynn/97.html#29
https://www.garlic.com/~lynn/99.html#237
https://www.garlic.com/~lynn/2000b.html#54
https://www.garlic.com/~lynn/2000c.html#75
https://www.garlic.com/~lynn/2000f.html#9
https://www.garlic.com/~lynn/2000f.html#10

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

OT?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT?
Newsgroups: bit.listserv.ibm-main
Date: Mon, 23 Oct 2000 17:03:08 GMT
Anne & Lynn Wheeler writes:
The big issue for any of the OSs/VSs supporting FBA was CKD multi-track search for vtoc, pds, etc. CKD multi-track search was a

aka CKD multi-track search was an inhibitor to supporting FBA ... however, replacing CKD multi-track search with a different strategy saw significant thruput improvement regardless of whether the device was CKD or FBA.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Competitors to SABRE?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Competitors to SABRE?
Newsgroups: alt.folklore.computers
Date: Mon, 23 Oct 2000 23:02:54 GMT
ehrice@his.com (Edward Rice) writes:
Sounds like IBM-promoted hype to me. A little 7-man shop I worked for (after the initial development was done) developed a system capable of something like 300,000 transactions per hour on a dual-processor Honeywell box (i.e., not an especially large or fast system). NASDAQ ran on a pair of 1108's through the 1970's. IBM would like to think that it was the biggest and the best in all ways, but in fact SABRE was a /very/ special-purpose system which is pretty hard to even call an "operating system." You can define, in an environment like that, what you'll call a "transaction" -- was SABRE's definition "buy a ticket" which is what the customer would see as an atomic piece of work, or was it "query db for flight number" plus forty other queries and updates, linked, which would tend to escalate the transaction count by many-fold from what the user might see? You can make production statistics lie just lie you can make any other kind of statistics lie.

in the early '90s i got to look at "routes" which represented 25% of the transactions on a large airline res system (typically 3-4 "linked" transactions). They gave me a list of 10 impossible things that they had been grappling with (many in large part because of problems with the limited dataprocessing facilities provided by the system ... in effect, TPF was slightly more than some of the dedicated purpose real-time systems ... as opposed to a real operating system).

as an aside, 20-30% of the transactions involved directly driving the various ticket printing and other devices around the world ... however avg. peak load (for the whole system) was still several thousand per second (not per minute or per hour ... but per second).

I redid routes to address all ten impossible things, with something like 10 times the thruput; it wasn't TPF-based (in large part because the application was implemented in a totally different way ... and not subject to various restrictions imposed by a TPF-based implementation).

TPF is also used in some transaction switching systems in the financial industry (imagine a large router in tcpip/internet genre)

random refs:
https://www.garlic.com/~lynn/99.html#24
https://www.garlic.com/~lynn/99.html#100
https://www.garlic.com/~lynn/99.html#103
https://www.garlic.com/~lynn/99.html#136a
https://www.garlic.com/~lynn/99.html#152
https://www.garlic.com/~lynn/99.html#153
https://www.garlic.com/~lynn/2000.html#31
https://www.garlic.com/~lynn/2000.html#61

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

OT?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT?
Newsgroups: bit.listserv.ibm-main
Date: Mon, 23 Oct 2000 23:23:11 GMT
Steve Samson writes:
I don't disagree with your first statement. What I said was that the marketoons (DPD at the time) thought that they couldn't sell FS, and since all the top IBM management came from the marketing side, the technicians could not make their case.

(at least a) nail in the coffin was the houston science center showing that an FS machine implemented in the fastest ibm (supercomputer) technology at the time (370/195+) would be running applications at approximately the thruput of a 370/145.

that is aside from various issues of actually getting the vast array of new technologies all developed, tested, integrated, and into the market in any reasonable time-frame.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Tue, 24 Oct 2000 00:03:09 GMT
"Tor Rustad" writes:
I'm reading your posts with interest, and will look into some of your links on this matter. I must admit that I have not paid attention to X9.59. In SET, the ISO8583 mapping is implemented already, why do we need another way to do this? To open up for on-line debet card transactions?

As long as the card companies allow no security via credit cards, and the banks earn more mony (in some countries) on credit card transactions, the business case for implementing X9.59 doesn't look good. Also, _if_ X9.59 mandates new messages at the ASN.1 level, this will be expensive to implement. Futhermore, some of us are starting to get really fed up with all these PKI standards...


there is existing fraud ... even w/o the internet.

SET didn't provide end-to-end authentication. It truncated authentication at the SET payment gateway and then generated an acquiring financial infrastructure transaction with a flag indicating authentication ... which eventually got to the customer's issuing financial institution.

Two years ago, somebody from VISA gave a presentation at an ISO meeting regarding the number of (SET) transactions coming into consumer issuing financial institutions with the SET authenticated flag turned on ... where they could show that there was no SET technology involved in the transaction (an issue that crops up in security infrastructures when there isn't end-to-end authentication and end-to-end integrity).

The X9.59 mapping to ISO8583 is done in much the same way that the SET mapping was done ... find a place in the existing ISO8583 message definition to stash the information ... so that ISO8583 doesn't have to be changed (although the 8583 work group now has a new field being defined that would specifically support this function).

Note that the X9.59 standard doesn't really define messages ... it defines a set of fields formatted for digital signature signing and authentication. It specifically doesn't say how the fields are transmitted on an end-to-end basis ... it just defines how the fields have to be formatted when they are signed and how the fields have to be formatted when the signature is verified. In this sense, it took a similar approach to that of FSTC (
http://www.fstc.org &
http://www.echeck.org ) for digitally signed electronic checks (the transmitted messages carrying the fields may bear absolutely no resemblance to the format of the fields for signing and authentication). And in fact, with minor changes, the X9.59 definition (if translated to XML/FSML encoding) is usable for echeck (i.e. the charter given the X9A10 work group was to preserve the integrity of the financial infrastructure for all electronic retail payment (account-based) transactions with a digital signature).

The industry has regular message and software upgrades ... some of them dictated by regulations ... and more than a few changes that are orders of magnitude larger than what is needed for x9.59. The resources needed to effect the X9.59 changes, if scheduled as part of the standard industry change process ... might not even show up as a real line item (lost in the noise of business as usual).

The standard business case applies to X9.59 ... benefits outweigh the costs. Done as part of normal business, the technology, development, and deployment costs of X9.59 can be managed into the noise range. As cited in previous postings ... those costs however are totally dwarfed by the costs of deploying real live customer call center support for new kinds of technology transactions.

One of the issues with X9.59 is that it has been defined as part of the industry financial standards process ... for implementation as part of standard production operation. In order to achieve end-to-end integrity, it doesn't define toy pilots that might be do'able for $50k where the authentication and integrity may be stripped off at the internet boundary (as an aside, development, test, deployment and training for one new screen in a customer call center can easily be more than the cost of a toy pilot ... the real cost for new kinds of technology is how to provide customer support ... if done correctly, the technology issues can be managed into the noise range).

So what are compelling business issues for end-to-end authentication and integrity ... along with fraud & risk reduction?

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Tue, 24 Oct 2000 00:14:51 GMT
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
What's the big deal about +/- 60-180 bytes on the wire? Yes, I realize that multiplying umptyzillion transactions per day by 60 bytes amounts to a lot of bits per second, but modern network performance is determined to the first order by the number of packets, not the number of bytes or the sizes of packets.

It costs the same (for all reasonable meanings of "cost") to send a packet containing 80, 160, or 180 bytes on a modern network, with the possible but quite implausible exceptions of some radio-telephone and low speed modem links.

Part of the reason for that is network traffic is so extremely bursty ("self-similar") that you must over-provision, or your customers get unhappy because sometimes their dirty HTTP pictures take longer to appear.


the issue is relying on the existing infrastructure ... built around the 80-160 byte messages. For instance, I've seen reports that SET certificates ranged in size from 2k bytes to a high of 12k bytes.

Part of the X9.59 work was that it could be mapped to a brand new network where all the bandwidth rules have totally changed ... and it wasn't necessary to worry about existing practical concerns. So, at the 100,000 foot level ... does a definition also carry as a prerequisite that a brand-new financial infrastructure be built in order to support its deployment?

However, part of the X9.59 work was also to see how it could be mapped to existing ISO8583 financial networks where there are real bandwidth rules and transactions size issues ... and still provide end-to-end integrity and end-to-end authentication.

Part of the charter given the X9A10 work group was to preserve the integrity of the financial infrastructure for all electronic retail payments (account-based) with a digital signature. "All" would include all the existing financial infrastructures as well as the new ones that haven't been built and deployed yet (i.e. the charter wasn't to define an internet only solution or a new network only solution, it was a solution for all electronic retail account-based payments).

It is believed that the same X9.59 definition can be made to work in both kinds of environments (the existing financial infrastructures as well as the new infrastructures that haven't been built yet).

there is a line someplace about, in theory there is no difference between theory and practice, but in practice there is.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Tue, 24 Oct 2000 00:35:23 GMT
Anne & Lynn Wheeler writes:
there is a line someplace about, in theory there is no difference between theory and practice, but in practice there is.

and besides ... it is redundant and superfluous to transmit a copy of something (an appended transaction certificate) possibly thousands of times a day ... to somebody that has the original. Even HTTP browsers know about caching and trying to avoid redundant and superfluous transmission.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Tue, 24 Oct 2000 00:39:57 GMT
Anne & Lynn Wheeler writes:
authentication). And in fact, with minor changes, the X9.59 definition (if translated to XML/FSML encoding) is usable for echeck (i.e. the charter given the X9A10 work group was to preserve the integrity of the financial infrastructure for all electronic retail payment (account-based) transactions with a digital signature).

one of the FSML issues was the lack of deterministic encoding rules in standard markup language definitions. In the case where the fields in a signed object ... are not transmitted in the signed object's bit representation ... the recipient has to be able to exactly recreate the original object bit representation (in order to verify the signature) from the transmitted fields (regardless of how they are transmitted). The authenticating entity needs to follow the same deterministic (markup language) encoding rules in recreating the signed object that were followed by the signer when the original object was created (for signing).
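
(a minimal sketch of that requirement in Python, assuming the third-party cryptography package; the canonicalization rule -- sorted keys, no whitespace -- and the field names are illustrative choices, not the FSML or X9.59 encoding rules:)

# Sketch: the signer signs a deterministic encoding of the fields; the wire
# format can be anything, as long as the verifier can regenerate the exact
# same bit representation from the received fields.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def canonical(fields):
    # deterministic encoding: sorted keys, no extra whitespace
    return json.dumps(fields, sort_keys=True, separators=(",", ":")).encode()

signer_key = Ed25519PrivateKey.generate()
fields = {"amount": "10.00", "payee": "87654321", "account": "12345678"}
signature = signer_key.sign(canonical(fields))

# ... the fields travel in some unrelated wire format (8583 elements, XML, ...) ...
received_fields = {"account": "12345678", "payee": "87654321", "amount": "10.00"}

# verifier re-derives the identical bit representation and checks the signature
signer_key.public_key().verify(signature, canonical(received_fields))  # no exception: valid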

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

OT?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT?
Newsgroups: bit.listserv.ibm-main
Date: Tue, 24 Oct 2000 00:48:03 GMT
jmaynard@thebrain.conmicro.cx (Jay Maynard) writes:
Houston Science Center? I never knew there was such a thing...possibly because it may well have predated my entry into the mainframe world in 1981. Was it down at JSC (MSC)?

i believe the houston and philadelphia science centers were consolidated about the same time ... mid to late '70s (Philadelphia was where falkoff and iverson had done APL)? Bill Timlake came up from the Houston Science Center sometime in the 74/75 timeframe to head up the Cambridge Science Center.

searching alta vista for ibm houston science center ... came up with the former FSD houston location (one of the few things that didn't go when IBM sold the federal system division).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

OT?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT?
Newsgroups: bit.listserv.ibm-main
Date: Tue, 24 Oct 2000 00:58:14 GMT
Steve Samson writes:
Well, yes but ... it did not all have to come out at once, if a suitable migration scenario had been devised. That's one area where the techies fell down. The Houston benchmark could have proven anything since the 3-layer FS design had tremendous optimization opportunities at each level. I find it hard to believe that the model recognized what was in the pipeline, rather than just was available off the shelf. Depends what they wanted to prove.

one might be able to make the claim that they did exactly that and it is now called AS/400 (and there was 5-layer hardware FS).

FS seemed to garner every new R&D idea that had been thot of ... whether it was practical or not (some line about in theory there is no difference between theory and practice ... but in practice there is). Technies not being able to sort between practical and not practical were going to take quite awhile to getting around to devising any sort of migration plan.

I think that houston prooved that the very next product out the door from IBM was going to be a 370-based product and not an FS-based product.

in any case, the focused effort (which was being pursued instead of continuing to produce products for the existing market place) was suspended(?).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

OT?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT?
Newsgroups: bit.listserv.ibm-main
Date: Tue, 24 Oct 2000 01:13:25 GMT
Steve Samson writes:
p.s. BTW, who is actually writing these "Anne & Lynn" messages? Is it like piano four hands?

my wife and I share a PC ... but as far as i know she has never bothered to post to any scientific, engineering, computer and/or ibm newsgroup.

as to FS ... I'm biased because I believe what I was shipping at the time was superior to what was defined as research in the resource management section.

my wife is biased the other way ... she reported to the guy that headed up the inter-system coupling area for FS (FS was divided into something like 13-14 sections/areas ... resource management was one of those sections & inter-system coupling was another) before she went to POK to be responsible for loosely-coupled architecture. she does admit now that a lot of it was extremely blue sky (but it was fun even if not practical).

random refs:
https://www.garlic.com/~lynn/99.html#71
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

OT?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT?
Newsgroups: bit.listserv.ibm-main
Date: Tue, 24 Oct 2000 01:25:19 GMT
Anne & Lynn Wheeler writes:
one might be able to make the claim that they did exactly that and it is now called AS/400 (and there was 5-layer hardware FS).

and parallel sysplex represents the staged delivery of work my wife did in the FS inter-system coupling area before going to POK to be responsible for loosely-coupled architecture.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

OT?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT?
Newsgroups: bit.listserv.ibm-main
Date: Tue, 24 Oct 2000 14:11:08 GMT
Anne & Lynn Wheeler writes:
my wife is biased the other way ... she reported to the guy that headed up inter-system coupling area (FS was divided into something like 13-14 sections/areas ... resource management was one of the area

Les Comeau was with IBM at MIT during the CTSS days and at the early cambridge scientific center ... and headed up the CP group that implemented CP/40 and early CP/67. see melinda's paper at
https://www.leeandmelindavarian.com/Melinda#VMHist

In the late '60s Les transferred to Wash DC. During FS he was head of the inter-system coupling section/area in FS. My wife worked for him. After FS was shutdown, my wife spent some time working on JES2/JES3 in g'burg and then went to POK to be responsible for loosely-coupled architecture. She did Peer-Coupled Shared Data architecture and fought for things like making trouter/3088 more than just CTCA.

About the time she was in POK, I was on the west coast and doing some things for HONE ... both SMP kernel support and helping with loosely-coupled support (at the time, the resulting implementation was considered the largest single-system-image complex in existence).
https://www.garlic.com/~lynn/2000c.html#30

I also got to do a from scratch implementation for HYPERchannel that looked more like what she was pushing for trouter/3088 and had some of the characteristics from fs inter-system coupling. It was deployed by the IMS organization in STL and Boulder.
https://www.garlic.com/~lynn/2000b.html#29
https://www.garlic.com/~lynn/2000b.html#38

I did the rfc 1044 support in ibm's tcp/ip ... with thruput about 50 times the base implementation (again, much more characteristic of fs inter-system coupling)
https://www.garlic.com/~lynn/93.html#28
https://www.garlic.com/~lynn/99.html#36
https://www.garlic.com/~lynn/2000.html#90
https://www.garlic.com/~lynn/internet.htm#0

Her Peer-Coupled Shared Data didn't really make it into the (mainframe) market until ims hot-standby ... and now parallel sysplex.
https://www.garlic.com/~lynn/99.html#71

Later when we were running the skunk works that turned out HA/CMP, a lot of the implementation was subcontracted to a company in Cambridge called CLaM. Les had moved back to Boston and formed CLaM ... the initials stood for Comeau, Linscott and Miller. misc. refs:
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/2000d.html#2

so there could be a claim made that a lot of the work on FS inter-system coupling did eventually migrate into products (independent of other parts of FS showing up in as/400).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

OT?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT?
Newsgroups: bit.listserv.ibm-main
Date: Tue, 24 Oct 2000 14:40:28 GMT
Also, somewhat in conjunction with HA/CMP ... we worked on getting various forms of high-speed interconnect for the 6000 (before starting on FCS support). We used the term HSDT (high-speed data transport) for this part of the skunk-works (lots of stuff from fs inter-system coupling).

The 6000 group had done effectively their own version of escon ... called SLA (serial link adapter) ... it was about 10% faster than escon and used components that were something like 1/10th the cost of the escon components.

However, there was nobody else in the world that supported SLA ... so at best, one 6000 box could talk to another 6000 box.

To make it more interoperable ... we con'ed the company that produced HYPERchannel and very high-end tcp/ip routers into supporting a SLA card in their product line (they had a full set of directly attached boxes supporting a wide range of vendor supercomputers and mainframes, in addition to high-end, high-performance routers). With their support of SLA ... it gave the 6000 high-thruput attachment directly into every kind of computer supported by their product line.

another random hsdt ref (rios in the following ref is rs/6000)
https://www.garlic.com/~lynn/99.html#67

random other hsdt refs:
https://www.garlic.com/~lynn/94.html#22
https://www.garlic.com/~lynn/94.html#33b

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Optimal replacement Algorithm

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Optimal replacement Algorithm
Newsgroups: comp.arch
Date: Wed, 25 Oct 2000 14:55:19 GMT
toor@y1.jdyson.net (John S. Dyson) writes:
You'll find that FreeBSD performs nearly as will as non-prejudiced but naive intuition might imply... Remember the first time you tried a real, LRU scheme, was very disappointed as to it's performance? With the FreeBSD

in clock-type page replacement algorithms that approximate LRU ... pages in storage are swept in a fixed order ... testing the (hardware) page reference bit to see if it has been set since it was last reset ... and resetting the page reference bit. Pages that have been touched/used since the last sweep that turned the reference bit off will not be selected for replacement ... pages that have not been touched/used since the previous sweep (turning off the bit) will be selected for replacement. The approximation to LRU rests on the assumption that recently used/touched pages should not be replaced.

There are various variations ... in systems with a single hardware reference bit ... the software can emulate multiple bits ... effectively "shifting" the value of the hardware bit into software. The multiple bits then represent the settings across multiple bit-resetting sweeps (i.e. a longer reference history).
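a minimal sketch (python) of the sweep just described, assuming a single hardware reference bit per frame plus a software-shifted history; the Frame structure, 4-bit history and sweep code are illustrative, not the actual CP/67 or VM/370 implementation.

# Minimal sketch of a clock-type sweep over page frames; the Frame class and
# the software "history" shifting are illustrative assumptions.
class Frame:
    def __init__(self, page):
        self.page = page
        self.referenced = False   # stands in for the hardware reference bit
        self.history = 0          # software-emulated extra reference bits

def clock_select(frames, hand):
    # sweep in fixed order, testing & resetting the reference bit;
    # a page touched since the last sweep gets another pass
    while True:
        f = frames[hand]
        # shift the hardware bit into the software history (multi-bit variant)
        f.history = ((f.history << 1) | int(f.referenced)) & 0xF
        if f.referenced:
            f.referenced = False              # reset and skip -- recently used
            hand = (hand + 1) % len(frames)
        else:
            victim = hand                     # not used since the last sweep
            hand = (hand + 1) % len(frames)
            return victim, hand
        # note: if every frame has its bit on, the sweep resets them all and
        # comes back to where it started -- the FIFO degeneration discussed below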

The problem with clock and other similar implementations approximating LRU is that they degrade to FIFO under all sorts of conditions, i.e. at some point all pages have their reference bits on ... and the test & reset sweep passes through all the pages. At the end of that full sweep, all pages have their reference bit off and the algorithm replaces the page it started with. Then there is a very high probability that the next several pages examined in the fixed order will continue to have their reference bit off ... so the algorithm makes a sweep replacing several pages in a fixed order.

The '71 simulation work (across lots of different live-load trace data) found that LRU approximation algorithms, as well as a simulated "true" LRU algorithm, frequently degenerated to FIFO (LRU and FIFO were indistinguishable).

What was discovered was that with a minor coding hack ... CLOCK and misc. other LRU approximations could be made to degenerate to nearly RANDOM instead of degenerating to FIFO ... i.e. in the situations that caused true LRU to degenerate to FIFO ... RANDOM replacement performed much better than FIFO.

My line from '71 was something to the effect that if the algorithm couldn't distinguish between pages (effectively the situation where nearly all page reference bits were all on or all off, which led to LRU replacement degenerating to FIFO) then it was better to choose pages randomly for replacement than to choose pages for replacement in a fixed order (FIFO).

In simulation, true LRU tended to avg. 10-15% better than the normal CLOCK and other LRU approximation algorithms. The variation that had LRU approximation degenerate to RANDOM tended on the avg. to be 10-15% better than true LRU.
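a sketch of the sort of minor coding hack involved, reusing clock_select and Frame from the sketch above; the 90% threshold is an illustrative assumption, not the original '71 test.

# Sketch of the FIFO-vs-RANDOM variation: when (nearly) all reference bits
# are on (or all off), the sweep can no longer distinguish pages and plain
# clock decays to FIFO; choosing randomly among the "indistinguishable"
# frames did better.  The 90% threshold is an illustrative assumption.
import random

def clock_or_random_select(frames, hand):
    set_bits = sum(f.referenced for f in frames)
    if set_bits >= 0.9 * len(frames) or set_bits == 0:
        # bits carry (almost) no information -- pick a victim at random
        # instead of replacing pages in the fixed sweep order (FIFO)
        for f in frames:
            f.referenced = False
        return random.randrange(len(frames)), hand
    return clock_select(frames, hand)         # normal LRU-approximation path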

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Optimal replacement Algorithm

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Optimal replacement Algorithm
Newsgroups: comp.arch
Date: Wed, 25 Oct 2000 15:02:43 GMT
Anne & Lynn Wheeler writes:
In simulation, true LRU tended to avg. 10-15% better than the normal CLOCK and other LRU approximation algorithms. The variation that had LRU approximation degenerate to RANDOM tended on the avg. to be 10-15% better than true LRU.

aka ... in the portions of an operational envelope where LRU tended to give good results, LRU was used ... when the system shifted to a portion of the operational envelope where LRU was less well suited, RANDOM was used.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Optimal replacement Algorithm

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Optimal replacement Algorithm
Newsgroups: comp.arch
Date: Wed, 25 Oct 2000 16:13:41 GMT
Anne & Lynn Wheeler writes:
in clock-type page replacement algorithms that approx. LRU ... pages in storage are swept in a fixed order ... testing the (hardware) page reference bit to see if it has been set since it was last reset

I had originally done clock-type algorithms in the late 60s that approximated global LRU with a dynamically adaptive working set size.

In the literature at the time, there was work on fixed working set sizes with local LRU, executed on a fixed time basis.

The problems with what was in the literature were that it was significantly sub-optimal, its overhead bore no relationship to contention and/or demand for pages, and it didn't dynamically adapt to different configurations & loads (amount of storage, replacement latency, queuing delay, contention, etc).

The clock-type algorithms both dynamically adapted the overhead processing and interval to demand/contention.

I was able to show that global LRU and adaptive working set size significantly outperformed what was in the literature at the time.
https://www.garlic.com/~lynn/99.html#18
https://www.garlic.com/~lynn/93.html#4

In '71, I had somewhat accidentally stumbled across LRU degenerating to RANDOM instead of FIFO (and outperforming true LRU on avg by as much as true LRU outperformed clock-like LRU approximations) when i was trying various rearrangements of the clock code to make it much more efficient in an SMP environment (reduce lock contention, etc, with multiple processors all contending for the page replacement supervisor).

other random refs:
https://www.garlic.com/~lynn/94.html#49
https://www.garlic.com/~lynn/94.html#1
https://www.garlic.com/~lynn/93.html#7
https://www.garlic.com/~lynn/99.html#104
https://www.garlic.com/~lynn/93.html#0
https://www.garlic.com/~lynn/94.html#01
https://www.garlic.com/~lynn/94.html#2
https://www.garlic.com/~lynn/94.html#4
https://www.garlic.com/~lynn/94.html#10
https://www.garlic.com/~lynn/94.html#14
https://www.garlic.com/~lynn/93.html#6
https://www.garlic.com/~lynn/93.html#5

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why IBM use 31 bit addressing not 32 bit?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why IBM use 31 bit addressing not 32 bit?
Newsgroups: bit.listserv.ibm-main
Date: Wed, 25 Oct 2000 16:38:02 GMT
"Hank Murphy" writes:
Well, 31-bit addressing is, what - 17 years old now? - and it's hard to know exactly why a designer did something that long ago. However, the function of bit zero is to carry the addressing mode (24 or 31) in the PSW and in the 24<->31 bit mode-switching instructions (BASSM, BSM). It also allows one, if sufficient discipline is exercised WRT the use of bits 0-7, to look at an address constant and determine the mode.

the written design justifications for 31-bit instead of 32-bit (at the time) had things like the BXLE instruction, where operands sometimes were addresses and sometimes were signed integers, i.e. BXLE typically was used to both increment and decrement when stepping thru things like arrays (aka difficulties doing signed integer operations on an unsigned 32-bit number).
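a small illustration (python, masking to 32 bits) of the signed-arithmetic point; the addresses below are made up.

# Illustration of the signed-vs-unsigned issue with full 32-bit addresses:
# an address with the high-order bit set looks negative to signed compare/
# increment logic of a BXLE-style loop.  Values here are made up.
MASK32 = 0xFFFFFFFF

def as_signed32(x):
    x &= MASK32
    return x - 0x100000000 if x & 0x80000000 else x

addr_31bit = 0x7FFFF000          # fits in 31 bits: still positive when signed
addr_32bit = 0x80000100          # needs the 32nd bit: negative when signed

print(as_signed32(addr_31bit))   #  2147479552 -> signed compares still work
print(as_signed32(addr_32bit))   # -2147483392 -> a signed limit comparison
                                 #    against it goes the wrong way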

the 370/etc princ-of-ops was a script/gml document that was a subset of the architecture "red-book". The architecture "red-book" had a bunch of "conditional" sections that gave a lot of justification & trade-offs as to why things were done the way they were.

For instance, the original 370 architecture "red-book" gave a complete virtual memory and addressing architecture (before virtual memory was announced). Coming up on the time to announce 370 virtual memory ... there was a big battle over what would be announced and shipped. Both the 155s & 165s in the field needed significant hardware upgrades to support relocation. One of the points finally came down to the fact that announcing & releasing the full 370 virtual memory architecture (as embodied in the red-book) would take a significant additional redesign and rewire of the virtual memory retrofit for the 165, delaying the virtual memory announcement by six months.

There were various escalation meetings with all parties concerned and finally it was decided to go with a subset of the 370 virtual memory architecture. One of the deciding factors was that the POK SVS group stated that SVS only had a single address space and customer installations would peak at a max. of five page I/Os per second, and therefore the additional features would see no benefit in an SVS environment.

random refs:
https://www.garlic.com/~lynn/93.html#14
https://www.garlic.com/~lynn/2000c.html#84
https://www.garlic.com/~lynn/2000e.html#57

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Optimal replacement Algorithm

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Optimal replacement Algorithm
Newsgroups: comp.arch
Date: Wed, 25 Oct 2000 20:57:55 GMT
toor@y1.jdyson.net (John S. Dyson) writes:
The FreeBSD scheme isn't perfect, but appears to be very good for real loads (including both sequential and stats that have ARBITRARY skewed distributions.) There are no assumptions as to the usage distributions, unlike in the clock scheme. The adapation to the kind-of distribution is automatic in the FreeBSD scheme. The FreeBSD scheme doesn't depend on a gaussian or address correlated paging stats.

yep, did a lot of stuff about thrashing controls with multiprogramming level ... global LRU vis-a-vis local LRU ... i.e. nominally global LRU replacement, but for really ill-behaving applications ... could switch to local LRU (or as some called it, "self-stealing").
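a minimal sketch (python) of the self-stealing idea, under the assumption of a simple per-task fair-share test; the data layout and threshold are illustrative, not the actual implementation.

# Sketch of "self-stealing" (local LRU for an ill-behaved task): nominally
# global replacement, but a task whose resident-page count exceeds its fair
# share must replace one of its own pages.  Layout and threshold are made up.
def choose_victim(frames, faulting_task, fair_share):
    # frames: list of dicts {"owner": task-id, "referenced": bool}
    owned = [i for i, f in enumerate(frames) if f["owner"] == faulting_task]
    if len(owned) > fair_share:
        # ill-behaved: steal from itself, preferring an unreferenced page
        for i in owned:
            if not frames[i]["referenced"]:
                return i
        return owned[0]
    # well-behaved: fall back to a global sweep (see the clock sketch above)
    for i, f in enumerate(frames):
        if not f["referenced"]:
            return i
        f["referenced"] = False
    return 0

# example: task A is over its fair share, so it replaces its own page (index 2)
frames = [{"owner": "A", "referenced": True},
          {"owner": "B", "referenced": False},
          {"owner": "A", "referenced": False}]
print(choose_victim(frames, "A", fair_share=1))   # -> 2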

when i got to ship some of this stuff in products ... I had the benefit of a lot of work by others in simulation plus workload profiling that had been collected across hundreds of different configurations & loads over a period of a number of years.

As a result I was able to construct some synthetic benchmarks that could be configured along the edges of the configuration/load envelopes, at statistical points within the nominal configuration/load envelope, and at various severe outliers ... way beyond the edge of observed configuration/load envelopes.

Several thousand synthetic benchmarks were then devised that covered the observed configuration/load envelope (interior, boundary conditions, extreme outliers, etc) and took three months elapsed time to execute.

The information was then used to validate all kinds of stuff I was doing in dynamic adaptive & feedback/feedforward controls across a broad range of different configurations & loads ... not just page replacement, but various thrashing controls, scheduling and dispatching controls, "scheduling to the bottleneck" adaptation, etc. Scheduling to the bottleneck attempts to identify the bottleneck thruput resources and adjust scheduling based on the consumption of bottleneck resources (which might or might not be cpu or storage or i/o or page i/o, etc).
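a toy sketch (python) of the scheduling-to-the-bottleneck idea: identify the most heavily consumed resource and favor tasks that consume the least of it; the resource names and numbers below are made up for illustration.

# Toy sketch of "scheduling to the bottleneck": find the most heavily used
# (bottleneck) resource, then favor tasks that consume the least of it.
utilization = {"cpu": 0.55, "storage": 0.97, "page i/o": 0.70, "i/o": 0.40}
tasks = {
    "batch1":      {"cpu": 0.30, "storage": 0.50, "page i/o": 0.10, "i/o": 0.05},
    "interactive": {"cpu": 0.05, "storage": 0.10, "page i/o": 0.02, "i/o": 0.01},
    "monster":     {"cpu": 0.20, "storage": 0.90, "page i/o": 0.40, "i/o": 0.10},
}

bottleneck = max(utilization, key=utilization.get)           # -> "storage"
dispatch_order = sorted(tasks, key=lambda t: tasks[t][bottleneck])
print(bottleneck, dispatch_order)    # light consumers of the bottleneck first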

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

OT?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT?
Newsgroups: bit.listserv.ibm-main
Date: Fri, 03 Nov 2000 05:43:03 GMT
Anne & Lynn Wheeler writes:
In the late '60s Les transferred to Wash DC. During FS he was head of the inter-system coupling section/area in FS. My wife worked for him. After FS was shutdown, my wife spent some time working on JES2/JES3 in g'burg and then went to POK to be responsible for loosely-coupled architecture. She did Peer-Coupled Shared Data architecture and fought for things like making trouter/3088 more than just CTCA.

some clarification from anne ... g'burg was responsible for FS I/O. Les was responsible for FS Advanced I/O Architecture.

Instruction source/destination operands had various attributes (possibly 3-5 levels of attributes). The FS microcode was responsible for doing the magical DWIM based on the attributes. Source/destination operands with attributes associated with I/O were much less well defined for the microcode "do what i mean" magic (than most other parts of the FS architecture)

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Ethernet efficiency (was Re: Ms employees begging for food)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ethernet efficiency (was Re: Ms employees begging for food)
Newsgroups: comp.os.linux.advocacy,comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Date: Fri, 03 Nov 2000 13:35:06 GMT
Lars Poulsen writes:
The notion that as ethernet wire utilization goes up, the throughput peaks at about 30% and after that it becomes an unproductive mess of collisions is common but incorrect. It is rooted in the assumption that collisions are bad, abnormal things, and that once utilization goes up, the retransmission is likely to collide again. Both of those beliefs are wrong. THAT IS NOT HOW ETHERNET WORKS.

There was a paper in (i believe) the '88 acm sigcomm proceedings that showed, for a typical twisted-pair hub with 30 concurrent machines all in a solid low-level loop continuously transmitting minimum sized packets, that effective thruput dropped to 8.5mbits/sec. Ethernet is listen before transmit with collision detection afterwards (the same proceedings also had a paper showing slow-start to be non-stable in real-world environments).

There was a 3mbit/sec early '80s version that didn't listen before transmit. Analytical modeling of a 3mbit thick-net (w/o listen before transmit) with several hundred stations I believe showed efficiencies dropping below 30%. Part of this was due to lack of listen before transmit. Probability of collisions was purely based on amount of contention.

With listen before transmit, collisions become a function of both contention and network transmission latency. Worst case in thin/thick net were/are the stations at the furthest ends of the cable (and total distance/latency is somewhat proportional to the number of stations since it is the sum of all the segments).

Hub-based twisted pair tends to better bound the worst case latency because it is proportional to the two longest segments and independent of the number of segments.
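a back-of-the-envelope sketch (python) using the standard textbook CSMA/CD contention approximation (roughly e contention slots, each one round-trip propagation time, per successful frame); the cable lengths are illustrative and this is a model, not the sigcomm measurements.

# Back-of-the-envelope CSMA/CD efficiency, textbook contention model:
# roughly e (~2.72) contention slots, each one round-trip propagation time,
# are burned per successful frame.  Numbers below are illustrative.
import math

def csma_cd_efficiency(bits_per_frame, bitrate, round_trip_s):
    frame_time = bits_per_frame / bitrate
    contention = math.e * round_trip_s          # avg contention overhead
    return frame_time / (frame_time + contention)

BITRATE = 10e6                                  # 10 mbit/sec
MIN_FRAME = 64 * 8                              # minimum-sized packet, 512 bits

# hub twisted-pair: worst-case latency bounded by the two longest segments
hub_rtt = 2 * (2 * 100) / 2e8                   # 2 x 100m segments, ~2e8 m/s
# long coax with many taps: worst-case end-to-end path grows with the cable
coax_rtt = 2 * 1500 / 2e8                       # ~1500m worst-case path

print("hub, min frames : %.0f%%" % (100 * csma_cd_efficiency(MIN_FRAME, BITRATE, hub_rtt)))
print("coax, min frames: %.0f%%" % (100 * csma_cd_efficiency(MIN_FRAME, BITRATE, coax_rtt)))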

I got into deep doo-doo, including twisted-pair solutions, when I introduced 3-layer architecture to IS executives of a large multinational corporation. The 16mbit T/R guys had been using an analytical model of 3mbit/sec ethernet w/o listen before transmit & comparing it to 16mbit T/R (and showing around 1mbit/sec ethernet effective thruput). Quoting the sigcomm paper made the T/R guys even less happy.

random refs:
https://www.garlic.com/~lynn/2000e.html#45
https://www.garlic.com/~lynn/2000b.html#11
https://www.garlic.com/~lynn/99.html#33
https://www.garlic.com/~lynn/94.html#22

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Ethernet efficiency (was Re: Ms employees begging for food)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ethernet efficiency (was Re: Ms employees begging for food)
Newsgroups: comp.os.linux.advocacy,comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Date: Fri, 03 Nov 2000 14:35:43 GMT
found the reference

"Measured Capacity of Ethernet: Myths and Reality" in proceedings of ACM SIGCOMM, 8/16-19, 1988, V18N4

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Famous Machines and Software that didn't

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Famous Machines and Software that didn't
Newsgroups: alt.folklore.computers
Date: Sat, 04 Nov 2000 00:13:56 GMT
"David C. Barber" writes:
o IBM FS: The Future System I recall hearing of for awhile, but don't recall the details. Hopefully someone can provide them. I was very new in computers at the time.

this was recently raised on the ibm-main mailing list.

random refs:
https://www.garlic.com/~lynn/2000f.html#16
https://www.garlic.com/~lynn/2000f.html#17
https://www.garlic.com/~lynn/2000f.html#18
https://www.garlic.com/~lynn/2000f.html#19
https://www.garlic.com/~lynn/2000f.html#26
https://www.garlic.com/~lynn/2000f.html#27
https://www.garlic.com/~lynn/2000f.html#28
https://www.garlic.com/~lynn/2000f.html#29
https://www.garlic.com/~lynn/2000f.html#30
https://www.garlic.com/~lynn/2000f.html#31
https://www.garlic.com/~lynn/2000f.html#37

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Reason Japanese cars are assembled in the US (was Re: American bigotry)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Reason Japanese cars are assembled in the US (was Re: American bigotry)
Newsgroups: alt.folklore.military,rec.autos.driving,alt.autos.honda,rec.autos.makers.toyota,rec.autos.makers.honda
Date: Sun, 05 Nov 2000 15:12:08 GMT
"edward_ohare" writes:
There were some thoughts that higher US costs were largely due to higher health care costs and because the US used a higher percentage of its industrial and engineering capacity to build military equipment than Japan did.

I believe that there was an article in the Wash. Post in the late '70s (early '80s?) calling for a 100% unearned profits tax on the US automobile industry. Supposedly before the quotas, the avg. US car price was around $6k & the purpose of the quotas was to give the US industry additional profits so that it could remake itself. With the quotas, Japanese car makers realized that they could sell as many luxury cars as they could sell "cheap" cars. Both the quotas and the shift in (Japanese) product supposedly allowed the US industry to increase the avg. price over a relatively short period to $13k. The point of the article was that this represented greater than a billion dollars in increased profits which appeared to all go to executives and shareholders ... and they found no evidence any was re-invested in making the industry more competitive.

In the late '80s, the industry did finally take a better crack at reinventing itself. For instance, up until then the avg. development cycle time in the US for a new product was approx. seven years (from concept to delivery). The Japanese had shortened that to three years for the lexus, infiniti, etc, products. The result was that the Japanese could adapt nearly two & a half times faster to changing customer circumstances (than the US). The C4(?) project helped turn out the initial S10 in three years and also supposedly vastly improved quality.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

IBM 3340 help

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 3340 help
Newsgroups: alt.folklore.computers
Date: Mon, 06 Nov 2000 20:34:26 GMT
John Ferrell writes:
The DASD sub system operated through more than one control device. The interface under test needed to be selected, because the diagnostics were not multipath capable. There was at least one OS that gave only one warning of dropping a path. A dead path could go undetected if the operator missed the message.

33xx "string switch" allowed a string of disks to be connected to two different control units (3830 & 3880 controller could connect to four different channels, a "string" of disks connected to two different controllers allowed disk connectivity attachment for up to eight different channels/processors).

I remember 3380s coming in A & B boxes ... where (I think) "A" boxes were "head of string" and contained the logic allowing attachment to multiple controllers (an "A" box and three "B" boxes for 16 drives in a string?).

The I/O architecture had misc problems with string switches. Standard architecture provided for channel-busy, controller-busy, and device-busy indications. String-switch wasn't a resource concept provided for in the i/o architecture ... so string-switch effectively used multiple device-busies to simulate a string-switch busy.

Data transfers & things like search operations would "busy" shared resources (i.e. channel, controller, string-switch). Any device in a string doing something like a multi-track search (worst case about 1/3 sec. for a 3330) would cause the related string-switch to be busy and all other devices (i.e. up to 15) on the same string to be unavailable (regardless of additional controllers or channels).

random refs (alta vista)
http://www.storagetek.com/State_of_Texas/item_code_non_93921/3370_307.html
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Reason Japanese cars are assembled in the US (was Re: American bigotry)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Reason Japanese cars are assembled in the US (was Re: American bigotry)
Newsgroups: alt.folklore.military,rec.autos.driving,alt.autos.honda,rec.autos.makers.honda
Date: Mon, 06 Nov 2000 23:49:41 GMT
"edward_ohare" writes:
The Chrysler LH cars hit the street 22 months after they decided to build them, and that was for a car which shared few parst with previous designs.

During the trade restrictions, Ford made heavy investment in front wheel drive, converting all their high volume lines from rear to front drive. So did GM. Toyota in particular lagged behind in moving to front wheel drive.

There is no doubt the US was at a competitive disadvantage, but not all this can be blamed on the auto industry. The US has for 55 years incurred the expense of protecting Japan's sources of oil and ores needed for its industry, at no cost to Japan except for the indignity of being treated like a protectorate.

What if a bunch of the engineers who worked for military contractors were instead working for US auto makers?


Must have been having a senior moment, the wash post article on the 100% unearned profits tax had to have been early '80s not late '70s.

I only saw some of the C4 stuff at GM and didn't really see the other manufacturers. However, with a seven year cycle, you would tend to have 2-3 concurrent overlapping efforts with a staggered offset of 2-4 years so new products were coming to market on a more frequent basis. Also, early in a cycle, you might have two or more competing teams since predictions out seven years could be open to interpretation and you would want to cover contingencies. Moving from a 7 year cycle to a 3 year cycle might result in an 80% reduction in the number of required engineers (competitive forces w/military for the top 10-50 engineers in the country could still be a factor tho).

I do know that i bought a malibu for over $5k in '76 and it was around $13k something like 8 years later (and that represented closer to the avg. price of a US car sold). Of course, inflation accounted for some of the change but I think that during this period, finance periods increased from 3 years to 5 years (a possible indication of prices increasing faster than wages; it also would have helped precipitate the 5 year warranties that started to show up in the early '80s).

C4 addressed radical changes in the end-to-end process ... not so much what was being done ... but how they went about doing it (as well as the cost of doing it). Drastic reductions in cost/time also implied reductions in the number of people. I would expect that just the quantity of engineers would tend to result in longer cycles, a larger number of different models and fewer common components.

One example given for C4 was corvette design, because the skin/surface done by the designers tended to have tight volume tolerances. The seven year cycle resulted in several re-engineering & re-design activities because of minor modifications of basic components done by other divisions and/or suppliers (a change in the delco battery during the cycle required modifications to the exterior surface).

With respect to foreign exchange in one of the following refs, I remember being in tokyo when the exchange rate was 300/$ and have since seen it drop below 100/$ (the relative cost of their products in the US market increased by a factor of 3 over a period of 25-30 years).

In some of the following there are references to a benefit to Chrysler of the purchase of AMC, because AMC had already started takeup of Honda/Japanese manufacturing techniques.

misc. refs (many have since gone 404, so replaced with wayback machine URLs):
https://web.archive.org/web/20010204034500/http://www.media.chrysler.com/wwwhome/hist.htm
https://web.archive.org/web/20010222205419/http://www.inu.net/davidstua/chrysler_case_part1.htm
https://web.archive.org/web/20010217003458/http://www.michiganinbrief.org/text/appendix/append-J.htm
http://mrtraffic.com/millennium.htm
https://web.archive.org/web/20000829050939/http://www.sbaer.uca.edu/docs/proceedingsII/97sma228.txt
https://web.archive.org/web/20010222203823/http://www.rindsig.dk/viden/3-sem/Rising-sun.html
https://web.archive.org/web/20010104022000/http://www.iitf.nist.gov/documents/committee/cat/breaking_barriers_conf.html

from the above iitf/nist ref:
General Motors took a different tack by replacing the dissimilar hardware and software that existed throughout the corporation and creating an integrated system known as C4 (computer-aided design, computer-aided manufacturing, computer-integrated manufacturing and computer-aided engineering). As part of its C4 program, GM linked the design, manufacturing and assembly teams that were previously unable to communicate with each other. GM made the decision that although its legacy systems represented a sizable capital investment, it was important that the entire manufacturing process be overhauled to ensure interoperability and interconnectivity among all the players on the new network, including suppliers.


http://www-personal.umich.edu/~afuah/cases/case3.html

misc pieces from the above
The Rise of Foreign Competition (1970-1980)

By the early 1970s American automakers were facing strong global competition both at home and abroad. Japanese automakers in particular made a significant impact on the industry by introducing smaller, less expensive, and more fuel-efficient cars to the American market. This coincided with the oil crisis, which resulted in higher gasoline prices and a shift in consumer tastes toward greater fuel efficiency. Other advantages of the Japanese automakers resulted from their use of just-in-time (JIT) inventory controls, modern manufacturing techniques, and quality control and management practices.

By 1980, American carmakers had lost about one-fourth of the fastest growing segment of the domestic market - small cars - to Japanese producers. In response to pressure from the United States government, Japanese automobile producers implemented voluntary export restraints (VER) on their auto exports to United States. This VER agreement limited Japanese imports to not more than 1.68 million vehicles per year.

Restructuring of American Automobile Production (1980-1990)

While under the limitation of VER in the early 1980s, Japanese (and other) automobile companies shifted their strategies to foreign direct investment, setting up new facilities to produce cars locally in United States. Leaders were Honda, Mazda, Nissan, and Toyota, which collectively invested $5.3 billion in North American-based automobile assembly plants between 1982 and 1991.2 This was viewed as a response to VER - the Japanese automobile firms wanted to circumvent the threat of protectionist trade legislation. However, it was also a response to higher production costs at home and the sharp rise in the value of the Japanese Yen against the U.S. dollar during 1987, which dramatically increased the cost of exporting both automobiles and component parts from Japan to other markets.

To halt further erosion of their position, protect their remaining share of the domestic market, and prepare for competition in the global economy, the American automakers implemented programs to duplicate the world's best manufacturing practices. This included efforts to apply Japanese-style manufacturing practices such as JIT inventory control and leaner production systems to reduce costs. Yet, there were still important differences.


--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Al Gore and the Internet (Part 2 of 2)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet (Part 2 of 2)
Newsgroups: comp.society.futures,alt.folklore.computers
Date: Tue, 07 Nov 2000 04:05:25 GMT
rh120@konichiwa.cc.columbia.edu (Ronda Hauben) writes:
The NREN initiative was being discussed in the early 1990's. It claimed it would be support for a research and education networking inititive.

That initiative somehow disappeared, and instead the NSFNET (the backbone of the Internet in the US) was given to private interests.

A major change in Internet policy was made without any public discussion of why this would be desirable. And it was done at a time when there was officially the claim there would be support for a research and education network.

The only public discussion that seems to have been held about this happening was the online NTIA conference held by the U.S. Dept of Commerce in November 1994. During this conference there were many people explaining why it was not appropriate to privatize the public US backbone to the Internet.


note that the NSFNET1 & NSFNET2 contracts were for very specific $$$ and point-to-point links between a relatively small number of locations. By the early 90s, the non-NSFNET portions of the internet were a couple hundred times larger than the NSFNET portions. Based on principles like efficiency of scale, COTS, etc ... it would have made sense to transition the relatively modest NSFNET pieces to the commercial internet.

Also, there was an extensive drive in the early '90s to get programs off the gov. dole, extensive incentive to get gov. contractors to find commercial funding/outlets for large portions of their activities, migration to COTS offerings and other initiatives (ARPA's TRP technology reinvestment program, startup funding to migrate all possible gov. developed technologies into the commercial sector; NIST ATP ... advanced technology program; NASA AITP, etc)

NSFNET was in large part there to demonstrate the feasibility of a high-speed service offering. Once that was demonstrated, it would have been consistent with all the other technology activities of the period to transition to COTS/commercial offerings.

Other instances were a number of conferences in the early to mid-90s where the national labs were attempting to "sell" a lot of technology to the medical industry.

There used to be lots of technology reuse program references on the web; a current alta-vista search turns up
http://www.stanford.edu/group/scip/sirp/US-NII-slides.html

"ARPA TRP" turns up a few more ... sample
http://www.eng.auburn.edu/~ding/emc/emc.html
https://web.archive.org/web/20010222212533/http://www.eng.auburn.edu/~ding/emc/emc.html
http://sebulba.mindtel.com/21698/nasa/ames.html
https://web.archive.org/web/20010225040040/www.quasar.org/21698/nasa/ames.html
http://www.uscar.org/pngv/progplan/9_0.htm
https://web.archive.org/web/20010217102712/http://www.uscar.org/pngv/progplan/9_0.htm
http://me.mit.edu/groups/lmp/industry.htm
http://logic.stanford.edu/cit/commercenet.html

including
http://nii.nist.gov/cat/tp/t940707.html
https://web.archive.org/web/20010222211057/http://nii.nist.gov/cat/tp/t940707.html

the following from the above
Information Infrastructure Project (IIP)

Brian explained that the IIP is part of the Kennedy School's Science, Technology and Public Policy Program, which is directed by Lewis Branscomb. IIP began in 1990 addressing issues in the commercialization of the Internet and the development of the NREN. It now encompasses a wide variety issues related to the NII.

IIP addresses issues by bringing together experts in an interdisciplinary setting. Strategies to protect intellectual property, industrial extension networking, CALS, public access to the Internet, and standards are some of the topics that have been addressed. Brian's group publishes quarterly, a two-volume Information Infrastructure Sourcebook. The project also runs the Information Infrastructure Forum, which is hosted in Washington by the Annenberg Washington Program. This year they have held fora on competition policy, interoperability, and enabling interconnection -- and next, long term economic issues.


for "NIST ATP", alta vista turns up maximum number (20pages/200)

federal technology transfer web site:
http://www.federallabs.org/

technology transfer legislative history web site (going back to 1980)
http://www.dtic.mil/techtransit/refroom/laws/

some of the entries from above
Small Business Technology Transfer (STTR) Program 1992 (PL 102-564)

Established a 3 year pilot program - Small Business Technology Transfer (STTR), at DoD, DoE, HHS, NASA, and NSF. Directed the Small Business Administration (SBA) to oversee and coordinate the implementation of the STTR Program. Designed the STTR similar to the Small Business Innovation Research SBIR program. Required each of the five agencies to fund cooperative R&D projects involving a small company and a researcher at a university, federally-funded research and development center, or nonprofit research center.

National Department of Defense Authorization Act for 1993 (PL 102-25)

Facilitated and encouraged technology transfer to small businesses.

National Department of Defense Authorization Act for FY 1993 (PL 102-484)

Established the DoD Office of Technology Transition Extended the streamlining of small business technology transfer procedures for non-federal laboratory contractors. Directed DoE to issue guidelines to facilitate technology transfer to small businesses. Extended the potential for CRADAs to some DoD-funded Federally Funded Research and Development Centers (FFRDCs) not owned by the government.

National Department of Defense Authorization Act for 1994 (PL 103-160)

Broadened the definition of a laboratory to include weapons production facilities of the DoE.

National Technology Transfer and Advancement Act of 1995 (PL 104-113) [also known as the "Morella Act"]


--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Al Gore and the Internet (Part 2 of 2)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet (Part 2 of 2)
Newsgroups: comp.society.futures,alt.folklore.computers
Date: Tue, 07 Nov 2000 15:45:19 GMT
Anne & Lynn Wheeler writes:
http://www.federallabs.org/

technology transfer legislative history web site (going back to 1980)

http://www.dtic.mil/techtransit/refroom/laws/


misc entries from the federal technology transfer web site ... with respect to the organization:

The Federal Laboratory Consortium for Technology Transfer (FLC) was organized in 1974 and formally chartered by the Federal Technology Transfer Act of 1986 to promote and to strengthen technology transfer nationwide. Today, more than 700 major federal laboratories and centers and their parent departments and agencies are FLC members.

sample listing of technology transfer success stories is at:
http://www.federallabs.org/servlet/LinkAreaFramesetServlet?LnArID=SuccessStories&LnArRegion=National

one of the "stories" titled "The Great Government Giveaway"
http://www.businessweek.com/smallbiz/0006/te3685116.htm
https://web.archive.org/web/20010109054800/http://www.businessweek.com/smallbiz/0006/te3685116.htm

and a couple more entries from the legislative history web site ...
Stevenson-Wydler Technology Innovation Act of 1980 (PL 96-480)[15 USC 3701-3714]

Focused on dissemination of information. Required Federal Laboratories to take an active role in technical cooperation. Established Offices of Research and Technology Application at major federal laboratories. Established the Center for the Utilization of Federal Technology (in the National Technical Information Service).

Bayh-Dole Act of 1980 (PL 96-517)

Permitted universities, not-for-profits, and small businesses to obtain title to inventions developed with governmental support. Provided early on intellectual property rights protection of invention descriptions from public dissemination and FOIA. Allowed government-owned, government-operated (GOCO) laboratories to grant exclusive licenses to patents.

Federal Technology Transfer Act of 1986 (PL 99-502)

Made technology transfer a responsibility of all federal laboratory scientists and engineers. Mandated that technology transfer responsibility be considered in employee performance evaluations. Established principle of royalty sharing for federal inventors (15% minimum) and set up a reward system for other innovators. Legislated a charter for Federal Laboratory Consortium for Technology Transfer and provided a funding mechanism for that organization to carry out its work. Provided specific requirements, incentives and authorities for the Federal Laboratories. Empowered each agency to give the director of GOCO laboratories authority to enter into cooperative R&D agreements and negotiate licensing agreements with streamlined headquarters review. Allowed laboratories to make advance agreements with large and small companies on title and license to inventions resulting from Cooperative R&D Agreements (CRDAs) with government laboratories. Allowed Directors of GOGO laboratories to negotiate licensing agreements for inventions made at their laboratories. Provided for exchanging GOGO laboratory personnel, services, and equipment with their research partners. Made it possible to grant and waive rights to GOGO laboratory inventions and intellectual property. Allowed current and former federal employees to participate in commercial development, to the extent there is no conflict of interest.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet (Part 2 of 2)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet (Part 2 of 2)
Newsgroups: comp.society.futures,alt.folklore.computers
Date: Tue, 07 Nov 2000 16:05:20 GMT
Anne & Lynn Wheeler writes:
including
http://nii.nist.gov/cat/tp/t940707.html
https://web.archive.org/web/20010222211057/http://nii.nist.gov/cat/tp/t940707.html

the following from the above

Information Infrastructure Project (IIP)


btw, the IIP web site:
http://ksgwww.harvard.edu/iip/
https://web.archive.org/web/20010124080800/http://ksgwww.harvard.edu/iip/

another organization at harvard that has participated &/or assisted in studies is the Program on Information Resources Policy
http://pirp.harvard.edu/
https://web.archive.org/web/20010202091600/http://pirp.harvard.edu/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet (Part 2 of 2)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet (Part 2 of 2)
Newsgroups: comp.society.futures,alt.folklore.computers
Date: Tue, 07 Nov 2000 16:55:00 GMT
Anne & Lynn Wheeler writes:
note that the NSFNET1 & NSFNET2 contracts were for very specific $$$ and point-to-point links between a relatively small number of locations. By the early 90s, the non-NSFNET portions of the internet was couple hundred times larger than the NSFNET portions. Based on principles like efficiency of scale, COTS, etc ... it would have made sense to transition the relative modest NSFNET pieces to the commercial internet.

a list of the (16) NSFNET backbone sites (as of jan. 1992) can be found at:
https://www.garlic.com/~lynn/2000d.html#73

by comparison, a (simple grep) list of just the com domains from Oct. 1990 domain list
https://www.garlic.com/~lynn/2000e.html#20

it only picks up some of the non-us "coms" and doesn't pick up edu, org, net, etc domains ... but it still lists 1742 com domains.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Famous Machines and Software that didn't

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Famous Machines and Software that didn't
Newsgroups: alt.folklore.computers
Date: Tue, 07 Nov 2000 22:08:56 GMT
"Charlie Gibbs" writes:
"It's a good thing the iAPX432 never caught on - otherwise a truly horrible Intel architecture might have taken over the world."

I was unpacking some stuff last week that had been in storage for the past year or two and ran across:

Introduction to the iAPX 432 Architecture (171821-001), copyright 1981, Intel
iAPX 432 Object Primer (171858-001, Rev. B)
iAPX 432 Interface Processor Architecture Reference Manual (171863-001)

out of the introduction
The B5000 architecture had the right approach; it attempted to raise the level of the archtecture using the best available programming methodology, (c. 1960), which largely reduced to "use Algol", and the architecture supported Algol very effectively. But in the 1970s and 1980s problems have arisen for which Algol and the programming methodology of the early 1960s offer no solution.

These problems have led other manufactuers, whose earlier computers had more conventional architectures, to recognize the wisdom of raising the level of the hardware-software interface. Consider, for example, the IBM System 38, IBM's most recent architecture. Not only have the designers of the System 38 followed the Burroughs approach to architecture support for high-level languages, they have also included most of the operating system in the hardware as well. It seems inevitable that the fundamental problems facing the computer industry will force more and more manufactuers to take this approach.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet (Part 2 of 2)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet (Part 2 of 2)
Newsgroups: alt.folklore.computers
Date: Fri, 10 Nov 2000 15:42:48 GMT
Brian Inglis writes:
I live in Silicon Valley, and drive^h^h^h^h^h^h creep/park on a freeway system which was never privatized. It is hard to imagine where the internet would be today if a government model similar to their model that "5 mph is a perfectly reasonable average speed on a freeway" had prevailed. (The other brilliant government model which I appreciate dearly in this area is the model that it makes sense for people to spend an average of $20/whatever worth of their time every day in a bridge toll line, so that the government can collect its $2 toll.)
...
Wally Bass


for some reason, wally's post still hasn't shown up at the news site here (>12 hrs after Brian's has shown up). Anyway, hi wally, long time, no see.

In any case, I believe that it was the government/PUCC that was telling the phone company that it wasn't necessary to upgrade the central office that served the cupertino area ... since nobody really wanted things like high-speed internet access (i.e. it wouldn't be allowed a rate increase to cover the cost of the equipment upgrade since the existing central office equipment was good for another 20 years and nobody really cared about high-speed internet access).

As pointed out, people that really wanted high-speed internet access could get it with the existing central office equipment ... but instead of 512kbit access costing $49/month it would only cost something like $1200/month (a silly 24 times more; after all, it's just money).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet (Part 2 of 2)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet (Part 2 of 2)
Newsgroups: alt.folklore.computers
Date: Sat, 11 Nov 2000 16:14:24 GMT
rh120@aloha.cc.columbia.edu (Ronda Hauben) writes:
The NSFNET was public in Michigan where I got on in 1988 as it was being paid for by US federal government and state of Michigan funds.

By 1992 I was able to use the MERIT connection from Michigan via the NSFNET to get access to the Cleveland Free-Net. The Cleveland Free-Net was a free access point to Usenet, to Internet mailing lists, etc.


i think that there is an implied assumption that the funds provided by the federal & state governments fully covered the cost of the NSFNET backbone (or even a majority of NSFNET backbone costs) &/or that the NSFNET backbone economic situation was stable over a period longer than the duration of the specific NSFNET contracts (and/or was ever intended to be).

Given a high-speed technology service demonstration of known duration, it is highly likely that commercial entities donated the majority of the resources to make the NSFNET demonstrations a success. A possible motivation for this strategy was creating a demand for bandwidth that would later migrate to the real-world environment.

In the mid-80s there were enormous amounts of dark fiber, putting the telecommunication industry in a real chicken & egg bind; bandwidth intensive applications wouldn't be developed w/o lowered tariffs, but it would take bandwidth intensive applications several years to appear, mature, and propagate into the market place. Across the board reduction of tariffs by factors of 5-10 times w/o a corresponding increase in use would put the industry at severe financial risk.

A solution was a controlled incubator environment (like NSFNET and supercomputing centers) where embryonic high bandwidth applications could emerge and acquire some maturity. Once out of the embryonic stage, there would be migration from the incubator to the real world (while maintaining an economically viable telecommunication industry).

The relationship between commercial contributions and gov. funding is possibly clearer in the state of cal. situation where pacbell created an internet program for educational and non-profit institutions. Rather than doing a special tariff for non-profit internet services ... they tariffed the non-profit internet connections at the going rate and then provided a grant program that non-profit institutions could apply to and receive funding from to cover the costs of their internet connectivity (and it was probably also easier to provide supporting documentation with regard to non-profit tax deductions).

As a result, I would claim that there was much less economic confusion surrounding the nature of free services.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet (Part 2 of 2)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet (Part 2 of 2)
Newsgroups: alt.folklore.computers
Date: Sat, 11 Nov 2000 16:39:39 GMT
Anne & Lynn Wheeler writes:
A solution was a controlled incubator environment (like NSFNET and supercomputing centers) where embryonic high bandwidth applications could emerge and acquire some maturity. Once out of the embryonic stage, there would be migration from the incubator to the real world (while maintaining an economically viable telecommunication industry).

the original NSFNET contract was signed Nov. 1987 ... ref:
https://www.garlic.com/~lynn/2000e.html#10

by 1992, the number of NSFNET sites was only up to 16 ... ref:
https://www.garlic.com/~lynn/2000d.html#73

the people with direct access to NSFNET would have numbered in the tens of thousands at best (i.e. much less than the sum of all possible individuals at each of the 16 institutions).

In general, gov grants (like NSF) are to aid in technology research and development and not usually to create government service bureaucracies (especially ones that would be in competition with commercial entities).

misc. other history
https://www.garlic.com/~lynn/2000d.html#72

from the above reference:
In 1987, BITNET and CSNET merged to form the Corporation for Research and Educational Networking (CREN). A key feature of CREN and its predecessors is that they were entirely dependent on voluntary user fees; BITNET from the beginning and CSNET after the expiration of its initial five year NSF grant.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

TSS ancient history, was X86 ultimate CISC? designs)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSS ancient history, was X86 ultimate CISC?  designs)
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.super
Date: Wed, 15 Nov 2000 23:54:50 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
In article <QgxQ5.9395$O5.224967@news.itd.umich.edu>, sarr@engin.umich.edu (Sarr J. Blumson) writes:
|> In article <8utl2n$678$1@pegasus.csx.cam.ac.uk>,
|> Nick Maclaren wrote:
|> >
|> >Given all the extra information, I am still not sure where to
|> >categorise the 370/67's virtual memory systems on the scale of
|> >research, experimental, prototype and production :-)
|>
|> It was sold to real customers, who were asked to pay money for it.

Which is unusual for research software (though not by any means unknown), but common for all of the others :-(

Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QG, England.
Email: nmm1@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679


a majority of the 360/67 machines probably ran CP/67 in production, and a few others ran MTS (michigan terminal system). There were also at least two service bureaus formed in the late '60s that offered commercial time-sharing services using modified versions of CP/67 running on 360/67s.

Lots of production work was done with CP/67 by a wide variety of different kinds of customers ...

misc ref:
https://www.garlic.com/~lynn/2000.html#1

other random refs:
https://www.garlic.com/~lynn/2000f.html#6
https://www.garlic.com/~lynn/2000e.html#0
https://www.garlic.com/~lynn/2000e.html#15
https://www.garlic.com/~lynn/2000e.html#16
https://www.garlic.com/~lynn/2000d.html#30

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
Newsgroups: alt.folklore.computers
Date: Thu, 16 Nov 2000 00:01:44 GMT
lwinson@bbs.cpcn.com (lwin) writes:
The IBM histories suggest that certain functions, such as dynamic address translation, were deemed by the designers to not be cost effective in the overall design. IBM was looking more into batch processing, while MIT was into time sharing.

I think, given the state of the art of software and hardware at the time (c1962) IBM was correct. Time sharing would require considerable horsepower that wasn't available at the time. Considering the effort to just get S/360 out the door that was enough.


see melinda's paper, which has a lot of history information about what was bid on multics as well as the associated 360/67 stuff
https://www.leeandmelindavarian.com/Melinda#VMHist

some extracts from melinda's paper ...
https://www.garlic.com/~lynn/99.html#126
https://www.garlic.com/~lynn/99.html#127

also misc. other references (also refs from reply to tss posting)
https://www.garlic.com/~lynn/2000.html#1
https://www.garlic.com/~lynn/2000f.html#6
https://www.garlic.com/~lynn/2000e.html#0
https://www.garlic.com/~lynn/2000e.html#15
https://www.garlic.com/~lynn/2000e.html#16
https://www.garlic.com/~lynn/2000d.html#30

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key https://www.garlic.com/~lynn/

360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
Newsgroups: alt.folklore.computers
Date: Thu, 16 Nov 2000 00:33:47 GMT
lwinson@bbs.cpcn.com (lwin) writes:
The IBM histories suggest that certain functions, such as dynamic address translation, were deemed by the designers to not be cost effective in the overall design. IBM was looking more into batch processing, while MIT was into time sharing.

I think, given the state of the art of software and hardware at the time (c1962) IBM was correct. Time sharing would require considerable horsepower that wasn't available at the time. Considering the effort to just get S/360 out the door that was enough.


also note that while the 360/67 cp/67 install base was a tiny fraction of the total 360 install base ... it was still larger than the multics install base. it isn't so much that the 360/67 cp/67 effort wasn't as large as or much larger than other time-sharing efforts ... it is more a case that the marketplace appetite for 360 batch services just dwarfed everything else ... i.e. nuts & bolts data processing, payroll, check processing, statements, inventory, etc ... regularly scheduled production operations that don't have interactive requirements.

In fact, one relatively recent observation was that the two things considered primarily responsible for 100% uptime of a critical financial service since approx. 1992 were:

ims hot standby (i.e. a fault fall-over technology)
automated operator (getting the human element totally out of the loop)

random refs:
https://www.garlic.com/~lynn/99.html#71
https://www.garlic.com/~lynn/94.html#2
https://www.garlic.com/~lynn/99.html#107
https://www.garlic.com/~lynn/99.html#136a

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive designs)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive designs)
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.super
Date: Thu, 16 Nov 2000 10:55:28 GMT
jot@visi.com (J. Otto Tennant) writes:
My memory is hazy, but I think that the 370/138 and 370/148 were the first 370's with VM. Customers who bought 370/155 or 370/165 machines were mildly annoyed (to say the least) to discover the price of the VM upgrade.

the machines were the 370/135, 370/145, 370/155, and 370/165. The 135 & 145 had support for virtual memory as shipped (but a microcode load was required to enable it). The 155 & 165 didn't, and a hardware upgrade was required.

There was also a battle over implementing the full 370 relocation architecture. The 165 engineers said that it would take an extra six months to do the full 370 relocation architecture hardware for the 165 ... and so there was eventually a decision made to drop various things from the 370 virtual memory relocation architecture in order to speed the hardware retrofit package out the door.

The 135/145 then had to have an update to the microcode load that corresponded to the 370 relocation architecture hardware subset that was being supported by the 155 & 165.

138 & 148 were later enhancements to the 135 & 145 (which somewhat correspond to the 158 & 168 enhancements).

The 138/148 also introduced the concept of microcode operating system performance assists for VS1 & VM/370. Basically the 138/148 engines were simulating 370 instructions at about a 10:1 emulation ratio. Much of the operating system code could be moved directly to microcode at one microcode instruction for every 370 instruction (giving a 10:1 performance boost). There were also some special cases that saw much larger than 10:1 ... typically associated with not having to save/restore 370 registers for a subfunction (i.e. microcode could do it using its own domain, bypassing the traditional save/restore function call paradigm, if it was all within the same 370 domain).

misc. url
https://www.garlic.com/~lynn/2000.html#12
https://www.garlic.com/~lynn/94.html#21

there was also a pentagon-papers-like event ... before the relocation announcement, a copy of a document that contained the description of the new relocate hardware features was leaked & made it to the press. after a lengthy investigation ... eventually all the copier machines in the company were retrofitted with a little serial number under the glass that would show up on all copies made with that machine.

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key https://www.garlic.com/~lynn/

TSS ancient history, was X86 ultimate CISC? designs)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSS ancient history, was X86 ultimate CISC? designs)
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.super
Date: Thu, 16 Nov 2000 11:19:27 GMT
sarr@engin.umich.edu (Sarr J. Blumson) writes:
In article <8utl2n$678$1@pegasus.csx.cam.ac.uk>, Nick Maclaren wrote:

Given all the extra information, I am still not sure where to categorise the 370/67's virtual memory systems on the scale of research, experimental, prototype and production :-)

It was sold to real customers, who were asked to pay money for it.


and while TSS/360 may not have been considered an outstanding commercial success, it wasn't because it was treated as a R&D effort ... there were significant resources poured into it ... i've heard that mohansic lab may have peaked at between 1000-1200 people working on TSS/360 (this may have been a precursor to FS which had even more resources ... and never even made it to announcement).

The aggregate of all the related 360/67 time-sharing activities (tss/360, cp/67, etc ) may not seem significant compared to the 360 batch market which so dwarfed them ... and yet at the same time .. the total resources poured into those "insignificant" time-sharing activities were possibly as large as the aggregate of all other resources going into time-sharing activities going on at the time (independent of the issue of the effectiveness of those resources).

random refs:
https://www.garlic.com/~lynn/2000.html#64
https://www.garlic.com/~lynn/2000f.html#18
https://www.garlic.com/~lynn/99.html#2
https://www.garlic.com/~lynn/99.html#64
https://www.garlic.com/~lynn/94.html#46
https://www.garlic.com/~lynn/94.html#53
https://www.garlic.com/~lynn/95.html#1
https://www.garlic.com/~lynn/98.html#11
https://www.garlic.com/~lynn/98.html#12

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive designs)

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive  designs)
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.super
Date: Fri, 17 Nov 2000 00:59:26 GMT
James Cownie writes:
IIRC this microcode trick was also used for the "APL assist" where parts of the APL interpreter were dropped into the 148 microcode with corresponding large performance improvements.

this was originally done on the 145 for the APL assist ... while the guys that did the actual microcode assist were in endicott ... the people that helped with the analysis of what kernel functions to drop into microcode were the palo alto microcode group that was responsible for implementing the APL assist (first on the 145).

they did a special microcode load ... which implemented in microcode a periodic sampling event that would wake up at regular intervals and check the PSW. A counter was incremented based on what it observed as to the PSW activity. The table of where the CPU was spending its time was used to complement the work that is referenced below.

In the following reference ...
https://www.garlic.com/~lynn/94.html#21

the person that wrote the 145 microcode for the APL assist and the above mentioned microcode address sampler was in the same group as the person that I worked with to do the analysis in the indicated reference. i.e. two methods were used to do the kernel analysis: one was a microcode address sampler and the other was software changes that hooked into the inter-module call routine, creating a time-stamped record whenever the inter-module call function was invoked ... giving the current time value (64-bit 370 tod clock), call-from/return address, and call-to address. The above reference gives the data reduction from the software inter-module call analysis.
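
as a rough illustration of the second (software) method only ... the record layout and names below are hypothetical, not the actual CP/67 hook or data-reduction code; it just shows the idea of appending a TOD-stamped from/to record on each inter-module call and then reducing the trace afterwards:

/* Sketch of the software inter-module call trace described above.
 * All names and the record layout are hypothetical illustrations,
 * not the actual CP/67 hook or data-reduction code. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint64_t tod;       /* 64-bit 370 TOD-clock value at the call */
    uint32_t call_from; /* caller/return address */
    uint32_t call_to;   /* called routine entry address */
} call_record;

#define MAX_RECORDS 4096
static call_record trace[MAX_RECORDS];
static int n_records;

/* hook invoked by the (hypothetical) inter-module call routine */
static void trace_call(uint64_t tod, uint32_t from, uint32_t to)
{
    if (n_records < MAX_RECORDS)
        trace[n_records++] = (call_record){ tod, from, to };
}

/* data reduction: charge the interval up to the next record
 * to the routine that was called */
static void reduce(void)
{
    for (int i = 0; i + 1 < n_records; i++) {
        uint64_t delta = trace[i + 1].tod - trace[i].tod;
        printf("routine %08x: %llu TOD units (called from %08x)\n",
               (unsigned)trace[i].call_to, (unsigned long long)delta,
               (unsigned)trace[i].call_from);
    }
}

int main(void)
{
    /* a few fabricated sample records just to exercise the reduction */
    trace_call(1000, 0x00021000, 0x00030000);
    trace_call(1400, 0x00030200, 0x00044000);
    trace_call(2100, 0x00021080, 0x00030000);
    reduce();
    return 0;
}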

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key https://www.garlic.com/~lynn/

360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
Newsgroups: alt.folklore.computers
Date: Fri, 17 Nov 2000 16:17:50 GMT
lwinson@bbs.cpcn.com (lwin) writes:
What I'm trying to say was that with S/360, IBM _was_ making a major change just with that. Adding time sharing on top of that would have been too much--adding too much cost for too few users.

None the less, the basic architecture design of S/360 proved to be quite adaptable for future work.


the tss/360 effort was a very large effort independent of the BPS, TOS, DOS, PCP, MFT, MVT, etc batch oriented efforts for the 360.

Many of the batch efforts were also significantly independent (big difference between the DOS & PCP efforts).

some of them had better commercial success than others.

the migration from existing commercial batch operations (things like payroll) tended to be a lot more straight-forward (compared to the existing batch, commercial market take-up of a new paradigm) ... and also tended to have a significant amount of business process value dependency associated with it (frequently much easier to see the downside of not getting out payroll on time).

As a student, I had done a HASP hack on MVT18 that supported 2741 & tty terminals with a CMS editor syntax (which can be traced back to CTSS) for CRJE functions (i.e. conversational remote job entry ... i.e. editing simulated card decks that would be submitted as a file to the batch processor ... as if the file had been created by reading cards).

IBM did go thru various CRJE-like offerings ... including TSO showing up late in the MVT product cycle.

I would claim that it wasn't that IBM didn't offer time-sharing in addition to batch ... but various other factors shaped the market:

1) things were still maturing and numerous factors constrained offering batch and time-sharing as a single package (it was even difficult to offer a single batch packaged offering ... witness 360 DOS & PCP and their descendants still around today).

2) the majority of the existing market was commercial batch dataprocessing and an upgrade to a batch paradigm was more straightforward for the majority of the market than a paradigm switch (for example, lots of ibsys cobol to 360 cobol translation went on).

3) online, interactive use can be viewed as quite antithetical to a number of the commercial batch business processes ... where there is a strong motivation to eliminate all human-related vagaries from consistent delivery of the functions supported.

I've even posted a number of times

1) that the various "interactive" derived platforms tend to have an implicit design point that if something goes wrong ... you send a message to the person ... and the person figures out what to do (or maybe just throws it away totally, because people wouldn't be able to figure it out).

2) 7x24 operations (including the emerging web server industry) require an implicit design point that a person doesn't exist ... and that all deviations have to be trappable programmatically at several levels: default system, application, etc. This is a design point that batch oriented services have addressed significantly better than the interactive-derived platforms.

3) a simple example: I encountered a number of instances during the late '80s and early '90s of not being able to trap various ICMP packets at the application level and handle them appropriately ... something that was being done in the '70s in a totally different environment providing 7x24 online, unattended services.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
Newsgroups: alt.folklore.computers
Date: Fri, 17 Nov 2000 23:07:49 GMT
John Ferrell writes:
There must have been some DAT development work done with the 360/40. I have seen notes in the logics referring to it.

see melinda's
https://www.leeandmelindavarian.com/Melinda#VMHist

various extracts
CP-40 and CMS

In the Fall of 1964, the folks in Cambridge suddenly found themselves in the position of having to cast about for something to do next. A few months earlier, before Project MAC was lost to GE, they had been expecting to be in the center of IBM's time-sharing activities. Now, inside IBM, "time-sharing" meant TSS, and that was being developed in New York State. However, Rasmussen was very dubious about the prospects for TSS and knew that IBM must have a credible time-sharing system for the S/360. He decided to go ahead with his plan to build a time-sharing system, with Bob Creasy leading what became known as the CP-40 Project. The official objectives of the CP-40 Project were the following:

1. The development of means for obtaining data on the operational characteristics of both systems and application programs;

2. The analysis of this data with a view toward more efficient machine structures and programming techniques, particularly for use in interactive systems;

3. The provision of a multiple-console computer system for the Center's computing requirements; and

4. The investigation of the use of associative memories in the control of multi-user systems. 22

The project's real purpose was to build a time-sharing system, but the other objectives were genuine, too, and they were always emphasized in order to disguise the project's "counter-strategic" aspects. Rasmussen consistently portrayed CP-40 as a research project to "help the troops in Poughkeepsie" by studying the behavior of programs and systems in a virtual memory environment. In fact, for some members of the CP-40 team, this was the most interesting part of the project, because they were concerned about the unknowns in the path IBM was taking. TSS was to be a virtual memory system, but not much was really known about virtual memory systems. Les Comeau has written: Since the early time-sharing experiments used base and limit registers for relocation, they had to roll in and roll out entire programs when switching users....Virtual memory, with its paging technique, was expected to reduce significantly the time spent waiting for an exchange of user programs.


...
Creasy and Comeau were soon joined on the CP-40 Project by Dick Bayles, from the MIT Computation Center, and Bob Adair, from MITRE. Together, they began implementing the CP-40 Control Program, which sounds familiar to anyone familiar with today's CP. Although there were a fixed number (14) of virtual machines with a fixed virtual memory size (256K), the Control Program managed and isolated those virtual machines in much the way it does today. 28 The Control Program partitioned the real disks into minidisks and controlled virtual machine access to the disks by doing CCW translation. Unit record I/O was handled in a spool-like fashion. Familiar CP console functions were also provided.

This system could have been implemented on a 360/67, had there been one available, but the Blaauw Box wasn't really a measurement tool. Even before the design for CP-40 was hit upon, Les Comeau had been thinking about a design for an address translator that would give them the information they needed for the sort of research they were planning. He was intrigued by what he had read about the associative memories that had been built by Rex Seeber and Bruce Lindquist in Poughkeepsie, so he went to see Seeber with his design for the "Cambridge Address Translator" (the "CAT Box"), which was based on the use of associative memory and had "lots of bits" for recording various states of the paging system. Seeber liked the idea, so Rasmussen found the money to pay for the transistors and engineers and microcoders that were needed, and Seeber and Lindquist implemented Comeau's translator on a S/360 Model 40.

Comeau has written:

Virtual memory on the 360/40 was achieved by placing a 64-word associative array between the CPU address generation circuits and the memory addressing logic. The array was activated via mode-switch logic in the PSW and was turned off whenever a hardware interrupt occurred. The 64 words were designed to give us a relocate mechanism for each 4K bytes of our 256K-byte memory. Relocation was achieved by loading a user number into the search argument register of the associative array, turning on relocate mode, and presenting a CPU address. The match with user number and address would result in a word selected in the associative array. The position of the word (0-63) would yield the high-order 6 bits of a memory address. Because of a rather loose cycle time, this was accomplished on the 360/40 with no degradation of the overall memory cycle.
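
a minimal sketch of the translation scheme comeau describes (64-entry associative array, 4K pages, 256K of real memory) ... the structure layout and names are illustrative only, not the actual CAT box hardware or microcode:

/* Minimal simulation of the translation scheme described above:
 * a 64-entry associative array, one entry per 4K frame of a 256K
 * machine; a match on (user, virtual page) yields the entry's
 * position 0-63, which becomes the high-order 6 bits of the real
 * address.  Names and layout are illustrative only. */
#include <stdio.h>
#include <stdint.h>

#define ENTRIES   64        /* one word per 4K of the 256K memory */
#define PAGE_SIZE 4096

typedef struct {
    int      valid;
    uint8_t  user;          /* user number loaded in the search argument */
    uint32_t vpage;         /* virtual page number */
} cat_entry;

static cat_entry cat[ENTRIES];

/* returns real address, or -1 to signal a "paging" fault */
static long translate(uint8_t user, uint32_t vaddr)
{
    uint32_t vpage  = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;

    for (int frame = 0; frame < ENTRIES; frame++) {
        if (cat[frame].valid &&
            cat[frame].user == user &&
            cat[frame].vpage == vpage) {
            /* frame index (0-63) supplies the high-order 6 bits */
            return (long)frame * PAGE_SIZE + offset;
        }
    }
    return -1;              /* no match: page not in real memory */
}

int main(void)
{
    /* map user 3's virtual page 2 into real frame 17 */
    cat[17] = (cat_entry){ 1, 3, 2 };

    printf("user 3, vaddr 0x2010 -> real %ld\n", translate(3, 0x2010));
    printf("user 5, vaddr 0x2010 -> real %ld\n", translate(5, 0x2010));
    return 0;
}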


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
Newsgroups: alt.folklore.computers
Date: Sat, 18 Nov 2000 17:48:03 GMT
nouce@multics.ruserved.com (Richard Shetron) writes:
One of the things you have to remember is until 1977, Honeywell didn't even acknowledge Multics existed. Salesmen weren't allowed to talk about it. I think they sold about 20 to 25 sites this way. If a customer wanted Multics, they basically had to strongarm Honeywell into selling a Multics system. The rumors I heard is it could take several months and the customer had to start talking about IBM systems in the Honeywell office before they could buy a Multics system.

note that CP/67 faced a similar lack of support from the marketing & sales people (even tho it was used extensively inside for development).

however, there were a number of 360/67s that had been sold to customers on the basis of the (relatively) massive tss/360 effort ... a product which was encountering difficulties. There were some number of CP/67 success stories which were being presented and talked about at the various user group meetings (share & to lesser extent guide).

Customers could strong-arm marketing & sales people to get a copy ... or could effectively bypass them ... and order cp/67 directly or even get a copy from another customer. At the time, the software wasn't charged for and was shipped with full source as part of the standard distribution.

I was an undergraduate at a university that went through that cycle, had bought a 360/67 on the basis of TSS/360 time-sharing marketing pitch ... and eventually had to do various other things because of all the difficulties that TSS/360 was having.

Through various iterations, contact was made with the CP/67 group and in Jan. of 1968, some of the group came out to the university to do an "installation" (up until then CP/67 was only being used in Cambridge by the development group and out at Lincoln Labs). Then at the spring Share user group meeting in Houston, CP/67 had an "official" product announcement ... using the university as a "reference" installation.

I got to play with CP/67 throughout 1968 ... rewriting many pieces, developing fast-path and cutting critical pathlengths by a factor of 10 (or in some cases 100), redoing scheduling & dispatching (introducing fair share scheduling and dynamic feedback algorithms), redoing the virtual memory manager (new page replacement algorithms, vastly reduced path length, a thrashing control mechanism that was an alternative to the working set stuff in the literature at the time ... which provided significantly improved thruput), fixing a lot of bugs associated with production systems, etc.
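
the post doesn't give the actual algorithms ... the following is only a minimal sketch of the general fair-share dispatching idea (pick the runnable user whose consumed cpu, normalized by its share, is smallest), not the real CP/67 scheduler changes:

/* A minimal sketch of the general fair-share dispatching idea only --
 * not the actual CP/67 scheduler changes described above, whose
 * details aren't given in the post.  Pick the runnable user whose
 * consumed CPU, normalized by its share, is smallest. */
#include <stdio.h>

typedef struct {
    const char *name;
    double share;      /* relative entitlement, e.g. 1.0 = one "fair" share */
    double cpu_used;   /* CPU consumed so far (arbitrary units) */
    int    runnable;
} user_t;

static int pick_next(user_t *u, int n)
{
    int best = -1;
    double best_ratio = 0.0;
    for (int i = 0; i < n; i++) {
        if (!u[i].runnable)
            continue;
        double ratio = u[i].cpu_used / u[i].share;
        if (best < 0 || ratio < best_ratio) {
            best = i;
            best_ratio = ratio;
        }
    }
    return best;   /* -1 if nothing is runnable */
}

int main(void)
{
    user_t users[] = {
        { "A", 1.0, 30.0, 1 },
        { "B", 2.0, 40.0, 1 },   /* entitled to twice the share */
        { "C", 1.0, 10.0, 0 },   /* blocked */
    };
    int next = pick_next(users, 3);
    if (next >= 0)
        printf("dispatch %s\n", users[next].name);  /* B: 40/2 < 30/1 */
    return 0;
}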

I also put in the TTY/ASCII terminal support ... an implementation that Tom Van Vleck writes up as crashing his system 20-30 times in a single day.
http://www.lilli.com/360-67 (corrected)
https://www.multicians.org/thvv/360-67.html

It was during this period that various of the people that had been involved in CP/67 (both at Cambridge and Lincoln Labs) put together two different start-up spin-offs for offering commercial time-sharing services based on CP/67.

random refs:
https://www.garlic.com/~lynn/99.html#44
https://www.garlic.com/~lynn/93.html#0
https://www.garlic.com/~lynn/93.html#2
https://www.garlic.com/~lynn/93.html#26
https://www.garlic.com/~lynn/93.html#31
https://www.garlic.com/~lynn/94.html#1
https://www.garlic.com/~lynn/94.html#2
https://www.garlic.com/~lynn/94.html#4
https://www.garlic.com/~lynn/94.html#5
https://www.garlic.com/~lynn/94.html#7
https://www.garlic.com/~lynn/94.html#12
https://www.garlic.com/~lynn/94.html#18
https://www.garlic.com/~lynn/94.html#28
https://www.garlic.com/~lynn/94.html#46
https://www.garlic.com/~lynn/94.html#47
https://www.garlic.com/~lynn/94.html#48
https://www.garlic.com/~lynn/94.html#49
https://www.garlic.com/~lynn/94.html#52
https://www.garlic.com/~lynn/94.html#54

VM/370 was the follow-on product to CP/67 adapted for the 370 product line (the conversion from 360/67 to 370, which eventually had virtual memory on almost all processors).

I had made some observation that the internal corporate deployment of VM/370 was rather large (for instance the majority of the internal network nodes were vm/370 and that was larger than arpanet/internet up thru about 1985) ... but smaller than the number of VM/370 customer installations. I was also doing custom modification support of VM/370 that I shipped to (mostly) a very limited number of internal sites (although it had leaked out to ATT longlines at one point) ... The very limited number of internal installations that I directly supported was still larger than the total number of Multics installations.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
Newsgroups: alt.folklore.computers
Date: Sat, 18 Nov 2000 17:59:33 GMT
Anne & Lynn Wheeler writes:
Customers could strong-arm marketing & sales people to get a copy ... or could effectively bypass them ... and order cp/67 directly or even get a copy from another customers. At the time, the software wasn't charged for and shipped with full source as part of the standard distribution.

there is the tale about CERN benchmarking VM/370-CMS against MVS/TSO in the early '70s (CERN has been a long-time VM/370 installation ... one might even claim that the availability of GML on VM/370 at CERN for so long was part of the genesis of HTML)

This was a CERN benchmark ... however the copy of the benchmark report that was provided to IBM was immediately classified "IBM Confidential Restricted" (distributed on a need-to-know basis only; the only higher classification was "registered", where each copy is numbered and regular, frequent security audits require copies to be kept in double-lock cabinets).

The strategic forces inside IBM had moved from TSS/360 time-sharing to MVS/TSO (sort-of time-sharing) ... with CP/67 and VM/370 always being odd-man-out.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
Newsgroups: alt.folklore.computers
Date: Sat, 18 Nov 2000 20:35:53 GMT
sort of in the category of truth is stranger than fiction ... the CP/67 and VM/370 products were treated as non-strategic by the sales & marketing forces world-wide even tho a significant portion of all internal software development was done on those platforms.

even stranger was that the hdqtrs planning & forecasting functions were based on cp/67 (and then later vm/370). When EMEA hdqtrs moved from the US to Paris ... I hand carried a clone over and installed it.

HONE (field, sales, & marketing support) was first implemented on a CP/67 platform during the early '70s and then migrated to vm/370.

by the end of the 70s, HONE was the primary vehicle for sales & marketing support (world-wide) ... many orders couldn't even be placed until they were first run thru a HONE "configurator" (and sales & marketing were still treating the vm/370 product as a non-strategic offering).

In the late '70s, the online HONE installation was possibly the largest single system image operation in the world (supporting the tens of thousands of field, sales, & marketing people in the US).

random refs:
https://www.garlic.com/~lynn/97.html#4
https://www.garlic.com/~lynn/98.html#23
https://www.garlic.com/~lynn/99.html#38
https://www.garlic.com/~lynn/99.html#149
https://www.garlic.com/~lynn/99.html#150
https://www.garlic.com/~lynn/2000e.html#6
https://www.garlic.com/~lynn/2000e.html#22
https://www.garlic.com/~lynn/2000f.html#30

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

TSS ancient history, was X86 ultimate CISC? designs)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSS ancient history, was X86 ultimate CISC? designs)
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.super
Date: Sun, 19 Nov 2000 04:12:11 GMT
piercarl@sabi.Clara.co.UK (Piercarlo Grandi) writes:
The 360/67 was a "hand" modified (midrange) 360 with experimental VM hardware bolted on, and run the original CP/67 (plus CMS) system. It was mostly a research prototype, and FAIK very few were ever done (IIRC the first one in Cambridge was followed by a few others).

there was a special, one-of-a-kind 360/40 with "hand" modified virtual memory hardware. CP/40 was initially built on this hardware. Later, when production 360/67s became available (the 360/67 shared much in common with the 360/65 but had relocation hardware and other features, and was a production machine), CP/40 was moved to it and became CP/67. I don't know the eventual number of 360/67s actually shipped but i would guess over 100 but probably less than 1000.

The 370/168 was a heavy production machine, near the top of the IBM mainframe range for a while, had the production VM subsystem built in, and could run the released version of VM/370 (as CP/67 had been renamed upon release as a product). Both the 370/168 and VM/370 on it were sold in significant numbers.

370 was initially shipped before the virtual memory announcement. Later, when it was announced, it was available on (nearly) every 370 model: 370/115, 370/125, 370/135, 370/145, 370/155, & 370/165.

The 370 line was later enhanced with models 138, 148, 158, and 168 (all with virtual memory).

some extracts concerning 360/40 & cp/40.
https://www.garlic.com/~lynn/2000f.html#59

other recent postings to alt.folklore.computers
https://www.garlic.com/~lynn/2000f.html#56
https://www.garlic.com/~lynn/2000f.html#58
https://www.garlic.com/~lynn/2000f.html#60
https://www.garlic.com/~lynn/2000f.html#61

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Cryptogram Newsletter is off the wall?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cryptogram Newsletter is off the wall?
Newsgroups: sci.crypt
Date: Sun, 19 Nov 2000 16:02:14 GMT
Simon Johnson writes:
well i suppose this models reality. If a crook wants to steal the contents of a safe. He really has two options:

1. Break the Safe
2. 'Talk' to the guy who owns the safe.

I'd put my bet that breaking the human is easier than the safe! The problem with digital security is that there has to be human intervention and trust somewhere (and even if there wasn't it wouldn't be useful). We can never make things impossible for an attacker, just harder. If we have AES vs. XOR then clearly the AES is much more likely to be harder to compromise.


there is a much smaller gap between the paper presentation of some information and a person writing a signature on that piece of paper ... compared to the presentation of some digital information and the application of a digital signature to that digital information.

two issues ... 1) was the person actually presented with what they were signing and 2) how close a correlation is there between the person's intent and the application of a signature.

in the digital world ... there is a lot larger gap in case #1 and #2.

for instance, when a person is using a pen to apply a signature to a paper document ... the probability of that pen wandering off and signing other pieces of paper at the same time is relatively low.

basically digital signature technology is a method of authenticating digital information ... there has to be a lot of additional infrastructure wrapped around it to establish a correlation between the digital signature authentication technology and a person's intent.

a digital signature likely reduces the probability that there is counterfeit/fraud once the signature is applied. however, digital signature infrastructure widens the gap between what the person sees and the actual signing operations (opening up new avenues for fraud).
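
to make the gap concrete ... a digital signature is computed over whatever bytes the signing code is handed, so nothing in the mechanism itself ties it to what the person actually saw. a toy sketch only (FNV-1a stands in for a real hash and sign_hash() is just a placeholder, not a real cryptosystem):

/* Toy illustration of the gap discussed above: the "signature" is a
 * function of the bytes handed to the signing code, not of whatever
 * was displayed to the person.  FNV-1a stands in for a real hash and
 * sign_hash() is a placeholder, not a real cryptosystem. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint64_t fnv1a(const char *data, size_t len)
{
    uint64_t h = 14695981039346656037ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= (unsigned char)data[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* placeholder "private key" operation -- illustration only */
static uint64_t sign_hash(uint64_t hash, uint64_t private_key)
{
    return hash ^ private_key;
}

int main(void)
{
    const char *displayed = "pay $10 to Alice";       /* what the person saw */
    const char *submitted = "pay $10000 to Mallory";  /* what the software signed */
    uint64_t key = 0x1234567890abcdefULL;

    /* the signing step never sees "displayed"; it only sees bytes */
    (void)displayed;
    uint64_t sig = sign_hash(fnv1a(submitted, strlen(submitted)), key);
    printf("signature over submitted bytes: %016llx\n",
           (unsigned long long)sig);
    return 0;
}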

random refs:
https://www.garlic.com/~lynn/aadsmore.htm#schneier

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Cryptogram Newsletter is off the wall?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cryptogram Newsletter is off the wall?
Newsgroups: sci.crypt
Date: Sun, 19 Nov 2000 16:30:23 GMT
Bruce Schneier writes:
The mathematics of cryptography, no matter how strong, cannot bridge the gap between me and my computer. Because the computer is not trusted, I cannot rely on it to show me what it is doing or do what I tell it to. Checking the calculation afterwards doesn't help; the untrusted computer can't be relied upon to check the calculations properly. It wouldn't help to verify the code, because the untrusted computer is running the code (and probably doing the verification). It wouldn't even help to store the digital signature key in a secure module: the module still has to rely on the untrusted computer for input and output.

it is likely even more complex ... 3-factor authentication talks about something you have, something you know, & something you are.

a trusted computer can be something you have ... tying intent to a digital signature probably means more, something like each & every digital signature has to carry with it some way of proving that some combination of 3-factor authentication was used for each & every application of that digital signature (including that a something-you-have trusted computer was used for the actual operation).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
Newsgroups: alt.folklore.computers
Date: Mon, 20 Nov 2000 14:54:31 GMT
Dennis Ritchie writes:
I can understand IBM's reluctance to put a full paging/VM facility into early S/360, but it's hard to forgive the lack of even base/bounds relocation. Even in batch operation, it made the operator's realtime bin-packing decisions much more difficult. Once a job had started, it had to continue in the same physical memory locations even if it was rolled out (i.e. swapped).

while not part of the 360 architecture & instruction set ... a large percentage of the machines in the field had base/bound hardware support ... it was there as part of the various 14xx/7xxx emulation hardware (which was available on a large percentage of machines).

one group implemented an extension on a 360/50 supporting virtual-machine-like function using the emulation hardware base/bound support. That and a 360/65 version ... supposedly could run on a majority of the 360/50s & 360/65s in customer shops (it was done mostly by one person out of the seattle boeing sales office). However, it didn't include swapping batch regions.

The CP? (senior moment here) ... the PL/I conversational time-sharing system done on a 360/50 base that ran on top of OS/MFT. For performance, it also offered a 360/50 microcode enhancement (hardware) that put a lot of the PL/I interpreter into machine language. This may have used the base/bound hardware support also (I don't remember for sure).

As noted in Melinda's paper, IBM had extensive support at MIT for the 7094 and CTSS. After project mac went with GE ... the ibm boston programming center (on the 3rd floor of 545 technology sq) developed the PL/I conversational time-sharing system for the 360/50 (that ran as a subregion under the standard ibm batch system). This was almost totally unrelated to the CP-40, CP/67, virtual machine time-sharing work that went on at the ibm cambridge scientific center on the 2nd & 4th floors of 545 technology sq.

random note, jean sammet was at the boston programming center (
https://www.garlic.com/~lynn/2000d.html#37).

Slightly different work ... but huntsville labs. (& Brown Univ?) did a custom version of OS/MVT (release 13 ... circa '67) that used the 360/67 relocation hardware but not for paging ... simply for managing storage fragmentation. They had bought a two-processor 360/67 (another of the TSS promises) with multiple 2250s (large display screen) for online design application. The problem was that each 2250 effectively ran as a (very) long running batch region under MVT and after some interval ran into severe storage fragmentation problems. OS/MVT (release 13, running on duplex 360/67) was modified to dispatch application regions in virtual memory mode. No paging was supported, but it greatly simplified supporting contiguous regions of application execution code & data.

OT ... in late '68 (very early '69?), when BCS was formed ... the huntsville duplex was shut down and shipped to seattle. Some of the people may have come with it ... and I believe one of the people from the university (brown?).

Spring break '69 I taught a 40hr cp/67 class to the core BCS technical team (I was still an undergraduate taking classes) ... there may have been 10 people (both BCS and the ibm team assigned to BCS). BCS by that time had possibly 30 people total (it sort of had been initially formed as an adjunct to corporate hdqtrs data processing ... which had primarily been payroll on a single 360/30).

By that time, supposedly the aerospace computing center reported to BCS; hdqtrs was barely out of being a 360/30 ... and they joked that aerospace during a 2-3 year period had $100m worth of 360 equipment continually sitting in hallways at the renton data center waiting to be installed, i.e. 360 equipment was arriving faster than ibm could install it so it was continually being staged in the hallways ... at any one time, there might be 10-20 360/65 & 360/75 and associated stuff sitting around in the hallways (i.e. goldfish swallowing a whale ... a little corporate politics went on).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Building so big it generates own weather?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Building so big it generates own weather?
Newsgroups: alt.folklore.military,alt.folklore.science,alt.folklore.urban,alt.aviation.safety,alt.fan.cecil-adams,seattle.general,sci.skeptic,rec.aviation.military,sci.physics
Date: Tue, 21 Nov 2000 02:32:50 GMT
grante@visi.com (Grant Edwards) writes:
In article <3a1704c2.94077266@news.primenet.com>, berkut@NOprimenet.com wrote:

However, test and demo flights do sometimes occur over populated areas. One test pilot did a barrel roll in an early 707 over Lake Washington during the Seafair regatta. Supposedly the VIPs on board never knew the plane rolled as the pilot was good enough to keep it in a perfect one G rotational field. But I'm sure that is an urban legend.

nope. it's true. and it was the prototype.

Even the constant 1G part?


the story i heard (about 15 years later at boeing) was that the plane was heavily instrumented, including 50 gal barrels in place of seats and quite a bit of wiring and plumbing for doing various weight distribution flight tests. quite a bit of that tore loose during the roll.

... 707 would have predated seafair ... would have still been goldcup, if i remember right ... mid-50s was slow-moe era (slow-moe 4? and slow-moe 5? taking first place).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

TSS ancient history, was X86 ultimate CISC? designs)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSS ancient history, was X86 ultimate CISC? designs)
Newsgroups: comp.arch
Date: Tue, 21 Nov 2000 15:06:55 GMT
jeffreyb@gwu.edu (Jeffrey Boulier) writes:
Could that have been a code name for UTS, a mainframe Unix? I think that Amdahl developed it around that time. Unlike Multics, UTS is still around, but I don't know how large its market share is anymore...

Yours Truly, Jeffrey Boulier


gold ... for Au ... Amdahl unix. i believe it was the work of the same person that did the earlier unix port to interdata (sort of a 360 look-a-like).

there was simpson's brand new operating system (but i don't remember if it was called ASPEN or not) ... sort of a derivative of the RASP work he did while at IBM (there was some litigation, and people checking the code to verify that there was zero RASP code; RASP was sort of a name hack on HASP, which was something that simpson did in the '60s). the simpson/dallas work was going on about the same time as gold. my suggestion was that the dallas work be used as the underlying platform for Au (something like what the TSS/unix thing did for AT&T ... from my standpoint there seemed to be quite a bit of nih & competition between dallas and sunnyvale).

random refs:
https://www.garlic.com/~lynn/2000c.html#81
https://www.garlic.com/~lynn/2000.html#76
https://www.garlic.com/~lynn/2000.html#64

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

TSS ancient history, was X86 ultimate CISC? designs)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSS ancient history, was X86 ultimate CISC? designs)
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 21 Nov 2000 15:22:25 GMT
Anne & Lynn Wheeler writes:
there was simpson's brand new operating system (but i don't remember if it was called ASPEN or not) ... sort of a derivative of the RASP work he

... seems like the dallas-thing was called aspen ... the following is slightly related. Key Logic's stuff is an outgrowth of Tymshare's Gnosis (I had known several of the people and got called in by M/D to do due diligence on gnosis when M/D bought tymshare ... I still have a copy of the gnosis documentation somewhere in boxes)

more info on keykos
http://www.cis.upenn.edu/~KeyKOS
https://web.archive.org/web/20010301032856/www.cis.upenn.edu/~KeyKOS/

from some long forgotten archive, see the aspen ref in the attached article ... (4th paragraph).

Start-Up Firm To Test MVS Replacement

From November 25, 1985 issue of Information Week

Byline: Paul E. Schindler, Jr.

Key Logic's system said to support 500 transactions per second

The first radically new IBM-mainframe operating system in years is now being tested in Cupertino, Calif. And the company that's preparing it for market is looking for just a few transaction-processing users to act as beta sites.

Just seven months after its founding, Key Logic is asking prospective users whether they want to achieve higher performance on IBM mainframes by replacing the MVS and VM operating systems with its new KeyKOS program.

Early benchmarks show that the firm's operating system supports as many as 500 transactions per second. That's faster than transaction processors from industry leaders Tandem Computers Inc., Cupertino, Calif., and Stratus Computer Inc., Marlboro, Mass. According to the Gartner Group Inc., a consulting firm in Stamford, Conn., Tandem offers about 150 transactions per second, Stratus about 50, and IBM's CICS peaks at 70 transactions per second.

Key Logic's early users will get the chance to run one of the few alternatives to the basic IBM mainframe operating system. Although Amdahl Corp. sells a mainframe Unix called UTS, and may someday release a souped-up MVS-like operating system developed under the code-name Aspen, UTS is not new to the market and Aspen is not a radical departure.

KeyKOS is both. It offers high performance, reliability, security, and programmer productivity, the company says. It should be ideally suited for remote operations, since it was itself developed remotely. According to Steve Gimnicher, Key Logic's development manager, "The original architects never saw the machine it ran on." They were in Cupertino developing KeyKOS by communicating with a mainframe in Dallas.

The KeyKOS transactional operating system was written from scratch, but it wasn't all done in the seven months Key Logic has been in business. KeyKOS stems from a decade's work on the system at Tymnet's Tymshare service, now part of McDonnell Douglas Corp.

Key Logic consists of 11 employees who have a total of 200 years of experience. Despite that, says Vincent A. Busam, vice president of development, KeyKOS isn't being generally released yet. However, the firm is ready to support a handful of selected installations while it improves its documentation and builds its service and support organization.

In addition to supporting as many as 500 transactions per second on an IBM 3083 JX, or 130 transactions per second on an IBM 4381, benchmarks show that KeyKOS reduces elapsed time for some heavy input/output programs by 71% compared with IBM's VM/CMS. It cuts CPU resources by 30% for the same programs.

The risk of trying KeyKOS is relatively low, since KeyKOS runs as a guest under VM, and any VM/CMS program can run under KeyKOS. In the guest mode, its high performance is not realized, but all functionality can be tested.

Busam believes he knows who Key Logic's first customers will be: those in the transaction-processing gap. Although IBM's TPF2 (formerly the Airline Control Program) can handle as many as 1,000 transactions per second, it is not cost-effective below 300 transactions per second. That leaves a performance gap in the 150-to-300 transactions per second range, where KeyKOS should fit well, Busam believes.

Gimnicher notes that there are other advantages as well, especially in all-IBM shops that hesitate to move to a radically different architecture.

Programmers continue to deal with the same hardware they have used for years. In a three-week course, they can learn how to program efficiently under KeyKOS using standard IBM programming languages. Experience indicates that programmers writing applications for KeyKOS produce 40% less code than do programmers for other operating systems, including IBM's MVS and VM.

This kind of productivity is made possible by two new computing principles upon which KeyKOS is based. The first principle deals with communications. Programs, tasks, files, or even guest operating systems are all known as objects that communicate via messages. Objects cannot communicate with each other unless given the "keys" (Key Logic calls them capabilities) to do so. This ensures both security and reliability. A failure in one object cannot affect any other object, ensuring continued operation.

The other principle deals with memory. As with IBM's System/38, for example, KeyKOS treats all main memory and disk storage as one virtual memory. Programmers do not have to deal with disk I/O routines or file management -- KeyKOS does.

On the West Coast, analysts predict Key Logic will give IBM and Tandem competition in the transaction-processing market. But not everyone is so enthusiastic. Mike Braude, a Gartner Group analyst, says: "The big problem, of course, is vendor viability." In short, users want to know if Key Logic will be around. But users should note that the same concern was expressed about Tandem in 1974 and Stratus in 1980.
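
the two computing principles described in the article can be illustrated very roughly ... the following sketch of capability-gated message sending is illustrative only (my own names and structures, not KeyKOS code or its actual kernel interface):

/* Rough sketch of the "keys"/capability idea described in the article:
 * an object can send a message to another object only if it holds a
 * capability for it.  Entirely illustrative -- not KeyKOS code or its
 * actual kernel interface. */
#include <stdio.h>

#define MAX_CAPS 4

typedef struct object object_t;

struct object {
    const char *name;
    object_t   *caps[MAX_CAPS];   /* objects this object may message */
    int         n_caps;
};

/* kernel-style send: refuses unless the sender holds a capability */
static int send_message(object_t *from, object_t *to, const char *msg)
{
    for (int i = 0; i < from->n_caps; i++) {
        if (from->caps[i] == to) {
            printf("%s -> %s: %s\n", from->name, to->name, msg);
            return 0;
        }
    }
    printf("%s -> %s: refused (no capability)\n", from->name, to->name);
    return -1;
}

static void grant(object_t *holder, object_t *target)
{
    if (holder->n_caps < MAX_CAPS)
        holder->caps[holder->n_caps++] = target;
}

int main(void)
{
    object_t app  = { "app" };
    object_t file = { "file-object" };

    send_message(&app, &file, "read");   /* refused: no capability yet */
    grant(&app, &file);                  /* hand the app a "key" */
    send_message(&app, &file, "read");   /* now allowed */
    return 0;
}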


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

TSS ancient history, was X86 ultimate CISC? designs)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSS ancient history, was X86 ultimate CISC? designs)
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 21 Nov 2000 16:28:47 GMT
Anne & Lynn Wheeler writes:
work going on about the same time as gold. my suggestion was that the dallas work be used as underlying platfrom for Au (something like the TSS/unix thing did for AT&T ... from my standpoint there seemed to be quite a bit of nih & competition between dallas and sunnyvale).

checking some random references ... it seems that ASPEN (which was supposedly going to be an "MVS-killer" ... but built on a page-mapped architecture, similar to RASP) was killed in late '87 ... but possibly the underpinnings eventually did become the base platform for Au/uts.

A big problem for (at least mainframe) operating systems has been hardware support with device error & anomaly handling ... typically 10 times or more code than the straightline device drivers & i/o supervisor. That was, in part, behind the UTS deployment as a CP guest (vm/cms) ... relying on the cp error & anomaly code. UTS platformed on the low-level pieces of ASPEN would have been able to rely on the ASPEN error & anomaly handling code. This would have been similar to the TSS/Unix deployment for AT&T ... with a somewhat better job of eliminating function & pathlength duplication between the low-level micro-kernel and the unix subsystem.

random refs:
https://www.garlic.com/~lynn/2000c.html#69

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

HASP vs. "Straight OS," not vs. ASP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: HASP vs. "Straight OS," not vs. ASP
Newsgroups: bit.listserv.ibm-main
Date: Wed, 22 Nov 2000 01:16:17 GMT
jbroido@PERSHING.COM (Jeffrey Broido) writes:
As for your point 4., I don't know about CRJE, but when I was a student years earlier, my school used Cornell University's mainframe via CRBE. They ran MFT and HASP, and CRBE was, indeed, supported. Also, didn't the HASP crew more-or-less invent RJE, not to mention STR and BSC protocols? In any case, we had over 30 remotes via BSC RJE, mostly 360/20s and 1130s, so if it didn't support RJE, you could have fooled us. As I recall, HASP had it first and the code was lifted, more or less intact, for ASP. Correct me if I'm wrong.

on MVT 18/HASP base ('69) ... i stripped out the HASP 2780 support (in order to gain back some size/addressability) and put in CRJE support with cms editor syntax ... supporting 2741s & TTYs.

misc. asp stuff from earlier this year:
https://www.garlic.com/~lynn/2000.html#76
https://www.garlic.com/~lynn/2000.html#77

misc. stuff on hasp/JES2 networking
https://www.garlic.com/~lynn/99.html#33

OT:
https://www.garlic.com/~lynn/2000f.html#68
https://www.garlic.com/~lynn/2000f.html#69
https://www.garlic.com/~lynn/2000f.html#70

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

SET; was Re: Why trust root CAs ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SET; was Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Wed, 22 Nov 2000 16:00:04 GMT
"David Thompson" writes:
as SET does prevent disclosing payment (card) information. (The X9.59 approach has the same limitation, as I read it.)

the x9.59 standard requirement was to specify the data elements that need signing to preserve the integrity of the financial infrastructure for all electronic retail payment (account-based) transactions.

one of the data elements is the hash of the order details ... but not the actual order details.

the x9.59 standard doesn't actually specify messages ... but there has been work on mapping the x9.59 signed data elements within existing financial standards messages and protocols (supporting deployment of x9.59 in both currently non-existent financial infrastructures as well as existing financial infrastructures).
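
for illustration only ... a rough sketch of the kind of signed data-element set described above (account number, amount, currency, and a hash of the order details rather than the details themselves). the field names, layout, toy hash, and example account number are hypothetical, not the actual x9.59 element definitions or encoding:

/* Illustration only: the kind of signed data-element set described
 * above -- account number, amount, currency, and a hash of the order
 * details rather than the details themselves.  Field names, layout,
 * example account number, and the toy hash are hypothetical, NOT the
 * actual X9.59 element definitions or encoding. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint64_t toy_hash(const char *data, size_t len)   /* FNV-1a stand-in */
{
    uint64_t h = 14695981039346656037ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= (unsigned char)data[i];
        h *= 1099511628211ULL;
    }
    return h;
}

typedef struct {
    char     account_number[20];   /* "authenticated"-use account number */
    uint32_t amount_cents;
    char     currency[4];
    uint64_t order_detail_hash;    /* hash of the order details, not the details */
} payment_elements;

/* serialize the elements in a fixed order; the digital signature would
 * be computed over exactly these bytes (signing itself omitted here) */
static size_t serialize(const payment_elements *p, unsigned char *buf)
{
    size_t off = 0;
    memcpy(buf + off, p->account_number, sizeof p->account_number);
    off += sizeof p->account_number;
    memcpy(buf + off, &p->amount_cents, sizeof p->amount_cents);
    off += sizeof p->amount_cents;
    memcpy(buf + off, p->currency, sizeof p->currency);
    off += sizeof p->currency;
    memcpy(buf + off, &p->order_detail_hash, sizeof p->order_detail_hash);
    off += sizeof p->order_detail_hash;
    return off;
}

int main(void)
{
    const char *order = "2 widgets, ship to ...";   /* kept by the merchant */
    payment_elements p = { "4000123412341234", 1995, "USD", 0 };  /* made-up values */
    p.order_detail_hash = toy_hash(order, strlen(order));

    unsigned char buf[64];
    size_t len = serialize(&p, buf);
    printf("%zu bytes to be signed; order-detail hash %016llx\n",
           len, (unsigned long long)p.order_detail_hash);
    return 0;
}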

x9.59 shopping can be done via SSL web ... then sending off a purchase request message and getting a reply. x9.59 shopping can also be done from an offline cdrom with an x9.59 purchase request email and an x9.59 purchase request reply email (i.e. one requirement on the x9.59 standard definition was that it could be done in a single round-trip). Or x9.59 shopping could be done at point-of-sale ... i.e. the x9.59 standard was not defined to be an Internet-specific standard ... but the requirement on the work group was a standard for all electronic account-based retail transactions preserving the integrity of the financial infrastructure (it is rather difficult to enforce an ssl-based shopping experience when physically present in a merchant's store).

x9.59 does specify that the account number is used in "authenticated" transactions, i.e. non-x9.59 transactions with the x9.59 account-number are not to be approved. many/most issuers support mapping multiple account-numbers to the same account, i.e. various implementations might include x9.59-only accounts, accounts with a mixture of x9.59 account-numbers and non-x9.59 account-numbers, family member account-numbers, etc. Rather than having to keep the x9.59 account-number secret in order to prevent fraudulent transactions ... fraudulent x9.59 transactions are prevented by requiring that they all have end-to-end strong integrity and be authenticated.

work has also been done on authenticated X9.59-like transactions using AVS for hardgood drop shipment ... a signed transaction goes to the shipper authorizing the address and gets back a transaction code supplied to the merchant. the merchant applies the transaction barcode to the package. the shipper reads the bar-code and looks up the routing code. Eventually the shipper applies the real address label for final drop. The hardgood shipping authorization can be piggybacked in the same transaction/email with the payment instruction (which the merchant has to forward to the shipping service for fulfillment). Again this could be a web-based experience, something like a cdrom/email based experience, and/or physically in the store and wanting the goods shipped back home.

Also, x9.59 & x9.59-like definitions are privacy neutral ... not divulging any privacy information ... while still providing end-to-end strong integrity.

work is also going on to advance it to an ISO international standard as well as define support within existing ISO 8583 (i.e. the international payment card protocol standard). The ABA (american bankers association) serves as the secretariat of both X9 (US) and TC68 (iso/international) financial standards organizations.

there is also the trade-off between BBB/consumer-report certification using certificates vis-a-vis something like an online BBB web-site. Trust tends to come in many forms: brand, advertisement, word-of-mouth, prior experience, certification, etc. when influencing/directing the shopping experience. One of the trust issues is that in terms of number of transactions ... the vast majority of transactions (possibly approaching 90+%) are brand, advertisement, word-of-mouth, and/or prior experience based trust. For the remaining ones that involve people actually performing some sort of certification checking ... various certification entities have expressed interest in an on-line web-based certification service ... since that creates a tighter binding between them and their relying parties (the old saw about the thing called certificates being an offline paradigm ... targeted for operations that need to work in an offline environment).

misc. refs:
http://www.x9.org/ US financial standards body
http://www.tc68.org/ ISO financial standards body
https://www.garlic.com/~lynn/ ... some information on x9.59

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Cryptogram Newsletter is off the wall?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cryptogram Newsletter is off the wall?
Newsgroups: sci.crypt
Date: Wed, 22 Nov 2000 16:06:50 GMT
junk@nowhere.com (Mark Currie) writes:
Smart cards can help and a remote secure signing server may also, but the thing that worries me is that the basic principle is the same. You are letting a machine act as a proxy for signing. There is no biological binding such as in a hand-written signature. Smart cards are starting to become complex machines in their own right. They now have operating systems and they can download and execute third party code.

note that smart cards don't have to become more complex ... they could become much simpler ... and not allow download & execution of third party code.

to some extent the existing prevalent smartcard paradigm is left over from the mid-80s effort to define portable computing capability using the technology trade-offs that existed at that point in time. to a large extent the mid-80s trade-offs that resulted in the smartcard paradigm were overtaken in the early 90s by PDAs and cellphones ... i.e. the mid-80s had relatively inexpensive complex computing chips that could be packaged in portable form-factors ... but cost-effective portable input/output capability didn't really exist.

The PDAs and cellphone technologies have largely obsoleted the technology trade-off decisions from the mid-80s that gave rise to much of the existing smartcard paradigm.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Metric System (was: case sensitivity in file names)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Metric System (was: case sensitivity in file names)
Newsgroups: alt.folklore.computers
Date: Wed, 22 Nov 2000 17:28:02 GMT
glass2 writes:
The story is told that one of the big, west-coast IBM locations used to use the waste heat from their computer installation to heat the office buildings. Then, when they upgraded to bigger, faster, more efficient computers, there was less waste heat produced, and they had to add auxiliary heaters to keep the buildings warm.

supposedly when pc/rts (& then rs/6000s) were being deployed in the almaden building ... they requested that people not turn them off at night. the problem was that the building's air conditioning system could not reach equilibrium (either heating or cooling) with the thermal swings caused by turning the machines off at night and turning them on in the morning.

going back further ... when ibm declined to buy the university's 709 (serial number was less than five, possibly three) for a museum, it was sold for scrap (vaguely remember that it required a 20-ton rated air conditioning unit). the guy that bought it set it up in a barn ... and would run it on cold days with the doors open and big industrial fans.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Florida is in a 30 year flashback!

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Florida is in a 30 year flashback!
Newsgroups: alt.folklore.computers
Date: Thu, 23 Nov 2000 19:36:06 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
In today's Ottawa Citizen, there were pictures of Floridian vote counters holding punched cards up to the light! I remember doing that back when I knew the entire EBCDIC punch code.

Another trick in those days: to find the changes made in a source program, print out the old and new versions, then hold (or tape) them to a window and scan for differences. Of course, this method only worked for scant hours at this time of year at my latitude.


the other trick was reading the holes in a "binary" program executable card deck ... finding the correct card containing the executable code needing changing, duping the card ... except for the code to be changed (on an 026 or 029), and "multi-punching" the binary change into the new/replacement card (34 year flashback).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

8086 Segmentation (was 360 Architecture, Multics, ...)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 8086 Segmentation (was 360 Architecture, Multics, ...)
Newsgroups: alt.folklore.computers,comp.arch
Date: Thu, 23 Nov 2000 19:38:14 GMT
Dennis Yelle writes:
Yes. I never understood that either, but that reason was repeated over and over again back in those days. Wow, 8 extra wires. How expensive that would have been? Can someone explain this to me?

remember that ibm already had experience with a 68k "PC" from its instrumentation division ... and so there was some amount of experience at the time producing a 68k machine.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Reading wireless (vicinity) smart cards

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Reading wireless (vicinity) smart cards
Newsgroups: alt.technology.smartcards
Date: Thu, 23 Nov 2000 19:41:00 GMT
rnjmorris writes:
Hello,

Can anyone point me to resources or companies dealing with the reading of vicinity smart cards (i.e. cards which can be read at a distance of up to 50cm away)?

I'm looking for a hardware solution for a handheld computer.

Many thanks,

Rod Morris


ISO 14443 defines the proximity card standard ... actually 4-5 substandards.
http://www.iso.org

then there is bluetooth for 1m-2m.
http://www.bluetooth.com

... also try search engines on iso 14443 & bluetooth

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

TSS ancient history, was X86 ultimate CISC? designs)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSS ancient history, was X86 ultimate CISC? designs)
Newsgroups: alt.folklore.computers,comp.arch,comp.sys.super
Date: Fri, 24 Nov 2000 16:09:54 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
Can you provide time frame for the virtual /40? From (far too many) years ago I once had a Functional Characteristics book for the (production) 360/40, and it contained a closeup picture of the Operator Control Panel -- and between the POWER ON and POWER OFF buttons in the picture was a hand-drawn outline of a toggle switch, below which was the label "PREFIX". Sadly, that manual is long gone to the Great Wastebasket In The Sky.

note that not only does the CP/40 survive today in the VM implementation ... but all the (ibm) mainframe machines now have a reduced version of it implemented directly in hardware ... referred to as "logical partitions" (or LPARS).

random refs:
https://www.garlic.com/~lynn/2000.html#8
https://www.garlic.com/~lynn/2000.html#63
https://www.garlic.com/~lynn/2000.html#86
https://www.garlic.com/~lynn/2000b.html#51

from melinda's web site
https://www.leeandmelindavarian.com/Melinda#VMHist
In the Fall of 1964, the folks in Cambridge suddenly found themselves in the position of having to cast about for something to do next. A few months earlier, before Project MAC was lost to GE, they had been expecting to be in the center of IBM's time-sharing activities. Now, inside IBM, "time-sharing" meant TSS, and that was being developed in New York State. However, Rasmussen was very dubious about the prospects for TSS and knew that IBM must have a credible time-sharing system for the S/360. He decided to go ahead with his plan to build a time-sharing system, with Bob Creasy leading what became known as the CP-40 Project. The official objectives of the CP-40 Project were the following:

1. The development of means for obtaining data on the operational characteristics of both systems and application programs;

2. The analysis of this data with a view toward more efficient machine structures and programming techniques, particularly for use in interactive systems;

3. The provision of a multiple-console computer system for the Center's computing requirements; and

4. The investigation of the use of associative memories in the control of multi-user systems.

The project's real purpose was to build a time-sharing system, but the other objectives were genuine, too, and they were always emphasized in order to disguise the project's "counter-strategic" aspects. Rasmussen consistently portrayed CP-40 as a research project to "help the troops in Poughkeepsie" by studying the behavior of programs and systems in a virtual memory environment. In fact, for some members of the CP-40 team, this was the most interesting part of the project, because they were concerned about the unknowns in the path IBM was taking. TSS was to be a virtual memory system, but not much was really known about virtual memory systems. Les Comeau has written: Since the early time-sharing experiments used base and limit registers for relocation, they had to roll in and roll out entire programs when switching users....Virtual memory, with its paging technique, was expected to reduce significantly the time spent waiting for an exchange of user programs.

22 R.J. Adair, R.U. Bayles, L.W. Comeau, and R.J. Creasy, A Virtual Machine System for the 360/40, IBM Cambridge Scientific Center Report 320-2007, Cambridge, Mass., May, 1966.

================

What was most significant was that the commitment to virtual memory was backed with no successful experience. A system of that period that had implemented virtual memory was the Ferranti Atlas computer, and that was known not to be working well. What was frightening is that nobody who was setting this virtual memory direction at IBM knew why Atlas didn't work. 23

Creasy and Comeau spent the last week of 1964 24 joyfully brainstorming the design of CP-40, a new kind of operating system, a system that would provide not only virtual memory, but also virtual machines. 25 They had seen that the cleanest way to protect users from one another (and to preserve compatibility as the new System/360 design evolved) was to use the System/360 Principles of Operations manual to describe the user's interface to the Control Program. Each user would have a complete System/360 virtual machine (which at first was called a "pseudo-machine"). 26

The idea of a virtual machine system had been bruited about a bit before then, but it had never really been implemented. The idea of a virtual S/360 was new, but what was really important about their concept was that nobody until then had seen how elegantly a virtual machine system could be built, with really very minor hardware changes and not much software.

------------------------------------------------------------

23 L.W. Comeau, "CP-40, the Origin of VM/370", Proceedings of SEAS AM82, September, 1982, p. 40.

24 Creasy had decided to build CP-40 while riding on the MTA. "I launched the effort between Xmas 1964 and year's end, after making the decision while on an MTA bus from Arlington to Cambridge. It was a Tuesday, I believe." (R.J. Creasy, private communication, 1989.)

25 R.J. Creasy, General Description of the Research Time-Sharing System with Special Emphasis on the Control Program, IBM Cambridge SR&D Center Research Time-Sharing Computer Memorandum 1, Cambridge, Mass., January 29, 1965. L.W. Comeau, The Philosophy and Logical Structure of the Control Program, IBM Cambridge SR&D Center Research Time-Sharing Computer Memorandum 2, Cambridge, Mass., April 15, 1965.

26 For the first few weeks, the CSC people referred to their concept as a "pseudo-machine", but soon adopted the term virtual machine after hearing Dave Sayre at IBM Research use it to describe a system he had built for a modified 7044. Sayre's M44 system was similar to CP-40, except for the crucial difference of not providing a control program interface that exactly duplicated a real machine. The CP-40 team credited Sayre with having "implanted the idea that the virtual machine concept is not necessarily less efficient than more conventional approaches." (L. Talkington, "A Good Idea and Still Growing", White Plains Development Center Newsletter, vol. 2, no. 3, March, 1969.) "The system built by Dave Sayre and Bob Nelson was about as much of a virtual machine system as CTSS---which is to say that it was close enough to a virtual machine system to show that 'close enough' did not count. I never heard a more eloquent argument for virtual machines than from Dave Sayre." (R.J. Creasy, private communication, 1990.)

27 "Dick Bayles was not only a great programmer, he was also the fastest typist I have ever seen." (W.J. Doherty, private communication, 1990.) "When Dick Bayles sat down [at a keypunch], he wrote code as fast as it could punch cards. Yes, the machine was slower than Bayles composing code on the fly." (R.J. Creasy, private communication, 1989.)

==================

One of the fun memories of the CP-40 Project was getting involved in debugging the 360/40 microcode, which had been modified not only to add special codes to handle the associative memory, but also had additional microcode steps added in each instruction decoding to ensure that the page(s) required for the operation's successful completion were in memory (otherwise generating a page fault).

The microcode of the 360/40 comprised stacks of IBM punch card-sized Mylar sheets with embedded wiring. Selected wires were "punched" to indicate 1's or 0's. Midnight corrections were made by removing the appropriate stack, finding the sheet corresponding to the word that needed modification, and "patching" it by punching a new hole or by "duping" it on a modified keypunch with the corrections. 32

Back during that last week of 1964, when they were happily working out the design for the Control Program, Creasy and Comeau immediately recognized that they would need a second system, a console monitor system, to run in some of their virtual machines. Although they knew that with a bit of work they would be able to run any of IBM's S/360 operating systems in a virtual machine, as contented users of CTSS they also knew that they wouldn't be satisfied using any of the available systems for their own development work or for the Center's other time-sharing requirements. Rasmussen, therefore, set up another small group under Creasy to build CMS (which was then called the "Cambridge Monitor System"). The leader of the CMS team was John Harmon. 33 Working with Harmon were Lyndalee Korn and Ron Brennan. Like Multics, CMS would draw heavily on the lessons taught by CTSS. Indeed, the CMS user interface would be very much like that of CTSS.

Since each CMS user would have his own virtual machine, CMS would be a single-user system, unlike CTSS. This was an important factor in the overall simplicity and elegance of the new system. 34 Creasy has written that one of the most important lessons they had learned from their CTSS experience was "the necessity of modular design for system evolution. Although [CTSS was] successful as a production system, the interconnections and dependencies of its supervisor design made extension and change difficult." 35

------------------------------------------------------------

32 R.U. Bayles, private communication, 1989. "The Model 40 was a Hursley (UK) product, announced in 1964. It used the first programmable ROS (invented by Tony Proudman, I believe) called Transformer Read-Only Storage (developed in 1961/2). In the Model 40 the circuit on the Mylar sheets wound around 60 cores, hence allowing the storage of 60-bit words; the Model 40 had 4096 60-bit words. It was this re-programmable storage that made the Model 40 modifiable, as you describe." (M.F. Cowlishaw, private communication, 1990.)

33 J.B. Harmon, General Description of the Cambridge Monitor System, IBM Cambridge SR&D Center Research Time-Sharing Computer Memorandum 3, Cambridge, Mass., May 12, 1965.

34 Bob Creasy has commented, "Simplicity was important because of our limited resource. I didn't expect the design [of CMS] to hold for more than a couple of years. We recognized the importance of multiple processes in a single-user environment, but we could not afford the complexity. To put it another way, we weren't smart enough to make it simple enough." (R.J. Creasy, private communication, 1990.)

35 R.J. Creasy, "The Origin of the VM/370 Time-Sharing System", IBM Journal of Research and Development, vol. 25, no. 5, September, 1981, p. 485.

================

CP-40 would be far more modular than CTSS, in that it would be divided into two independent components. In the words of Bob Creasy:

A key concept of the CP/CMS design was the bifurcation of computer resource management and user support. In effect, the integrated design was split into CP and CMS. CP solved the problem of multiple use by providing separate computing environments at the machine instruction level for each user. CMS then provided single user service unencumbered by the problems of sharing, allocation, and protection. 36 As the weeks went by and the real power of the virtual machine concept unfolded before them, their excitement grew. In discussing the decision to create exact replicas of real machines, Les Comeau has written, "It seems now that the decision to provide a Control Program interface that duplicated the System/360 architecture interface was an obvious choice. Although it was, given our measurement objective, it wasn't, given our in-house interactive system objective." 37 He credits "the strong wills and opinions of the group" for providing further motivation for selecting such a well-defined interface 38 between the CP and CMS components:

I think that most designers recognize the need for good separation of function in programming system design, but compromise becomes the rule very early in the effort. With the particular group assembled to build CP/CMS, the personalities reinforced that design principle, rather than compromising it.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Cryptogram Newsletter is off the wall?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cryptogram Newsletter is off the wall?
Newsgroups: sci.crypt
Date: Fri, 24 Nov 2000 16:14:58 GMT
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
SYN bombing is an illuminating example. Every busy web server continually suffers from orphan TCP SYN's that cannot be distinguished from intentional SYN attacks. Every time a random PC is disconnected just after sending a SYN to start fetching an HTTP page, the target HTTP server will see a TCP/IP packet that cannot be distinguished from a SYN attack. The only distinguishing characteristic of a SYN attack is enough orphan SYN's to cause problems, and that depends more on the nature of the system under "attack" than on other people's intentions, good or otherwise.

some number of the orphan SYNs would go poof if there were some way of communicating ICMP not-available/unreachable indications up the stack.
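
a toy model of the situation (sizes and timeouts are arbitrary illustrative values, not anybody's real stack parameters) -- half-open entries from orphan SYNs occupy backlog slots whether they're accidental or malicious, and an unreachable indication pushed up to this layer would let them be released early:

    BACKLOG_SIZE = 128          # arbitrary illustrative value
    SYN_TIMEOUT = 30.0          # seconds a half-open entry is retained

    backlog = {}                # (src_ip, src_port) -> time the SYN arrived

    def on_syn(src, now):
        # age out stale half-open entries (orphan SYNs eventually expire)
        for key in [k for k, t in backlog.items() if now - t > SYN_TIMEOUT]:
            del backlog[key]
        if len(backlog) >= BACKLOG_SIZE:
            return "drop"       # at this point legitimate connections suffer too
        backlog[src] = now
        return "send syn-ack"

    def on_ack(src):
        backlog.pop(src, None)  # handshake completed normally

    def on_icmp_unreachable(src):
        # if the unreachable indication were passed up to this layer,
        # the orphan half-open entry could be released immediately
        backlog.pop(src, None)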

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/



