List of Archived Posts

2002 Newsgroup Postings (08/08 - 08/29)

misc. old benchmarks (4331 & 11/750)
misc. old benchmarks (4331 & 11/750)
IBM 327x terminals and controllers (was Re: Itanium2 power
misc. old benchmarks (4331 & 11/750)
misc. old benchmarks (4331 & 11/750)
PKI, Smart Card, Certificate Verification
IBM 327x terminals and controllers (was Re: Itanium2 power
computers and stuff
Avoiding JCL Space Abends
Avoiding JCL Space Abends
PKI, Smartcard, Certificate Chain Verification
Serious vulnerablity in several common SSL implementations?
old/long NSFNET ref
Difference between Unix and Linux?
NASA MOC (mainframe mission operations computer) being powered
Okay, we get it
s/w was: How will current AI/robot stories play when AIs are
s/w was: How will current AI/robot stories play when AIs are
Unbelievable
Vnet : Unbelievable
Vnet : Unbelievable
Vnet : Unbelievable
Vnet : Unbelievable
Vnet : Unbelievable
computers and stuff
miscompares per read error
DEC eNet: was Vnet : Unbelievable
computers and stuff
computers and stuff
computers and stuff
computers and stuff
general networking is: DEC eNet: was Vnet : Unbelievable
Looking for security models/methodologies
general networking is: DEC eNet: was Vnet : Unbelievable
30th b'day .... original vm/370 announcement letter (by popular demand)
... certification
... certification addenda
RCA Spectra architecture
GOTOs cross-posting
Vnet : Unbelievable
hung/zombie users
How will current AI/robot stories play when AIs are real?
MVS 3.8J and NJE via CTC
how to build tamper-proof unix server?
how to build tamper-proof unix server?
How will current AI/robot stories play when AIs are real?
OT (sort-of) - Does it take math skills to do data processing ?
OT (sort-of) - Does it take math skills to do data processing ?
MVS 3.8J and NJE via CTC
MVS 3.8J and NJE via CTC
OT (sort-of) - Does it take math skills to do data processing ?
SSL Beginner's Question
Dump Annalysis
general networking
general networking
Moore law
Moore law
History of AOL
OT (sort-of) - Does it take math skills to do data processing ?
History of AOL
IBM-Main Table at Share
arrogance metrics (Benoits) was: general networking
OT (sort-of) - Does it take math skills to do data processing ?
OT (sort-of) - Does it take math skills to do data processing ?
History of AOL
History of The Well was AOL
OT (sort-of) - Does it take math skills to do data processing ?

misc. old benchmarks (4331 & 11/750)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: misc. old benchmarks (4331 & 11/750)
Newsgroups: alt.folklore.computers
Date: Thu, 08 Aug 2002 21:52:53 GMT
a random benchmark done in the early 80s, both running some flavor of C under some flavor of unix

4331          vax 11/750
add LONG                  1 msec/add       1 msec/add
add SHORT                 2 msec/add       4 msec/add
add FLOAT                 7 msec/add      19 msec/add
ram read SHORT            7 msec/byte      4 msec/byte
ram read LONG             9 msec/byte      2 msec/byte
ram read CHAR            19 msec/byte      8 msec/byte
ram write SHORT           8 msec/byte      3 msec/byte
ram write LONG            4 msec/byte      2 msec/byte
ram write CHAR           11 msec/byte      7 msec/byte
ram copy SHORT            8 msec/byte      4 msec/byte
ram copy LONG             4 msec/byte      2 msec/byte
ram copy CHAR            16 msec/byte      7 msec/byte
multiply SHORT           16 msec/mult     13 msec/mult
multiply LONG            16 msec/mult      9 msec/mult
multiply FLOAT           21 msec/mult     42 msec/mult
divide SHORT             20 msec/div      12 msec/div
divide LONG              20 msec/div       9 msec/div
divide FLOAT             21 msec/div      74 msec/div

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

misc. old benchmarks (4331 & 11/750)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: misc. old benchmarks (4331 & 11/750)
Newsgroups: alt.folklore.computers
Date: Fri, 09 Aug 2002 12:39:14 GMT
"Rupert Pigott" writes:
Cycle counts for instructions don't really give you a good picture of how pipelining, prefetching and caching affect the picture though... :/

Hence I like stuff like SPEC marks... Not perfect, sure, but they give the system + tool chain a bit of a workout - which I prefer.


the numbers had a little bit of that ... but nothing very complex. they were actually gathered by first doing a loop with nothing ... and then looping the specific operation a couple million times ... then taking the avg. per operation.
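
a minimal C sketch of that kind of harness (purely illustrative -- the iteration count and the use of clock() are assumptions, not the original code):

#include <stdio.h>
#include <time.h>

#define ITERS 2000000L              /* "a couple million" passes */

int main(void)
{
    volatile long i, a = 1, b = 2, c = 0;
    clock_t t0, t1, t2;
    double per_op;

    t0 = clock();
    for (i = 0; i < ITERS; i++)     /* empty loop: loop overhead only */
        ;
    t1 = clock();
    for (i = 0; i < ITERS; i++)     /* same loop wrapped around the operation under test */
        c = a + b;                  /* e.g. the "add LONG" case */
    t2 = clock();

    /* subtract the empty-loop time, then average per operation */
    per_op = ((double)(t2 - t1) - (double)(t1 - t0)) / CLOCKS_PER_SEC / ITERS;
    printf("add LONG: %g sec per add (c=%ld)\n", per_op, (long)c);
    return 0;
}

i.e. time an empty loop, time the same loop wrapped around the operation, subtract, and divide by the iteration count.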

from same source as
https://www.garlic.com/~lynn/2002i.html#12 CDC6600 - just how powerful a machine was it?

linpack numbers ... what i don't know from the previous is which model 750 was actually used for the comparison ... compared to the following linpack numbers. The 4331 and 4341 started shipping spring of 1979.


VAX 11/785 FPA             .20
IBM 4341                   .19
VAX 11/785 FPA             .18
VAX 11/780 FPA             .14
VAX 11/750 FPA             .12
VAX 11/780 FPA             .11
VAX 11/750 FPA             .096
VAX 11/750                 .057
IBM 4331                   .038
VAX 11/725 FPA             .037
VAX 11/730 FPA             .036
VAX 11/750                 .029

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM 327x terminals and controllers (was Re: Itanium2 power

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 327x terminals and controllers (was Re: Itanium2 power
limited?)
Newsgroups: comp.arch
Date: Fri, 09 Aug 2002 13:00:15 GMT
dale@edgehp.invalid (Dale Pontius) writes:
CUT=Control Unit Terminal, means the terminal (3277,8,9) is really dumb, and the smarts are in the control unit. (3272) DFT=Distributed Function Terminal means that the terminal (3278, 3279) has gotten smarter, though the control unit (3274, 3174) has, too. What has really happened is that new function (Programmed Symbols, GDDM, etc) has been made available.

possibly fading memory on my part ... i just vaguely remembered terms CUT & DFT.

original 3272/3277 ... had keyboard logic in the keyboard. it was possible to modify the 3277 keyboard to do different things.

3274/3278/etc moved that logic back to the control unit. a 3274 had special logic so that you could attach a 3277 to a 3274 ... but it was no longer possible to modify a 3278 keyboard to do keystroke stuff. the 3274/3278 did have a key that was something like ">>" & "<<" for double fast cursor motion ... supported by microcode logic in the 3274 controller.

3278s weren't supported on 3272 controllers. 3278 terminals & 3274 controllers were initially introduced together.

it was possible to take apart the 3277 keyboard ... and, with a little wirewrap, change the initial character repeat delay as well as the character repeat rate. this could be used to speed up the cursor motion on the screen. It was also possible to install a FIFO box that handled the interlock between keystrokes and screen write (i.e. the box would hold the keystroke(s) if the screen was being written instead of locking the keyboard). The repeat delay/rate & FIFO box mods for the 3277 keyboard were not possible on the later 327x ... because the keyboard logic had moved back to the controller (3274).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

misc. old benchmarks (4331 & 11/750)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: misc. old benchmarks (4331 & 11/750)
Newsgroups: alt.folklore.computers
Date: Fri, 09 Aug 2002 13:29:28 GMT
other data from
http://home.debitel.net/user/groener1/

some vax-11/7xx


Computer VAX-11/730
Built from: 1982.04
Starting price: DM 150,000.00
Main memory: 1,024 KB
Max. RAM: 2,048 KB
OS: VMS
HDD: 2 x 10 MB

Computer VAX-11/750
Built from: 1980.10
Starting price: DM 230,000.00
Max. RAM: 512 KB
HDD: 250 MB -

Computer VAX-11/780
Announced: 1975
Built from: 1977.10
Starting price: DM 450,000.00
Main memory: 128 KB
Max. RAM: 16,000 KB
Cache: 8 KB
HDD: 14 MB - 1 GB

some ibm 43xx

Computer 4331
Built from: 1979
Main memory: 512 KB
Max. RAM: 1,024 KB

Computer 4341
Built from: 1979
Main memory: 2,048 KB
Max. RAM: 8,000 KB

Computer 4361
Announced: 1983.09
Built from: 1983.IV
Main memory: 2,048 KB
Max. RAM: 4,096 KB

Computer 4381
Announced: 1983.09
Built from: 1984.04
Max. RAM: 16,000 KB

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

misc. old benchmarks (4331 & 11/750)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: misc. old benchmarks (4331 & 11/750)
Newsgroups: alt.folklore.computers
Date: Fri, 09 Aug 2002 17:18:46 GMT
"GerardS" writes:

| 4341-1            370/4341 |    .88 |
| 4341-2            370/4341 |   1.50 |
| 4341-9            370/4341 |    .52 |
| 4341-10           370/4341 |    .75 |
| 4341-11           370/4341 |   1.10 |
| 4341-12           370/4341 |   1.65 |

note the engineering, pre-customer-ship 4341-1 rain/rain4 numbers from
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe


                      158             3031              4341

Rain              45.64 secs       37.03 secs         36.21 secs
Rain4             43.90 secs       36.61 secs         36.13 secs

also times approx;
145                168-3              91
145 secs.          9.1 secs          6.77 secs

===================

and linpack


IBM 370/158               .22
IBM 4341 MG10             .19

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI, Smart Card, Certificate Verification

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI, Smart Card, Certificate Verification
Newsgroups: alt.technology.smartcards
Date: Fri, 09 Aug 2002 17:37:30 GMT
m.lyubich@computer.org (Mykhailo Lyubich) writes:
I've been looking for a PKI system which uses a smartcard, where the smart card is able to verify the certificate chain on-card.

For example, somebody sends me his public key with the certificate. I want to verify the certificate before using the public key for encryption. I want to do the verification on the card as I assume that the card is a less vulnerable environment to run the verification procedure in than the host.


what are the systemic vulnerabilities and why?

if you have a certificate & a public key of the CA ... you could call a subroutine and verify the certificate with the CA's public key ... and then on return branch to successful or non-successful.

are you talking about the code that you are running ... and the instructions in memory that might have vulnerabilities?

would things improve if the subroutine ... instead of doing the actual crypto operations to verify the certificate signature with the CA's public key ... called another subroutine that passed the data to a smartcard and got back an answer???

if the point of vulnerability is the executing code in memory ... then the decision code on return from calling the subroutine to do the verification is as vulnerable as the code doing the verification (aka rather than try and mess around with all the gorpy crypto code involved in verifying a signature ... just zap the branch instruction that implements the branch between "correct" and "incorrect" verification)

moving a very small piece of a systemic whole operation to a smartcard doesn't do much if you leave everything else vulnerable ... the code that evaluates the decision based on verified or not-verified results is still vulnerable, and the code that implements some policy based on the code that evaluates the decision is still vulnerable.

at the very simplest .... validating a signature is a very complex compare instruction (a signature verification consists of calculating a hash of the certificate ... essentially decrypting the digital signature with the public key to arrive at a hash ... and then comparing the two hashes).


COMPARE   HASH1,HASH2
BRANCH EQUAL to GOOD
BRANCH NOT EQUAL to BAD

Lets say you move the COMPARE instruction into a smartcard (effectively what the certificate verification amounts to). The branch instructions and the code at "good" and "bad" are still vulnerable unless they are moved too.
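
the same point in a minimal C sketch (the smartcard call is a made-up stand-in, stubbed out so the example compiles -- not any real card API):

#include <stdio.h>
#include <stddef.h>

/* Hypothetical card call: hashes the certificate, recovers the hash from
   the signature with the CA public key, and does the compare on-card.
   Stubbed here so the sketch compiles; a real card API would differ.     */
static int smartcard_verify_cert(const unsigned char *cert, size_t len)
{
    (void)cert; (void)len;
    return 1;                        /* pretend the signature checked out */
}

int main(void)
{
    unsigned char cert[64] = {0};    /* stand-in for an encoded certificate */
    int ok = smartcard_verify_cert(cert, sizeof cert);   /* the COMPARE, moved on-card */

    if (ok)                                  /* the branch still runs on the host,   */
        printf("certificate accepted\n");    /* as does the "GOOD" path ...          */
    else
        printf("certificate rejected\n");    /* ... and the "BAD" path, so zapping   */
                                             /* this one test defeats the whole check */
    return 0;
}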

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM 327x terminals and controllers (was Re: Itanium2 power

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 327x terminals and controllers (was Re: Itanium2 power
limited?)
Newsgroups: comp.arch
Date: Sat, 10 Aug 2002 03:32:41 GMT
Anne & Lynn Wheeler writes:
possibly fading memory on my part ... i just vaguely remembered terms CUT & DFT.

talk about fading memory .... it is also what i get for posting from my laptop while on the road.

ok ... here is reference to 3272/3277 "ANR" protocol
https://www.garlic.com/~lynn/2001m.html#17 3270 Protocol

here are some timing numbers vis-a-vis ANR & DFT ... where a lot of the logic had been moved out of the head & keyboard and back into the microprogramming in the 3274 controller
https://www.garlic.com/~lynn/2001m.html#19 3270 Protocol

in the above ... lots of CMS systems had quarter second avg. system response and numerous (like SJR) had .11 second trivial system response for the 90th percentile (i.e. not avg, 90 percent of trivial responses were .11 second or less).

aka ... 3272 hardware time avg. .086 seconds ... while 3274 hardware time normal processing was .530 seconds.

in addition to not being able to fix some of the human factors by hacking the keyboard ... it was not possible to achieve under .5 second response time ... no matter how you cut it (the controller hardware time was over & above system response ... and there could also be queuing contention from other terminals keeping the 3274 busy).
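
putting the above numbers together (just the arithmetic -- perceived response taken as system response plus controller hardware time):

#include <stdio.h>

int main(void)
{
    double sys_resp = 0.11;     /* 90th-percentile trivial system response (SJR)  */
    double t3272    = 0.086;    /* avg. 3272 controller hardware time             */
    double t3274    = 0.530;    /* normal 3274 controller hardware time           */

    /* what the person at the terminal sees is system response plus the
       controller hardware time stacked on top of it                              */
    printf("3272: %.3f sec perceived response\n", sys_resp + t3272);  /* ~0.196   */
    printf("3274: %.3f sec perceived response\n", sys_resp + t3274);  /* ~0.640   */
    return 0;
}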

3274s had to carry along ANR support .... to accommodate the legacy 3277s lying around. even though 3274s exhibited all the additional controller overhead ... early PC cards could be found that were ANR ... because even on a 3274, ANR got something like three times the thruput of DFT.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

computers and stuff

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: computers and stuff.
Newsgroups: comp.lang.asm370
Date: Sat, 10 Aug 2002 19:48:19 GMT
"John Hastings" writes:
^ that's what I said... Every system has a use, irregardless of the attitude that people cop, acting like they know everything. The problem is when people don't know enough, and choose the wrong system for their usage.

this could be a discussion about the merits of corvettes and garbage trucks ... and why doesn't somebody put a 30 cubic meter trunk in a corvette so it can also be sold to municipal sanitation departments as a garbage truck.

there are a broad range of vehicles .... some very specialized and others that are less so .... and a large variety that are targeted at the consumer market (or at least a market that may not have 20-30 years of detailed experience in designing and implementing vehicles).

lots of people may choose a less than optimal vehicle for their purposes ... it doesn't necessarily make it wrong ... it just makes it less than optimal (at least for some set of metrics associated with optimal). in some cases, the cost of making a more informed decision may be higher than any improvement in the decision.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Avoiding JCL Space Abends

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Avoiding JCL Space Abends
Newsgroups: bit.listserv.ibm-main
Date: Sun, 11 Aug 2002 15:06:56 GMT
edgould@AMERITECH.NET (Edward Gould) writes:
An hour or two does not come close to what some MF batch do. As for backing up files.. fuzzy backups are OK unless you want recovery. If the files are a DBMS then fine. but plain old files I want data integrity. The DBMS should provide data integrity. DB2 has a locking mechanism. I am sure that the DBMS for UNIX has one as well (don't they????).

DBMS typically have locking & logging for transactional integrity (ACID properties)

then there are separate issues for no-single-point-of-failure ... aka some form of replicated data. replicated data may include things like a journal (aka effectively long-lived log entries).

some of the DBMS backups are actually fuzzy plus a backup of the sequential journal (starting at least as far back in time in the journal as when the fuzzy backup started and continuing thru when the fuzzy backup ended). recovery consists of restoring the fuzzy backup and then rerunning the transactions from the journal backup. ADSM (now Tivoli) has provided this kind of backup across a wide range of platforms.

at least at the time when we were doing ha/cmp ... one of the issues with unix platform DBMS in a parallel environment was the distributed lock manager and fast commit. fast commit is basically write-ahead log (i.e. rather than logging "before" image entries in the log, "after" images went into the log ... as soon as an "after" image was on disk in the log, the transaction was considered committed). As part of some of the distributed lock manager work ... I had worked on being able to pass cached images with the lock. The approach at the time was to always flush modified cached images to the "home" location in the database prior to letting a lock float to another location. The issue was allowing the "home" DBMS location to become really stale, with the current state of the record possibly distributed across the write-ahead logs of several distributed processors. The issue after an outage was to correctly merge all the write-ahead logs during recovery in the correct original transaction order.
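
a toy C sketch of the fast-commit / write-ahead idea (in-memory arrays stand in for the on-disk log and the home copy; the record layout and names are purely illustrative, not any particular DBMS):

#include <stdio.h>

#define NREC 4
static long db[NREC];                  /* the "home" copy of the records    */

struct log_entry { int txid; int rec; long after; };

static struct log_entry wal[64];       /* stand-in for the on-disk log      */
static int wal_len = 0;

/* fast commit: append the after-image; once it is "on disk" in the log,
   the transaction counts as committed -- the home copy can stay stale      */
static void commit(int txid, int rec, long after)
{
    struct log_entry e = { txid, rec, after };
    wal[wal_len++] = e;                /* real code would fsync() here      */
}

/* recovery after an outage: replay the log in original commit order,
   bringing the stale home copy up to the last committed state              */
static void recover(void)
{
    for (int i = 0; i < wal_len; i++)
        db[wal[i].rec] = wal[i].after;
}

int main(void)
{
    commit(1, 0, 100);                 /* tx 1 sets record 0 to 100         */
    commit(2, 1, 200);                 /* tx 2 sets record 1 to 200         */
    /* pretend a crash happened before db[] was ever written ...            */
    recover();
    printf("rec0=%ld rec1=%ld\n", db[0], db[1]);   /* rec0=100 rec1=200     */
    return 0;
}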

random acid properties
https://www.garlic.com/~lynn/aadsm8.htm#softpki19 DNSSEC (RE: Software for PKI)
https://www.garlic.com/~lynn/aepay10.htm#27 [dgc.chat] XML/X - part I
https://www.garlic.com/~lynn/2001.html#6 Disk drive behavior
https://www.garlic.com/~lynn/2002d.html#5 IBM Mainframe at home
https://www.garlic.com/~lynn/2002e.html#44 SQL wildcard origins?

random lock manager refs
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002e.html#71 Blade architectures
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002f.html#4 Blade architectures
https://www.garlic.com/~lynn/2002f.html#5 Blade architectures
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures

random no single point of failure refs:
https://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
https://www.garlic.com/~lynn/94.html#16 Dual-ported disks?
https://www.garlic.com/~lynn/94.html#33a High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/96.html#31 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/aadsm2.htm#availability A different architecture? (was Re: certificate path
https://www.garlic.com/~lynn/aadsm9.htm#pkcs12 A PKI Question: PKCS11-> PKCS12
https://www.garlic.com/~lynn/2001.html#34 Competitors to SABRE?
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2002h.html#40 [survey] Possestional Security

& ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

random adsm refs:
https://www.garlic.com/~lynn/2001n.html#66 Holy Satanism! Re: Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002e.html#3 IBM's "old" boss speaks (was "new")
https://www.garlic.com/~lynn/2002h.html#29 Computers in Science Fiction

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Avoiding JCL Space Abends

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Avoiding JCL Space Abends
Newsgroups: bit.listserv.ibm-main
Date: Sun, 11 Aug 2002 22:33:45 GMT
edgould@AMERITECH.NET (Edward Gould) writes:
VSAM was a stepping stone for DB2. But very few people, IMO, (and it was probably because of cost) got off of the VSAM wagon and went to DB2. The file integrity issue was not IBM's issue (to be fair) as much as it was a cost and complexity issue for the users. The users jumped on it as a cheap replacement, and because of integrity issues (and other reasons) sharing of VSAM files was not on IBM's list of TODO's because they were pushing DB2.

System/r was the original relational database at bldg. 28. There were quite a few differences between the VSAM/IMS/STL (bldg. 90) guys and the whole relational thing. Eventually there was technology transfer of system/r to endicott for sql/ds. Somewhat later, one of the people in the following reference did much of the technology transfer of sql/ds from endicott back to STL for DB2 (before leaving to work for another company):
https://www.garlic.com/~lynn/95.html#13 SSA

random system/r stuff
https://www.garlic.com/~lynn/2000.html#18 Computer of the century
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000b.html#55 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2001d.html#44 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001i.html#32 IBM OS Timeline?
https://www.garlic.com/~lynn/2002e.html#26 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#44 SQL wildcard origins?
https://www.garlic.com/~lynn/2002g.html#58 Amiga Rexx
https://www.garlic.com/~lynn/2002g.html#59 Amiga Rexx
https://www.garlic.com/~lynn/2002g.html#60 Amiga Rexx
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2002h.html#17 disk write caching (was: ibm icecube -- return of
https://www.garlic.com/~lynn/2002h.html#76 time again
https://www.garlic.com/~lynn/2002i.html#69 Hercules and System/390 - do we need it?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI, Smartcard, Certificate Chain Verification

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI, Smartcard, Certificate Chain Verification
Newsgroups: sci.crypt
Date: Sun, 11 Aug 2002 22:17:26 GMT
umardikar@yahoo.com (Upendra) writes:
Do you think that the hardware security modules or hosts with PCI cards can solve this issue? (e.g. nCipher, Atalla devices) These devices also have trusted paths. Smart cards, to my knowledge, have means to authenticate but not encrypt.

Of course, it depends on where you want to set up this PKI system. If it is just 1 PC then I don't know if going with hardware makes sense. But in a corporate environment it may make sense.


most of these boxes are either for hardware acceleration (not protection) .... i.e. high speed signing, verification, encryption ... and/or they hold/process secret/private keys (reducing the risk and vulnerability to secret/private keys).

the only other kind is partial domain processing .... (aka 90 percent of the processing was on a general purpose computer and the rest was on a custom chip that involved secret algorithms).

this other stuff is more akin to wearing a flak vest that only covers the belly button. sometimes there is confusion that because public keys and asymmetric algorithms are somehow related to private keys .... that if hardware tokens are used to protect private keys and private key operations .... then there is some magic property conveyed with regard to doing something for public key operations.

thread drift from alt.technology.smartcards
https://www.garlic.com/~lynn/2002k.html#5 PKI, Smart Card, Certificate Verification

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Serious vulnerablity in several common SSL implementations?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Serious vulnerablity in several common SSL implementations?
Newsgroups: sci.crypt
Date: Mon, 12 Aug 2002 14:00:58 -0600
fgrieu@micronet.fr (Francois Grieu) writes:
Found on Bugtraq, archived at Google
http://groups.google.com/groups?as_umsgid=Pine.BSO.4.33.0208031620550.8632-100000@moxie.thoughtcrime.org

My edited summary of Mike Benham's report:
Several popular implementations of SSL contain a vulnerability that allows for an active, undetected, man-in-the-middle attack. The problem is that some SSL implementations fail to check that intermediate certificates have valid CA Basic Constraints. This allows the holder of a valid CA-signed certificate for ANY domain to generate a CA-signed certificate for ANY OTHER domain, that will get accepted by affected implementations of SSL.

I don't know enough about SSL to assert if this is a bug, and where. But this does seem plausible, and serious.

Random thoughts:
- how can Joe User trust SSL?
- is it that current software has outgrown our capacity to check it?
- or that we are not willing to pay the price (and time to market) to check things à la Common Criteria EALx?
- or that the world must evolve thru trial and error, rather than the Edsger Dijkstra way?
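
for reference, the missing check described in the report above amounts to something like the following toy chain walk (hypothetical structure, not a real X.509 parser):

#include <stdio.h>

/* Toy certificate-chain check: chain[0] is the end-entity cert,
   chain[1..n-1] are its issuers up to the trusted root.                    */
struct toycert { const char *subject; int sig_ok; int basic_constraints_ca; };

static int chain_ok(const struct toycert *chain, int n)
{
    for (int i = 0; i < n; i++) {
        if (!chain[i].sig_ok)
            return 0;                   /* bad signature anywhere: reject    */
        /* the check the report says some implementations skip: every cert
           used as an issuer must carry the CA basic constraint              */
        if (i > 0 && !chain[i].basic_constraints_ca)
            return 0;
    }
    return 1;
}

int main(void)
{
    /* a leaf cert signed by an ordinary (non-CA) server cert, which was in
       turn signed by a real CA -- the forged-chain case described above     */
    struct toycert chain[] = {
        { "www.victim.example",   1, 0 },
        { "www.attacker.example", 1, 0 },   /* valid cert, but not a CA      */
        { "Some Root CA",         1, 1 },
    };
    printf("chain %s\n", chain_ok(chain, 3) ? "accepted" : "rejected");
    return 0;
}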


the crypto and the key exchange & recording is fairly straightforward in something like PGP .... most people can fairly easily follow all the steps and the crypto is fairly easily isolated.

going to SSL isn't so much the technical pieces .... it is the large explosion in the number of dependent (business) processes and the introduction of a huge number of additional points of exploit/vulnerability.

it isn't so much it has outgrown the software technology ... it is somewhat pushing the limits of the number of interconnected processes that are all points of vulnerability/exploits; lots of processes, lots of steps, lots of interconnected steps, lots of places where things can go wrong.

it is the opposite of KISS.

frequently one of the security tenets tends to be simplicity and one of the non-security (or unsecurity) tenets is complexity (aka the more complexity and the more steps and the more interconnections ... the higher the probability that there will be at least one lapse).
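
a back-of-the-envelope illustration of that last point (the per-step lapse probability here is just an assumed number):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double p = 0.01;                 /* assumed chance of a lapse in any one step */
    int steps[] = { 1, 10, 50, 200 };

    for (int i = 0; i < 4; i++)      /* P(at least one lapse) = 1 - (1-p)^n       */
        printf("%3d steps: %.0f%% chance of at least one lapse\n",
               steps[i], 100.0 * (1.0 - pow(1.0 - p, steps[i])));
    return 0;
}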

Common Criteria EALs still have a way to go in this area. One view of protection profiles, common criteria, etc .... is that the orange book was a generic set of evaluation/certification criteria for general purpose computers in a non-networked environment.

As things got more & more complex (and networked) .... it became harder and harder to get a general purpose certification. in some sense, a protection profile defines a restricted domain .... possibly including the application .... which gets a very focused evaluation. It doesn't try and certify a general purpose computer for all possible applications running on that computer in all possible environments ... it frequently just certifies a specific piece in a very specific environment and set of conditions.

I'm looking at something of an interesting example now of getting a higher EAL evaluation/certification that includes a FIPS186-2/X9.62/ecdsa application. It is not clear if there have actually been any crypto evaluations. There are crypto device standards in FIPS140. However, if i look at various other higher level evaluations .... they don't actually include a crypto application ... but possibly just the hardware and some small piece of software running on that hardware. Any crypto app that is deployed on that platform isn't actually part of the evaluation/certification.

So the current state of the art .... is somewhat evaluating the technology of some subportion of the individual components ... rarely looking at the completely deployed component and even less at the complex interaction of a large set of different components. And that is just the technology .... in a complex CA environment there are potentially a large number of non-technology vulnerabilities.

one might compare the current environment more to civil engineering than to science. you have a large number of complex bridges and complex skyscrapers. some fail and some don't. analysis of the ones that fail ... results in updating design methodologies. After a few thousand years ... the body of engineering knowledge improves and there tend to be fewer failures.

for any design that is subject to vulnerabilities, exploits, or failures, KISS is better than complex. KISS implies that some number of people can examine it and understand it. Somewhat related to KISS is the actual ability to have people examine it. Understanding is an issue. Complexity or simplicity is an issue in understanding. Ability to examine and study is also an issue in understanding.

KISS also tends to reduce the number of components .... the larger the number of components (technical or business processes or other kinds) and the larger the number of interactions between the components, the larger the number of points of failure.

So good design might then be restated as a) understanding and b) points-of-failure. KISS frequently both improves the ability to understand and reduces the number of failure points.

I just had to put in the above paragraph ... it is sort of a paraphrase from Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd

random eal refs:
https://www.garlic.com/~lynn/aadsm12.htm#13 anybody seen (EAL5) semi-formal specification for FIPS186-2/x9.62 ecdsa?
https://www.garlic.com/~lynn/aadsm12.htm#14 Challenge to TCPA/Palladium detractors
https://www.garlic.com/~lynn/2002c.html#21 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002h.html#71 history of CMS
https://www.garlic.com/~lynn/2002h.html#84 history of CMS
https://www.garlic.com/~lynn/2002j.html#82 formal fips186-2/x9.62 definition for eal 5/6 evaluation
https://www.garlic.com/~lynn/2002j.html#84 formal fips186-2/x9.62 definition for eal 5/6 evaluation
https://www.garlic.com/~lynn/2002j.html#86 formal fips186-2/x9.62 definition for eal 5/6 evaluation

some generic ssl, vulnerability, and assurance comments
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
https://www.garlic.com/~lynn/subintegrity.html#fraud
https://www.garlic.com/~lynn/subintegrity.html#assurance

the ssl cert threads imply a much simpler design with nearly all of the CA-related components eliminated ... aka CAs for ssl domain name certificates have a critical dependency on the domain name infrastructure ... tweaking the domain name infrastructure directly can provide a simpler and higher level of integrity while totally eliminating CAs and certificates for SSL operation.
https://www.garlic.com/~lynn/x959.html#aads

some specifc MITM discussions:
https://www.garlic.com/~lynn/aadsm11.htm#39 ALARMED ... Only Mostly Dead ... RIP PKI .. addenda
https://www.garlic.com/~lynn/aadsmail.htm#variations variations on your account-authority model (small clarification)
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001.html#68 California DMV
https://www.garlic.com/~lynn/2001b.html#0 Java as a first programming language for cs students
https://www.garlic.com/~lynn/2001e.html#83 The Mind of War: John Boyd and American Security
https://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
https://www.garlic.com/~lynn/2001i.html#28 Proper ISA lifespan?
https://www.garlic.com/~lynn/2001k.html#1 Are client certificates really secure?
https://www.garlic.com/~lynn/2001m.html#41 Solutions to Man in the Middle attacks?
https://www.garlic.com/~lynn/2002c.html#4 Did Intel Bite Off More Than It Can Chew?
https://www.garlic.com/~lynn/2002c.html#5 Did Intel Bite Off More Than It Can Chew?
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#43 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002d.html#47 SSL MITM Attacks
https://www.garlic.com/~lynn/2002d.html#50 SSL MITM Attacks
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002j.html#38 MITM solved by AES/CFB - am I missing something?!
https://www.garlic.com/~lynn/2002j.html#58 SSL integrity guarantees in abscense of client certificates

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

old/long NSFNET ref

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: old/long NSFNET ref.
Newsgroups: alt.folklore.computers
Date: Tue, 13 Aug 2002 05:41:12 GMT

Preliminary Announcement:                                       3/28/86
_________________________

PROGRAM ANNOUNCEMENT

CONNECTIONS TO NSF'S NATIONAL SUPERCOMPUTER ACCESS NETWORK - (NSFnet)

NETWORKING PROGRAM

INTRODUCTION
____________

The National Science Foundation established the Office of Advanced
Scientific Computing (OASC) in response to the concern that academic
research has been severely constrained by the lack of access to
advanced computing facilities.  Several reports found that advanced
computers have become an important resource in making new discoveries;
that there is an immediate need to make supercomputers available to US
researchers; and that computer networks are required to link
researchers to supercomputers and to each other.

The OASC has initiated three programs:  The Supercomputer Centers
Program to provide Supercomputer cycles;  the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

The Centers Program has been providing advanced scientific computing
cycles since 1984 through its six resources centers at Purdue
University, University of Minnesota, Colorado State University, Boeing
Computer Services, AT&T Bell Labs, and at Digital Productions.  In
addition, five new national supercomputer centers were funded in
1985.  These new centers, at the University of Illinois, Cornell
University, the San Diego Supercomputer Center located at the
University of California San Diego campus, the John Von Neumann Center
located near Princeton University, and the Pittsburgh Center, will
begin full operation in the first half of 1986.

The OASC Networking Program will provide remote access to these NSF
supercomputer centers.  During calendar year 1986 the first steps will
be taken to establish NSFnet to connect the NSF-funded supercomputer
centers to a large number of researchers with high bandwidth
communications.  NSFnet will greatly enhance the ability of scientists
and engineers to access the centers and to communicate with each
other.  Eventually NSFnet is expected to support the whole academic
research community as a general-purpose communications network.

Current networks comprising the initial phase of NSFnet include:  the
ARPANET, the Supercomputer Centers' Consortia Networks at the John Von
Neumann Consortium (Princeton) and the San Diego Supercomputer Center
(San Diego), the Illinois Supercomputer center network, a
Supercomputer Center Backbone Network linking all the NSF
supercomputer centers together, the various state research networks
(such as the Merit Computer network in Michigan and the planned New
York State Education and Research Network (NYSERnet)), and, most
importantly, the local campus-wide networks.  NSFnet will be built as
an Internet, or "network of networks", rather than as a separate, new
network.

Because NSFnet is a network of networks, a common set of networking
protocol standards are required for NSFnet, and the OASC has
determined that the DARPA/DOD protocol suite (TCP/IP and associated
application protocols) shall be the initial NSFnet standard.

DESCRIPTION OF PROJECT
______________________

The purpose of this announcement is to encourage proposals to connect
to NSFnet from all U.S. academic research institutions that support or
plan to support NSF supercomputer users.  In general, it is expected
that institutions will propose to connect to NSFnet by installing an
IP Router/Gateway, or a gateway computer system supporting the TCP/IP
communications protocols.  This gateway will link the campus-wide
network to NSFnet by means of medium speed communication circuits
(56,000 bits per second) connected to one of the component networks of
the NSFnet:  the NSF Supercomputer Center Networks, Consortia
Networks, the NSFnet Backbone Network, or a State or Regional network
connected to NSFnet.  Connections to the ARPAnet may also be proposed,
but it should be noted that the number of available connections is
limited, and that the network configuration is determined by the
Defense Communications Agency.  The gateway systems proposed must be
available to all researchers at the institution.  Ideally, the
institution will have installed a high-speed campus network, and have
adopted the TCP/IP protocols as standard.  Where other networking
protocols are used on the campus, the institution will be responsible
for the installation of any additional network gateway/relay systems
required to resolve the protocol conversion issues.

NSF is also interested in receiving proposals from academic research
institutions for the installation of network gateway systems and
communications services which would connect Regional and State-wide
networks to NSFnet, where such networks support NSF supercomputer
users at several research institutions or consortia.  Ideally, the
regional network and the connected campus networks, will have adopted
the TCP/IP protocols as standard, but where other networking protocols
are used, the regional network will be responsible for ensuring that
all researchers have transparent access to the NSFnet.

In the case of campus gateway systems, one year grants of up to
$50,000 may be provided to support the purchase and installation of
the gateway system and to fund communications circuits to connect to
NSFnet.  Typically the NSF grants for the IP router/gateway or gateway
computers will be between $10,000 and $30,000, with $20,000 - $30,000
per year for communication circuits.  Campuses will be expected to
fund the campus network, the support costs of the campus
gateway system (space, air-conditioning, maintenance, local
supercomputer user support, manpower, etc.), and the cost of the
networking connections after the initial funding period has ended.
Funding for communications circuits for up to two additional years may
be available.

In the case of regional or state-wide networks, additional funding may
be provided, on a case by case basis, for a period of up to three
years, to support the development and/or enhancement of the network.

A total of up to 40 awards are planned for the two years 1986 and
1987.  Support for this program is contingent on the availability of
funds.  This announcement does not obligate the NSF to make any awards
if such funding is not available.

PROPOSAL SUBMISSION INFORMATION
_______________________________

A.  Who May Submit

U.S. academic institutions with scientific and engineering graduate
research and education programs are invited to submit proposals.
Proposals involving multi-institutional arrangements for regional
networks should be made through a single, lead institution.  Proposed
participants must endorse the proposal as submitted.

The Foundation welcomes proposals on behalf of all qualified engineers
and scientists.  NSF strongly encourages women, minorities, and the
physically handicapped to participate fully in the program described in
this announcement, both as investigators and as students.

B.  Principal Investigator

The individual designated as principal investigator will be
responsible for management and staffing and procurement, use, and
maintenance of equipment.

C.  Timing of Submission

To be considered for funding in FY1986, proposals should be submitted
to the Foundation on or before June 1, 1986; proposals received after
this date will be considered for funding in FY1987.

D.  Rights to Proposal Information

A proposal that results in an NSF award will become part of the record
of the transaction and will be available to the public on specific
request.  Information or material that the Foundation and the awardee
organization mutually agree to be of a privileged nature will be held
in confidence to the extent permitted by law, including the Freedom of
Information Act (5 U.S.C. 552).  Without assuming any liability for
inadvertent disclosure, NSF will seek to limit dissemination of such
information to its employees and, when necessary for evaluation of the
proposal, to outside reviewers.  Accordingly, any privileged
information should be in a separate, accompanying statement bearing a
legend similar to the following: "Following is (proprietary) (specify)
information that (name of proposing organization) requests not be
released to persons outside the Government, except for purposes of
evaluation."

E.  Unsuccessful Proposals

An applicant whose proposal for NSF support has been declined may
request and receive from the cognizant program officer the reasons for
the action.  In addition, the principal investigator/project director
will obtain verbatim copies of reviews, although not the names of
reviewers.

F.  Participation in Research and Research-related Activities

o  The Foundation provides awards for research in the sciences and
      engineering.  The awardee is wholly responsible for the conduct
      of such research and preparation of the results for
      publication.  The Foundation, therefore, does not assume
      responsibility for such findings or their interpretation.

o  The Foundation welcomes proposals on behalf of all qualified
      scientists and engineers, and strongly encourages women and
      minorities to compete fully in any of the research and
      research-related Programs described in this document.

o  In accordance with Federal statutes and regulations and NSF
      policies, no person on grounds of race, color, age, sex,
      national origin, or physical handicap shall be excluded from
      participation in, denied the benefits of, or be subject to
      discrimination under any program or activity receiving financial
      assistance from the National Science Foundation.

o  The National Science Foundation has TDD (Telephonic Device for
      the Deaf) capability which enables individuals with hearing
      impairment to communicate with the Division of Personnel and
      Management for information relating to NSF programs, employment,
      or general information.  This number is (202) 357-7492.

G.  Proposal Contents

The proposal should be prepared in accordance with the enclosed NSF
Form 83-57, Grants for Scientific and Engineering Research, which is
available from the Forms and Publications unit, Room 233, National
Science Foundation, Washington, DC 20550.  Note in particular the
standard cover page, executive summary, and budget formats.  Each
proposal should reflect the unique combination of the proposing
institution's interests and capabilities and should discuss the
features of the gateway system in sufficient detail to be evaluated in
accordance with the criteria listed in this announcement.  In order to
facilitate review, the proposals should contain only material
essential for the review.  Since reviewers will be asked to review
more than one proposal, lengthy proposals are not recommended.
Proposals should be securely fastened together but not placed in ring
binders.  A total proposed budget should be submitted for the project
including all cost sharing commitments.  The proposal must describe:

o   Major existing and/or planned research and education projects
        that need access to advanced computer systems, and the
        benefits that such access is expected to provide;

o   Current connections to wide area networks available on campus,
        and current methods of supercomputer access. eg., 1200 baud
        dial-up, travel to supercomputer site, BITNET, ARPANET, etc.;

o   The proposed NSFnet Gateway system, including the campus
        network connections and the Gateway System hardware and
        software capabilities;

o   The user support services to be provided to NSF supercomputer
        users on campus;

o   The expertise of the implementing personnel and their
        experience with the TCP/IP protocols;

o   In the case of regional or state-wide networks, existing or
        potential connection to other campuses or research
        institutions, and the network connections and communications
        protocols supported;

o   Campus Network plans to integrate all networks currently
        available to the institution and description of the campus
        computers to be connected;

o   The budget requested including cost sharing by the
        Institution, by local and state government, as well as
        industry discounts.

H.  Proposal Evaluation

Evaluation of proposals in response to this Solicitation will be
administered by the Office of Advanced Scientific Computing.
Evaluations of the proposals received prior to June 1 are expected to
be completed by August 1, 1986, with awards to follow shortly
thereafter.

The primary criterion for evaluation will key on the effect of the
proposed arrangement on the advancement of science and engineering
research using supercomputers.

Other Evaluation criteria are listed below:

o   Quality of the plan to provide institution-wide (state or
        region-wide) access to the NSFnet gateway;

o   Quality of plans to develop campus wide high speed networks to
        connect research departments to the NSFnet gateway;

o   Technical expertise in computer networking (especially TCP/IP
        based networking) or plans to develop such expertise;

o   Quality and cost-effectiveness of gateway hardware and
        software configuration proposed (including cost sharing);

o   Quality of the supercomputer user support services and support
        staff planned.

WHERE AND HOW TO SUBMIT PROPOSALS
 _________________________________

Twenty copies of the proposal should be submitted to:

Data Support Services Section
Attn:  Office of Advanced Scientific Computing
National Science Foundation
Washington, DC  20550

One copy of the proposal must be signed by the principal investigator
and an official authorized to commit the institution in business and
government affairs.

For inquiries, contact the Office of Advanced Scientific Computing
(202) 357-9776.

==================================================

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

announcement of award:
https://www.garlic.com/~lynn/2000e.html#10 Is Al Gore The Father of the Internet

random other nsfnet refs:
https://www.garlic.com/~lynn/internet.htm
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/98.html#59 Ok Computer
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#37a Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#37b Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#38c Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#40 [netz] History and vision for the future of Internet - Public Question
https://www.garlic.com/~lynn/99.html#138 Dispute about Internet's origins
https://www.garlic.com/~lynn/99.html#146 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000.html#49 IBM RT PC (was Re: What does AT stand for ?)
https://www.garlic.com/~lynn/2000c.html#26 The first "internet" companies?
https://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#78 Free RT monitors/keyboards
https://www.garlic.com/~lynn/2000d.html#16 The author Ronda Hauben fights for our freedom.
https://www.garlic.com/~lynn/2000d.html#19 Comrade Ronda vs. the Capitalist Netmongers
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000d.html#56 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#58 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#59 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#71 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#72 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#73 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#74 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#77 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#5 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#10 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#11 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#28 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#29 Vint Cerf and Robert Kahn and their political opinions
https://www.garlic.com/~lynn/2000e.html#31 Cerf et.al. didn't agree with Gore's claim of initiative.
https://www.garlic.com/~lynn/2000f.html#44 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#47 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#50 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#51 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2001i.html#6 YKYGOW...
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#45 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#80 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#82 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#85 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#86 Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#15 Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Difference between Unix and Linux?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Difference between Unix and Linux?
Newsgroups: alt.folklore.computers
Date: Tue, 13 Aug 2002 13:50:17 GMT
jmfbahciv writes:
This is on my "Mysteries of my Life" list. We found that a sense of humor boosted sales.

my little contribution from the resource manager. modules were all named with a three letter prefix and three letters that somehow denoted the function. the kernel dispatcher was xxxDSP, the kernel scheduler was xxxSCH, kernel paging had several: xxxPAG, xxxPGT, etc.

one of the primary kernel components for the resource manager was xxxSTP from the '60s tv advertisement "the racer's edge".

random ref:
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

NASA MOC (mainframe mission operations computer) being powered

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: NASA MOC (mainframe mission operations computer) being powered
down for the last time.
Newsgroups: bit.listserv.ibm-main
Date: Tue, 13 Aug 2002 20:57:33 GMT
Dennis.Roach@USAHQ.UNITEDSPACEALLIANCE.COM (Roach, Dennis) writes:
Apollo   5   360-75
1986     5   370-168
Today    2   9121-610


slightly related, from a posting to an early '80s mailing list discussing the y2k problem (which strayed into other time-related issues):
https://www.garlic.com/~lynn/2000.html#94 Those who do not learn from history...

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Okay, we get it

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Okay, we get it.
Newsgroups: alt.folklore.computers
Date: Wed, 14 Aug 2002 13:43:26 GMT
jmfbahciv writes:
Huh. Worldcom, ala MCI a.k.a. something else actually delivered a service that AT&T had deemed to be beneath them; this was known as long distance service upon demand for a reasonable price.

What is really mystifying to me is how these companies keep overlooking their cash cows. In the case of AT&T, the BoDs actually did their damnedest to destroy it (it looks like they're succeeding).


my impression is that tariff & pricing also somewhat kept ISDN from taking off ... DSL finally came along at a much higher data rate and much lower cost. I had heard 2nd hand stories that the tariff/pricing for ISDN was kept that way because of worries that it might impact other cash cows.

I had ISDN for a couple years before 56kbit hardware compression modems ... and two-channel ISDN was frequently not any better than 56kbit hardware compressed (maybe slightly worse) ... at significantly higher cost.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

s/w was: How will current AI/robot stories play when AIs are

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: s/w was: How will current AI/robot stories play when AIs are
 real?
Newsgroups: alt.folklore.computers
Date: Wed, 14 Aug 2002 14:21:56 GMT
"Rupert Pigott" writes:
To be honest I view RISC & CISC as buzzwords, I don't think they really mean anything. The term RISC seems to have been coined as something to keep IBM PHB's happy.

my early impressions of 801 were

1) a reaction to FS ... which was about as far in the opposite direction as possible ... somewhat object oriented but in the hardware/microcode; the claim was that in the worst case the hardware would go thru five levels of indirection for a simple load or store instruction. there was a claim that if FS was implemented using the fastest current 370 hardware technology (195), FS applications would have thruput comparable to the equivalent 370 application running on a 145 (one of the lower-end 370 implementations), aka about a 10:1 slowdown.

2) attempt to design a processor that would fit on a single chip.

i believe that the term RISC actually came out of some educational institution.

Later on there were various comments out of the 801 group about hardware/software trade-offs ... and how lack of feature/function in the hardware could be compensated for by better compiler technology ... aka given the state-of-the-art at a specific point in time .... a) the range of things that were possible and not possible in hardware and b) the range of things that were possible and not possible in software ... having the hardware do slightly less and making the compiler smarter was claimed to have been a better solution.

25 years ago ... 801 was going to have a totally closed operating system environment ... with large amounts of stuff being done at compile time. basically everything that might currently be classified as privileged would be done at compile time and be validated at bind time. actual application execution didn't involve any "system calls" (in the sense that there might be a state & privilege change involved) because everything executed at the same privilege level. One thing this impacted was that there were only going to be 16 virtual memory registers ... restricting the number of concurrent, unique "memory" objects to 16. The justification was that inline application code could change virtual memory registers as easily as it could change values in any other register (there were no runtime issues of checking access privileges/rules when changing addressing). This continued up thru ROMP and RIOS ... even tho the operating system platform had changed to unix ... and changing access to memory objects required system calls as part of enforcing access rules.

Those trade-offs have changed over time ... the range of possible/not-possible for hardware chips has significantly changed in the last 25-30 years.

I would contend that the original propagation of the terms RISC and CISC might have been more due to the academic community simplifying descriptions for their constituency (students) than to the commercial world (in part because i believe the term RISC originated in the academic community). so a totally off-the-wall question is: do college students and PHBs fall into a similar classification?

and lots of random stuff:
https://www.garlic.com/~lynn/95.html#5 Who started RISC? (was: 64 bit Linux?)
https://www.garlic.com/~lynn/95.html#6 801
https://www.garlic.com/~lynn/95.html#9 Cache and Memory Bandwidth (was Re: A Series Compilers)
https://www.garlic.com/~lynn/95.html#11 801 & power/pc
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/97.html#5 360/44 (was Re: IBM 1130 (was Re: IBM 7090--used for business or
https://www.garlic.com/~lynn/98.html#25 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#26 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#27 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#129 High Performance PowerPC
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists
https://www.garlic.com/~lynn/2000.html#3 Computer of the century
https://www.garlic.com/~lynn/2000.html#16 Computer of the century
https://www.garlic.com/~lynn/2000.html#59 Multithreading underlies new development paradigm
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000c.html#3 RISC Reference?
https://www.garlic.com/~lynn/2000c.html#4 TF-1
https://www.garlic.com/~lynn/2000c.html#9 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000c.html#12 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000d.html#28 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000d.html#31 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#17 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#21 OT?
https://www.garlic.com/~lynn/2000f.html#27 OT?
https://www.garlic.com/~lynn/2000f.html#28 OT?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2000f.html#40 Famous Machines and Software that didn't
https://www.garlic.com/~lynn/2000f.html#56 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#18 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001c.html#84 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#12 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#44 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001f.html#30 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#33 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers
https://www.garlic.com/~lynn/2001f.html#45 Golden Era of Compilers
https://www.garlic.com/~lynn/2001g.html#23 IA64 Rocks My World
https://www.garlic.com/~lynn/2001g.html#36 What was object oriented in iAPX432?
https://www.garlic.com/~lynn/2001h.html#69 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001l.html#50 What makes a mainframe?
https://www.garlic.com/~lynn/2001n.html#42 Cache coherence [was Re: IBM POWER4 ...]
https://www.garlic.com/~lynn/2001n.html#46 Blinking lights
https://www.garlic.com/~lynn/2001n.html#65 Holy Satanism! Re: Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#43 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#23 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#29 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002c.html#1 Gerstner moves over as planned
https://www.garlic.com/~lynn/2002c.html#19 Did Intel Bite Off More Than It Can Chew?
https://www.garlic.com/~lynn/2002c.html#40 using >=4GB of memory on a 32-bit processor
https://www.garlic.com/~lynn/2002d.html#10 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#27 iAPX432 today?
https://www.garlic.com/~lynn/2002e.html#44 SQL wildcard origins?
https://www.garlic.com/~lynn/2002f.html#42 Blade architectures
https://www.garlic.com/~lynn/2002g.html#5 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#14 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#39 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#77 Pipelining in the past
https://www.garlic.com/~lynn/2002h.html#19 PowerPC Mainframe?
https://www.garlic.com/~lynn/2002h.html#23 System/360 shortcuts
https://www.garlic.com/~lynn/2002h.html#63 Sizing the application
https://www.garlic.com/~lynn/2002i.html#60 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002i.html#81 McKinley Cometh
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002j.html#8 "Clean" CISC (was Re: McKinley Cometh...)
https://www.garlic.com/~lynn/2002j.html#20 MVS on Power (was Re: McKinley Cometh...)
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

s/w was: How will current AI/robot stories play when AIs are

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: s/w was: How will current AI/robot stories play when AIs are
 real?
Newsgroups: alt.folklore.computers
Date: Wed, 14 Aug 2002 18:39:10 GMT
"Rupert Pigott" writes:
Stanford ?

Maybe I've got my history in a tangle then... But RISC does seem to have been more of a marketing label than something which corresponds to design tradeoffs. I'm sure some people know the difference but it sure as hell ain't the marketoids or general public. :)


random ref from search engine
http://www.cs.washington.edu/homes/lazowska/cra/risc.html
http://compilers.iecc.com/comparch/article/90-02-017
http://www-cse.stanford.edu/class/sophomore-college/projects-00/risc/about/interview.html
http://citeseer.nj.nec.com/context/98585/0
http://www.amigau.com/aig/riscisc.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unbelievable

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unbelievable.
Newsgroups: alt.folklore.computers
Date: Thu, 15 Aug 2002 18:50:56 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
The August 15 Ottawa Citizen has an item titled Linux battles growing pains, by Rachel Konrad of AP. She's reporting on the LinuxWorld conference, probably in San Francisco. The gist of the item is that Linux is becoming a serious mainstream operating system. However, when I read

IBM, which provides consulting services to companies interested in Linux, has more than 4,600 Linux projects with corporate clients. IBM says it has saved more than $10 million a year by moving its internal e-mail system to Linux servers.

I'm left somewhat incredulous.

On p. 13 of The REXX Language by M.F. Cowlishaw, there's a reference to how the development was done. "IBM has an internal network, known as VNET, that links over 1600 mainframe computers in 45 countries." That book is dated 1985.

For the life of me, I can't understand how IBM saves 10 Big Ones moving from an existing system to Linux servers.


i've posted numerous references to the size of the internal network ... with the claim that the internal network was larger than arpanet/internet up until possibly sometime in 85. the internal network had passed 1000 nodes in '83 ...
https://www.garlic.com/~lynn/99.html#112
https://www.garlic.com/~lynn/internet.htm

and 2000 nodes in 85 (given publishing times ... there would have been latency between the time the words were written and the time the book was actually published).

some random datapoints:

1) sometime in the late '90s i saw a diagram of several "mail" centers spread around the world that were on the internal network and also gatewayed to the internet

2) a communication in the mid '90s from a CSC alumnus about their work on lotus notes scaling issues supporting a 512-node SP system

3) Yahoo news a couple days ago mentions 320,000 employees
http://story.news.yahoo.com/news?tmpl=story&u=/nm/20020814/tc_nm/tech_ibm_jobcuts_dc_9

so assuming there were some number of 512-node machines in these centers, would the "ten big ones" represent closer to a two percent savings or a fifty percent savings? At 320,000 employees, $10m works out to a savings of about $31/employee (not very much, especially if it is a one-time capital cost savings amortized over 3-5 years .... works out to possibly as little as $6/person/annum).
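
purely as a back-of-the-envelope check of the numbers quoted above (nothing official; a hypothetical python sketch):

# back-of-the-envelope check; figures are just the ones quoted in this post
claimed_savings = 10_000_000          # "more than $10 million a year"
employees       = 320_000             # yahoo news figure quoted above

per_employee = claimed_savings / employees
print(f"savings per employee: ${per_employee:.2f}")               # about $31

# if it were a one-time capital saving amortized over five years
print(f"per employee per annum (5 yr): ${per_employee / 5:.2f}")  # about $6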

As an aside, I remember some statement in the 80s about there being 485,000 employees world-wide and for all I know it might have peaked higher.

slightly related to some of the "mainframe" growth issues during the period:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

misc. old refs to 1985 internal network
https://www.garlic.com/~lynn/2000e.html#13 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000g.html#14 IBM's mess (was: Re: What the hell is an MSX?)
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2001c.html#4 what makes a cpu fast
https://www.garlic.com/~lynn/2001e.html#32 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001h.html#9 VM: checking some myths.
https://www.garlic.com/~lynn/2001j.html#50 Title Inflation
https://www.garlic.com/~lynn/2001l.html#34 Processor Modes
https://www.garlic.com/~lynn/2001l.html#35 Processor Modes
https://www.garlic.com/~lynn/2001l.html#45 Processor Modes
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002k.html#12 old/long NSFNET ref

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Vnet : Unbelievable

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vnet : Unbelievable.
Newsgroups: alt.folklore.computers
Date: Thu, 15 Aug 2002 21:09:37 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Size is but one aspect. Sure, but the ARPAnet had greater architectural diversity (heterogenity), more interesting service functionality, initially a more open environment, less hierarchical-ity (SNA, s.v.p.?), (for shear numbers, I think the DEC enet was larger, faster, ping Barb, but less architectural (H/w and S/w) diversity), and a host of other issues which escape me at the moment. Email isn't everything.

until the great change-over on 1/1/83 ... arpanet was totally homogeneous using IMPs .... with a variety of processors that happened to be hooked to the IMPs.

the core vnet technology effectively had a layered approach with gateway function in every node and the ability to support a wide variety of different kinds of protocols and types of systems from the start (also the core vnet technology had no SNA content). arpanet didn't get heterogeneous internet and gateway support until the 1/1/83 change-over. One of my claims for why the internal network was larger than arpanet was the ease of adding nodes, because it effectively had heterogeneous and gateway support from the start. The other claim is that one of the main reasons the internet started its explosive growth and eventually overtook the internal network in size was the introduction of heterogeneous network and gateway support on 1/1/83.
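
purely as an illustration of the gateway-in-every-node idea (a minimal, hypothetical python sketch; invented names, not the actual RSCS/VNET code): every node carries its own routing table, each link can speak its own line protocol, and anything not addressed to the node itself is simply store-and-forwarded on the next hop:

# illustrative sketch only -- invented names, not actual RSCS/VNET source
class Link:
    def __init__(self, protocol, remote):
        self.protocol, self.remote = protocol, remote
    def send(self, origin, dest, payload):
        print(f"[{self.protocol}] forwarding {origin}->{dest} to {self.remote.name}")
        self.remote.receive(origin, dest, payload)

class Node:
    def __init__(self, name):
        self.name, self.routes = name, {}     # dest node name -> Link toward it
    def receive(self, origin, dest, payload):
        if dest == self.name:
            print(f"{self.name}: delivered {payload!r} from {origin}")
        elif dest in self.routes:             # store-and-forward on the next hop
            self.routes[dest].send(origin, dest, payload)
        else:
            print(f"{self.name}: no route to {dest}")

# three nodes; the two hops use different (invented) link protocols
a, b, c = Node("CAMBRIDG"), Node("POK"), Node("SANJOSE")
a.routes["SANJOSE"] = Link("bisync", b)
b.routes["SANJOSE"] = Link("ctc", c)
a.receive("CAMBRIDG", "SANJOSE", "banner file")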

A secondary claim regarding the internet exceeding the size of the internal network by '85 (within three years of the introduction of heterogeneous and gateway support for the internet) was the explosion of PCs and workstations as nodes.

PCs saw an initial huge uptake because of the availability of terminal emulation (market penetration). Basically a single keyboard/display on the desk could both do 3270 mainframe operation as well as switch to emerging applications that ran only locally on the PC or workstation (every 3270 could be replaced with a PC).

By the mid-80s there was effectively huge organizational and revenue inertia against allowing those installed machines to be converted from the terminal emulation paradigm to a full network node paradigm. There is some folklore that the initial implementation of TCP/IP in VTAM ran so much faster than LU6.2 that it was sent back with specific directions that the implementation was obviously faulty, and that the only way there could possibly be a correct TCP/IP implementation was if the result had lower thruput than LU6.2.

prior homogeneous/heterogeneous mentions
https://www.garlic.com/~lynn/99.html#44 Internet and/or ARPANET?
https://www.garlic.com/~lynn/2000.html#74 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/2000e.html#13 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#14 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#30 Is Tim Berners-Lee the inventor of the web?
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Vnet : Unbelievable

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vnet : Unbelievable.
Newsgroups: alt.folklore.computers
Date: Fri, 16 Aug 2002 01:32:28 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
No problem in 1983. But IBM and AT&T also poopooed it in 1968/9. SNA started when? When were the first packet voice experiments? I think the ARPAguys targetted voice around 1975 (although I only read that years later). --- snip --- You left out RSCS and Ira Fuchs and Bitnet.

at csc there was a push to try and get peachtree (aka S/1) as the microprocessor for the 3705 .... instead of the UC.5 that they were planning on using. UC.5 was a significantly less capable microprocessor than the peachtree used in the S/1.

SNA started with VTAM & NCP ... what, '74? .... losing memory cells too. Prior to VTAM there was TCAM running on the 370 and ???? (something else) running in the 3705. Melinda's paper mentions VM/370 adding native support for NCP in january, 1975.

there is also some folklore that the design of VTAM & NCP was significantly motivated by a project that I was involved in as an undergraduate which did the original plug-compatible 360 control unit and supposedly originated the 360 PCM controller business.
https://www.garlic.com/~lynn/submain.html#360pcm

There have been jokes that SNA is not a system, not a network, and not an architecture .... it was a terminal control infrastructure designed to manage tens of thousands of terminals and possibly also to have a really, really complex interface between SSCP/PU5 (vtam) and NCP/PU4 (running in the 3705). The first instance of a network layer supposedly within the SNA camp was APPN. The SNA group non-concurred with the announcement of APPN ... and the announcement was held up for six weeks while the issue was escalated. The announcement letter for APPN was finally carefully crafted to make sure there were no statements that indicated that APPN and SNA were in any way related.

VNET was announced as RSCS in January 1975 (also referenced in Melinda's paper):
https://www.leeandmelindavarian.com/Melinda#VMHist

re: the attachment from Melinda's paper

1) much of the early "SUN" network referenced below was derived from networking code written by TUCC for HASP; in fact much of the source code still carried the characters "TUCC" in cols. 68-71.

2) I have hardcopy of the internal network dated 4/15/77
https://www.garlic.com/~lynn/2002j.html#4 HONE

from Melinda's paper:
J. The birth of VNET

VNET, IBM's internal network, united and strengthened the VM community inside IBM in the same way that VMSHARE united and strengthened the VM community in SHARE and SHARE Europe. The VNET network, like many of the other good things we have today, was put together ''without a lot of management approval'', to quote Tim Hartmann, one of the two authors of RSCS. VNET arose because people throughout IBM wanted to exchange files. It all started with Hartmann, a system programmer in Poughkeepsie, and Ed Hendricks, at the Cambridge Scientific Center. They worked together remotely for about ten years, during which they produced the SCP version of RSCS (which came out in 1975), and the VNET PRPQ (which came out in 1977). After that, RSCS was turned over to official developers.

The starting point for RSCS was a package called CPREMOTE, which allowed two CP-67 systems to communicate via a symmetrical protocol. Early in 1969, Norm Rasmussen had asked Ed Hendricks to find a way for the CSC machine to communicate with machines at the other Scientific Centers. Ed's solution was CPREMOTE, which he had completed by mid-1969. CPREMOTE was one of the earliest examples of a service virtual machine and was motivated partly by the desire to prove the usefulness of that concept. CPREMOTE was experimental and had limited function, but it spread rapidly within IBM with the spread of CP-67. As it spread, its "operational shortcomings were removed through independent development work by system programmers at the locations where [new] functions were needed." 116 Derivatives of CPREMOTE were created to perform other functions, such as driving bulk communications terminals. One derivative, CP2780, was released with VM/370 shortly after the original release of the system.

By 1971, CPREMOTE had taught Hendricks so much about how a communications facility would be used and what function was needed in such a facility, that he decided to discard it and begin again with a new design. After additional iterations, based on feedback from real users and contributions of suggestions and code from around the company, Hendricks and Hartmann produced the Remote Spooling Communications Subsystem (RSCS). When the first version of RSCS went out the door in 1975, Hendricks and Hartmann were still writing code and, in fact, the original RSCS included uncalled subroutines for functions, such as store-and-forward, that weren't yet part of the system. The store-and-forward function was added in the VNET PRPQ, first for files, and then for messages and commands. Once that capability existed, there was nothing to stop a network from forming. Although at first the IBM network depended on people going to their computer room and dialing a phone, it soon began to acquire leased lines. The parts of IBM that were paying for these lines were not always aware of what they were paying for. Since the network grew primarily because the system programmers wanted to talk to one another, a common way of acquiring leased lines for the network was to go to one's teleprocessing area and find a phone circuit with nothing plugged into it.

The network was originally called SUN, which stood for ''Subsystem Unified Network'', but at first it wasn't actually unified. It was two separate networks that needed only a wire across a parking lot in California and a piece of software (which became the RSCS NJI line driver) to make them one. Hartmann spent some time in California reverse-engineering the HASP NJI protocol, which hadn't really been written down yet, and finally got that last link up late one evening. Wishing to commemorate the occasion, he transferred some output from a banner printing program running on his system in Poughkeepsie through the network to a printer in San Jose. His co-worker in San Jose, Ken Field, the author of the original HASP NJI code, thought Tim's output was pretty nifty, so he asked for more copies and taped them up on the walls before finally going home to get some sleep. When Field got back to work late the next morning, he found the place in an uproar over the apparent unionization attempt. The banners had read: Machines of the world unite! Rise to the SUN! After that got quieted down, the network began to grow like crazy. At SHARE XLVI, in February, 1976, Hendricks and Hartmann reported that the network, which was now beginning to be called VNET, spanned the continent and connected 50 systems. At SHARE 48, a year later, they showed this map of the network. By SHARE 52, in March, 1979, they reported that VNET connected 239 systems, in 38 U.S. cities and 10 other countries. In August, 1982, VMers celebrating VM's tenth birthday imprudently attempted to hang the current VNET network map up at SCIDS. By that time, a circuit analysis program was being used to generate the network


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Vnet : Unbelievable

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vnet : Unbelievable.
Newsgroups: alt.folklore.computers
Date: Fri, 16 Aug 2002 02:29:02 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
You left out RSCS and Ira Fuchs and Bitnet.

random of bitnet & earn (european bitnet) refs:
https://www.garlic.com/~lynn/99.html#38c Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#39 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#126 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000c.html#61 TF-1
https://www.garlic.com/~lynn/2000d.html#72 When the Internet went private
https://www.garlic.com/~lynn/2000e.html#15 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000f.html#22 Why trust root CAs ?
https://www.garlic.com/~lynn/2000f.html#51 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000g.html#39 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001c.html#19 What is "IBM-MAIN"
https://www.garlic.com/~lynn/2001h.html#65 UUCP email
https://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#57 Computer Naming Conventions
https://www.garlic.com/~lynn/2002d.html#33 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2002e.html#6 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2002i.html#44 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002j.html#29 mailing list history from vmshare
https://www.garlic.com/~lynn/2002j.html#75 30th b'day

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Vnet : Unbelievable

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vnet : Unbelievable.
Newsgroups: alt.folklore.computers
Date: Fri, 16 Aug 2002 03:20:54 GMT
Tom Van Vleck writes:
I interviewed with Jim Gray before being offered the job. He had left IBM after a notorious internal conversation within IBM about the difficulty of getting computer resources for developers which contrasted IBM's approach with Tandem's, which was known as the "Tandem Letters." Perhaps Lynn has more on this topic.

Certainly when I started at Tandem, I was issued a (block mode) terminal to take home so I could work from home on evenings and weekends.


i 'now nothun, i 'now nothun ... they tried to hang the whole thing on me ... datamation even said i did it
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#6 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2001j.html#31 Title Inflation

share is having the 30th anniversary of the vm/370 announcement in san fran next week. As an undergraduate, I got to be part of the announcement of its predecessor, cp/67, at the spring '68 share meeting in houston (i.e. 35 years next spring).

jim and I used to have "fridays" (before he left for tandem ... they did continue after he left) and we would get people from research and GPD software ... and even sometimes disk engineers.

there is an attempt to have a fridays next week somewhere in the south san jose area (few of us can stay up to nearly daybreak drinking beer anymore tho).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Vnet : Unbelievable

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vnet : Unbelievable.
Newsgroups: alt.folklore.computers
Date: Fri, 16 Aug 2002 14:20:00 GMT
jmfbahciv writes:
I don't have numbers, sorry. I do know that areas got invented because we had too many nodes for the field that was defined for the node number. But that was DECnet and long ago. I think you're talking about a time later than that.

one of the problems with the "SUN", JES2/NJI, etc. implementation inherited from TUCC was that the "network nodes" fit in spare slots in the 255-entry pseudo device table. HASP implemented spooling by defining a whole pool of pseudo unit record devices (using a one-byte index) ... which it intercepted and mapped to spool ... and then managed the real unit record devices from spool. Network nodes were then managed as spare entries in the pseudo device table. A typical installation might have 40 to 80 pseudo devices defined .... leaving something like 200 slots available for defining network nodes.

the vm/370 rscs/vnet implementations never had such a restriction. it was one of the reasons that real NJI nodes could never be anything but boundary/end nodes on the internal network ... since they had a feature that any network transmission they saw would be tossed if either the source or the destination wasn't defined in their local table (even if the destination was for them ... and just the source was unknown ... they would still toss it).

At no time did the NJI implementation support the size of the internal network. NJI eventually was enhanced to raise the pseudo device table from 255 to 999 entries ... but that was after the internal network was larger than 999 nodes.
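
a hedged sketch of the arithmetic behind that limit (python just for illustration; names invented, the real HASP/JES2 tables were assembler control blocks, nothing like this):

# one-byte index => at most 255 usable entries in the pseudo device table
TABLE_SIZE = 255

def free_node_slots(pseudo_unit_record_devices):
    # whatever is left after the spooling pseudo devices are defined is all
    # that can be used for naming network nodes
    return TABLE_SIZE - pseudo_unit_record_devices

print(free_node_slots(40))   # 215 possible node definitions
print(free_node_slots(80))   # 175 possible node definitions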

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

computers and stuff

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: computers and stuff.
Newsgroups: comp.lang.asm370,alt.folklore.computers
Date: Fri, 16 Aug 2002 14:37:52 GMT
Liam Devlin writes:
Lotus was the PC's killer app - it made the PC an indispensible business machine. People already had word processing (although it certainly improved on the PC) and companies don't care about games.

the fundamental killer app was that there was a single keyboard and display on the desk that could do both mainframe terminal emulation and also execute local programs .... one of the emergent local killer apps was spreadsheets .... originally visicalc and eventually lotus.

basically the "cost" of the PC could be nearly covered by it being a mainframe terminal placed on the desk .... and the local program execution then was frosting on the cake ... almost for free. The human factors benefit of not having two keyboards and two displays on the desk was significant ... but as significant, or more so, was not having duplicate costs. Selling into the significant business mainframe terminal market allowed a critical-mass install base to be reached very quickly ... that would have never happened if it had just been targeted at the home hobbyist market.

My brother at one time was regional apple marketing rep (largest region in the continental us ... at least in sq. miles) ... and when he would come into town there sometimes were dinners with some of the people developing the mac (before announce) and I would get to argue with them about taking advantage of the business market to reach critical mass.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

miscompares per read error

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: miscompares per read error
Newsgroups: comp.arch.storage,alt.folklore.computers
Date: Fri, 16 Aug 2002 21:09:23 GMT
"Bill Todd" writes:
It's a subject I've been curious about for many years, but have seen very little information about. Gray & Reuter (in 'Transaction Processing') talk about 'reliable read' operations which compare both mirror copies (they suggest a time-stamp to identify the one that the 'wild write' failed to update) but indicate that most systems choose not to incur the additional overhead of this protection. And a few systems (I believe IBM's AS/400 is one) use larger-than-normal disk sectors to include an explicit identifier (e.g., file ID plus offset) which, while it may not always help identify a failed write, will at least catch the incorrect location the 'wild' write hit).

the mainframe controller group in san jose spent huge resources and effort working on architecture and design for mainframe disks during much of the '80s ... but it never happened. Part of the reason was a huge legacy problem where customers had applications that explicitly formatted tracks in all sorts of weird sizes. The basic data unit of this design was 4k-byte records (plus the additional hidden header and trailer) .... there were also objects referred to as "King Pages" which were additional out-of-band stuff.

In the mid-80s, I did somewhat of a software implementation using the vm/370 "spool" file system, where things were much more tightly regulated/constrained (i.e. a closed environment with no direct access to physical characteristics). This had both additional information before the start of the actual data as well as a logical sequence number and other stuff in the trailer (a time-stamp sort of implies ordering with the rest of the world ... a careful sequence number can be sufficient to provide the necessary ordering within a constrained environment ... some papers describe this as virtual time).

was able to demonstrate that in a worst-case scenario ... it was possible to physically read every track and reconstruct whatever recoverable information might exist (i.e. every record effectively contained "self-describing" information).
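
a minimal sketch of the self-describing record idea (an invented python layout, not the actual vm/370 spool format): each block carries a file id, a logical sequence number and a trailer check, so a raw scan of everything readable on disk can regroup and reorder whatever survives:

# invented layout -- illustrates the idea, not the actual spool record format
import zlib
from dataclasses import dataclass

@dataclass
class SpoolRecord:
    file_id: int        # which logical file this block belongs to
    seq: int            # logical sequence number (ordering without timestamps)
    data: bytes
    checksum: int = 0   # trailer check, filled in by seal()

    def seal(self):
        self.checksum = zlib.crc32(self.data)
        return self

def reconstruct(raw_records):
    # worst-case recovery: take every record found anywhere on disk, keep the
    # ones whose trailer verifies, group by file and reorder by sequence number
    files = {}
    for r in raw_records:
        if zlib.crc32(r.data) == r.checksum:
            files.setdefault(r.file_id, []).append(r)
    return {fid: [r.data for r in sorted(recs, key=lambda r: r.seq)]
            for fid, recs in files.items()}

# records read back in arbitrary physical order
raw = [SpoolRecord(7, 1, b"second block").seal(),
       SpoolRecord(7, 0, b"first block").seal(),
       SpoolRecord(9, 0, b"other file").seal()]
print(reconstruct(raw))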

I also took the opportunity to implement various added functions and performance improvements .... so the additional overhead was effectively the space on disk:
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
https://www.garlic.com/~lynn/2001n.html#7 More newbie stop the war here!
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

DEC eNet: was Vnet : Unbelievable

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC eNet: was Vnet : Unbelievable.
Newsgroups: alt.folklore.computers
Date: Fri, 16 Aug 2002 21:23:27 GMT
jcmorris@mitre.org (Joe Morris) writes:

BITNET    435
ARPAnet  1155
CSnet     104 (excluding ARPAnet overlap)
VNET     1650
EasyNet  4200
UUCP     6000
USENET   1150 (excluding UUCP nodes)

there were desk things given out for passing 1000 nodes in '83 and 2000 nodes in '85. i've got the specific announcement for what was the 1000th node and the date ....
https://www.garlic.com/~lynn/99.html#112

but don't have the same for the 2000th node.

the '83 one is a clear plexiglass tripod that a clear plexiglass ball sits in. Embedded in the ball (sitting here on my desk) is "1000 nodes" at the top and "vnet 1983" & IBM at the bottom. In the middle is a stylized flat map of the world with 31 little red dots and red lines connecting the dots.

I don't know where the '85 one is ... I seem to remember it more like a wall plaque, and it is in some box someplace.

Note also that EARN had started in late '83, which may or may not be included in the VNET list:
https://www.garlic.com/~lynn/2001h.html#65

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

computers and stuff

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: computers and stuff.
Newsgroups: comp.lang.asm370,alt.folklore.computers
Date: Sat, 17 Aug 2002 00:17:49 GMT
Liam Devlin writes:
Visicalc was the Apple's killer app & Lotus did the same for the PC. The connectivity came later.

while visicalc may have started on the apple II ... i remember having it on the ibm/pc well before any lotus.

random refs:
http://www.bricklin.com/history/vcexecutable.htm
http://www.bricklin.com/visicalc.htm
http://www.bricklin.com/history/othersites.htm


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

computers and stuff

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: computers and stuff.
Newsgroups: comp.lang.asm370,alt.folklore.computers
Date: Sat, 17 Aug 2002 15:27:34 GMT
J. Clarke writes:
Visicalc was available on the PC, but it wasn't the "killer app" for the PC because it didn't do anything that the version that ran on the Apple II didn't do. Lotus took full advantage of the PC's larger address space and better display.

the point wasn't that it had to be the one & only killer app. the point was having a single display & keyboard that could do both mainframe terminal emulation and at least some (emergent) local apps.

Nominally a single killer app or silver bullet has to justify the expense and operation nearly by itself.

The assertion that I made was that the PC's cost and function could be nearly covered by its use as a mainframe terminal ... and that any use for local processing was gravy. Given that assertion ... visicalc on the PC didn't have to do more than on the apple II .... it just had to do as much as the apple II and not require both a mainframe terminal and an apple II to sit side by side on the same desk.

The killer app in that sense was not so much the individual applications ... it was the ability to have a number of different kinds of things that previously required multiple keyboards and displays on the desk ... all packaged in a single keyboard/display (aka a human engineering scenario with regard to the size of the footprint on the desk and having to physically switch keyboards/displays).

For a long time ... I got by with PC as a mainframe terminal and turbo pascal's tinycalc ... even tho i had visicalc (and eventually had access to lotus).

The marketing issue based on this assertion wasn't claiming that the IBM/PC was better at running spreadsheets than the Apple II (and selling against the Apple II market) ... it was that the IBM/PC was better than a mainframe terminal, because it could both be a mainframe terminal and something else.

The install base of mainframe terminals was probably at least two orders of magnitude larger than the Apple II install base (maybe three orders of magnitude; there were probably single companies that had more mainframe terminals than the total number of Apple IIs). Visicalc on the IBM/PC could have been worse than on the Apple II and it would still meet the criteria in my assertion. My assertion was that the IBM/PC was being sold as a better mainframe terminal into a really, really larger market that spent more money on each mainframe terminal than the cost of an Apple II. In that market and sales orientation, whether or not the Apple II existed, was cheaper, and/or had a better spreadsheet wouldn't have entered into the equation. The assertion wasn't that the IBM/PC had to be better than an Apple II; the assertion was that the IBM/PC had to be better than a mainframe terminal at relatively the same cost.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

computers and stuff

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: computers and stuff.
Newsgroups: comp.lang.asm370,alt.folklore.computers
Date: Sat, 17 Aug 2002 19:21:11 GMT
lesterDELzick@worldnet.att.net (Lester Zick) writes:
I have a feeling we're heading in an unappetizing direction vis-a-vis Apple vs. PC success, but I would like to offer what I remember of that history. Please bear in mind that I left the business in 1980 around the time the Apple II was announced.

At the time the company I worked for had a large number of coax linked 3270 terminals distributed in offices and linked via controller to various MVS mainframes. At the time I left the business there were no PC based mainframe terminal emulators. The first experience I had with these came around 1984, which isn't to say that none came earlier, just that they were not commercially prevalent. And spreadsheet processing was the driving force for the PC that I remember.

The real problem with PC emulation controller driven dumb terminals was the asynchronous communications, generally limited to 1200 baud around 1980. The IBM 3270 ran terminals at speeds up to 9600 baud at least. DEC had some higher speed asynch terminals I believe for their PDP 11 miniframes, but the IBM communications architecture was EBCDIC synchronous rather than ASCII asynchronous and would have required a programmed controller interface at the terminal site.


taking those statements, I might then infer

a) the PC market prior to the ibm/pc was made up of the apple II and some number of other machines. that market had a specific kind of profile ... including the use of visicalc on apple IIs and some number of other machines. The IBM/PC's initial introduction in 1981 didn't significantly impact the nature or size of that market, and it wasn't able to create a new market segment in the traditional ibm data processing environment.

b) It wasn't until the introduction of lotus 123 in 1983 that there was a significant new or expanded market for PCs ... specifically the new customers that found the expanded capability of lotus123 on the ibm/pc to be a new killer app.

c) so prior to the lotus123 introduction, there shouldn't be any significant spikes in the total PC market in all of '81 and '82 (the period before and after the ibm/pc was introduced, extending up to the introduction of lotus123).

d) after 1983, the total number of lotus123 sales should correspond fairly closely to the total ibm/pc sales

==================================

i don't have any idea what the market size numbers were, so I have no way of quantitatively comparing your assertion to my assertion.

as to unpleasant ... i may have gotten into some heated debates with some mac developers (before the mac was even announced) about the requirement for mainframe terminal emulation, but I don't think it was unpleasant. i was somewhat constrained ... as mentioned in an opening post related to this thread ... during the period my brother was regional apple rep (a couple-state area), having worked his way up from being an apple II expert ... random refs:
https://www.garlic.com/~lynn/99.html#28 IBM S/360
https://www.garlic.com/~lynn/2000d.html#22 IBM promotional items?
https://www.garlic.com/~lynn/2000g.html#13 IBM's mess (was: Re: What the hell is an MSX?)
https://www.garlic.com/~lynn/2001e.html#38 IBM Dress Code, was DEC dress code
https://www.garlic.com/~lynn/2001n.html#21 Unpacking my 15-year old office boxes generates memory refreshes
https://www.garlic.com/~lynn/2002k.html#24 computers and stuff

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

computers and stuff

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: computers and stuff.
Newsgroups: comp.lang.asm370,alt.folklore.computers
Date: Sun, 18 Aug 2002 18:55:52 GMT
lesterDELzick@worldnet.att.net (Lester Zick) writes:
Nor do I. I remember that PC sales started off fairly slowly because there was no particular support from IBM via its mainframe base. However, once development on the PC base really came into its own, sales picked up substantially and continued to rise as various manufacturers and software developers jumped into the market.

there is also some segmentation in the mainframe terminal market. a sizeable percentage were for clerks and order/entry people. there is very little advantage in offering them local options (aka you are on the telephone to a company ordering something ... and you want the person you are talking to ... to be typing in your order ... not playing with some spreadsheet).

the bimodal operation ... would be more desirable for professionals, decision makers and knowledge workers.

this differentiation would change later with terminal emulation screen-scraping utilities that allowed development of better (& more efficient) human factors interfaces for clerks, data-entry people, etc ... w/o requiring changes to the mainframe system.

totally separate line ... my brother used to mention dialing into the corporate s/38 order management system that was used to run the business.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

general networking is: DEC eNet: was Vnet : Unbelievable

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: general networking is: DEC eNet: was Vnet : Unbelievable.
Newsgroups: alt.folklore.computers
Date: Mon, 19 Aug 2002 19:33:26 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
When I had to deal battles for Cray operating systems, another guy took the lead on network protocols, Mark Aaker. Mark has since left computers for law. We were still interested in an interesting new protocol called LINCS which has since died. 4.2BSD had an interesting concept called Protocol Families which was never adequately appreciated. Instead people got wrapped up in UDP (which no one had ever heard of at the time). SGI went to XNS (waste of time). LLNL ignored Families and went directly into kernel hacking (waste of time). And we were fully expecting Ultrix to have DECnet as another protocol family. They may still come back.

wasn't the file management system also called LINCS? ... my wife and I were heavily involved in the UNIX port ... which was marketed by General Atomics as unitree (along with the LANL stuff that they were marketing called datatree). wasn't llnl also involved in VMTP? (rfc1045; i always remember the number because a) i had done rfc1044 support for ibm mainframe tcp/ip, and b) it was a reference for the protocol engines technical advisory board ... where i was one of the reps). and sgi people were heavily behind the protocol engines XTP work (pipelining, trailer protocols, etc).

on the hardware side ... LANL pushed the parallel copper HiPPI as standards work for the cray channel ... and LLNL pushed the work to turn a serial copper non-blocking switch into a fiber standard (aka FCS).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Looking for security models/methodologies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Looking for security models/methodologies
Newsgroups: alt.security
Date: Mon, 19 Aug 2002 20:08:23 GMT
"Tim" writes:
I am designing a small web-based application and I'm trying to find information about different security models.

I'm not looking for any technical details at all. What I'd like is a high-level model that describes users, access levels, groups, objects and different methodologies for applying security. What are different ways of assigning access to users, bunches of users, or to various parts of the application, etc.

I've only had very limited success so far. Everything seems to jump right into technical discussions. I'd like to decide what I want to do before I worry about the technical side of how to implement it.

Are there any good resources on line for this? Or any good books?


there is the area of authentication and the area of access control or permissions ... and the area of classifications (aka you authenticate somebody, then you associate permissions with the entity that you have authenticated ... the permissions are frequently associated with access to objects that have been categorized).

one paradigm approach to this has been NIST's work on role-based access control. a "role" is abstracted along with all the permissions needed to fulfill that role. then security officers just have to assign a role to somebody ... and the permissions sort of flow from the underlying infrastructure that implements the model. This is a simplifying paradigm for security officers managing large, complex enterprise environments. One issue that has cropped up is that individuals frequently have to be assigned multiple roles ... and the union of all the permission sets for multiple roles can open up unanticipated fraud avenues.
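
purely as a hedged illustration (a minimal python sketch with invented names, not NIST's reference model): permissions hang off roles, users get roles, and a user's effective permissions are the union across all assigned roles ... which is exactly where the unanticipated combinations come from:

# illustrative RBAC sketch -- roles carry permissions, users carry roles
ROLE_PERMS = {
    "teller":   {"read_account", "post_deposit"},
    "auditor":  {"read_account", "read_audit_log"},
    "sysadmin": {"create_user", "reset_password"},
}

USER_ROLES = {"alice": {"teller"}, "bob": {"teller", "sysadmin"}}

def effective_permissions(user):
    # union of the permission sets of every role assigned to the user;
    # multiple roles can combine into something no single role intended
    perms = set()
    for role in USER_ROLES.get(user, ()):
        perms |= ROLE_PERMS.get(role, set())
    return perms

def check(user, permission):
    return permission in effective_permissions(user)

print(check("alice", "reset_password"))   # False
print(check("bob", "reset_password"))     # True -- via the extra role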

use search engines for RBAC, NIST, role-based access control. There are papers and other information on the NIST web site.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

general networking is: DEC eNet: was Vnet : Unbelievable

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: general networking is: DEC eNet: was Vnet : Unbelievable.
Newsgroups: alt.folklore.computers
Date: Tue, 20 Aug 2002 03:00:35 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
I think this was mostly before HiPPI as most people were still trying to drive HYPERchannel, and IBM Token Ring was battling the "less efficient" Ethernet.

almaden research got hit on that when they compared 10mbit enet and 16mbit t/r over the same CAT4 wiring and decided on ....

we also got it when we were publishing numbers for enet and 16mbit t/r as part of this new idea we were pushing for a three-tiered paradigm and middle layer (especially zapped by the SAA people ... it is interesting that i ran into the executive that had been in charge of SAA today at share in san fran ... he is doing something completely different now).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

30th b'day .... original vm/370 announcement letter (by popular demand)

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: 30th b'day .... original vm/370 announcement letter (by popular demand)
Newsgroups: bit.listserv.vmesa-l
Date: Mon, 19 Aug 2002 20:52:14 -0600
... following is a posting by Joe Morris (mitre) to alt.folklore.computers ... for those that don't have newsgroup access .... or access to google newsgroup search

note that in CP/67, cms had the capability to run on the bare iron ... that was disabled in the conversion to vm/370.

===========================================

For nostalgia types, here is the text of the original VM/370 announcement, OCR-scanned from the blue letter...

========== begin included text ==========


IBM                  Data Processing Division     PROGRAM ANNOUNCEMENT

VM/370 PROVIDES VIRTUAL MACHINE, VIRTUAL STORAGE, AND TIME SHARING
SUPPORT FOR SIX SYSTEM/370 MODELS

SCP 5749-010

Virtual Machine Facility/370 (VM/370) is System Control Programming for
System/370 Models 135, 145, 155 II, 158, 165 II and 168.

Its major functions are:

. Multiple concurrent virtual machines with virtual storage support.
. Time sharing support provided by a conversational subsystem.

Role in Advanced Function Announcement

VM/370 is complementary to OS/VS2, OS/VS1 and DOS/VS, offering our
customers extended capabilities and additional virtual storage-based
functions.

Oriented to the on-line environment, VM/370 can be a significant assist
in the development and installation of new applications, and can help
justify additional equipment through satellite systems, additional
storage and I/O, and CPU upgrades. Use it to help move your customers
to virtual storage systems, and to help them grow when they get there.

VM/370 Highlights

. Virtual machine, virtual storage, and time sharing support.
. The execution of multiple concurrent operating systems, including
DOS, DOS/VS, OS/MFT, MVT, VS1 and VS2, and VM/370 itself.
. Virtual storage facilities for operating systems which do not support
Dynamic Address Translation, such as OS/MFT.
. A general-purpose time sharing system suitable for both problem
solving and program development, available to customers beginning
with a 240K byte Model 135.
. Capability of running many types of batch problem-solving
  applications from a remote terminal with no change in the batch
program.
. Up to 16 million bytes of virtual storage available to each user.
. Capability of performing system generation, maintenance, and system
testing concurrent with other work.
. A high degree of security, isolation, and integrity of user systems.
. The ability for many users to test privileged code in their own
virtual machines.
. An aid in migrating from one operating system to another.
. Device address independence for all supported operating systems.
. Multiple forms of disk protection, e.g., preventing users from
writing and/or accessing specific disks.
. Ability to use virtual machines to provide backup for other systems.
. Options to improve the performance of selected virtual machines.
. Ability to run many System/370 emulators in virtual machines.

Customers who should consider VM/370

. Large, multi-system users: satellite systems for virtual machine
applications and on-line program development.
. Customers not yet large enough to utilize TSO and who are interested
in on-line program development and/or interactive application
programs.
. Universities, colleges, and schools: time sharing applications for
  students, faculty, research and administration.
. Users of non-IBM systems: VM/370 is a strong new IBM entry with many
  advanced functional capabilities.
. Customers considering conversion from DOS to OS or OS/VS: VM/370 can
assist through its virtual machine function, and can supplement the
DOS emulator available with OS systems.
. Mixed systems or mixed release installations, including those using
PS/44 or modified back releases of DOS or OS.
. Customers with high security requirements: operating applications in
separate virtual machines may provide an extra measure of security.
. Current CP/67 users: the features of the virtual storage-based
Control Program 67/Cambridge Monitor System (CP-67/CMS), originally
  designed and implemented in 1968 for use on the System/360 Model 67,
have been refined and improved to form the foundation for VM/370.

Description

VM/370 is a multi-access time shared system with two major elements:

. The Control Program (CP) which provides an environment where multiple
  concurrent virtual machines can run different operating systems, such
as OS, OS/VS, DOS and DOS/VS, in time-shared mode.
. The Conversational Monitor System (CMS) which provides a general-
purpose, time-sharing capability.

Multiple Concurrent Virtual Machines

The control program of VM/370 manages the resources of a System/370 to
provide virtual storage support through implementation of virtual
machines. Each terminal user appears to have the functional
capabilities of a dedicated System/370 computer at his disposal.
Multiple virtual machines may be running conversational, batch, or
teleprocessing jobs at the same time on the same real computer. A user
can define the number and type of I/O devices and storage size required
for his virtual machine application provided sufficient resources are
available with the real machine's configuration.

A customer can concurrently run many versions, levels, or copies of IBM
operating systems under VM/370, including DOS, DOS/VS, OS, OS/VS, and
VM/370 itself. (See sales manual pages for the major restrictions
pertaining to the operation of systems in virtual machines.)

The capability of running multiple virtual machines should assist the
customer in scheduling multiple operating systems and various mixes of
production jobs, tests, program maintenance, and FE diagnostics. It can
aid new systems development, reduce the problems of converting from one
operating system to another, and provide more economical backup
facilities.

Time Sharing

The Conversational Monitor System (CMS) component of the VM/370 system
provides a general-purpose, conversational time sharing facility that
is suitable for general problem solving and program development, and
can serve as a base for interactive applications.

CMS, specifically designed to run under VM/370, provides broad
functional capability while maintaining a relatively simple design.

CMS can help programmers become more productive and efficient by
reducing unproductive wait time. CMS also allows non-programmers such
as scientists, engineers, managers, and secretaries to become more
productive via its problem-solving and work-saving capabilities. CMS
gives the user a wide range of functional capabilities, such as:
creating and maintaining source programs for such operating systems as
DOS and OS on CMS disks; compiling and executing many types of OS
programs directly under CMS; setting up complete DOS or OS compile,
linkedit and execute job streams for running in DOS or as virtual
machines; and transferring the resultant output from those virtual
machines back to CMS for selective analysis and correction from the
user's remote terminal.

Service Classification

VM/370 is System Control Programming (SCP).

Note: VM/370 does not alter or affect in any way the current service
classification of any IBM operating system, language, program product,
or any other type of IBM program while under the control of VM/370.

Language Support for CMS

A VM/370 System Assembler is distributed as a part of the system and is
required for installation and maintenance. All necessary macros are
provided in CMS libraries.

The following is distributed with VM/370 as a convenience to the
customer but is not part of the SCP.

A BASIC language facility consisting of the CALL-OS BASIC (Version 1.1)
Compiler and Execution Package adapted for use with CMS. This facility
will receive Class A maintenance by the VM/370 Central Programming
Service.

The following program products may also be ordered for use with CMS:

OS Full American National Standard COBOL V4 Compiler and Library   5734-CB2
OS Full American National Standard COBOL V4 Library                5734-LM2
OS FORTRAN IV (G1)                                                 5734-F02
OS FORTRAN IV Library Mod I                                        5734-LM1
OS Code and Go FORTRAN                                             5734-F01
OS FORTRAN IV H Extended                                           5734-F03
OS FORTRAN IV Library Mod II                                       5734-LM3
FORTRAN Interactive Debug                                          5734-F05
OS PL/I Optimizing Compiler                                        5734-PL1
OS PL/I Resident Library                                           5734-LM4
OS PL/I Transient Library                                          5734-LM5
OS PL/I Optimizing Compiler and Libraries                          5734-PL3

Further details on language support and execution-time limitations
appear in the manual IBM Virtual Machine Facility/370: Introduction,
and in the Program Product section of the sales manual.

Availability

VM/370 has a planned availability of November 30, 1972, supporting the
Dynamic Address Translation facility on the System/370 Models 135 and
145. Planned support for certain advanced VM/370 facilities, other
System/370 machines, and additional I/O devices will be via Independent
Component Releases on the dates shown below.

ICR1, planned for April 1973, will support the System/370 Models 155
II, the 158, the Integrated File Adapter Feature (4655) for 3330 Model
1 and 3333 Model 1 on the Model 135, and the following additional
VM/370 facilities:

. The Virtual=Real and Dedicated Channel performance options.
. The virtual and real Channel-to-Channel Adapter
. Support of OS/ASP in a VM/370 environment, effective with the
availability of ASP Version 3
. The 3811 Control Unit and the 3211 Printer.

ICR2, planned for August 1973, will support the CMS Batch Facility, the
Model 168, and the Integrated Storage Controls (ISCs) for the 158 and
168.

ICR3, planned for December 1973, will support the 165 II.

See the respective program product announcement letters for planned
availability of the program products for CMS.

Note: VM/370 requires the system timing facilities (i.e., the Clock
Comparator and the CPU Timer).

Maintenance

Maintenance for VM/370 Release 1 will be provided by the VM/370 Central
Programming Service until nine months after the next release of VM/370.

Education

See Education Announcement Letter E72-14 for details of VM/370
Introduction (no charge) and additional educational plans.

Publications

IBM Virtual Machine Facility/370: Introduction (GC20-1800), is
available from Mechanicsburg. Other manuals to be available at a later
date include logic manuals, as well as planning, system generation,
command language, system operator, terminal user, and programmer
guides. Titles and form numbers will be announced in a future
Publications Release Letter (PRL).

Reliability, Availability and Serviceability (RAS)

VM/370 provides facilities which supplement the reliability,
availability, and serviceability (RAS) characteristics of the
System/370 architecture. See the sales manual or the introduction
manual for details.

MINIPERT

VM/370 planning information is available in the MINIPERT Master Library
as an aid to selling and installing System/370.

No RPQs will be accepted at this time.

Detailed information on the VM/370 system is in sales manual pages.

<signed>
W. W. Eggleston
Vice President - Marketing

Release Date: August 2, 1972

Distribution:
   DP managers, marketing representatives and systems engineers
FE managers and program systems representatives

P72-91

========== end included text ==========
--
Anne & Lynn Wheeler lynn@garlic.com, https://www.garlic.com/~lynn/

... certification

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: ... certification
Newsgroups: bit.listserv.vmesa-l
Date: Wed, 21 Aug 2002 09:46:29 -0600
At 12:00 AM 8/21/2002 -0500, you wrote:
There is no claim of FIPS compliance or certification for SSL. The question is whether SSLv3/TLSv1 is approved for your use (that's what VM SSL implements).

typically FIPS certification is about FIPS-140-1 level 1, 2, 3, or 4 for cryptographic devices.

you typically wouldn't see FIPS-140 associated with straight software certification.

The other kind of certification is the old orange book ... aka C2, B3 type stuff. That is in the process of being replaced with common criteria protection profiles ... and stuff like EAL3-low, EAL4-high, etc.

trusted product evaluation program
http://www.radium.ncsc.mil/tpep/

above has pointers to evaluated products but also pointers to some stuff about common criteria

fips page

http://csrc.nist.gov/publications/fips/index.html

see above for fips140, security requirements for cryptographic modules

common criteria
http://csrc.nist.gov/cc/

i also have a merged security glossary from orange book/tcsec, common criteria, fips, and a bunch of other sources:
https://www.garlic.com/~lynn/secure.htm

random other refs:
https://www.garlic.com/~lynn/aadsm12.htm#13 anybody seen (EAL5) semi-formal specification for FIPS186-2/x9.62 ecdsa?
https://www.garlic.com/~lynn/aadsm12.htm#14 Challenge to TCPA/Palladium detractors
https://www.garlic.com/~lynn/2002j.html#86 formal fips186-2/x9.62 definition for eal 5/6 evaluation

--
Anne & Lynn Wheeler lynn@garlic.com, https://www.garlic.com/~lynn/

... certification addenda

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: ... certification addenda
Newsgroups: bit.listserv.vmesa-l
Date: Wed, 21 Aug 2002 10:00:55 -0600
oh and some stuff regarding exploits and weaknesses related to some of the SSL (not necessarily the crypto part) areas (as well as others)

https://www.garlic.com/~lynn/subpubkey.html#sslcerts
https://www.garlic.com/~lynn/subintegrity.html#fraud
https://www.garlic.com/~lynn/subintegrity.html#assurance

specific recent thread:
https://www.garlic.com/~lynn/2002k.html#11 Serious vulnerablity in several common SSL implementations?

--
Anne & Lynn Wheeler lynn@garlic.com, https://www.garlic.com/~lynn/

RCA Spectra architecture

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RCA Spectra architecture
Newsgroups: comp.lang.asm370
Date: Thu, 22 Aug 2002 17:34:20 GMT
"psc_kent" writes:
IBM operated under a 1956 consent decree with the US justice department (still does). The decree required them to publish all of their external interfaces so competitors could work with their machines.

i don't remember getting a whole lot of help & documentation on the channel interface when we were doing the first PCM control unit (but that possibly isn't considered an "external interface").
https://www.garlic.com/~lynn/submain.html#360pcm

there is some folklore that PCM activity may have contributed to the nature of the pu5/pu4 interface.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

GOTOs cross-posting

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: GOTOs cross-posting
Newsgroups: bit.listserv.vmesa-l
Date: Fri, 23 Aug 2002 09:35:10 -0600
my first year out of school, i got to go to some conferences and heard all about goto-less programming (and super programmers). at the time, probably 90 percent of my programming was 360 assembler .... so it didn't actually have a whole lot of meaning .... since the only control construct was the conditional or unconditional branch.

I did start writing a program (in PL/I) which would read a 360 assembler listing, establish all code blocks (sequences of code w/o branches) and code threads ... all possible paths thru the code blocks. I tried to do simple checks for register use before setting and miscellaneous other stuff. I then added semantics for do-while, do-until, if/then, and if/then/else and attempted to regenerate pseudo-code for the code blocks based on non-branch/goto semantics. For simple things it helped ... but for a surprising amount of code, auto-restructuring w/o gotos came out really, really ugly. Adding semantics for leave/break and continue helped a little, but there was still a surprising amount of code that came out really ugly. A case construct for computed gotos (branch tables) helped some special cases. There was still some assembler code with rather trivial logic and a few selected gotos .... that otherwise would require if/then/else nesting six or seven levels deep, even for relatively small assembler modules.
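
a rough latter-day sketch (in python, for illustration only ... not the original PL/I program) of the basic-block part of that analysis; the listing representation and the set of branch mnemonics here are assumptions:

  # partition an assembler listing into "code blocks" -- straight-line sequences
  # with no branches except at the ends.  lines is a list of (label, opcode,
  # operand) tuples pulled from the listing; the branch-mnemonic set is partial.
  BRANCH_OPS = {"B", "BC", "BCR", "BAL", "BALR", "BCT", "BXH", "BXLE"}

  def code_blocks(lines):
      # very naive: assume the operand of a branch instruction is a label
      targets = {opnd for _, op, opnd in lines if op in BRANCH_OPS}
      leaders = {0}                                # first instruction starts a block
      for i, (lbl, op, _) in enumerate(lines):
          if op in BRANCH_OPS and i + 1 < len(lines):
              leaders.add(i + 1)                   # fall-through after a branch
          if lbl in targets:
              leaders.add(i)                       # a branched-to label starts a block
      blocks, current = [], []
      for i, ins in enumerate(lines):
          if i in leaders and current:
              blocks.append(current)
              current = []
          current.append(ins)
      if current:
          blocks.append(current)
      return blocks

the "code threads" were then just the possible paths through these blocks, following each branch and each fall-through.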

part of the flow analysis was trying to improve simple analysis of kernel failures involving bad pointers (or other problems with register values) by attempting to reconstruct code flow (backtrack) from the point where a failure occurred. Lots of code paths merging (branching) to the same point frequently made it difficult to backtrack and establish the sequence of events leading up to that point in the code.

Later I wrote a dump (core image at time of failure) analysis program (in rexx, thanks mfc) that had a library of scripts that would automatically examine memory for classes or signatures of typical failure scenarios. at one time it was in use by nearly every PSR and internal site.

The original objective was to demonstrate that IPCS ... which was 10+(?) thousand instructions and had a whole group supporting it ... could be rewritten in REX (as a demonstration of REX capability) working half time over 3 months, with ten times the performance and ten times the function. With the help of about 120 appropriately implemented assembler instructions and 3000 REX statements ... it achieved 10 times the performance and 10 times the function.
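
the general shape of the signature-library approach (a sketch, not dumprx itself ... the two example signatures and the dump representation are made up):

  # each "script" is a small check that inspects the dump for one known failure
  # pattern; analyze() runs the whole library and reports whatever matches.
  def zero_pointer_signature(dump):
      # e.g. a failing PSW pointing into low storage often means a wild/zero pointer
      return "wild/zero pointer" if dump.get("failing_psw", 0) < 0x1000 else None

  def overlay_signature(dump):
      return "freed-storage overlay" if dump.get("fill_pattern") == 0xDEADBEEF else None

  SIGNATURES = [zero_pointer_signature, overlay_signature]

  def analyze(dump):
      # run every signature check against the dump, collect the hits
      return [hit for check in SIGNATURES if (hit := check(dump)) is not None]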

random dumprx refs:
https://www.garlic.com/~lynn/94.html#11 REXX
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2000b.html#33 20th March 2000
https://www.garlic.com/~lynn/2001c.html#0 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2002g.html#27 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002h.html#37 Computers in Science Fiction

slightly related subject: failure modes with respect to the disk engineering lab & the total rewrite of the i/o subsystem to make it bulletproof (by comparison ... when starting this .... MVS typically had an MTBF of 15 minutes with a single test cell ... an objective was to be able to concurrently operate a dozen test cells with absolutely no system failures)
https://www.garlic.com/~lynn/subtopic.html#disk

assurance related postings
https://www.garlic.com/~lynn/subintegrity.html#assurance

--
Anne & Lynn Wheeler lynn@garlic.com, https://www.garlic.com/~lynn/

Vnet : Unbelievable

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vnet : Unbelievable.
Newsgroups: alt.folklore.computers
Date: Fri, 23 Aug 2002 15:54:06 GMT
Anne & Lynn Wheeler writes:
i 'now nothun, i 'now nothun ... they tried to hang the whole thing on me ... datamation even said i did it

i did find something ... somebody had printed some number of hard copies of the complete stuff and put them in 3-ring tandem (corp) binders and mailed them off to the executive committee. I found a copy of the cover letter that somebody wrote for the mailing.
xxxx xxxxxx
xxx/xxx
xxx-xxxx

TANDEM MEMOS is a collection of documents which relate to the computer conferencing activity that has been going on for the past several months and has been referred to as the Tandem memos or Tandem files. The subjects of this computer conferencing have been primarily associated with the health and vitality of the IBM Corporation. Included is a brief summary by xxx xxxx and a longer summary by xxxxx xxxxx. MIPENVY is a memo composed by Jim Gray prior to departing San Jose Research for the Tandem Corporation. The last document is not directly associated with the TANDEM MEMOS but is an IBM Jargon and General Computing Dictionary compiled by xxxx xxxxxxxx.

The TANDEM MEMOS proper are somewhat difficult to read. The documents have been compiled by collecting the comments and memos in the time sequence that they were written. The division between different documents is primarily based on physical size (or number of lines) rather than subject matter. Memos about specific subjects will appear intermixed with memos on several other subjects and may in fact span several documents. As a result it is very difficult to follow a specific "conversation" without reading the documents in their entirety (possibly several times).

BUSINESS represents a "conversation" that began in the fall of 1980 prior to the start of the Tandem discussions (although it touches on many of the same subjects). TANDEM originally represented an informal trip report that xxxxx xxxxxxx and xxxx xxxxxxx made to Tandem in late March of 1981 to visit Jim Gray. MANAGERS represents comments that were being collected by xxx xxxxxxxx during roughly the same period of time that TANDEM2 and TANDEM3 were being compiled. Starting with IBM1 the naming convention for the documents was changed to reflect that the primary subject matter being discussed was not other corporations, but the IBM company.

xxxx


... snip ... top of post

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

hung/zombie users

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: hung/zombie users
Newsgroups: bit.listserv.vmesa-l
Date: Fri, 23 Aug 2002 14:17:13 -0600
with respect to the hung/zombie user subject brought up in one of the share sessions this week .... (at least) both for the resource manager and later (again) for the disk engineering labs ... the system was cleansed of all situations that might result in hung/zombie users.

for the resource manager case, a benchmark suite of 2000+ automated benchmarks was run (taking 3 months elapsed time). some of the benchmarks severely stressed the system. when the stress benchmarks were initially run ... they were guaranteed to crash the system and/or hang users. by the time the resource manager was shipped, the serialization facility in the kernel had been almost completely redone to eliminate all situations that were resulting in system failures and/or hung users.

misc. hung/zombie refs:
https://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/95.html#1 pathlengths
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/99.html#198 Life-Advancing Work of Timothy Berners-Lee
https://www.garlic.com/~lynn/2000c.html#33 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001f.html#56 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001h.html#18 checking some myths.
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002d.html#0 VAX, M68K complex instructions (was Re: Did Intel Bite Off MoreThan It Can Chew?)
https://www.garlic.com/~lynn/2002f.html#23 Computers in Science Fiction
https://www.garlic.com/~lynn/2002h.html#49 Coulda, Woulda, Shoudda moments?

automated benchmark refs:
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2001f.html#56 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001h.html#18 checking some myths.
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002d.html#0 VAX, M68K complex instructions (was Re: Did Intel Bite Off MoreThan It Can Chew?)
https://www.garlic.com/~lynn/2002h.html#49 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002i.html#53 wrt code first, document later

resource manager specific refs:
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

previous ref to work on making i/o subsystem bullet proof for the disk engineering lab
https://www.garlic.com/~lynn/2002k.html#38 GOTOs cross-posting

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

How will current AI/robot stories play when AIs are real?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How will current AI/robot stories play when AIs are real?
Newsgroups: alt.folklore.computers
Date: Fri, 23 Aug 2002 23:30:17 GMT
Charles Richmond writes:
IMHO, this tendency to export program development and support is very distressing. First, there was the out-flow of blue collar jobs (manufacturing) overseas...and we were to be left with an "information economy". Now, the knowledge worker jobs are being sent overseas...so what kind of economy does that leave for Americans to work in???

Life is tough...and then you die.


I vaguely remember two pieces of information from 8-10 years ago ... 1) a census report that said only something like 50 percent of high-school-graduate-aged individuals that year could be considered literate and 2) half of technical-area PhD graduates from US colleges and universities were foreign born (not taking into account that nearly all advanced-degree graduates in other countries were non-US).

the premise seemed to be that the US was going to be heavily dependent on overseas knowledge workers (some who might come to the US for varying periods of time but had some probability of returning home). I've heard tales of whole advanced R&D departments in some companies totally composed of foreign-born workers.

In any case, the reports seemed to be heavily oriented towards big segments of the US work force being composed of jobs in the service sector and/or jobs that were subsidized in one way or another. Part of it was that there seemed to be a decreasing rate of absolute literacy ... and the transition to an information economy would significantly raise the literacy requirement. There was some possibility that in 20 years as few as 5 percent of the US population would be considered qualified for knowledge-worker jobs.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

MVS 3.8J and NJE via CTC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MVS 3.8J and NJE via CTC
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 24 Aug 2002 05:33:49 GMT
jgs2@NERSP.NERDC.UFL.EDU (Jack Schudel) writes:
NJE over CTC used BSC protocols, and was defined just like NJE over an EP line. From my notes, NJE 1 came out 11/76. NJE 3 with SNA support came out 3/78, and NJE 3.1 with 3800 enhancements came out in 3/80. That release is basically the same as the JES2 4.1+ release, with the addition of the NJE support. As far as I know, the MVS38j tur(n)key system is running JES2 4.1+, so NJE would probably work if you could get it. HOWEVER: NJE was a priced option, so I doubt you will be able to get it.

/jack


JES2/NJE and RSCS were co-announced as joint products. Both came from the internal network. At about the time of the announcement, the internal network was larger than could be defined in any JES2 configuration ... as a result JES2/NJE nodes were both a small minority and also relegated to terminal/end nodes in the internal network (aka JES2 would toss a transmission if it didn't have a definition for either the originating node or the destination node).

the other problem was that JES2/NJE messed up the architecture and design of various layers in the NJE header ... the result was that files from different NJE releases didn't interoperate very well and had a tendency to crash the whole MVS system. The solution was specially modified NJE drivers in RSCS that communicated with specific JES2 releases ... the RSCS NJE driver would build a canonical representation of the JES2 header information and then make sure the format transmitted to JES2 was at the corresponding JES2 release level ... so as not to bring down the MVS system. The quirky thing was that RSCS tended to get blamed (rather than the JES2 architecture/design) when some new JES2 header would leak thru the RSCS filtering and crash some other JES2 system somewhere on the internal network.
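
the flavor of what those RSCS-side drivers were doing (a sketch with invented field names, node names and release tables ... the real NJE headers are binary structures, not dictionaries):

  # parse whatever header arrives into one canonical form, then re-emit only the
  # fields the target JES2 release is known to understand -- rather than letting
  # an unknown newer field take down the receiving MVS system.
  KNOWN_FIELDS = {
      "jes2_older": {"origin_node", "dest_node", "spool_class"},
      "jes2_newer": {"origin_node", "dest_node", "spool_class", "record_count"},
  }

  def emit_for(target_release, canonical_header):
      allowed = KNOWN_FIELDS[target_release]
      # drop anything the back-level receiver wouldn't understand
      return {k: v for k, v in canonical_header.items() if k in allowed}

  # a file arriving with a newer header field gets that field stripped before
  # being transmitted to a back-level JES2 node:
  incoming = {"origin_node": "NODEA", "dest_node": "NODEB",
              "spool_class": "A", "record_count": 120}
  outgoing = emit_for("jes2_older", incoming)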

total aside (bitnet info from vmshare archives):
http://vm.marist.edu/~vmshare/browse.cgi?fn=BITNET&ft=MEMO#1

random past on jes2/nje
https://www.garlic.com/~lynn/94.html#44 bloat
https://www.garlic.com/~lynn/95.html#7 Who built the Internet? (was: Linux/AXP.. Reliable?)
https://www.garlic.com/~lynn/99.html#40 [netz] History and vision for the future of Internet - Public Question
https://www.garlic.com/~lynn/99.html#160 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000b.html#27 Tysons Corner, Virginia
https://www.garlic.com/~lynn/2000c.html#29 The first "internet" companies?
https://www.garlic.com/~lynn/2000c.html#72 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000e.html#14 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#15 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#29 Vint Cerf and Robert Kahn and their political opinions
https://www.garlic.com/~lynn/2000g.html#25 SSL as model of security
https://www.garlic.com/~lynn/2001.html#5 Sv: First video terminal?
https://www.garlic.com/~lynn/2001c.html#41 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#50 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001e.html#8 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#12 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001g.html#26 distributed authentication
https://www.garlic.com/~lynn/2001g.html#48 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001h.html#37 Credit Card # encryption
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#39 Big black helicopters
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#22 ESCON Channel Limits
https://www.garlic.com/~lynn/2001n.html#27 Unpacking my 15-year old office boxes generates memory refreshes
https://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#59 Computer Naming Conventions
https://www.garlic.com/~lynn/2002c.html#31 You think? TOM
https://www.garlic.com/~lynn/2002e.html#14 EMV cards
https://www.garlic.com/~lynn/2002e.html#36 Crypting with Fingerprints ?
https://www.garlic.com/~lynn/2002e.html#65 Digital Signatures (unique for same data?)
https://www.garlic.com/~lynn/2002e.html#71 Blade architectures
https://www.garlic.com/~lynn/2002g.html#38 Why is DSA so complicated?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#41 Why is DSA so complicated?
https://www.garlic.com/~lynn/2002g.html#42 Why is DSA so complicated?
https://www.garlic.com/~lynn/2002h.html#2 DISK PL/I Program
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#58 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002j.html#75 30th b'day

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

how to build tamper-proof unix server?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how to build tamper-proof unix server?
Newsgroups: comp.security.unix
Date: Sun, 25 Aug 2002 16:35:07 GMT
kmv_dev writes:
How does the military build db system to store their classified information? Anyone with this kind of background is willing to shed some light?

no background, but try
http://www.radium.ncsc.mil/tpep/
http://www.radium.ncsc.mil/tpep/epl/epl-by-class.html

above lists labeled security protection (B1) for Informix, Oracle, and Sybase.
Why do you think the digital signatures are the hard part? I thought it should be the easy part. The hard part seems, to me, how to prevent root user from messing with the log? Since root can do anything, anyone with root acess can obviously remove/change the system logs, and do a bunch of other things.

I'm thinking if we can tie the system log to a secure database (for data redundancy), and all system log records are loaded up into the database. And make sure that no single person has the privilege to root and dba at the same time (but per above, we still can't solve the dba human error problem).

I've not used Trusted Solaris, am just reading on it. It seems like there's something interesting there.

I've seen unix systems that have been built with r/o filesystems (both cdrom ... and disks with a r/w inhibit switch), with logging going over a high-speed serial write-only connection to a different, isolated machine running a dedicated logging application (i.e. no support for commands, etc., over the link). the logging application writes to WORM. KISS.
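
a minimal sketch of that kind of dedicated logging sink (modern python over TCP purely for illustration ... the real thing was a write-only serial link writing to WORM; the port and file name are arbitrary):

  import socket

  def run_log_sink(port=5140, logfile="append.log"):
      srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      srv.bind(("0.0.0.0", port))
      srv.listen(1)
      with open(logfile, "ab") as out:            # append-only from this side
          while True:
              conn, _ = srv.accept()
              while True:
                  data = conn.recv(4096)
                  if not data:
                      break
                  out.write(data)                 # record it; never parse it as a command
                  out.flush()
              conn.close()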

there is a KISS/complexity trade-off with databases. A common vulnerability is having the dba do some maintenance ... requiring taking the system offline and removing some protections ... and then the dba forgetting to turn the protections back on when they put the system back online (not a direct attack, but the more complex things are ... the higher the probability some mistake is made). there have been some references to security checking tools that have to be run whenever any maintenance has been performed ... because of the frequency of mistakes as things get more complex ... even needing such tools implies opportunity for mistakes (again non-KISS).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

how to build tamper-proof unix server?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how to build tamper-proof unix server?
Newsgroups: comp.security.unix
Date: Sun, 25 Aug 2002 16:50:08 GMT
Anne & Lynn Wheeler writes:
there is a KISS/complexity trade-off with databases. A common vulnerability is having the dba do some maintenance ... requiring taking the system offline and removing some protections ... and then the dba forgetting to turn the protections back on when they put the system back online (not a direct attack, but the more complex things are ... the higher the probability some mistake is made). there have been some references to security checking tools that have to be run whenever any maintenance has been performed ... because of the frequency of mistakes as things get more complex ... even needing such tools implies opportunity for mistakes (again non-KISS).

oh yes, slightly related
https://www.garlic.com/~lynn/aepay7.htm#netbank2 net banking, is it safe?? ... security proportional to risk
https://www.garlic.com/~lynn/aepay10.htm#20 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2001h.html#61 Net banking, is it safe???
https://www.garlic.com/~lynn/2001j.html#2 E-commerce security????
https://www.garlic.com/~lynn/2001j.html#5 E-commerce security????
https://www.garlic.com/~lynn/2001j.html#54 Does "Strong Security" Mean Anything?
https://www.garlic.com/~lynn/2001l.html#2 Why is UNIX semi-immune to viral infection?
https://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#9 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#11 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#24 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#25 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#28 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002f.html#23 Computers in Science Fiction
https://www.garlic.com/~lynn/2002i.html#72 A Lesson In Security
https://www.garlic.com/~lynn/2002j.html#14 Symmetric-Key Credit Card Protocol on Web Site
https://www.garlic.com/~lynn/2002j.html#63 SSL integrity guarantees in abscense of client certificates

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

How will current AI/robot stories play when AIs are real?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How will current AI/robot stories play when AIs are real?
Newsgroups: alt.folklore.computers
Date: Sun, 25 Aug 2002 20:18:06 GMT
"Charlie Gibbs" writes:
Perhaps kids should be taught that "consumer" is a condescending term intended to promote blind consumption for its own sake (no wonder they get so messed up, since it's the exact opposite of the conservation messages they're also receiving). They should rediscover the older term "customer", and the power that embodies.

one of the other things from the mid-90s stuff was that one of the big mid-west land-grant universities had made some statement that they had dumbed down their freshman textbooks something like three times over the previous 25 years. random ref:
https://www.garlic.com/~lynn/2002k.html#41 How will current AI/robot stories play when AIs are real?

i think that the newspaper story was something to the effect that half the 18 year olds were functionally illiterate (didn't have the skills to deal with all the things they could be expected to face). as before, the issue was that an information/knowledge-based economy raises the bar for functional literacy (and there was some indication that fewer people were achieving functional literacy even under a static standard).

trying to find a quick reference for the newspaper article with simple use of search engines turns up
https://web.archive.org/web/20100413134230/http://www.nifl.gov/nifl/facts/facts_overview.html

which is close but not exactly the same.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OT (sort-of) - Does it take math skills to do data processing ?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT (sort-of) - Does it take math skills to do data processing ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 26 Aug 2002 02:06:28 GMT
Tom.Kelman@SUNTRUST.COM (Kelman.Tom) writes:
I agree on the sad state of our education system. I've been helping the next door neighbor boy with his physical science class. They're learning about the metric system and metric conversions. He doesn't understand when to multiply and when to divide. Also, he doesn't understand about moving the decimal point when multiplying or dividing by powers of 10. I'm not sure he understands what a decimal point is. He is in the 10th grade and has been getting a B average in math up to this point. I wonder what they've been teaching him.

slightly related thread
https://www.garlic.com/~lynn/2002k.html#41 How will current AI/robot stories play when AIs are real?
https://www.garlic.com/~lynn/2002k.html#45 How will current AI/robot stories play when AIs are real?

fortran programming has tended to be extremely math oriented. lots of kernel and system programming has tended to be extremely state-, process-, & logic-oriented.

when i was doing the resource manager ... I somewhat intermixed the two styles ... it drove some number of people crazy (who tended to be oriented towards a single, specific style):
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

somewhat related refs:
https://www.garlic.com/~lynn/2002k.html#40 hung/zombie users
https://www.garlic.com/~lynn/subtopic.html#fairshare

they told me i was somewhat the guinea pig (aka first) for SCP priced software ... I got to spend 6 months with the business people working out pricing for SCP stuff. afterwards the mvs resource manager came out ... at, i believe, the exact same price.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OT (sort-of) - Does it take math skills to do data processing ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT (sort-of) - Does it take math skills to do data processing ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 26 Aug 2002 02:32:34 GMT
jchase@USSCO.COM (Chase, John) writes:
E Pluribus Gad!! I remember having to demonstrate understanding of decimal point, etc. as a condition of being allowed to "graduate" from fourth grade! That was in the 1956 - 57 school year....

how 'bout negative & positive numbers .... as in a thermometer ... if the temperature is 20 degrees and it drops 30 degrees ... what is the temperature? (i think there were something like 20-30 problems like this in the 4th grade workbook, in addition to fractions & decimals) this was a small, somewhat rural 4th grade.

what got me, at the end of 5th grade, was an achievement test that had
2x+y=5
x+y=3


I thot it was really unfair since I had never heard of algebra in my whole life.

from a somewhat related thread
https://www.garlic.com/~lynn/2002d.html#32 Farm kids
https://www.garlic.com/~lynn/2002d.html#42 Farm kids
https://www.garlic.com/~lynn/2002e.html#37 Would the value of knowledge and information be transferred or shared accurately across the different culture??????

slightly related from a GOTO discussion
https://www.garlic.com/~lynn/2002k.html#38 GOTOs cross-posting

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

MVS 3.8J and NJE via CTC

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MVS 3.8J and NJE via CTC
Newsgroups: bit.listserv.ibm-main
Date: Mon, 26 Aug 2002 19:49:14 GMT
b19141@ACHILLES.CTD.ANL.GOV (Barry Finkel) writes:
We used NJE (called NJI) in 1978 to connect two MVT/ASP data centers. The comments in the NJE Header Macros said "Sun", which stood for SanJose Unified Network, IIRC. I also heard that the IBM Corporate Network was called Sun because the sun never set on the network.

i believe a group in internal san jose installed what was mostly TUCC networking code (at least most of it originally had "TUCC" out in the comment fields). A small percentage of the internal network nodes were JES2/JES3/MVS network nodes, for a whole variety of reasons .... most associated with the severe deficiencies previously outlined:
https://www.garlic.com/~lynn/2002k.html#42 MVS 3.8J and NJE via CTC

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

MVS 3.8J and NJE via CTC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MVS 3.8J and NJE via CTC
Newsgroups: bit.listserv.ibm-main
Date: Mon, 26 Aug 2002 19:53:10 GMT
efinnell@SEEBECK.UA.EDU (Edward J. Finnell, III) writes:
What was the guy's name that used to carry the Corporate Network around on a printout? Walt Doherty maybe-used to pop-up at SHARE and other technical conferences. Bald as a queue ball, used to drive Betty Od'neal crazy sending her yellow roses while she was President of SHARE.

i recently found a vnet print-out from spring 1977 ... turns out to have been printed on HONE1.
https://www.garlic.com/~lynn/2002j.html#4 HONE, , misc

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OT (sort-of) - Does it take math skills to do data processing ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT (sort-of) - Does it take math skills to do data processing ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 26 Aug 2002 20:11:54 GMT
Joe Simon writes:
Then How'd you know it was algebra ?

i asked ...

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

SSL Beginner's Question

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SSL Beginner's Question
Newsgroups: bit.listserv.ibm-main
Date: Tue, 27 Aug 2002 13:38:45 GMT
bpellet@MAIL.CSLF.ORG (Robert Pelletier) writes:
Good Morning All. First thanks for all the help in the past. Now my question. Does anyone know of a good book/resource/guide for implementing SSL on OS390 V2R9 (SSL for Dummies)? I have been told by accounting to implement this ASAP but I have no clue what it entails. I do have the IP Configuration Guide with me but thought maybe there is info that kind of has easy examples for implementation. Thanks very much.

what do they mean by SSL?

there is server authentication, encrypted transmission and some protection from MITM attacks. that is the straightforward thing that most people think of when entering credit card numbers while communicating with some e-merchant.

the server authentication ... involves some public key technology at the server site and a server SSL domain name certificate. The client does some simple checking for a valid certificate ... and then checks to see if the server domain name entered in the URL matches the domain name in the SSL domain name certificate.

The client & server then can exchange a secret key ... that is then used for subsequent encrypted transmission (preventing eavesdroppers from gaining access to information flowing over the session ... like credit card numbers).
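
in modern terms, the client side of the above looks roughly like this (a python sketch; the host name is just an example):

  import socket, ssl

  host = "www.example.com"                        # server name from the URL
  ctx = ssl.create_default_context()              # trusted CAs; certificate and
                                                  # hostname checking on by default
  with socket.create_connection((host, 443)) as raw:
      with ctx.wrap_socket(raw, server_hostname=host) as tls:
          # certificate validated, domain name matched against the certificate,
          # session key negotiated -- everything from here on is encrypted
          tls.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n")
          reply = tls.recv(4096)

note that the SSL wrapping applies to that one connection in that one application ... which is also the point made below about SSL being an application-layer operation.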

If they are only worried about encryption of transmitted data ... they may be less interested in server authentication and/or all the gorpy details about the business issues of obtaining an SSL domain name certificate ... and any trust that such a certificate might represent.

SSL can also mean optional client certificate authentication. Most implementations don't bother with client certificate authentication ... they are primarily interested in encrypted transmission ... and then possibly, within the encrypted session, perform client authentication by asking the user to enter a userid/password.

if you have a mainframe webserver doing IP ... then there is all the SSL code that resides on the mainframe server ... the mainframe being able to generate a public/private key pair ... and perform private key operations. Also obtaining or generating a server certificate that is acceptable to clients. Then there is the secret key operation and handling of encrypted transmission. If this is for a web server ... then the SSL package is integrated into HTTP web server operation ... as HTTPS. If it is for something other than web server/HTTP operation, then you will probably need an IP service that has SSL integrated into it.

One of the issues is that SSL is an application layer operation ... which carries with it the implication that the support for SSL is integrated into specific applications (aka the application program is performing the authentication and encryption/decryption operation ... possibly by calling SSL library routines).

SSL is not a network or transport layer operation ... aka not part of the underlying TCP/IP services. You don't deploy SSL and then it is automagically available for use by all applications that use TCP/IP.

random refs:
https://www.garlic.com/~lynn/aadsm12.htm#18 Overcoming the potential downside of TCPAs

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Dump Annalysis

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Dump Annalysis
Newsgroups: bit.listserv.ibm-main
Date: Tue, 27 Aug 2002 13:48:21 GMT
rajeevva@IN.IBM.COM (Rajeev Vasudevan) writes:
Could any one pls guide on dump annalysis? I mean any material in the net which will be useful for me solving the abends quickly, instead of the tedious process of giving the display statements and tracing the problem.

not directly related ... but long ago & far away (20+ years) ... I developed a library of REXX scripts that did automatic storage examination for typical/frequent failure signature patterns. random ref:
https://www.garlic.com/~lynn/2002k.html#38 GOTOs cross-posting

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

general networking

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: general networking
Newsgroups: alt.folklore.computers
Date: Tue, 27 Aug 2002 14:01:42 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Cutler, I'm not worried about, yet, haven't met him, no hurry, and from opinions independent of Barb, sounds like I'd be inclined to agree with Barb about him. On the other hand, (and this would be amusing if Lynn were reading this) arrogance is apparently measured in Benoits (did all this computing stuff come from IBM?). I need a calibration. Maybe next time I visit Watson.

actually all computing stuff came from a specific IBM lab. There are lots of instances of IBMers or even ibm customers running around claiming something as being unique/first with IBM. There are even instances of people from a specific IBM lab running around claiming credit for work done at other ibm locations (there was once an article published in the ibm systems journal ... which got a number of letters to the editor from other ibm locations with extensive supporting documentation as to various inaccuracies). I don't think it is solely a matter of arrogance ... in some cases it may be ignorance ... and in other cases it may be something else totally.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

general networking

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: general networking
Newsgroups: alt.folklore.computers
Date: Tue, 27 Aug 2002 14:24:15 GMT
ref:
https://www.garlic.com/~lynn/2002k.html#53 general networking

... there is also some of the other side. i've noted a couple times that some of the much better written-up time-sharing systems in various bodies of literature ... make references to poor or non-existent time-sharing from ibm. i've contended that ibm offered extensive time-sharing ... in some cases with 100 to 1000 times more customer installations than some of the better-referenced time-sharing systems. However, these weren't the ibm commercial data processing systems, and because those ibm commercial data processing systems may have had 100 times more customer installations than the ibm time-sharing systems ... many people automatically associated ibm solely with its commercial data processing (in much the same way that many ibm customers may have equated ibm commercial data processing with all computing).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Moore law

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moore law
Newsgroups: comp.arch
Date: Tue, 27 Aug 2002 14:59:43 GMT
cecchi@signa.rchland.ibm.com (Del Cecchi) writes:
I rest my case about how hard prediction even a few years out is.

i remember in 91 somebody hiring dataquest for a couple million(?) to do a detailed study of what was going to happen in the PC industry over the next five years. as part of the study, dataquest selected 12 "experts" in silicon valley to come in and have a couple-day (video-taped) round table discussion on the subject. somehow i got selected ... they placed me with my back to the video camera and mangled my introduction (on the off chance the person that commissioned the study might recognize me).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Moore law

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moore law
Newsgroups: comp.arch
Date: Tue, 27 Aug 2002 16:40:22 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
How many of your predictions came about? :-)

I reckon to get about 30% right, 40% right in principle and wrong in detail, and 30% plain wrong, for a 5 year ahead slot. I don't track my longer-term predictions, but should be flabberghasted if I could get them right in detail for a 10 year ahead slot more than 10% of the time, if that.

Generally, the detail that I get wrong most often is timescale. It may be obvious where the industry is going, but the speed of movement can be anything from paralysed to lightning, depending on external conditions. Flat screens are a classic example, where lots of us were expecting the changeover to start 'imminently' for at least 10 years.


absolutely, time-scale .... also trade-offs. my pet example was highly custom hardware support for graphics. the trade-off between a single large general-purpose computing chip doing everything vis-a-vis more specialized chips kept changing.

unrelated example ... one of the things that my wife and i had running in our high-speed backbone was rate-based pacing .... an nsfnet audit said it & other things were five years ahead of the bid proposals for NSFNET (to build something new). as it turns out it is closer to 15 years ... maybe 20 years ... internet2 is talking about rate-based pacing.
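
generic illustration of the rate-based pacing idea (not the backbone implementation ... the numbers and the send() callback are placeholders): instead of blasting out a window's worth of packets as fast as possible, the sender meters them out at a target rate.

  import time

  def paced_send(packets, rate_bps, send):
      for pkt in packets:
          send(pkt)
          time.sleep(len(pkt) * 8 / rate_bps)     # inter-packet gap sized to the rate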

the amount of memory hiccuped because of price and some supply issues during the period.

i think part of the reason that i got chosen was that I had been posting prices from sunday sjmn and keeping some running references over a several year period:
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001n.html#81 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001n.html#82 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

History of AOL

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of AOL
Newsgroups: alt.folklore.computers
Date: Tue, 27 Aug 2002 18:33:33 GMT
"Michael J. Albanese" writes:
Nearly all the online services back then charged substantially more (at least double the rate) for "prime time" access, to minimize impact on commercial customers. You were billed for each and every excruciating minute (no affordable flat rate plans), and rates were structured to strongly encourage home users to connect during off-peak hours, when excess capacity was available.

i remember a story about an executive at one of the large commercial time-sharing offerings (late 70s or early 80s) who first heard about clients playing games on their computers. the reaction was supposedly to remove all such offending programs and try to make it impossible for such things to crop up again. he was then told that 30 percent of the service's revenue was now coming from people playing games. when he heard that ... he allowed that maybe it wasn't all bad after all.

flat rate supposedly sprouted when a large clientele developed that used possibly five percent or less of their monthly flat rate. This created a bi-modal distribution with a huge number of users effectively subsidizing the heavy use of a small number of users (aka nothing is free .... there is always somebody paying for it some way).

The profile of the large clientele population (that made it possible for a few to enjoy large amounts of computer use) ... was supposedly business oriented people that would connect a couple times a week (later a couple times a day) and do email exchange (upload/download) and then disconnect. They believed that they were getting value for their money ... even though their actual computer use never came close to consuming the resources that their payments underwrote.

It is sort of analogous to insurance .... if everybody consumed more than they were paying for ... insurance wouldn't work. In general, low insurance premiums indicate either 1) that they aren't getting any service or 2) a huge number of people utilize significantly less than they pay for. The early days of flat rate tended to be heavily skewed towards #2.
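
a back-of-the-envelope illustration with made-up numbers:

  # 1000 flat-rate subscribers at $20/month; 50 of them are heavy users
  subscribers, heavy_users, flat_rate = 1000, 50, 20.0
  revenue = subscribers * flat_rate               # $20,000/month coming in
  light_user_payments = (subscribers - heavy_users) * flat_rate
  print(revenue, light_user_payments)             # 20000.0 19000.0
  # if the 50 heavy users consume most of the machine resources, the other 950
  # (paying $19,000 of the $20,000) are effectively underwriting them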

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OT (sort-of) - Does it take math skills to do data processing ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT (sort-of) - Does it take math skills to do data processing ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 27 Aug 2002 22:31:01 GMT
gds@best.com.cuthere (Greg Skinner) writes:
Interesting ... were you able to answer the question? How did you go about trying to answer the question?

ref from original post
https://www.garlic.com/~lynn/2002d.html#32 Farm kids

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

History of AOL

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of AOL
Newsgroups: alt.folklore.computers
Date: Tue, 27 Aug 2002 22:33:58 GMT
Q writes:
They hid the games in with the payroll programs, and continued having fun. Hell, no manager would ever wonder why payroll programs were being run so often!!

there was also the observation somewhere that if things like this were banned ... then hundreds of customers might hide their own copies, unnecessarily using scarce disk space ... and that having a single copy of each publicly available would save enormous amounts of disk space.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM-Main Table at Share

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM-Main Table at Share
Newsgroups: bit.listserv.ibm-main
Date: Tue, 27 Aug 2002 22:38:27 GMT
EBIE@PHMINING.COM (Eric Bielefeld) writes:
At Share last week, each night I went to Scids, I always went over to the IBM-Main table. There usually were a few people sitting around the table, but when I asked them if they belonged to IBM-Main, most of them said they never heard of IBM-Main. I was hoping to meet a few people at the table, which I did - just not IBM-Main people. I did meet a number of you throughout the conference, but not as many as I thought I might.

Why don't people go to the IBM-Main table?


i went by the ibm-main table numerous times ... but i never saw/heard anybody that seemed to be discussing anything related to ibm-main??

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

arrogance metrics (Benoits) was: general networking

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: arrogance metrics (Benoits) was: general networking
Newsgroups: alt.folklore.computers
Date: Wed, 28 Aug 2002 00:22:00 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
But I was hoping that you'd search your vast database for micro-Benoits of measurements.

sorry didn't find any micro-benoits (try watson) ... did find

https://web.archive.org/web/20020528181839/http://www.212.net/business/jargonn.htm
notwork - n. VNET (q.v.), when failing to deliver. Heavily used in 1988, when VNET was converted from the old but trusty RSCS software to the new strategic solution. To be fair, this did result in a sleeker, faster VNET in the end, but at a considerable cost in material and in human terms. nyetwork, slugnet

and then from some other source ...
[...from The Sayings of Chairman Peter]

Very few EDP people perform; in part because they are arrogant, in part because they are ignorant, and in part because they are too enamored with their tool.

-- Peter Drucker

[Business Maxims:] Signs, real and imagined, which belong on the walls of the nation's offices:
1) Never Try to Teach a Pig to Sing; It Wastes Your Time and It Annoys the Pig.
2) Sometimes the Crowd IS Right.
3) Auditors Are the People Who Go in After the War Is Lost and Bayonet the Wounded.
4) To Err Is Human -- To Forgive Is Not Company Policy.


there is also a corollary to the above: They would forgive you for being wrong, but they were never going to forgive you for being right.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OT (sort-of) - Does it take math skills to do data processing ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT (sort-of) - Does it take math skills to do data processing ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 28 Aug 2002 00:34:34 GMT
gds@best.com.cuthere (Greg Skinner) writes:
OK, but I'm curious ... did you try to solve the problem a different way? Did you try substituting values, or plotting the equations in the x-y plane? Was it a multiple-choice test?

nope ... it was find x & y. i didn't know how to parse any of it ... what does "find x & y" mean? .... i didn't have any syntactic or semantic context for attaching any meaning. what does the character string "2x" mean ... 2x didn't mean anything.

A month later I had finished an algebra intro book from the bookmobile and could solve the problem ... but at the time it was gibberish. By the end of the summer, I had finished every algebra book that the bookmobile had.
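
for the record, the problem works out by simple elimination:

  # 2x + y = 5  and  x + y = 3
  # subtracting the second equation from the first leaves x = 2, so y = 3 - x = 1
  x = 2
  y = 3 - x
  assert 2*x + y == 5 and x + y == 3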

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OT (sort-of) - Does it take math skills to do data processing ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT (sort-of) - Does it take math skills to do data processing ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 28 Aug 2002 01:22:07 GMT
gds@best.com.cuthere (Greg Skinner) writes:
OK, but I'm curious ... did you try to solve the problem a different way? Did you try substituting values, or plotting the equations in the x-y plane? Was it a multiple-choice test?

One thing I'm trying to get at is whether this sort of thing (being able to recognize how to solve a problem) has some bearing on what kind of career path/success someone will have in the computer field. For example, when I was a grad student, I was having problems in


i don't think i'm any sort of example. much later (the algebra thing occurred in 5th grade) there was a stanford PhD covering 9 months of my life. a researcher sat in the back of my office, went with me to meetings, etc., and took notes on how i communicated ... they also had complete copies of all my from/to email and complete logs of all my instant messages. the phd thesis was sort of in the area of computer-mediated communication ... but at the time it was a joint phd between language and computer ai (it was also the basis of a couple subsequent books in the area).

one of the statements of the researcher was that i used english as if it were a foreign language (i.e. as if i were a non-native speaker) ... even tho i was born & raised in the usa ... and only had the smattering of non-english language courses that you typically get in school. supposedly how i thot about problems was not readily recognizable. random cmc stanford phd musings:
https://www.garlic.com/~lynn/2001j.html#29 Title Inflation
https://www.garlic.com/~lynn/2001k.html#64 Programming in School (was: Re: Common uses...)
https://www.garlic.com/~lynn/2002b.html#51 "Have to make your bones" mentality
https://www.garlic.com/~lynn/2002c.html#27 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002g.html#64 Pardon my ignorance,

My first programming course was a simple fortran intro ... spring of my soph year. They then gave me a summer job (it was better than the dish washing job that i had during the school year) that involved designing & implementing my own machine-language monitor. I was given the whole machine room from 8am sat. until 8am monday morning (basically a couple mainframes for use as my personal computers ... 48hrs straight w/o sleep and frequently another 8-10 or so during the day on monday). after a couple weeks I was reading the binary punched holes in the cards. I would identify a problem and frequently fan the binary card deck to the corresponding card and change the punched holes (actually dup the card on a 026 keypunch for the non-changed columns and then "multi-punch" the fix in the appropriate cols. of the new card) as a patch/fix.

for the resource manager ... there were more than a few people who asserted that my inability to explain the workings in terms that they could understand was a failing on my part ... not theirs (aka i invented stuff that i had no english description for). recent resource manager postings ...
https://www.garlic.com/~lynn/2002k.html#13 Difference between Unix and Linux?
https://www.garlic.com/~lynn/2002k.html#40 hung/zombie users
https://www.garlic.com/~lynn/2002k.html#46 OT (sort-of) - Does it take math skills to do data processing ?

there was also the case of some kernel code that i originally dashed off as an undergraduate in the area of page replacement algorithms, which got incorporated into the ibm mainframe operating system. nearly 15 years later there was a stanford phd on the subject ... somewhat re-inventing it.
https://www.garlic.com/~lynn/subtopic.html#wsclock
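
purely as illustration (this is my own toy sketch, in python ... not the actual cp kernel code, and not the thesis code), the clock-style reference-bit scan that this family of global-LRU-approximation replacement algorithms is built around looks something like:

# toy sketch -- not the actual cp kernel code: a "clock" pass over page
# frames using hardware reference bits.  a frame whose reference bit is
# set gets a second chance (bit cleared, hand moves on); the first frame
# found with the bit clear is selected for replacement.

class Frame:
    def __init__(self, page):
        self.page = page          # which virtual page occupies this frame
        self.referenced = False   # stand-in for the hardware reference bit

def clock_select(frames, hand):
    """return (victim_index, new_hand) for the next frame to steal."""
    n = len(frames)
    while True:
        frame = frames[hand]
        if frame.referenced:
            frame.referenced = False        # second chance
            hand = (hand + 1) % n
        else:
            return hand, (hand + 1) % n     # reuse this frame

# usage: frames 0 and 1 recently referenced, so frame 2 gets stolen
frames = [Frame(p) for p in ("A", "B", "C", "D")]
frames[0].referenced = True
frames[1].referenced = True
victim, hand = clock_select(frames, 0)
print("replace frame", victim, "holding page", frames[victim].page)

the variations mostly differ in what extra test (working-set age, etc.) gets applied before actually stealing a frame whose bit is clear.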

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

History of AOL

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of AOL
Newsgroups: alt.folklore.computers
Date: Wed, 28 Aug 2002 14:20:55 GMT
jmfbahciv writes:
Most computing services did this even when the only way to input anything was via cards. The 360 at U. of Mich. charged less to non-U.of M. people after hours. So my boss would have the guy who needed the computing power of a 360 (vs. a 1620) drive all the way to Ann Arbor so that he'd get there at night. He'd hand the deck to the operator; the computer went <zip> <burb> and the output was handed back. This was for a job that required more than 24 hours of dedicated 1620 time. When the guy came back, he was very humbled. He thought he had a huge number cruncher. :-)

note also that on ibm mainframes the lease costs could be for first-shift-only operation, two-shift operation, or three/four-shift operation, and the incremental delta for three/four-shift operation was small (ten percent of first shift sticks in my mind). note that billing was based on the "meter" running only when something was going on.

two things credited with turning some of the early time-sharing services (in-house or commercial) into 24x7 operation (csc started leaving the machine up round the clock) were:
1) the prepare command
2) unattended operation


the meter ran when the processor was executing and/or doing i/o transfer. the terminal control unit was normally considered in the middle of doing something if there was an "active" operation on the controller ... whether or not actual keystrokes were happening at the moment. The prepare CCW command put the terminal controller in a state that effectively didn't keep the cpu meter running but was still ready to accept keystrokes.

with unattended operation (no off-shift operator) and the prepare command ... it was easy to leave the machine up 7x24 and have very small off-shift bills ... the meter only running when somebody was actually doing something. If nobody was on ... or on but not doing anything ... the meter didn't run.
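
to make the billing effect concrete, a toy model (my own illustration in python ... not the actual 360 metering hardware or rates): the meter accumulates time for intervals where the cpu is busy or there is an active controller operation; with the prepare command, sitting at a terminal read waiting for keystrokes doesn't count.

# toy model -- not actual 360 metering: the usage meter accumulates time
# only while the cpu is executing or an i/o operation is "active" on the
# controller.  with the prepare command, waiting for terminal input no
# longer counts as an active controller operation.

def metered_minutes(intervals, using_prepare):
    """intervals: list of (minutes, cpu_busy, waiting_for_terminal_input)."""
    total = 0
    for minutes, cpu_busy, waiting_for_input in intervals:
        controller_active = waiting_for_input and not using_prepare
        if cpu_busy or controller_active:
            total += minutes
    return total

# an off-shift hour: 5 minutes of real work, 55 minutes sitting at a read
off_shift_hour = [(5, True, False), (55, False, True)]
print(metered_minutes(off_shift_hour, using_prepare=False))   # 60 -- whole hour on the meter
print(metered_minutes(off_shift_hour, using_prepare=True))    # 5  -- only the actual work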

random refs:
https://www.garlic.com/~lynn/99.html#86 1401 Wordmark?
https://www.garlic.com/~lynn/99.html#107 Computer History
https://www.garlic.com/~lynn/2001b.html#27 HELP
https://www.garlic.com/~lynn/2002b.html#24 Infiniband's impact was Re: Intel's 64-bit strategy

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

History of The Well was AOL

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of The Well was AOL
Newsgroups: alt.folklore.computers
Date: Wed, 28 Aug 2002 19:05:35 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Former? I believe The Well is still there and functioning. I still send email to at least a dozen people there. Mandel might have passed on, but The Well goes on.

they also have a web site .... and i periodically get hits on my garlic pages from some web page hosted at the well.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OT (sort-of) - Does it take math skills to do data processing ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT (sort-of) - Does it take math skills to do data processing ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 28 Aug 2002 19:57:54 GMT
gds@best.com.cuthere (Greg Skinner) writes:
OK, but I'm curious ... did you try to solve the problem a different way? Did you try substituting values, or plotting the equations in the x-y plane? Was it a multiple-choice test?

and since you asked ... here is a story about a somewhat more complex way of solving a problem.

from the resource manager ... and a short lead-in ... most of the dynamic adaptive stuff was in xxxSTP ... the name taken from the '60s tv commercial that had the line "the racer's edge".

the longer story is slightly more complicated. so when i was an undergraduate i invented, implemented and deployed this stuff for the page replacement algorithm, "my form" of working-set scheduling control, fair share scheduling, dynamic adaptive resource management, a bunch of fastpath and other misc. stuff. some amount was picked up as part of the official ibm product. i was also one of four people who did the first 360 pcm controller.
https://www.garlic.com/~lynn/submain.html#360pcm
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

i then went to work at the cambridge science center. later, for various and sundry reasons, some of the changes were dropped in the 360->370 port. so over a period of a couple years there was a body of 370 code that wasn't part of the product. one version escaped into at&t longlines and apparently propagated around the organization, because ten years later some branch office guy tracked me down: they were running the same kernel ... but on the most recent 370s ... and the next ibm product was XA-only, which longlines wasn't going to buy because this ten-year-old kernel wouldn't run on it.

csc did a lot of the early work in workload profiling, performance modeling and stuff that eventually matured into capacity planning. part of it was that they had nearly 10 years of data, at five-minute intervals, from the system running at csc. they also had similar data, for shorter periods of time, from possibly hundreds of internal installations. One of the things was to take the millions of data points and plot consumption/activity of the major system resources on independent axes ... filling out an operational workload envelope. One of the opportunities then was being able to define a synthetic workload ... that could be configured to match the operating characteristics of any point in this operational workload envelope. It was also possible to configure the synthetic workload to operate well outside the operational envelope as a stress-test methodology. some of this resulted in my interest in boyd's work:
https://www.garlic.com/~lynn/subboyd.html#boyd
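
a hand-wavy sketch of the envelope idea (my illustration in python, with made-up resource axes and numbers ... not the csc tooling): reduce the samples to observed ranges on each resource axis, then describe a synthetic-workload target as a point on, inside, or deliberately outside that envelope.

# illustration only -- made-up numbers, not the csc tools: build an
# "operational envelope" as the observed range on each resource axis
# from historical samples, then pick a synthetic-workload target as a
# point relative to that envelope (scale > 1.0 is outside it).

samples = [   # hypothetical five-minute samples: (cpu_util, paging_rate, dasd_io_rate)
    (0.35, 20, 110),
    (0.60, 45, 190),
    (0.82, 70, 240),
]

def envelope(samples):
    """per-axis (min, max) over all observed samples."""
    return [(min(axis), max(axis)) for axis in zip(*samples)]

def synthetic_target(env, scale):
    """scale=1.0 sits on the envelope's upper edge; scale>1.0 is a stress test."""
    return [lo + scale * (hi - lo) for lo, hi in env]

env = envelope(samples)
print(synthetic_target(env, 1.0))   # on the observed edge of the envelope
print(synthetic_target(env, 1.5))   # well outside it -- stress-test configuration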

So during this period ... share and customers started lobbying the ibm company for the release of the "wheeler" scheduler ... which eventually happened. The code that shipped in the initial resource manager had the rewrite of the serialization mechanism to eliminate a lot of failure modes that could be reliably reproduced with the synthetic workload generator. It also contained a lot of code for multiprocessor support (in the next release all the SMP and serialization rewrite disappeared into the base, free SCP kernel code ... cutting the number of modules hit by about 90 percent and the actual lines of code by possibly 2/3rds).
https://www.garlic.com/~lynn/subtopic.html#smp
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

Now while there were a lot of references to fair share ... it was just one of the available administrative policies (the default) for resource allocation. The actual resource manager had extensive dynamic adaptation to configuration and workload characteristics.

So ... now the setting for the long humor story. The business people said that before the resource manager could go out it had to be updated to be as modern as the MVS SRM. This SRM had a big table of tunable values ... and obviously any modern operating system needed hundreds of tunable values for system tuners to fiddle with. WTSC and other organizations had a great cottage industry going, doing methodical fiddling of the MVS SRM tuning parameters and generating elaborate reports about the fiddling. So i'm sitting here listening to this request to disable all the elaborate automatic dynamic adaptive stuff and return to the deep dark ages that the MVS SRM was in.

Now the obvious thing would have been to define a whole bunch of parameters and not do anything with them. Turns out that would have been too simple.

I had already observed that very few kernel hackers were familiar with anything from operations research, control algorithms (the kind you might find in an oil refinery), etc. So the idea was to implement some number of manual tuning parameters in a new module appropriately named xxxSRM and actually use all the tuning parameters in real live code. So there is detailed documentation and formulas and all the source code for how all of this works. The one thing that i sort of glossed over was this OR-thing referred to as degrees of freedom. Now if you were me at this point and were going to institute degrees of freedom .... which would you give greater "freedom" to:
1) the dynamic adaptive control parameters
or
2) the manual tuning control parameters


remember i said it would have been too simple to have just ignored the manual tuning parameters.
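
for anybody who didn't catch it, a toy illustration of the punchline (my own fabrication in python ... nothing like the shipped code): the manual knob really is used in the formula, it just has a narrow range, while the dynamic adaptive term has a wide range and is recomputed from feedback every interval ... so it can compensate for whatever the tuner dials in.

# toy illustration of the degrees-of-freedom joke -- nothing like the
# shipped resource-manager code.  the manual knob is genuinely part of
# the formula, but the dynamic adaptive term has more "freedom": a wider
# range, recomputed from feedback, so it cancels whatever the knob does.

TARGET = 100.0                       # desired value of some controlled quantity

def dynamic_term(measured, knob):
    correction = TARGET - measured                   # feedback, recomputed each interval
    return max(-50.0, min(50.0, correction - knob))  # wide range: +/- 50

def control_output(measured, knob):
    knob = max(-5.0, min(5.0, knob))                 # manual knob: narrow range, +/- 5
    return measured + knob + dynamic_term(measured, knob)

for knob in (-5.0, 0.0, 5.0):
    print(knob, control_output(measured=80.0, knob=knob))
# every knob setting lands on the same result: 100.0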

So sometime 15 years after the resource manager had shipped, i'm riding up this elevator in the "tinker-toy" hk bank building and somebody in the back asks, are you the "wheeler" of the wheeler scheduler? So I say maybe. He says that he had recently graduated from xyz university and had studied the wheeler scheduler in computer science. Of course, I asked him if they had taught about the joke and degrees of freedom.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
