List of Archived Posts

2002 Newsgroup Postings (05/20 - 06/27)

Search for Joseph A. Fisher VLSI Publication (1981)
DISK PL/I Program
DISK PL/I Program
Future architecture
Future architecture
Coulda, Woulda, Shoudda moments?
Biometric authentication for intranet websites?
disk write caching (was: ibm icecube -- return of
Biometric authentication for intranet websites?
Biometric authentication for intranet websites?
Digital signature
Why did OSI fail compared with TCP-IP?
Why did OSI fail compared with TCP-IP?
Biometric authentication for intranet websites?
Why did OSI fail compared with TCP-IP?
Coulda, Woulda, Shoudda moments?
Why did OSI fail compared with TCP-IP?
disk write caching (was: ibm icecube -- return of
Address Spaces
PowerPC Mainframe?
PowerPC Mainframe?
PowerPC Mainframe
Why did OSI fail compared with TCP-IP?
System/360 shortcuts
PowerPC Mainframe
Coulda, Woulda, Shoudda moments?
Future architecture
Why are Mainframe Computers really still in use at all?
backup hard drive
Computers in Science Fiction
Multics hardware (was Re: "Soul of a New Machine" Computer?)
Computers in Science Fiction
Computers in Science Fiction
The multiverse is out there !
Computers in Science Fiction
Computers in Science Fiction
Computers in Science Fiction
Computers in Science Fiction
The 64-bit landscape in 2005
Oh, here's an interesting paper
[survey] Possestional Security
Biometric authentication for intranet websites?
Looking for Software/Documentation for an Opus 32032 Card
IBM doing anything for 50th Anniv?
50 years ago (1952)?
Future architecture [was Re: Future micro-architecture: ]
Future architecture
Why did OSI fail compared with TCP-IP?
Why did OSI fail compared with TCP-IP?
Coulda, Woulda, Shoudda moments?
crossreferenced program code listings
Why did OSI fail compared with TCP-IP?
Bettman Archive in Trouble
Bettman Archive in Trouble
Bettman Archive in Trouble
Future architecture [was Re: Future micro-architecture: ]
history of CMS
Future architecture [was Re: Future micro-architecture: ]
history of CMS
history of CMS
Java, C++ (was Re: Is HTML dead?)
Java, C++ (was Re: Is HTML dead?)
history of CMS
Sizing the application
history of CMS
Bettman Archive in Trouble
How does Mozilla 1.0 compare with Opera?
history of CMS
Are you really who you say you are?
history of CMS
history of CMS
history of CMS
history of CMS
Where did text file line ending characters begin?
Where did text file line ending characters begin?
time again
time again
time again
Q: Is there any interest for vintage Byte Magazines from 1983
Al Gore and the Internet
Al Gore and the Internet
Al Gore and the Internet
Al Gore and the Internet
Signing with smart card
history of CMS
Al Gore and the Internet
Al Gore and the Internet
Atomic operations redux
IBM 2540 Card Read Punch - documentation wanted
How secure is SSH ?

Search for Joseph A. Fisher VLSI Publication (1981)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Search for Joseph A. Fisher VLSI Publication (1981)
Newsgroups: comp.arch
Date: Mon, 20 May 2002 20:06:15 GMT
cyril.chevrot@asim.lip6.fr (Cyril Chevrot) writes:
And more generally, Is there anyone who knows a place on the WEB where I could find a publication search engine.

possibly the major (generalized) library (as opposed to NLM) search engine around the world ... since at least sometime in the 70s, has been Dialog ... originally a Lockheed subsidiary ... now "A Thomson Company" (i think it passed thru a number of hands after lockheed sold it off). I used to visit them periodically in the '80s ... they had one of the larger disk farms & cluster mainframe operations in silicon valley.

http://www.dialog.com/
This is Dialog.

Providing more than 12 terabytes of content from the world's most authoritative publishers, and the products and tools to search every bit of it with speed and precision. We are a company founded on the idea that information matters -- that it really can make a difference in the world -- or your corner of it.


================================

(libraries pay a subscription for it; looks like you can use a credit card).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

DISK PL/I Program

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DISK PL/I Program
Newsgroups: bit.listserv.ibm-main
Date: Mon, 20 May 2002 23:04:57 GMT
Mark.Thompson@MOTOROLA.COM (Mark Thompson) writes:
Looking for source code to an MVS program called DISK, circa 1984. It is written in PL/I and apparently used to break up a long MVS record into smaller chunks. The program accepts parameters like LOAD and DUMP and each record in the output file (on MVS) is prefixed by "CMS" in column 2 and a VM file name on the end. I think the resulting file is transmitted to a VM system and reassembled there by some programs called MVSGET/MVSPUT.

the format is the standard CMS DISK utility format (going back to CP/67 ... mid-60s). If the MVS side does it right ... the standard CMS DISK (load) utility should be able to process the file.

From CMS Logic Manual (60s):
DUMP: DISK copies the file designation from the parameter list into bytes 58-76 of an 80-byte buffer (The first four bytes of the buffer contain an identifier consisting of an internal representation of a 12-2-9 punch and the characters 'CMS'). Then DISK temporarily changes the characteristics of the file in the 40-byte FST entry to make it appear as a file of 800-byte fixed-length records (The correct FST entry is restored when the file has been dumped, of course). DISK moves the initial value for sequencing (0001) into bytes 77-80 of the buffer. DISK next calls the RDBUF function program to read the first 50 bytes of the temporary copy into bytes 6-55 of the buffer and then the CARDPH function program to punch the contents of the buffer. Having punched the first card, DISK increments the sequence number (bytes 77-80 of the output buffer) and overlays bytes 6-55 of the buffer with the next 50 bytes of the file by calling RDBUF. It then punches the contents of the buffer. DISK repeats this process for each subsequent 50 bytes of data in the temporary disk file. When the end-of-file is encountered, DISK generates an end card (one with N in column 5) and punches it, calls the CLOSIO command program to close punch operations, restores the FST entry to its correct value and returns to the caller.

.....


58-76:

58-65  8-byte filename
66-73  8-byte filetype
74-75  2-byte filemode
76     1 byte ... (blank?) doesn't say in the description, and it has been a long time since I looked at the code ... but presumably byte 5 is cleared to a blank. Also, some portion of the 40-byte FST should be on the "N" card ... in cols 6-55 ... aka ... date last written ... number of items ... F/V flag ... fixed length or variable max. length ... year
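Putting the quoted Logic Manual layout together, here is a minimal Python sketch (not the original PL/I; the test data uses ASCII placeholders rather than real EBCDIC, and all names are illustrative) of parsing one 80-byte DISK DUMP card:

```python
def parse_disk_card(card: bytes) -> dict:
    # columns in the manual are 1-based; Python indices are 0-based.
    # col 1 would hold the 12-2-9 punch identifier in a real card deck.
    assert len(card) == 80
    if card[1:4] != b"CMS":              # cols 2-4: 'CMS' identifier
        raise ValueError("not a DISK DUMP card")
    return {
        "end_card": card[4:5] == b"N",   # col 5: 'N' flags the end card
        "data":     card[5:55],          # cols 6-55: 50 bytes of file data
        "filename": card[57:65],         # cols 58-65: 8-byte filename
        "filetype": card[65:73],         # cols 66-73: 8-byte filetype
        "filemode": card[73:75],         # cols 74-75: 2-byte filemode
        "sequence": card[76:80],         # cols 77-80: card sequence number
    }
```

With 50 data bytes per card, a file reassembler would just concatenate the "data" fields of non-end cards in sequence order.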
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

DISK PL/I Program

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DISK PL/I Program
Newsgroups: bit.listserv.ibm-main
Date: Tue, 21 May 2002 15:11:36 GMT
Breton.Imhauser@ACXIOM.COM (Imhauser Breton - bimhau) writes:
I wrote one in assembler many moons ago, but would be hard pressed to find it. You might look at file 149 on the CBT. Keep in mind, that the newer DMSDDL ("NetData") format is supported by both CMS ("sendfile") and TSO/E ("xmit"), which is a bit more efficient (specifically run-length encoding).

"newer" as in late 70s rather than mid-60s ... after NJE was released for JES2 (significant issue for netdata format was jes2 compatibility) ... only 25 years old instead of almost 40 years old.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Future architecture

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Future architecture
Newsgroups: comp.arch
Date: Tue, 21 May 2002 14:59:29 GMT
cecchi@signa.rchland.ibm.com (Del Cecchi) writes:
My belief is that it will take non-procedural languages to allow widespread use of parallel computation. The EDA world is full of them at a high level. It would be pretty tough if I had to simulate a circuit by writing a fortran program. Rather I describe it in SPICE (which ends up in C or fortran or something) and let the computer take care of it. The fact that the Spice gets made into a sequential vs. a parallel type program is of no concern to me.

hspice had large grain parallelism in the early/original versions ... based on some of the comments, for some of the 80s mini-supers. that code atrophied and a friend of mine got the assignment several years ago to re-activate it ... testing up thru 8-way. tidying up the code so that the process/signal were again operational was an interesting task. at that level, I don't think the (procedural) implementation language made much difference ... but being able to take the specification and parallelize it did.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Future architecture

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Future architecture
Newsgroups: comp.arch,comp.sys.super
Date: Tue, 21 May 2002 18:40:16 GMT
David Gay writes:
In an attempt to be clearer than my other post: I'd like to see some evidence that people can effectively program by thinking concurrently (or whatever you're implying is opposite to "thinking sequentially").

I think a lot of people do that today already ... it is just dealing with problems that are not sequential in nature ... but let's say more holographic. the hspice scenario was that the domain semantics for the nets don't have a strictly sequential procedural requirement ... although it would be hard to discover that from the procedural language that loops on the arrays. the set of domain problems that can be expressed in net/graph semantics lend themselves somewhat better to parallelization than those that have been expressed strictly in sequential steps.

the converse is also true ... i once had to listen to a long diatribe about a new fighter plane heads-up display that was scrolling digital read-outs ... rather than more of an analog graphical display (aka the thesis was that the human brain is actually much less efficient with strictly sequential digital processing).

One might contend that the human brain has a problem thinking about concurrent sequential operations ... let's say in contrast to various modes of simultaneous parallel processing. possibly one of the banes of programming all these years is just that problem ... trying to translate relatively parallel thinking into a set of sequential steps; and then, once forced into that paradigm, trying to do yet another paradigm jump to concurrent sequential steps.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Coulda, Woulda, Shoudda moments?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Coulda, Woulda, Shoudda moments?
Newsgroups: alt.folklore.computers
Date: Tue, 21 May 2002 19:59:16 GMT
ehrice@his.com (Edward Rice) writes:
DARPA recognized where the bandwidth was going and tightened things up a LOT. People with legitimate university access could still play without restriction; people with legitimate contractor contacts could still play without restriction (unless you posit that everybody associated with MITRE was connecting up to the un-metered network only to do MITRE official business, a theory I've never heard proposed before); but the basic civilians on the street no longer had access. Unless someone slipped them a password to a TIP or IMP dial-up number.

there was a claim that the IMP-based route-finding administrative chatter was starting to use up a significant portion of the available bandwidth ... one of the reasons that the IMP-based stuff was unlikely to ever scale much past 200 nodes in any case.

the 1/1/83 change-over from the IMP-based network to "internet" changed all that (and the gateway stuff in the internet architecture started to open up networking ... one of the reasons I claim that the internet finally caught up with and passed the internal network size).

one of the other reasons that the internet took off was the availability of BSD tcp/ip support ... even though DARPA had told UCB (a number of times) that BSD specifically wasn't supposed to work on networking support.

in the recent osi thread there were comments that the 80s bandwidth was nickel and dime stuff ... it wasn't until the NSFNET1 backbone came along ... and the enormous resources put in by commercial companies (far in excess of what the gov. was paying for) ... that the issue of "acceptable use policies" started to rear its head (along with the facade that it was even a significantly gov.-funded activity).

if you went around all the booths at Interop '88 (aka the internet was making a major transition to significant commercial involvement) ... the majority of booths were demonstrating support/products that were tahoe 4.3 based (which the gov. had specifically forbidden to happen ... possibly because BBN had the IMP networking ... they believed BBN should have all of the internet networking activity; does anybody know of any significant BBN-derived "internet" support? ... as opposed to tahoe/reno).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Biometric authentication for intranet websites?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Biometric authentication for intranet websites?
Newsgroups: comp.security.misc,comp.security
Date: Wed, 22 May 2002 00:50:03 GMT
Todd Knarr writes:
They generally digitize the relevant information and send it to a server, which matches it against a database and tells your software the record the biometrics match. The software then decides whether the action is OK for the holder of that record.

There was a major biometrics project that started just recently, with a supermarket chain using fingerprint readers to augment or replace credit card readers at the checkout stands. Within a week of deployment an amateur found a way to fool the readers ( including the ones with temperature and resistivity sensors designed to distinguish between living human fingers and artificial material or dead fingers ) 80% of the time, using nothing more than common gelatin, some superglue and hobbyist PCB etching supplies. He even managed to apply it with guards watching for fakery. This is not a good sign. And worse: if your biometrics do end up being faked, how do you change your password ( the biometric data ) so that the thief can't gain access anymore?


note that the x9.84 biometrics standard, where transmittal of biometric matching data is involved, has a huge section on security of the sensor, security of the transmittal, security of the backend database, and security of the matching function.

I think there is a pilot kind of deployment in the houston area involving armor-plated ATM machines under video camera surveillance, a highly encrypted, secure private network, and financial institution backends that have external fips140 level-3 or level-4 boxes.

note that the above faking of fingerprints is in an environment where some portion of the population writes their PIN on their card. The issue is that both PINs and the owner's fingerprints are left lying around on the card. The question is: is it easier to read a PIN written on the card and type it into a PIN pad fraudulently, or is it easier to do the above process on a fingerprint left on the card and enter it fraudulently on a fingerprint sensor? Say, within a 4hr window of time, can I find a lost card, read a PIN written on the card, take it to an ATM machine, stick the card in and fraudulently enter the PIN ... or is it simpler/faster to retrieve fingerprints from the card and do the above process, entering a fingerprint fraudulently (instead of entering a PIN fraudulently)?

note that whether it is the PIN scenario with a lost/stolen card or the fingerprint scenario with a lost/stolen card ... the card gets reported lost/stolen and is invalidated and a new card is issued. while you may not have much opportunity to change the fingerprint associated with the new card ... you can write a different (or the same) PIN on the new card; either way the thief would still have to re-steal the new card. I'm still somewhat amazed that we don't have demos of somebody stealing a card and showing how hard it is to fraudulently enter a PIN that has been written on the (stolen) card.

now for the population that claims they would never, ever write their PIN on the card ... and really aren't comfortable with anything less than random 8-digit PINs ... let them have the option of 8-digit PINs.

all of these considerations change significantly if you happen to reduce it to purely electronic (random pc, connecting randomly to some random ISP, going to some random website); digital eavesdropping on the whole transaction and then fraudulently replaying the same bits doesn't require any of the above physical stuff (whether it is the PIN bits or the fingerprint bits).

one of the issues is that where there is a requirement for some physical presence you can somewhat be sure that it is at least 2-factor authentication ... aka at least something you have (a physical card) and either something you know (i.e. PIN) or something you are (i.e. biometric). going to the purely electronic environment (aka typical web surfing) reduces to just a bunch of bits (you can no longer prove the physical card). Migrating to the internet electronic environment starts to need tamper-evident chipcards with correct operation influenced by PIN &/or biometric entry. None of these are absolutely 100% bullet-proof ... the issue is the difficulty level vis-a-vis the reward.

misc. recent random threads:
https://www.garlic.com/~lynn/2002f.html#10 Least folklorish period in computing (was Re: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002f.html#22 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002f.html#28 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002f.html#32 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002f.html#35 Security and e-commerce
https://www.garlic.com/~lynn/2002f.html#45 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002g.html#56 Siemens ID Device SDK (fingerprint biometrics) ???
https://www.garlic.com/~lynn/2002g.html#72 Biometrics not yet good enough?
https://www.garlic.com/~lynn/aadsm10.htm#tamper Limitations of limitations on RE/tampering (was: Re: biometrics)
https://www.garlic.com/~lynn/aadsm10.htm#biometrics biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio1 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio2 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio3 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#bio4 Fingerprints (was: Re: biometrics)
https://www.garlic.com/~lynn/aadsm10.htm#bio5 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio6 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio7 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio8 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/aadsm11.htm#13 Words, Books, and Key Usage
https://www.garlic.com/~lynn/aepay10.htm#5 I-P: WHY I LOVE BIOMETRICS BY DOROTHY E. DENNING
https://www.garlic.com/~lynn/aepay10.htm#8 FSTC to Validate WAP 1.2.1 Specification for Mobile Commerce
https://www.garlic.com/~lynn/aepay10.htm#15 META Report: Smart Moves With Smart Cards
https://www.garlic.com/~lynn/aepay10.htm#20 Security Proportional to Risk (was: IBM Mainframe at home)

misc. other fraud threads:
https://www.garlic.com/~lynn/subintegrity.html#fraud

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

disk write caching (was: ibm icecube -- return of

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: disk write caching (was: ibm icecube -- return of
 watercooling?)
Newsgroups: comp.arch,comp.arch.storage
Date: Wed, 22 May 2002 15:23:29 GMT
"Bill Todd" writes:
Can't it just read backward from EOF, using the LF delimiters which should be present in most files 'tail' is used on?

I believe it uses direct byte addressing and then (internally) simulates records using line delimiters.

zippy also does this ... attempting to extract a random item (collections from the zippy file). since the filesystem doesn't directly know about items ... zippy uses a random byte location and then moves forward to the start of the item ... this is only truly random if all items are the same length ... not being able to directly address items means that longer items are over-selected compared to shorter items (and there's a special case if the random location is in the middle of the last item).
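A rough Python sketch (not zippy's actual code) of the length bias: picking a random byte offset and snapping to an item boundary selects each item with probability proportional to its length, not uniformly.

```python
import random

def pick_by_byte(items):
    text = "\n".join(items)                 # "\n" stands in for the item separator
    pos = random.randrange(len(text))       # random byte location in the "file"
    # snap to the start of the item containing that byte (the bias is the
    # same whichever direction the real code snaps)
    start = text.rfind("\n", 0, pos) + 1
    end = text.find("\n", pos)
    return text[start:(end if end != -1 else len(text))]

random.seed(1)
items = ["x" * 90, "y" * 10]                # one long item, one short
hits = sum(pick_by_byte(items) == items[0] for _ in range(10_000))
# the 90-byte item comes up roughly 90% of the time instead of 50%
```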

now somebody did something similar for the 6670 separator sheet ... but using the ibm jargon file; however, by making each entry a variable-length record, the filesystem gives the total number of (variable-length) records; run random to come up with a pro-rated record number ... and then read that record. the 6670 was an early page printer (built on an ibm copier-3 base). The cover sheet was printed from the alternate paper tray (loaded with different colored paper). Name & file information only occupied the top couple lines of the page ... so there was plenty of room to print a random entry from the jargon file.
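A minimal sketch of the 6670 approach described above: when the filesystem exposes a count of (variable-length) records, drawing a record number directly gives every entry the same chance, whatever its length.

```python
import random

def pick_by_record(records):
    n = len(records)                        # record count from the filesystem
    return records[random.randrange(n)]     # uniform, independent of record length

random.seed(0)
entries = ["a" * 90, "b" * 10]              # a 9:1 length ratio
hits = sum(pick_by_record(entries) == entries[0] for _ in range(10_000))
# each entry is chosen about half the time despite the length difference
```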

long ago and far away i found there was also a "feature" in zippy/unix when attempting to run it against the ibm jargon file. zippy was using a 16-bit int for the random byte location calculation ... the zippy file was about 30kbytes ... while the jargon file is more like 300kbytes. by total coincidence, the issue of 16 bits (or actually 15 bits, since the sign bit gets truncated) for random numbers was just raised in sci.crypt and the cryptography mailing list with regard to the msoft VC6 library random number function.
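The arithmetic of that "feature", as a quick worked example (file sizes are the approximate ones given above):

```python
# a random value truncated to 16 bits loses the sign bit, leaving 15
# usable bits for the byte offset
MAX_OFFSET = 2**15 - 1                      # 32767, the largest reachable byte
zippy_size = 30_000                         # ~30KB zippy file
jargon_size = 300_000                       # ~300KB ibm jargon file

zippy_ok = MAX_OFFSET >= zippy_size - 1     # every zippy byte is reachable
reachable_fraction = (MAX_OFFSET + 1) / jargon_size
# only ~11% of the jargon file can ever be selected; entries past byte
# 32767 simply never appear
```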

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Biometric authentication for intranet websites?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Biometric authentication for intranet websites?
Newsgroups: comp.security.misc,comp.security
Date: Wed, 22 May 2002 15:53:43 GMT
Todd Knarr writes:
2-factor authentication would work, but you need to be really careful because you're depending _entirely_ on the non-biometric factor for your whole system's security. When you boil the probabilities for the fingerprint readers down, they become equivalent to a single-digit password, digits 1-5, where any 4 of the 5 digits are valid and only 1 would be considered a miss. Would you trust that half of a system to protect anything if the other half failed? I wouldn't. And aren't you down to depending on the single other factor then?

there is a statement/premise that there is a significant portion of the population that writes their PIN number on their card (and/or leaves their fingerprint on their card). The question raised is: if i found such a card in front of an ATM machine and wanted to use it fraudulently ... would it be easier to insert the card into the ATM machine and fraudulently enter the PIN number written on the card, or easier to lift the fingerprint off the card and fraudulently enter the fingerprint?

PINs represent some real-world challenges regardless of how many documents say that nobody is ever allowed to write their PIN number on their card.

Then there could be choice ... just offer people their choice with regard to PIN or fingerprint ... and for the population that would never write their PIN on their card ... and would only use a PIN that was a random 8-digit number ... let them have that choice.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Biometric authentication for intranet websites?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Biometric authentication for intranet websites?
Newsgroups: comp.security.misc,comp.security
Date: Wed, 22 May 2002 16:16:18 GMT
"Scott Bussinger" writes:
misc. recent random threads:

Thanks for the pointers, but in all that there wasn't any apparent references to real products. Do you have any links to some real products that work in this arena?

Thanks!


there is some discussion of possibility of biometrics in conjunction with the aads chip strawman
https://www.garlic.com/~lynn/x959.html#aads

a PIN version of the aads chip strawman had a booth at cardtech/securetech in new orleans a couple weeks ago.

there were a number of other booths at the show demonstrating biometric products ... see some of the exhibitors at:
http://www.ct-ctst.com/ctst2002/

infineon had an interesting demo in their booth. they've been working on a fingerprint sensor that meets iso 7816 card flexing standards ... i.e. the sensor is actually on the card. they showed an 8-inch chip wafer (i.e. the sensor is manufactured in the same way computer chips are manufactured, on silicon wafers) ... the wafer was held vertically with the top and bottom attached to two arms ... the two arms started with the wafer flat & vertical ... and then would pull the top & bottom points nearly together so that the wafer was bowed & nearly folded in half. The wafer was then straightened ... this flexing and straightening ran continuously in the booth.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Digital signature

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Digital signature
Newsgroups: comp.security.misc
Date: Wed, 22 May 2002 16:01:23 GMT
"Dodger" writes:
Do you think they'd reduce the bank charges if fraud stopped?

an alternative view is that debit already provides a different fee structure than credit. if the aads implementation used in the nacha pilot for internet atm/debit was widely deployed ... would you see a change in bank charges?

misc. aads refs and pointer to nacha ATM/debit aads trial
https://www.garlic.com/~lynn/x959.html#aads

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Why did OSI fail compared with TCP-IP?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why did OSI fail compared with TCP-IP?
Newsgroups: comp.arch,comp.protocols.iso,comp.protocols.tcp-ip,comp.lang.c++
Date: Wed, 22 May 2002 16:30:30 GMT
"Rudvar Alswill" writes:
This confirms my suspicions for many years that IBM understood very little about networking and communications.

SNA was terminal communications ... and did it very well ... managing huge numbers of terminals ... a medium size corporate installation might have 65,000 terminals connected to a mainframe.

The (IBM) internal corporate network had effectively the equivalent of gateways from the beginning ... one of the reasons that the internal network was larger than the arpanet/internet until approx. mid-85 (the arpanet/internet not getting internetworking and gateways until the 1/1/83 switch-over).

now it may be true that SNA didn't have networking ... but it had one of the most sophisticated communication structures for supporting terminals. on the other hand, the people responsible for the internal network (and the technology for bitnet) understood networking quite well ... having effectively supported gateways from the very beginning (circa 1970 ... versus arpanet/internet not getting it until 1/1/83).

people with SNA infrastructures could get service level agreements ... contractual commitments for operational characteristics with penalties for not meeting the commitments. when did internet start providing service level agreements?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Why did OSI fail compared with TCP-IP?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why did OSI fail compared with TCP-IP?
Newsgroups: comp.arch,comp.protocols.iso,comp.protocols.tcp-ip,comp.lang.c++
Date: Wed, 22 May 2002 18:49:40 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Well, as far as I recall, UK academia was already largely networked, and certainly the University of Cambridge was. Various serial line protocols, the Coloured Books and so on. Back in 1972, we regarded IBM (as seen through its products) as understanding very little about networking and communications - and we were right. Which doesn't mean that the revisionists are correct that the pre-TCP/IP cabal of the day understood much more about how to deliver solutions to real users, reliably and on a budget.

many of the large communication infrastructures were ibm mainframe ... things like airline reservation systems ... connecting to every reservation office in the world ... even in the '60s ... things as little as thousands of locations could be considered small ... getting into tens of thousands of connections in the '60s & '70s wasn't uncommon ... along with service level agreements, contractual commitments with penalty clauses. Then along came things like ATM machines in the '70s. For some infrastructures ... it isn't real communications unless you have SLAs ... otherwise it is just toys for playing.

note that this kind of infrastructure isn't a peer-to-peer operation ... but it surely is communication. my wife did a short stint in the SNA architecture group and got into trouble for trying to do peer-to-peer. She also authored the Peer-Coupled Shared Data architecture when she was in pok in charge of loosely-coupled (aka mainframe cluster) architecture (which took a while to emerge in products).

it is like some of the threads about interactive time-sharing ... and ibm not supporting interactive time-sharing. the counter argument was that IBM may have had one of the largest interactive time-sharing install bases ... but because 99 percent of the market was commercial batch ... when people thought of IBM they tended to think of commercial batch. The corollary is that there was also a large networking install base ... but because possibly 99 percent of the market was non-peer-to-peer terminal communication ... the dominant image was non-peer-to-peer terminal communication (and possibly the majority of the people/products that customers came into contact with were the ones specialized in non-peer-to-peer terminal communication).

misc. refs to APPN networking, SNA architecture review board presentation, etc:
https://www.garlic.com/~lynn/2000.html#53 APPC vs TCP/IP
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2001k.html#21 OT: almost lost LBJ tapes; Dictabelt

i worked on the first ibm telecommunication controller clone when i was an undergraduate in the '60s. that spawned the 360 pcm controller business. during the last 30-plus years the pcm controller industry wasn't exactly small potatoes (and the pcm controller manufacturers didn't even have a majority of the business).
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Biometric authentication for intranet websites?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Biometric authentication for intranet websites?
Newsgroups: comp.security.misc,comp.security
Date: Wed, 22 May 2002 19:08:58 GMT
"Scott Bussinger" writes:
Actually, it'd be much easier for a variety of reasons (there are bound to be some people that can never get the knack of using the fingerprint device for instance). The problem is that the users will get lazy and just leave the token inserted into the machine the entire time (and probably walk away from it periodically since these are generally shared-access machines).

The idea with using fingerprints is that they won't leave their fingers in the sensors all the time. Another possibility is using a cardswipe system and make them swipe the card each time. I'm just trying to figure out what will work best.


many of the current hardware tokens have "personalities" and/or at least their applications have personalities. financial transactions require PIN/biometric re-entry for every operation ... not only from the standpoint of authentication but from the standpoint that the re-entry of the PIN/biometric implies intent, agreement, approval, and/or authorization with respect to the specific operation.

Many access-card personalities just require that the PIN/biometric has been entered since the token was powered on (as opposed to every time).

In one of the standards meetings there was some consideration of designing hardware tokens for laptops ... such that it was not possible to leave the token connected when the laptop was closed. One of the suggested advantages of dongles on a keyring ... was that you would take the keyring with you when you closed the laptop ... also you wouldn't leave it in your office PC overnight. Other suggestions were that it serve dual-purpose as a door-badge to get out of the building ... again addressing the issue of leaving it plugged in when you left. Requiring that you use your dongle to visit the restroom wasn't an issue of wanting to know when you were visiting the restroom but of addressing the human nature problem of people leaving their hardware tokens connected when they got up for some reason.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Why did OSI fail compared with TCP-IP?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why did OSI fail compared with TCP-IP?
Newsgroups: comp.arch,comp.protocols.iso,comp.protocols.tcp-ip,comp.lang.c++
Date: Wed, 22 May 2002 21:28:07 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Yes, I know of its size, but it was unbelievably primitive! Perhaps 10 years behind ICL in terms of (software) technology, and even further behind the leaders. I present MVT+TSO as evidence, and doubt that I need to say more!

mvt was batch processing ... tso was crje under any other name (in fact in the late '60s i did a modified version of HASP that supported 2741s and teletypes with CMS editor syntax). that is not to say that there wasn't internal politics. cern did a tso/cms bake-off in '74 and published the results at share. Inside the company this public document was given a classification of IBM Confidential Restricted (available on a need to know basis only).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Coulda, Woulda, Shoudda moments?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Coulda, Woulda, Shoudda moments?
Newsgroups: alt.folklore.computers
Date: Thu, 23 May 2002 14:46:38 GMT
Dennis Ritchie writes:
I don't have enough connected material to write the full history, but offer these snippets:

at least one line from:
https://web.archive.org/web/20050418032606/http://www.be.daemonnews.org/199909/usenix-kirk.html

about Fabry "yes them to death" ... "much to the frustration of the DARPA advisery board"

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Why did OSI fail compared with TCP-IP?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why did OSI fail compared with TCP-IP?
Newsgroups: comp.arch
Date: Thu, 23 May 2002 15:00:31 GMT
Bernd Paysan writes:
IMHO the primary design mistake in TCP/IP is that it does not use a split control/data stream approach, but uses the header for all possible controls (which falls short, because the TCP designer only anticipated controls for a stream-oriented pipe). That makes the packets larger than necessary and the handling more complex, especially of higher-level protocols like FTP and HTTP, which have control messages and data channels, and either must try to use a single TCP socket for both (HTTP), or goes through hoops to create a second one for data transfers (FTP).

see RFC721 ... Out-of-Band Control Signals in a Host-to-Host Protocol and problem with TCP using in-band.

small extract at
https://www.garlic.com/~lynn/2000.html#72 Difference between NCP and TCP/IP protocols

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

disk write caching (was: ibm icecube -- return of

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: disk write caching (was: ibm icecube -- return of
watercooling?)
Newsgroups: comp.arch,comp.arch.storage
Date: Thu, 23 May 2002 14:51:57 GMT
Terje Mathisen writes:
I believe we could borrow the currently accepted term from the DB people:

BLOB, for Binary Large OBject


lorie (the "L" in GML; goldfarb was "G" and mosher was "M" ... GML being all done at CSC) did a lot of work on BLOBs after transferring from CSC to SJR ... but I don't think the blob work started until after the tech transfer of System/R from SJR to Endicott for SQL/DS ... possibly in the R* (r-star) timeframe ... mid 80s.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Address Spaces

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Address Spaces
Newsgroups: comp.lang.asm370
Date: Thu, 23 May 2002 16:49:28 GMT
"Sven Pran" writes:
Well, I distinctly remember when we replaced our four 2311 disk stations (each holding 7,5MB of data) with 2314.

Quadrupling the on-line storage capacity was an immense improvement!

(VSAM????? We had DTFSD, DTFIS and DTFDA)

Sven


and the whole vtoc design point was 2311s.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PowerPC Mainframe?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PowerPC Mainframe?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 23 May 2002 20:53:19 GMT
jcewing@ACM.ORG (Joel C. Ewing) writes:
Based on the obvious lack of familarity of the writers of that article with the mainframe world, I would expect a misinterpretation. The quote from the editor of Microprocessor Report (Krewell) that this "...would allow Big Blue to take advantage of more modern features ... the ability to carry out several operations simultaneously" obviously shows a total lack of familiarity with the parallelism that has been in mainframe processors for decades.

note that fort knox was going to have all the low & mid-range machines (including endicott machines, but not restricted to 370s) use 801 "micro-engines" as their base. fort knox got killed, in part because of the trade-off in pushing direct hardware implementations down into the midrange ... i.e. the next generation mid-range being 801 with a micro-code implementation of 370 ... or chips that had more of the 370 directly implemented in hardware. killing fort knox freed up some number of 801 engineers ... I believe some of whom started appearing in other companies working on RISC chip implementations. I believe that 801s were used in 3090 IOCP engines.

I think the next big push was getting 390 processor into similar cmos technology that was being used for risc.

there was some recent discussion of the pipelining problem for 360 over in comp.arch ng ... and pioneering efforts in this area with 360/91, 360/195, 370/195. The issue back then was branch instructions draining pipeline because of the lack of circuits to implement speculative execution. Now speculative execution past branches is fairly common.

an innovative thing done by 801 was the separation of I & D streams (harvard architecture). One of the cost items that has been carried from 360 days is the overhead of checking for self-modifying instruction streams (which doesn't need to be done in risc machines).

some part of fort knox was eventually sort-of achieved with the migration of as/400 to power/pc.

pieces of the thread:
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past

for the rest search on the above subject in comp.arch ng in google groups ... there were some other posts about cost/penalty of supporting self-modifying instruction streams on 370.

misc. fort knox references:
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers
https://www.garlic.com/~lynn/2001h.html#69 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002g.html#39 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past

past 801, risc, pl.8, CPr, etc posts
https://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
https://www.garlic.com/~lynn/94.html#47 Rethinking Virtual Memory
https://www.garlic.com/~lynn/95.html#5 Who started RISC? (was: 64 bit Linux?)
https://www.garlic.com/~lynn/95.html#6 801
https://www.garlic.com/~lynn/95.html#9 Cache and Memory Bandwidth (was Re: A Series Compilers)
https://www.garlic.com/~lynn/95.html#11 801 & power/pc
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/97.html#5 360/44 (was Re: IBM 1130 (was Re: IBM 7090--used for business or
https://www.garlic.com/~lynn/98.html#25 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#26 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#27 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#31 PowerPC MMU
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#129 High Performance PowerPC
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000.html#16 Computer of the century
https://www.garlic.com/~lynn/2000.html#49 IBM RT PC (was Re: What does AT stand for ?)
https://www.garlic.com/~lynn/2000.html#59 Multithreading underlies new development paradigm
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000c.html#3 RISC Reference?
https://www.garlic.com/~lynn/2000c.html#9 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000d.html#28 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000d.html#31 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000d.html#65 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001c.html#84 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#12 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001f.html#33 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers
https://www.garlic.com/~lynn/2001f.html#45 Golden Era of Compilers
https://www.garlic.com/~lynn/2001f.html#58 JFSes: are they really needed?
https://www.garlic.com/~lynn/2001g.html#23 IA64 Rocks My World
https://www.garlic.com/~lynn/2001h.html#69 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001k.html#7 hot chips and nuclear reactors
https://www.garlic.com/~lynn/2001k.html#23 more old RFCs
https://www.garlic.com/~lynn/2001l.html#50 What makes a mainframe?
https://www.garlic.com/~lynn/2001n.html#12 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#80 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001n.html#87 A new forum is up! Q: what means nntp
https://www.garlic.com/~lynn/2002b.html#29 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002c.html#40 using >=4GB of memory on a 32-bit processor
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002g.html#5 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#11 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#14 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#39 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#77 Pipelining in the past

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PowerPC Mainframe?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PowerPC Mainframe?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 23 May 2002 23:36:34 GMT
gah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
Is this because RISC architectures define not allowing self-modifying code? Presumably some way to tell the processor to flush whatever buffer exists, so that program loaders can be written.

harvard architectures didn't maintain consistency between I & D cache ... and the D-cache could be store-in, not even store-thru. So a loader implementation required both an operation to force altered D-cache lines to storage and an operation invalidating any corresponding i-cache lines.
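a minimal sketch of the two steps such a loader has to take on a split-cache machine, using the gcc/clang `__builtin___clear_cache` builtin as a stand-in for whatever platform-specific primitives a real loader would use (the buffer and function names here are mine, purely illustrative):

```c
#include <string.h>
#include <stddef.h>

/* a tiny "code" buffer stands in for a freshly loaded text segment */
static unsigned char text_seg[64];

/* copy freshly loaded instructions into place, then do the two things a
   split-cache (harvard) machine needs before executing them: force the
   altered D-cache lines out to storage and invalidate any matching
   i-cache lines.  gcc/clang wrap both halves in this one builtin, which
   is a no-op on machines whose caches are already coherent. */
void load_code(const unsigned char *code, size_t len)
{
    memcpy(text_seg, code, len);                     /* dirties D-cache */
    __builtin___clear_cache((char *)text_seg,
                            (char *)text_seg + len); /* flush + invalidate */
}
```

on a real machine the buffer would also have to sit in an executable mapping before jumping to it; that part is elided here.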

i think there was some post in comp.arch about somebody looking at 165 microcode who got a 25 percent performance improvement by separating code & data by 256 bytes (see nick's posting in "pipelining in the past" in comp.arch on 16may ... a couple before your posts on 17may).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PowerPC Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PowerPC Mainframe
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 24 May 2002 13:25:12 GMT
j.grout@COMPUTER.ORG (John R Grout) writes:
True... but, ever since the days of the 360 Model 91, hasn't the POP also stated that self-modifying programs must protect _themselves_ on processors that pre-fetch instructions and/or cache instructions and data by executing a "serialization" instruction (the general form being BCR 15,0) between the modification and execution of recently modified instructions?

Do processors do extra checking out of fear that some programs don't do this, or are some self-modification scenarios not covered by this requirement? [along these lines... back in 1997, Seymour Metz stated that scenarios where BCR 15,0 was required were "few and far between"].


I don't know, but searching ESA/390 POP just now ...
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CONTENTS?SHELF=

i get:
Thus, it is possible for an instruction to modify the next succeeding instruction in storage.

from 5.13.1 Conceptual Sequence ... see attached.

There are also words said about multiprocessing operations ... where one processor's modification of the instruction stream may or may not be (immediately) seen by another processor because of instruction pre-fetch.

The exceptions are the test&set & compare&swap stuff, which serialize the complex (not that you would use these instructions to modify the i-stream). I did some work with charlie during the invention of compare&swap. Two stories about compare&swap: the selection of the name was based on coming up with a mnemonic matching charlie's initials, aka CAS; that got changed by Ron Smith to support the forms CS & CDS. The other thing that Ron Smith insisted on (Padegs & Ron Smith owned the 370 "red book" ... this was a document written in CMS script with conditionals; printed one way it resulted in the 370 POP w/o all the 370 architecture manual sections ... printed the other way, it was the full 370 architecture manual ... which was distributed in a red 3-ring binder) was the compare&swap programming notes ... aka the claim was that there wasn't sufficient justification for an MP-specific new instruction, so the group had to come up with a set of specifications showing the use of compare&swap applicable to a single processor environment ... thus was born all the stuff about atomic updating of link-lists and other stuff for interruptable (possibly application-space) multi-threaded operation on a single processor.
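the single-processor justification in those programming notes survives as the classic compare&swap linked-list push. a sketch of the pattern in C11 atomics rather than 370 assembler (the structure and function names are mine):

```c
#include <stdatomic.h>
#include <stddef.h>

struct node {
    struct node *next;
    int value;
};

static _Atomic(struct node *) list_head = NULL;

/* push a node with compare&swap instead of disabling interrupts or
   taking a lock: link to the head we last saw, then retry whenever
   another thread (or, in the single-processor case, an interrupting
   one) changed the head first -- the CS programming-notes pattern */
void cas_push(struct node *n)
{
    struct node *old = atomic_load(&list_head);
    do {
        n->next = old;   /* old is refreshed by a failed CAS */
    } while (!atomic_compare_exchange_weak(&list_head, &old, n));
}
```

the weak form may fail spuriously on some hardware, which is harmless here because the push is already a retry loop.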

The original posting referenced some other postings in the thread made in comp.arch ... one claiming that after investigation of the 165 microcode ... there was a recommendation to create a 256-byte pad between instructions & data, which resulted in a 25 percent performance increase (aka a quick, inexpensive check whether the current store operand address is within 256 bytes of the current instruction address ... with the more expensive checking done only if it was).
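the cheap first-pass filter described above might look something like this (a sketch of the idea in C, not the actual 165 microcode):

```c
#include <stdint.h>
#include <stdbool.h>

/* cheap first-pass filter: is this store anywhere near the current
   instruction address?  only when it is does the machine fall into
   the expensive exact check for a modified, already-prefetched
   instruction; stores far from the i-stream skip that cost. */
bool store_near_istream(uint64_t store_addr, uint64_t inst_addr)
{
    uint64_t lo = inst_addr >= 256 ? inst_addr - 256 : 0;
    return store_addr >= lo && store_addr <= inst_addr + 256;
}
```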

==============================================================

ESA/390 POP
5.13.1 Conceptual Sequence

In the real mode, primary-space mode, or secondary-space mode, the CPU conceptually processes instructions one at a time, with the execution of one instruction preceding the execution of the following instruction. The execution of the instruction designated by a successful branch follows the execution of the branch. Similarly, an interruption takes place between instructions or, for interruptible instructions, between units of operation of such instructions.

The sequence of events implied by the processing just described is sometimes called the conceptual sequence.

Each operation of instruction execution appears to the program itself to be performed sequentially, with the current instruction being fetched after the preceding operation is completed and before the execution of the current operation is begun. This appearance is maintained even though the storage- implementation characteristics and overlap of instruction execution with storage accessing may cause actual processing to be different. The results generated are those that would have been obtained had the operations been performed in the conceptual sequence. Thus, it is possible for an instruction to modify the next succeeding instruction in storage.

Operations in the access-register mode or home-space mode are the same as in the other translation modes, with one exception: an instruction that is a store-type operand of a preceding instruction may appear to be fetched before the store occurs. Thus, it is not assured that an instruction can modify the succeeding instructions. This exception applies if either the storing instruction or the instruction stored is executed in the access-register or home-space mode.

Regardless of the translation mode, there are two other cases in which the copies of prefetched instructions are not necessarily discarded: (1) when the fetch and the store are done by means of different effective addresses that map to the same real address, and (2) when the store is caused by the execution of a vector-facility instruction. The case involving different effective addresses is described in more detail in "Interlocks for Virtual-Storage References" in topic 5.13.4.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Why did OSI fail compared with TCP-IP?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why did OSI fail compared with TCP-IP?
Newsgroups: comp.arch,comp.protocols.iso,comp.protocols.tcp-ip,comp.lang.c++
Date: Fri, 24 May 2002 13:44:23 GMT
Brian Inglis writes:
There was the party line, as stated above, which was rigidly enforced, and then there was the reality, which had users on (skunk) VM/CMS systems doing a lot of interactive and networking stuff, like HONE and VNET internally.

and VNET had the equivalent of a gateway in every node. The MVS networking in JES2 was more traditional ... being a traditional homogeneous design ... more like the original arpanet NCP-based networking (both vnet & jes2 were host-to-host designs like arpanet/ncp ... but w/o the IMP FEPs). however this resulted in a large number of complications trying to integrate MVS/JES2 systems into the internal network ... which tended to be at end-nodes only. One of the early experiences was that the homogeneity of JES2 was so ingrained that a message from a JES2 at one release level with slight header changes would cause a JES2 at another release level to crash (which in turn brought down the whole MVS system). There is the case of a Hursley MVS/JES2 system sending messages to a San Jose MVS/JES2 system and causing it to crash. As a result, not only were MVS/JES2 systems relegated to end-nodes, but the (intermediate-node) VNET node that directly connected an MVS/JES2 ... not only had gateway code for doing JES2 header rewrites ... but also made sure that the JES2 header rewrite code matched the version of the connected MVS/JES2. There were even cases where a simple two-node MVS/JES2 network would have an intermediate VNET node between them that would supply the necessary JES2 header rewrites when necessary (i.e. when one JES2 system was upgraded but not synchronized with the other JES2 system's upgrade).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

System/360 shortcuts

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360 shortcuts
Newsgroups: alt.folklore.computers
Date: Fri, 24 May 2002 13:58:40 GMT
"George R. Gonzalez" writes:
I vaguely recall some glossy IBM document of long ago that explained how some advanced 360 model did some on-the-fly loop optimization. O'course this was decades ago, before $39 Pentiums did all kinds of shortcuts.

Does anybody recall exactly what the 360's could do, and did it really help in a typical program?

(I know, I'm suggesting that a glossy IBM document would be telling a tad less than the whole truth-- must be my suspicious alter-ego).


see "Pipelining in the past" thread in comp.arch along with the recently posted reference in the "PowerPC Mainframe" thread cross-posted to this n.g. from the ibm-main newsgroup. basically 91 & 195 ... including imprecise interrupts because of concurrent execution of multiple instructions (standard 360s had precise interrupts).

The issue was a 63 instruction pipeline/cache where a branch instruction would stall/drain the pipeline ... unless the branch was to an instruction already in the pipeline.

misc. past posts:
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/94.html#39 IBM 370/195
https://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
https://www.garlic.com/~lynn/95.html#11 801 & power/pc
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/2000.html#72 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/2000b.html#1 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#15 How many Megaflops and when?
https://www.garlic.com/~lynn/2000c.html#23 optimal cpu : mem <-> 9:2 ?
https://www.garlic.com/~lynn/2000d.html#2 IBM's "ASCI White" and "Big Blue" architecture?
https://www.garlic.com/~lynn/2000f.html#21 OT?
https://www.garlic.com/~lynn/2000f.html#27 OT?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001.html#0 First video terminal?
https://www.garlic.com/~lynn/2001.html#39 Life as a programmer--1960, 1965?
https://www.garlic.com/~lynn/2001.html#63 Are the L1 and L2 caches flushed on a page fault ?
https://www.garlic.com/~lynn/2001b.html#11 Review of the Intel C/C++ compiler for Windows
https://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001b.html#42 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001c.html#27 Massive windows waisting time (was Re: StarOffice for free)
https://www.garlic.com/~lynn/2001h.html#22 Intel's new GBE card?
https://www.garlic.com/~lynn/2001h.html#49 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001l.html#34 Processor Modes
https://www.garlic.com/~lynn/2001l.html#42 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2001n.html#2 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2001n.html#41 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2001n.html#86 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002.html#22 index searching
https://www.garlic.com/~lynn/2002.html#50 Microcode?
https://www.garlic.com/~lynn/2002.html#52 Microcode?
https://www.garlic.com/~lynn/2002d.html#39 PKI Implementation
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002h.html#19 PowerPC Mainframe?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PowerPC Mainframe

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PowerPC Mainframe
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 24 May 2002 14:42:32 GMT
Anne & Lynn Wheeler writes:
an MP-specific new instruction so the group had to come up with a set of specifications showing the use of compare&swap applicable to single processor environment ... thus was born the whole stuff about atomic updating of link-lists and other stuff for interruptable (possibly application-space) multi-threaded operation in single processor operation.

random refs:
https://www.garlic.com/~lynn/subtopic.html#smp Multiprocessor, tightly-coupled, SMP & compare&swap

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Coulda, Woulda, Shoudda moments?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Coulda, Woulda, Shoudda moments?
Newsgroups: alt.folklore.computers
Date: Fri, 24 May 2002 17:38:48 GMT
spam+@orion-com.com (Joe Thompson) writes:
From what I remember from the early 90s, Excel was actually a latecomer to the MS stable, being actually (in its first version) a spreadsheet called Wingz that MS bought and slapped an MS logo on. There was the spreadsheet in MS-Works but I don't know which derived what from where or if they were two entirely separate projects at the time.

If my memory serves, Access was also a product bought and renamed and then released essentially unchanged as Access. -- Joe


I remember lotus & multiplan on dos/windows ... and wingz on unix platforms ... i don't believe wingz became excel.

doing some quick search engine queries:
http://www.wingz-us.com/wingz/news/linux.html
http://www.faqs.org/faqs/spreadsheets/faq/

I didn't find any refs to the derivation of excel ... but

the following site lists Oct87 for excel
https://web.archive.org/web/20030621083742/http://www.tcs.uni.wroc.pl/~jja/ASK/HISZCOMP.HTM

misc. other stuff
http://www.bricklin.com/firstspreadsheetquestion.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Future architecture

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Future architecture
Newsgroups: comp.arch,comp.sys.super
Date: Fri, 24 May 2002 18:32:48 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
One example of that is in serious signal handling and error recovery and another is in serious asynchronous I/O (in both cases, ignore the current jokes). Forcing the parallel operations into a serial model can make the communication problems MUCH worse!

there is some story about some group at CMU attempting to do an SNA LU6.2 implementation on top of unix TCP/IP ... and it supposedly turned out to be extraordinarily difficult ... partially because LU6.2 is half-duplex (serialized).

there is also a story of a mainframe cluster project in the early days of 3088/trotter with eight concurrent mainframes ... where the synchronization primitive started out using multicast ... and achieved consistency in subsecond time. they were then asked to remap to LU6.2 and it took upwards of a minute to achieve the same state consistency.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Why are Mainframe Computers really still in use at all?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why are Mainframe Computers really still in use at all?
Newsgroups: alt.folklore.computers
Date: Sat, 25 May 2002 18:57:21 GMT
"Rupert Pigott" <dark.try-eating-this.b00ng@btinternet.com> writes:
Depends on the economist/pundit/spin doctor you're listening to. Fact of the matter is the finance institutions smelt trouble 4 years ago. I know, I was working inside one. :)

there was the hedge fund crisis ... partially because a lot of the hedge stuff is based on assumptions of continuity ... and real life can have a lot of discontinuity and/or chaos.

misc. hedge fund posts.
https://www.garlic.com/~lynn/2001f.html#35 Security Concerns in the Financial Services Industry
https://www.garlic.com/~lynn/2001l.html#56 hammer
https://www.garlic.com/~lynn/2001l.html#58 hammer
https://www.garlic.com/~lynn/2001n.html#54 The demise of compaq

there is also enlightenment ... you have all the information but it takes some time before you can piece it together ... see discussion of ARMS and the enlightenment that occurred in 1989:
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security

from the above:
Prior to 1989, institutions aggregated their assets and liabilities into generalized pools. Not much attention was paid to transaction level detail. The most common product and generalized pools were Adjustable Rate Mortgages (ARMS). Everyone assumed that an ARM was an ARM was an ARM. Not so when you consider there are hundreds, perhaps thousands of different ARM products all booked at various times with different coupons and varying teaser rates. All behaving in completely unpredictable, uncorrelated fashion depending on a specific interest rate scenario. To magnify the problem imagine a situation where the analysis superimposes instrument credit risk valuations along with interest rate risk valuations of the ARM as part of the analysis to fully dimension the transactional and overall institutional risk profile. Each individual ARM behaves differently in a particular interest rate scenario relative to its interest rate risk component and credit risk. A risk manager must also calculate the credit risk profile of each ARM along a particular interest rate curve for a complete valuation process to be accurate. Hence each dimension of risk management is calculated in the risk measurement valuation process. When institutions began to create financial models measuring the entire individual transaction level detail of their portfolios they discovered the gapping error. No one could predict a) the multitude of embedded options that were going unmeasured across an organizations entire balance sheet, b) the individual portfolio behavioral characteristics of these embedded options, c) the unmeasured aggregate earnings impact of these options across a multitude of interest rate scenarios. In one example, Citicorp failed to recognize that a 2% rising rate phase would cause an 80% loss of core holding company earnings. If the cycle was to occur for an extended period Citicorp would fail. This discovery caused Citicorp to get out of the mortgage business in 1989. 
At the time the company was the largest player in the mortgage market. Coincidentally, Citicorp's stock traded at an all time low of $ 6.00/share and needed a private bailout to continue functioning as an entity. Board and senior execs have learned to take risk management very seriously. In an Internet centric business model what is the proper assessment and valuation of information security risk? How does this component of risk management integrate with other factors like interest rate risk and credit risk? In general what is the nature of information security risk? How is it measured? How does one develop strategies to manage this risk? Finally, how do I manage this risk?
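The transaction-level point in the quote above can be illustrated with a toy valuation (all numbers and instrument terms are invented for illustration): an aggregate "an ARM is an ARM" view misses the rate caps that only show up when each instrument is valued individually under a rate scenario.

```python
# Toy illustration (invented numbers) of the "gapping error" described
# above: valuing a pool of ARMs at one aggregate coupon hides
# option-like behavior (rate caps, teasers) that only appears when each
# transaction is valued individually under an interest rate scenario.
def arm_income(balance, coupon, cap, rate_shift):
    """Interest income after a rate move, with the coupon capped."""
    return balance * min(coupon + rate_shift, cap)

arms = [  # (balance, current coupon, lifetime cap) -- illustrative
    (100_000, 0.05, 0.055),   # tightly capped: barely reprices upward
    (100_000, 0.05, 0.12),    # loosely capped: reprices fully
]

shift = 0.02   # the 2% rising-rate scenario mentioned in the text
pooled = sum(b for b, _, _ in arms) * (0.05 + shift)       # naive pool view
actual = sum(arm_income(b, c, cap, shift) for b, c, cap in arms)
print(f"pooled estimate {pooled:,.0f} vs actual {actual:,.0f}")
```

The gap between the pooled estimate and the per-instrument total is exactly the kind of unmeasured embedded-option effect the quote describes.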

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

backup hard drive

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: backup hard drive
Newsgroups: alt.folklore.computers
Date: Sat, 25 May 2002 22:50:27 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Never met one. Haven't been in a machine room with big iron since '93.

dates from article on thin-film heads:
http://researchweb.watson.ibm.com/journal/rd/403/chiu.html

ga (general availability) for 3380 was 1981 ... 630mbytes; 3380E was 1985, double density, 1260mbytes; 3380K was 1987, triple density, 1890mbytes

also see:
http://web.utk.edu/~mnewman/ibmguide13.html

3380 hda description:
https://web.archive.org/web/20030404094655/http://www.classiccmp.org/mail-archive/classiccmp/2001-03/1089.html
https://web.archive.org/web/20020806101204/http://www.classiccmp.org/mail-archive/classiccmp/1998-07/0872.html

also see 360/370 timeline at (processor, software, disk, etc dates)
https://web.archive.org/web/20030227015014/http://www.isham-research.com/chrono.html

another early timeline
http://www.cs.clemson.edu/~mark/acs_timeline.html

misc. other info from
http://www.cmg.org/conference/refs2001/papers/01p6001.pdf

from above paper


Table 1: IBM Physical Disk Geometry and Performance [6]

                Average Average Volume
Device Type     Seek    Latency Capacity
                msec    msec    MB

2314            60      12.50   29.2
3330-1          30      8.33    100.0
3330-11         30      8.33    200.0
3350            25      8.33    317.5
3380A           16      8.33    630.2
3380D           15      8.33    630.2
3380E           17      8.33    1260.4
3380J           12      8.33    630.2
3380K           16      8.3     1890.6
3390-1          9.5     7.10    946.0
3390-2          12.5    7.10    1892.1
3390-3          15      7.10    2838.1
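Taking the seek and latency columns from Table 1, a short sketch (transfer time and queueing ignored) shows average access time barely improving while capacity grows, so arm accesses per second per megabyte fall steeply across generations:

```python
# Load a subset of Table 1 above and compare average access time
# (avg seek + avg rotational latency) across device generations.
devices = {
    # device: (avg seek ms, avg latency ms, capacity MB) from Table 1
    "2314":    (60,   12.50,   29.2),
    "3330-1":  (30,    8.33,  100.0),
    "3350":    (25,    8.33,  317.5),
    "3380A":   (16,    8.33,  630.2),
    "3380K":   (16,    8.3,  1890.6),
    "3390-3":  (15,    7.10, 2838.1),
}
for name, (seek, latency, mb) in devices.items():
    access = seek + latency
    # accesses/sec per MB: the arm's ability to serve its own capacity
    print(f"{name:8} {access:6.2f} ms avg access, "
          f"{1000 / access / mb:8.4f} accesses/sec per MB")
```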

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Computers in Science Fiction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computers in Science Fiction
Newsgroups: alt.folklore.computers
Date: Sun, 26 May 2002 18:41:34 GMT
"Rupert Pigott" <dark.try-eating-this.b00ng@btinternet.com> writes:
I hate "losing" or "modifying" original data. Massaging figures amounts to fraud in the strictest terms IMHO. Of course, this hardline attitude of mine frequently gets me into trouble.

It's a sad state of affairs when being honest gets you into trouble because people are expecting you to lie...


misc. refs to data backed up in triplicate:
https://www.garlic.com/~lynn/2000.html#43 Historically important UNIX or computer things.....
https://www.garlic.com/~lynn/2001b.html#76 Disks size growing while disk count shrinking = bad performance

basically one of the operators would periodically pull random tapes from the library and re-assign them as scratch. in one case I lost files that somebody came looking for in support of some IP litigation ... and they were gone.

when i used to do "system build" tapes ... new kernel ... the standard process was to just put a copy of the kernel build on the tape. I extended the procedure to include all the stuff to build the kernel as well as the tools needed to build the kernel (starting in cp/67 days). Some of those tapes survived ... and I was able to supply Melinda with some of the source to early tools:
https://www.leeandmelindavarian.com/Melinda#VMHist

concern about deep replication was one of the reasons that I wrote a tape backup/archive system for SJR that was also used at HONE. Version 2 was done at SJR by me and another person; effectively version 3 was redone again at SJR/ARC and saw product life as workstation datasave facility (WDSF); and version 4 was a combined project between ARC (SJR moved up the hill to the new complex) and adstar (newer name for the GPD disk & file division) as ADSM (aka adstar distributed storage manager).

I instigated similar stuff for email processing.

http://www.its.queensu.ca/pubs/netnote/gen02_1.html

simple from the above:
The centralized backup software called Workstation DataSave Facility (WDSF) is now called "AdStar(TM) Distributed Storage Manager (ADSM)" [ADSTAR(TM) Distributed Storage Manager is a copyright product of IBM Canada Limited and is distributed under license by Queens University Computing and Communications Services].

ADSM is now product of Tivoli.

random refs:
https://www.garlic.com/~lynn/2001n.html#66 Holy Satanism! Re: Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002e.html#3 IBM's "old" boss speaks (was "new")
https://www.garlic.com/~lynn/2002e.html#10 Deleting files and emails at Arthur Andersen and Enron

we used to work pretty closely with some people in austin before they left to form Tivoli ... which then went thru several iterations and eventually was bought.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Multics hardware (was Re: "Soul of a New Machine" Computer?)

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics hardware (was Re: "Soul of a New Machine" Computer?)
Newsgroups: alt.folklore.computers,alt.os.multics
Date: Sun, 26 May 2002 21:21:46 GMT
"Robert A. Matern" writes:
That's pretty early for a cutoff. ENWGS (CSC/U.S.Navy) used Multics until 1996, writing updates to our application right up to the last year. The last major upgrade was written around 1992. To be fair, we were one of the last sites...

then there is dockmaster ... seeing email in various standards bodies up thru 1998 ... and all this stuff about dockmaster-2 (supposedly a DG something? ... tie-in to soul thread). I was at 21st NISSC in fall of 98 (I was speaker on PKI panel) ... and nissc email still said dockmaster.

this says final shutdown 3/31/98:
https://www.multicians.org/mgd.html#DOCKMASTER

at least up thru oct of 98 (nissc '21) ... dockmaster.ncsc.mil was being used interchangeably with dockmaster2.ncsc.mil.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Computers in Science Fiction

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computers in Science Fiction
Newsgroups: alt.folklore.computers
Date: Sun, 26 May 2002 21:35:41 GMT
"Rupert Pigott" <dark.try-eating-this.b00ng@btinternet.com> writes:
I hate "losing" or "modifying" original data. Massaging figures amounts to fraud in the strictest terms IMHO. Of course, this hardline attitude of mine frequently gets me into trouble.

It's a sad state of affairs when being honest gets you into trouble because people are expecting you to lie...


one of the things we have been working on is digital signed transactions ... including ACH and addenda records ... making it harder to fiddle the original information.

on the other hand ... some of this stuff just gets fiddled for one reason or another. we worked with this little client/server startup in silicon valley on being able to perform financial transactions (credit card stuff). it turned out to eventually be pretty successful:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2 Assurance, e-commerce, and some x9.59
https://www.garlic.com/~lynn/aadsm5.htm#asrn3 Assurance, e-commerce, and some x9.59
https://www.garlic.com/~lynn/aadsm5.htm#asrn4 Assurance, e-commerce, and some x9.59
https://www.garlic.com/~lynn/2001i.html#52 loosely-coupled, sysplex, cluster, e-commerce

now as part of the effort, in addition to selling the product ... they also used it in their own online store. they initially thot all the stuff just happened automagically ... but the online store eventually grew into one of the larger staffed organizations. there turned out to be all sorts of things that had to be fiddled with manually.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Computers in Science Fiction

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computers in Science Fiction
Newsgroups: alt.folklore.computers
Date: Sun, 26 May 2002 21:38:54 GMT
Anne & Lynn Wheeler writes:
I instigated similar stuff for email processing.

slightly related post
https://www.garlic.com/~lynn/2002e.html#10 Deleting files and email at Arthur Andersen and Enron

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The multiverse is out there !

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The multiverse is out there !
Newsgroups: rec.arts.sf.science,rec.arts.sf.written,alt.folklore.computers
Date: Sun, 26 May 2002 23:27:57 GMT
lmh@TheWorld.com (Larry M Headlund) writes:
Consider a shorter example of your objection, a single short line of a poem or even better a single word like "positron" or aluminum. It is not unreasonable that a number representing that word in a mapping of your choice was actually written down or used prior to the word being coined. Is this also nonsense?

Pick a short word of recent coinage, say "NAFTA", and you can probably find its mapping in a book of random numbers published prior to its first use. What does this imply for your objection?


there has been some amount of work done on fractal compression/representation; given sufficiently large computing resources, almost anything can be represented by an appropriate fractal. random refs pulled up from search engine (most image related, but there has been work on straight data):
http://links.uwaterloo.ca/
http://citeseer.nj.nec.com/raouf96fractal.html
http://links.uwaterloo.ca/fractals.papers.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Computers in Science Fiction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computers in Science Fiction
Newsgroups: alt.folklore.computers
Date: Mon, 27 May 2002 14:15:38 GMT
jmfbahciv writes:
YES!!! That was my number one goal when I began to work on designing our in-house tape building processes. I wanted the customer to be able to build from scratch. A side effect of insisting we do this for every tape we made ensured that WE did not lose sources or the knowledge of how to make the product.

CP/67 distribution tapes from the start always had full binary and full source ... so that customer could rebuild from source if necessary.

CSC (4th floor, 545 tech sq) had 1) the machine room (actually located on the 2nd floor), 2) the cp67/cms group and 3) all the rest. The machine room ran time-sharing service ... primarily in-house in support of the people on the 4th floor ... but the cp67/cms group also could build new systems and test ... besides the production system. The machine room, at various times supported other people along the way ... typically some number of MIT students ... and for awhile before they got their own machine, the corporate business planning & forecasting people down in armonk hdqtrs running APL models with much of the corporate (really, really sensitive) business data.

A new system build for the machine room might happen a couple times a month to a couple times a week. The machine room production build was done directly to disk but also written simultaneously to tape (as backup). The tapes were used primarily to "fall-back" to a previous system if something was discovered about the current build (or even the previous couple builds). They originally were just the kernel image alone. I expanded the production process (as opposed to the customer distribution process) to include everything necessary to rebuild the kernel from scratch (including the kernel source as well as the kernel build procedures). This was possibly pathological overkill ... but every once in awhile it came in useful.

The production system also collected activity numbers ... typically every five minutes ... total system activity, number of users, individual user activity, etc. These were written to disk in real-time, but the files were accumulated to tape. Cambridge kept this information starting possibly in 66. By 1976, there was possibly 10 years of production system "performance" data spanning a couple machine upgrades, numerous system changes, etc (various cp/67 releases on 360/67 uniprocessor, 360/67 multiprocessor, various vm/370 releases starting on 370/155, etc).

By the mid-70s, a standard new VM/370 release went out to customers every six months or so ... and there were monthly maint. tapes called PLC (or program level change) that went out every month. There might be a couple concurrent releases supported in the field ... i.e. customers talked about Release 3 PLC15 ... aka the 15th monthly update after the start of Release 3. The product group had split off from CSC in the late CP/67 time-frame ... before the actual start of the port to VM/370. By the mid-70s, the VM/370 group had outgrown any space in the tech sq. building and had moved out to an (old/previous) SBC building in burlington mall (as part of some legal deal, SBC had been sold off to CDC, i.e. out of the service bureau business).

When I (re-)did the resource manager for vm/370 (for direct customer ship) ... I did this initial (mostly automated) benchmarking thing of 2000 some odd benchmarks that took three months elapsed time to run. This was for initial performance validation/calibration. Every time that I released a subsequent PLC tape ... I would rerun 100 or so of the benchmarks (taking a full weekend) ... my PLC tapes only came out every three months ... in part because I had to do much of the work myself ... and in part because I always did stress testing as well as performance regression testing for everything that shipped (as well as comparing it to the full history of previous runs). The base product only got a small amount of performance regression testing for full releases ... but not for the monthly builds. Also the base product didn't maintain a full history of all performance benchmarks across all releases for comparison.
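The rerun-and-compare process described above can be sketched (benchmark names, numbers, and the tolerance are all invented for illustration) as a check of each new result against the full history of previous runs:

```python
# A hedged sketch of the kind of performance-regression check described
# above: rerun a benchmark subset and compare each result against the
# history of previous builds (names and thresholds are illustrative).
import statistics

history = {  # benchmark -> throughput numbers from past builds
    "fork-exec":  [102.0, 101.5, 103.1, 102.4],
    "page-steal": [88.7, 89.1, 88.2, 88.9],
}

def regressions(current: dict, tolerance: float = 0.05) -> list:
    """Flag benchmarks whose new result falls more than `tolerance`
    below the historical mean -- a cheap stand-in for eyeballing
    every run against the accumulated history."""
    flagged = []
    for name, result in current.items():
        baseline = statistics.mean(history[name])
        if result < baseline * (1 - tolerance):
            flagged.append((name, result, baseline))
    return flagged

print(regressions({"fork-exec": 95.0, "page-steal": 89.0}))
# fork-exec is ~7% below its historical mean, so it gets flagged
```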

misc. past postings
https://www.garlic.com/~lynn/subpubkey.html#technology

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Computers in Science Fiction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computers in Science Fiction
Newsgroups: alt.folklore.computers
Date: Mon, 27 May 2002 16:28:21 GMT
Brian Inglis writes:
Am i dreaming/senile or do I remember there being a "DDR BACKUP NUC[leus]" command to save the kernel image on VM/SP/HPO?

original kernel build was just writing an image of the "card" binaries to tape ... and ipl the tape to (re)build the kernel (of course it had been a long time since real cards were used ... cp/67 supported "virtual" cards in the spool). the original procedure just wrote the couple thousand card images to a single/first tape file. the "loader" at the front worked equally well reading from a 2540 card reader (real or virtual) or tape. the last thing done after the loader had placed everything appropriately in memory was to invoke "savenuc", which wrote a core image to disk with a little glue for "loadnuc". loadnuc was a small program that got invoked when booting the disk ... and knew how to reload the kernel image from disk back into memory.

I then would dump cms file image (originally using tape ... but later using vmfplc) of base source & binary files to the 2nd file, followed by all the local source changes & binary files to the 3rd tape file, followed by all the tools used for the build. since these were stored on different cms filesystems ... it just amounted to writing to tape everything in that particular cms file system.

ddr was a disk image copy to tape ... DDR dump of the nucleus disk area was picking up all the kernel image off of disk and writing it to tape. The issue was whether a "card" image of DDR had first been written as the first file on the tape ... with the DDR "nuc" version as the 2nd file on tape. restore would mean that DDR was booted from tape (or card image) and then operator interacted with DDR to specify restore of the "Nuc" image back to disk.

the kernel card image was the binary "card" image output from the compilers which could be booted and would write the image to disk in a single operation. the DDR image required first booting DDR and then telling DDR to restore the disk image to disk.

"tape" was the original cms command from the 60s which read/wrote to tape with 800 byte record blocking. vmfplc was essentially tape using 4k byte record blocking. a lot of the source distribution was small files (4k bytes or less) ... which represented the individual change files. after some months of change ... any particular source module would have the original source and possible 10-15 individual incremental change files. For the full system there could be a couple thousand of these change files. vmfplc took a lot less tape than tape.

however, for the original archive/backup system i did
https://www.garlic.com/~lynn/2002h.html#29 Computers in Science Fiction

i further modified vmfplc into something called vmxplc. vmfplc would first write a 64-byte record containing the file descriptor information followed by 1-n 4k byte records containing the file data. for 1-record data files ... on a 6250 bpi tape ... there were two interrecord gaps ... almost as much tape went to interrecord gaps as to data. I merged the 64-byte file descriptor record into the same physical block as the first data record. I would also block up to four 4k physical blocks into a single tape record. For large files the larger physical blocking could save 15-20 percent of the tape ... but for small files ... it doubled the amount of data on tape (using vmxplc compared to vmfplc).
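The interrecord-gap arithmetic above can be sketched with a toy model. The 0.3-inch gap is an assumption (the actual gap size, plus tape marks, determines the exact ratio, so this model understates the saving the post remembers):

```python
# Toy model of tape consumed per small file under vmfplc vs vmxplc as
# described above. Numbers are illustrative assumptions, not measured
# values: 6250 bpi recording density, 0.3-inch interrecord gap.
DENSITY_BPI = 6250          # bytes per inch (assumed)
GAP_IN = 0.3                # interrecord gap, inches (assumed)

def tape_inches(record_sizes):
    """Tape used by a sequence of physical records, one gap each."""
    return sum(size / DENSITY_BPI + GAP_IN for size in record_sizes)

# One small CMS file (a single 4k data block):
# vmfplc: separate 64-byte descriptor record, then the 4k data record.
vmfplc = tape_inches([64, 4096])
# vmxplc: descriptor merged into the same physical block as the data.
vmxplc = tape_inches([64 + 4096])

print(f"vmfplc: {vmfplc:.3f} in, vmxplc: {vmxplc:.3f} in, "
      f"ratio: {vmfplc / vmxplc:.2f}x")
```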
In VM/SP/HPO days there were PUT tapes that came out monthly-quarterly depending on volume, still using VMFPLC2 format: not sure if there were binaries for source supported modules.

Always remember the fun of waiting to see if one of my APAR fixes would be released officially and trying to figure out how they came up with the PTF code distributed (presumably after changing PL/S source and recompiling to get the assembler sources distributed to customers).

note that CP/67 and VM/370/SP/HPO etc were all assembler source, with binaries and source distributed (60s & 70s). It wasn't until vm/xa (i.e. 3081 31-bit support, etc) that things started getting into PL/S source.

as an aside, the one thing that the card image kernel loading did ... was that it recreated the module loadmap each time it was booted ... as well as the comment cards. each compiled binary "card" image for each compiled module had "comments" giving the date/time of what went into the compile of the module ... base source filename with date/time, all the incremental update source file names with date/time, etc. The resulting loadmap had the full original source file history for each module (in comments) along with the module storage locations. None of this was available with DDR since it was just read/writing disk image records.

I've told this before ... but in the mid-80s, I was in madrid visiting the science center. They were working with the gov. on a project to digitize as many documents as possible related to the upcoming 500th anniversary of columbus and the americas. While I was there, I went to a movie theater. Along with the main feature they also had a 15-20 minute drama "short" made at the university. I forget the plot but a major set was this 2nd floor apartment that had a floor to ceiling wall of television sets. All of the screens were slaved to a computer monitor display that had a continuous scroll of text (looked like about a 1200 baud glass teletype). After awhile I realized that it was repeatedly scrolling a vm/370 loadmap ... what is worse, I could tell the release and PLC level from the incremental source filenames in the loadmap. The standard incremental source file name would include any related PTF number and if you worked with the system a lot ... you tended to know what PTFs appeared in which PLC/PUT.

As an aside ... as part of the system debugger that I wrote for VM ... I also modified the kernel "savenuc" program. The "loader" program in front of the card deck ... that turned compiled binaries into a core image (wrote the loadmap, etc) was a slightly modified "BPS" 370 loader from the early 60s. When it was all done, it passed control to the appropriate entry in the loaded program (aka savenuc). The savenuc program was the last module mapped into memory ... it would then proceed to write everything, including itself, to disk. Now it turns out that the BPS loader (besides writing the loadmap to printer) also passed to the first module a pointer to its internal loader tables (the ESID entries, name & location) in registers. The kernel had all the appropriate connections made ... but the standard kernel didn't have a symbolic table of those entries. The modification that I made to savenuc was to copy the loader table into the kernel memory image before writing the image to disk. That way I was able to add the raw ESID entries and locations to the kernel image ... for any later debugging purposes.

misc. debugging posts:
https://www.garlic.com/~lynn/94.html#11 REXX
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2000b.html#33 20th March 2000
https://www.garlic.com/~lynn/2001c.html#0 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2002g.html#27 Security Issues of using Internet Banking

misc. posts on ESID entries
https://www.garlic.com/~lynn/2001.html#8 finding object decks with multiple entry points
https://www.garlic.com/~lynn/2001.html#14 IBM Model Numbers (was: First video terminal?)

misc. madrid posts
https://www.garlic.com/~lynn/99.html#9 IBM S/360
https://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/2000.html#14 Computer of the century
https://www.garlic.com/~lynn/2000g.html#36 stupid user stories
https://www.garlic.com/~lynn/2001e.html#66 line length (was Re: Babble from "JD" <dyson@jdyson.com>)
https://www.garlic.com/~lynn/2001l.html#54 mainframe question
https://www.garlic.com/~lynn/2001n.html#16 Movies with source code (was Re: Movies with DEC minis)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Computers in Science Fiction

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computers in Science Fiction
Newsgroups: alt.folklore.computers
Date: Mon, 27 May 2002 19:32:38 GMT
Anne & Lynn Wheeler writes:
"tape" was the original cms command from the 60s which read/wrote to tape with 800 byte record blocking. vmfplc was essentially tape using 4k byte record blocking. a lot of the source distribution was small files (4k bytes or less) ... which represented the individual change files. after some months of change ... any particular source module would have the original source and possible 10-15 individual incremental change files. For the full system there could be a couple thousand of these change files. vmfplc took a lot less tape than tape.

the method "tape" used to read/write files (and similarly vmfplc) is similar to the way that DISK read/writes files. description of DISK
https://www.garlic.com/~lynn/2002h.html#1 DISK PL/I Program
https://www.garlic.com/~lynn/2002h.html#2 DISK PL/I Program

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Computers in Science Fiction

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computers in Science Fiction
Newsgroups: alt.folklore.computers
Date: Tue, 28 May 2002 12:37:25 GMT
Brian Inglis writes:
Thanks again -- that was useful on a number of occasions, despite IPCS!

ipcs had something like 20k assembler instructions and had a group of 5-10 people supporting it. back in the early days when rex was still rex
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000

i wanted to demonstrate that rex was not just another shell language but could be used for serious programming. I made the claim that I could rewrite ipcs in rex, working half time over a 3 month period, and it would have ten times the function and run ten times faster. "dumprx" was the result and it had 10 times the function and ran 10 times faster ... at least for the equivalent functions ... there were also (new) debugging scripts that ran thru dump images making the most common storage consistency checks automatically ... looking at all the major control blocks and checking for things that looked out of the ordinary ... that took a little time.
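The kind of automated consistency scan described can be sketched generically (the dump layout here is invented for illustration, not the actual VM control blocks):

```python
# Sketch (invented layout) of an automated dump scan: walk a chain of
# control blocks in a dump image and flag pointers that fall outside
# the dumped storage or loop back on themselves -- among the most
# common storage-consistency checks.
def scan_chain(dump: dict, head: int, limit: int) -> list:
    """Follow a forward-pointer chain; `dump` maps block address to
    the next-pointer stored in that block."""
    problems, seen, addr = [], set(), head
    while addr != 0:
        if addr in seen:
            problems.append(("loop", addr))
            break
        seen.add(addr)
        if addr not in dump or not (0 < addr < limit):
            problems.append(("bad pointer", addr))
            break
        addr = dump[addr]
    return problems

# toy "dump": block at 0x1000 points to 0x2000, which points back
dump = {0x1000: 0x2000, 0x2000: 0x1000}
print(scan_chain(dump, 0x1000, 0x10000))   # flags the loop
```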

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The 64-bit landscape in 2005

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 64-bit landscape in 2005
Newsgroups: comp.arch
Date: Tue, 28 May 2002 13:03:42 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Never forget marketing. I thought that about the original IBM PC on the grounds of its technical uselessness :-(

i was at dinners with some of the mac developers early in the mac development cycle where I argued that the IBM PC was going to be a success because it was going to provide a business terminal ... a single keyboard and display on the desk that could do both mainframe terminal connection and some other local computing (eventually lotus).

effectively the mac guys were claiming that they would never allow mac to be used anywhere but the kitchen table and never for any useful business purpose.

IBM PCs and IBM 3270s were in the same price range ... and the single keyboard/display on the desk got to be a very powerful argument.

there was an internal period when "terminals" on individuals' desks required VP approval. we put together a case that the 3-year amortized terminal cost was less than the monthly business telephone cost (and it was taken for granted that employees got phones on their desks w/o vp approval). At some point, PCs on the desk (in place of a terminal) became a general business status symbol ... even for people that never used them in any kind of "local" mode.

originally there was a lengthy waiting period on the employee PC purchase plan. i remember picking up my first PC on the day that the external price was dropped to less than what i had signed up to pay on the employee purchase plan.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Oh, here's an interesting paper

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Oh, here's an interesting paper.....
Newsgroups: comp.dcom.modems.cable,comp.security.firewalls
Date: Tue, 28 May 2002 13:41:10 GMT
Cichlidiot writes:
The large majority of computer users that are contributing to the problem would not be able to find their way around a command line if their lives depended on it. This is the crux of the problem. There are people who only know how to use the GUI, who depend on CompUSA to do everything for them, who would scream in horror at the thought of doing a Linux install, etc. This is the "Average Joe"... the one who took 10 years to figure out how to program his VCR... the one who only knows how to point and click.

another area of consumer devices requiring security has been automobiles ... from a technical expertise standpoint they have lots of similarities to computers (how many people rebuild the ignitions and locks in their automobiles to be more secure?). lots of joy-riding is not treated as a serious felony.

over possibly the last 20 year period there was some improvement ... in part because of competition between manufacturers ... especially after some of the foreign imports became a serious consideration (the corollary would probably be basic operating system competition ... since serious security requires support by the basic operating system). also insurance rates/policies had some role in rating competing models and adjusting the rates. Insurance possibly plays a similar role for security devices for homes.

there are possibly only a couple after-market security devices for automobiles ... with significant market penetration ... and they took possibly 100 years to show up.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

[survey] Possestional Security

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [survey] Possestional Security
Newsgroups: sci.crypt
Date: Tue, 28 May 2002 20:24:17 GMT
Gus writes:
shrug. So I have to pick your pocket, or burgle your house to get it. It does go on. Ok, that's a bit spook stuff for most, but since having the extra layer of a 6-digit PIN with a "3 incorrect tries and I blow irrevocably" mechanism on the smartcard is very easy to build in and (most importantly) very easy to operate day-to-day, while adding tremendously to the overall security of the system, it is almost a no-brainer as to whether to include it.

there are (at least) two kinds of hardware tokens out there ... the infrastructure shared-secret based ... where you can attack one token ... even to destruction ... to extract the infrastructure shared-secret ... and then use it in conjunction with other devices. some of these were created with a shared-secret "in-waiting" already loaded into the device ... on the off-chance somebody did an exhaustive search of the current shared-secret ... then they could somehow propagate a signal to all devices to upgrade to the shared-secret in-waiting ... w/o having to re-issue all new tokens.

the non-shared-secret ... might have a unique secret per device or support public/private key operation (one way or another, a unique key per token). such systems are considered to have no single point of failure ... aka attacks on one token yield no fraudulent capability in conjunction with any other token.

given approximately the same level of hardware technology for the two types of tokens ... it can be obvious that attacks on infrastructures with shared-secrets yield higher return on investment compared to attacking the same kind of hardware token that is part of an infrastructure that has no global secrets.
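The unique-key-per-token property can be sketched generically (this is an illustration of key diversification, not any particular card scheme; all names are invented). Each token's key is derived from a master key held only by the issuer, so extracting one token's key reveals nothing usable against any other token:

```python
# Minimal sketch of per-token key diversification: a unique key per
# token, derived issuer-side from a master key, so compromising one
# token yields no fraudulent capability against any other token.
import hmac, hashlib

MASTER_KEY = b"issuer-master-key-kept-issuer-side"   # hypothetical

def token_key(token_id: str) -> bytes:
    """Diversified per-token key: HMAC(master key, token id)."""
    return hmac.new(MASTER_KEY, token_id.encode(), hashlib.sha256).digest()

def authenticate(token_id: str, challenge: bytes, response: bytes) -> bool:
    """Issuer-side check of a token's challenge response."""
    expected = hmac.new(token_key(token_id), challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A captured (token_id, key) pair lets an attacker forge responses for
# that one token only; every other token still requires its own key.
k1, k2 = token_key("token-0001"), token_key("token-0002")
print(k1 != k2)   # distinct per-token keys
```

By contrast, in the shared-secret design above, the single extracted secret would validate against every device in the infrastructure.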

given a financial account scenario with the token used for authenticating transaction ... and a non-shared-secret implementation; if a hardware token was acquired ... could anything be done with the token before the token is reported lost/stolen ... where most attacks result in zeroization of the chip.

scenario currently with magstripe (both with & w/o PIN requirements) is that it is possible to harvest magstripe information and create counterfeit transactions. the purpose of the chip technology is to significantly increase the difficulty of counterfeiting transaction (say compared to currently being able to copy a magstripe from one card to another).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Biometric authentication for intranet websites?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Biometric authentication for intranet websites?
Newsgroups: comp.security.misc,comp.security
Date: Wed, 29 May 2002 14:17:49 GMT
Bernd Eckenfels <ecki-news2002-05@lina.inka.de> writes:
Be aware that in recent tests again all biometric systems failed miserably. It should never be used in a 2-factor authentication. Always add another one like something you know.

somewhere earlier in the thread there was mention of an environment where people write their PIN numbers on the card and also leave their fingerprints on the card.

so from a fraud standpoint (given that environment) is it easier for an attacker to

1) read the PIN number off the card and enter the PIN number (read from the card) into a PIN pad

or

2) extract the fingerprint from the card and create some fraudulent fingerprint entry

for both #1 and #2, a) what is the skill level required to read a pin number vis-a-vis lift a fingerprint and b) what is the elapsed time it takes to fraudulently enter a pin number vis-a-vis create a fraudulent fingerprint and enter it.

miserable can be relative ... how do people leaving their fingerprint on their card compare to the population that writes their pin number on their card ... and the relative ease & technology needed to exploit either.

misc ... see 2-factor and 3-factor authentication defs in
https://www.garlic.com/~lynn/secure.htm

random bio refs:
https://www.garlic.com/~lynn/aadsm10.htm#tamper Limitations of limitations on RE/tampering (was: Re: biometrics)
https://www.garlic.com/~lynn/aadsm10.htm#biometrics biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio1 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio2 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio3 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#bio4 Fingerprints (was: Re: biometrics)
https://www.garlic.com/~lynn/aadsm10.htm#bio5 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio6 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio7 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio8 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm11.htm#13 Words, Books, and Key Usage
https://www.garlic.com/~lynn/aepay10.htm#5 I-P: WHY I LOVE BIOMETRICS BY DOROTHY E. DENNING
https://www.garlic.com/~lynn/aepay10.htm#8 FSTC to Validate WAP 1.2.1 Specification for Mobile Commerce
https://www.garlic.com/~lynn/aepay10.htm#15 META Report: Smart Moves With Smart Cards
https://www.garlic.com/~lynn/aepay10.htm#20 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2001c.html#39 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#42 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#60 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001g.html#11 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
https://www.garlic.com/~lynn/2001j.html#52 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#1 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#61 I-net banking security
https://www.garlic.com/~lynn/2002.html#39 Buffer overflow
https://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002e.html#18 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002e.html#36 Crypting with Fingerprints ?
https://www.garlic.com/~lynn/2002e.html#38 Crypting with Fingerprints ?
https://www.garlic.com/~lynn/2002f.html#22 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002f.html#32 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002f.html#45 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002g.html#56 Siemens ID Device SDK (fingerprint biometrics) ???
https://www.garlic.com/~lynn/2002g.html#72 Biometrics not yet good enough?
https://www.garlic.com/~lynn/2002h.html#6 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#8 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#9 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#13 Biometric authentication for intranet websites?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Looking for Software/Documentation for an Opus 32032 Card

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Looking for Software/Documentation for an Opus 32032 Card
Newsgroups: alt.folklore.computers
Date: Sat, 01 Jun 2002 16:08:30 GMT
Steve O'Hara-Smith writes:
Sequent kept Dynix and dropped the 32032 - I met Dynix on a box with 32 486-DX50s with separate memory on each processor - it was very nearly 32 ATs on a fast interlink bus at the hardware level but from inside it just seemed like one big system that was better at lots of little jobs than at one big one. The hoops it must have jumped through to make SYSV shared memory work cannot have been pretty - but they were transparent and the system didn't perform at all badly on a complex multi part application that used a lot of shared memory segments.

sequent then did an sci-based interconnect processor complex ... using the same chip that DG used for their similar design. Convex also did an sci-based interconnect using their own chips and HP processors (exemplar).

steve chen (remember chen supercomputers and cray) showed up at sequent for a while as their CTO ... and my wife and I did some consulting for him.

from recent soul of new machine thread:
https://www.garlic.com/~lynn/2002g.html#10

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM doing anything for 50th Anniv?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM doing anything for 50th Anniv?
Newsgroups: alt.folklore.computers
Date: Sat, 01 Jun 2002 16:16:40 GMT
Stefan Skoglund writes:
I can name two examples:

KeyKOS (a capability based system from TYMnet - ran tymnets bbs on standard ibm 360 hw a lot faster than their previous VM based implementation)

Multics - Ford found out that a multics based CAD system was 50 % cheaper than the competing UNIX solutions.


gnosis was done by tymshare ... at that time well over 30 percent of the processor time was spent updating capability and accounting information when switching domains. the target for gnosis was being able to allow 3rd party offerings on the tymshare service that would produce a revenue stream back to their authors based on direct tymshare customer use. gnosis had a lot of domain and accounting stuff in order to implement that function.

after md bought tymshare and spun off gnosis ... a lot of the gnosis design point changed for keykos ... into a transaction oriented system (as opposed to a platform for delivering 3rd party software). In that sense, keykos and tpf (transaction processing facility ... the current name for ACP ... airline control program ... in part because some number of financial institutions use it for transactions in addition to the large airline res systems) ... are similar design points.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

50 years ago (1952)?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50 years ago (1952)?
Newsgroups: alt.folklore.computers
Date: Sat, 01 Jun 2002 19:02:36 GMT
lwinson@bbs.cpcn.com (lwin) writes:
In the 1960s, govt rumblings encouraged IBM to "unbundle" its hardware and software. Originally, one got a S/360 with peripherals and software as a single package. After the changes (about 1969), one could buy peripherals separately (fueling growth of a huge industry). Also, software was separated, fueling growth of another industry.

the date for unbundling was june 23rd, 1969

note I was an undergraduate on the four-person project that built the first ibm 360 pcm (plug compatible) control unit ... reverse engineered the channel interface and built a channel interface board that went into an interdata/3.
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Future architecture [was Re: Future micro-architecture: ]

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Future architecture [was Re: Future micro-architecture: ]
Newsgroups: comp.arch
Date: Sat, 01 Jun 2002 19:32:44 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Really? Now please analyse Lamport's algorithm in the face of arbitrary patterns of contention. I will analyse the behaviour of compare&swap in preference any day.

somewhat of an aside: charlie came up with CAS working on 360/67 smp support. the mnemonic CAS was chosen because they were charlie's initials and then we had to invent the term compare&swap to go with the initials. the guys that owned the 370 hardware architecture said that there was insufficient justification to put a pure SMP instruction into the hardware ... what was required as justification was a use for the instruction in non-SMP environments ... from that was born the description of using atomic C&S for multi-thread applications ... independent of the number of real processors (aka concurrent could mean processor interrupts and pre-emption scheduling ... and/or simultaneous execution on multiple real processors). The description was originally included as programming notes with the instruction in the POP ... but has since been moved to an appendix.
https://www.garlic.com/~lynn/subtopic.html#smp
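the multi-thread usage pattern from those programming notes can be sketched as a retry loop: read the old value, compute a new value, and retry if some other thread changed the word in between. python exposes no hardware compare&swap, so the atomicity below is simulated with a lock ... this is a sketch of the semantics, not a real lock-free implementation.

```python
import threading

class Word:
    """one storage word with simulated atomic compare&swap."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, old, new):
        # atomically: if word == old, store new and succeed, else fail
        with self._lock:
            if self._value == old:
                self._value = new
                return True
            return False

def add(word, n):
    for _ in range(n):
        while True:                 # the classic CAS retry loop
            old = word.load()
            if word.compare_and_swap(old, old + 1):
                break               # nobody interfered; update took

counter = Word()
threads = [threading.Thread(target=add, args=(counter, 1000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.load() == 4000       # no updates lost, no kernel lock needed
```

the same loop is correct whether "concurrent" means pre-emption on one processor or simultaneous execution on several ... which is exactly what made it justifiable for non-SMP machines.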

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Future architecture

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Future architecture
Newsgroups: comp.arch,comp.sys.super
Date: Sat, 01 Jun 2002 19:20:35 GMT
"Stephen Fuld" writes:
I don't know what you mean by significance, but the Univac 1108A had true SMP style multiprocessing since the late 1960s. There were many installations used for a variety of applications ranging from transaction processing through general purpose to scientific applications.

ibm 360/65 smp ... two processor versions from the 60s. ibm used the definition to include the ability that the real storage and machine could be cleaved to operate as two independent processors. They shared memory but didn't share I/O ... however, in SMP configurations, a large percentage of the device configuration used the cluster capability (a device able to connect to multiple different i/o paths) to give multiple processors i/o access to the same device. In that sense the straightforward 360 smp had real shared memory ... but the i/o configuration operated as if it was a cluster.

the exception was the 360/67 smp (a modified version of the 65 with hardware address translation for virtual memory). It had a "channel director" box ... that preserved the ability to partition the hardware into multiple independent operating units ... but also provided the ability for each processor to access the i/o channels/bus of all processors in smp mode.

then there were the large number of modified 360/50s & 360/65s for the FAA in the mid-60s

the work that charlie did on 360/67 smp resulted in the compare&swap atomic instruction (the choice of the CAS mnemonic was made because they are charlie's initials; the words then had to be invented to go with his initials). the "owners" of the 370 architecture said that we couldn't get it into the 370 machines unless we came up with a non-smp kernel use for the instruction. That prompted what were originally the programming notes of the CAS instruction (since moved to the appendix of the principles of operation) on how to use compare&swap semantics in multithreaded applications (whether running on a real SMP or a simple uniprocessor).

the 370 smp continued with the design point that the hardware needed to have the capability of being split into individual processors.

The 3081 was the first ibm smp that no longer provided the capability of being hardware cleaved into fully operational independent processors (although it was possible to bind a pair of 3081s into a 3084 ... where the 3084 could be split into two operational 3081s).

random refs:
https://www.garlic.com/~lynn/subtopic.html#smp

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Why did OSI fail compared with TCP-IP?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why did OSI fail compared with TCP-IP?
Newsgroups: comp.arch,comp.protocols.iso,comp.protocols.tcp-ip,comp.lang.c++
Date: Sat, 01 Jun 2002 19:39:21 GMT
"Rudvar Alswill" writes:
They were mainly cultural. Their philosophy, or lack of it, was perhaps their most fatal shortcoming.

a lot of the OSI model was driven by the telco, copper, PTT guys based on their experience in end-to-end voice, pre-FEC, pre-fiber, pre-LAN, non-computer, etc. those environments had troublesome bit error rates and cumbersome processing. i've heard some at&t computer guys use a term for the phone people that was somewhat more descriptive.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Why did OSI fail compared with TCP-IP?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why did OSI fail compared with TCP-IP?
Newsgroups: comp.arch,comp.protocols.iso,comp.protocols.tcp-ip,comp.lang.c++
Date: Sat, 01 Jun 2002 19:52:29 GMT
"John S. Giltner, Jr." writes:
Sorry to get into the middle of this, if I remember correctly IBM was working on SNA at the same time that OSI was being developed and had people that were part of the OSI working group, so did Honeywell. IBM wanted SNA to be compatible with the OSI model, but felt that things were moving too slowly and IBM wanted a network architecture. So they continued on with SNA development and released and announced it prior to OSI being standardized.

I found a Web page where William Stallings stated the OSI seven layer model was standardized in 1977. I thought that it was earlier than that, but I do know the work on OSI started in the early 70's. IBM announced SNA in 1974.

Now I was in elementary school during the early 70's. Got involved with IBM mainframes in 1982, mainly in networking. I do know when IBM introduced SNA, but as for OSI that was from memory of classes I took 15 years ago. Never really used anything that was totally OSI based, so I do not remember a whole lot about its history.


my wife spent a short time in the SNA architecture effort. Primarily SNA was oriented towards communication infrastructures of large numbers of (dumb?) terminals connected to ibm mainframes. there was vtam (sscp) and there was the 37xx (ncp) and then there were lots of terminals. there has been some claim that possibly a majority of the sscp/ncp interface effort was in response to a project i worked on as an undergraduate that created the first 360 pcm (plug compatible) control unit.
https://www.garlic.com/~lynn/submain.html#360pcm

my wife was partially disinvited from the SNA effort because of her ideas about peer-to-peer (as opposed to a central mainframe controlling everything) ... possibly due in part to the fact that she had previously spent some time working for BBN prior to going to work for IBM.

networking didn't show up in any form related to SNA until APPN circa 1986. The initial announcement for APPN was objected to by the SNA group. The announcement was delayed 6-8 weeks while there was an executive escalation, and the announcement was reworded so that any implied connection between SNA and APPN/networking was removed.

independent of the SNA stuff ... there was the (non-sna) internal network ... which was larger than arpanet thru just about its complete lifetime up until circa mid-1985. I claim that the big difference was that the internal network effectively had gateway functionality in every node from just about the beginning. the arpanet didn't get that gateway functionality until the 1/1/83 switch-over to "internet" or IP-based infrastructure. Prior to 1/1/83 switch-over ... arpanet lacked an ip/internet layer and the concept of gateways ... being a traditional homogeneous network infrastructure with front-end processors (FEPs) called IMPs built by BBN.

random refs:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
https://www.garlic.com/~lynn/subnetwork.html#hsdt
https://www.garlic.com/~lynn/subnetwork.html#internet

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Coulda, Woulda, Shoudda moments?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Coulda, Woulda, Shoudda moments?
Newsgroups: alt.folklore.computers
Date: Sun, 02 Jun 2002 16:42:19 GMT
Charles Shannon Hendrix writes:
What is so bad is that even less skilled programmers can do a better job if they will simply use test cases outside of normal expectations. I have seen even bad programmers fix serious performance problems just by trying the code with large datasets. Even when they cannot fix it on their own, they can at least note the problem.

for the resource manager i had to test all sorts of cases (since the resource manager was a product specifically designed to manage performance). this turned into an automated benchmarking methodology where nominal performance envelopes were defined (based on various profiles of thousands of systems over years of operation ... to varying degrees of detail). The automated benchmarking then had synthetic benchmark specifications that provided statistical coverage of the envelope interior ... concentration on the envelope edges ... and then a fair sample way outside the envelope. this racked up to 2000 benchmarks that took 3 months elapsed time to run.

misc.
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
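the benchmark-selection idea above can be sketched roughly as: per workload dimension, sample statistically inside the nominal envelope, concentrate on the edges, and add a fair sample well outside it. the dimension names, ranges, and sample counts below are invented for illustration ... not the actual resource manager test plan.

```python
import random

# hypothetical nominal performance envelope: (low, high) per dimension
envelope = {"users": (4, 120), "working_set_mb": (1, 16), "io_rate": (10, 500)}

def sample(kind, count):
    """generate `count` synthetic benchmark configurations of one kind."""
    points = []
    for _ in range(count):
        p = {}
        for dim, (lo, hi) in envelope.items():
            if kind == "inside":
                p[dim] = random.uniform(lo, hi)       # interior coverage
            elif kind == "edge":
                p[dim] = random.choice((lo, hi))      # envelope edges
            else:
                p[dim] = hi * random.uniform(2, 10)   # way outside
        points.append(p)
    return points

# weight the plan toward the interior, then edges, then a smaller
# sample of extreme "outlier" configurations
benchmarks = sample("inside", 60) + sample("edge", 30) + sample("outside", 10)
assert len(benchmarks) == 100
```

it is the "outside" group in a plan like this that tends to find the crashes rather than the performance anomalies, which is the experience described below.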

the extreme "outliers" had a tendency to precipitate system crashes. as a result, I had to redo the kernel synchronization infrastructure that was included in the resource manager. This had the side effect of eliminating all the zombie/hung processes.

some of this experience I used later when doing a bullet proof i/o supervisor for the disk engineering labs:
https://www.garlic.com/~lynn/subtopic.html#disk

and my all time favorite "envelope" expert:
https://www.garlic.com/~lynn/subboyd.html#boyd

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

crossreferenced program code listings

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: crossreferenced program code listings
Newsgroups: alt.folklore.computers
Date: Mon, 03 Jun 2002 15:42:10 GMT
Brian Inglis writes:
Virtual machines (the other VM) solved that problem by allowing the system programmer to debug VM standalone in a virtual machine under another copy of VM.

the other thing provided by virtual machines was security isolation. I once had a discussion early on with the GM of the group that java was part of (he had worked in the vm environment a couple lifetimes previously) about applying some of the 60s & 70s processes to the java vm. there have also been discussions in some of the multics threads about the use of virtual machine environments for security isolation.

early work on 370 virtual memory was done at cambridge in virtual machines. cambridge had a time-sharing service that included employee researchers, non-employee researchers, mit students, bu students, corporate planners with extremely sensitive corporate financial information, etc.

while non-virtual memory 370 looked quite a bit like 360 ... the 370 virtual memory implementation used different control registers and a slightly different table structure than the 360/67 virtual memory ... and 370 virtual memory had not yet been announced.

as a side-note, there was a security breach where a 370 virtual memory document was leaked to some press, resulting in a major investigation. one of the outcomes was that all the (laser) copier machines were retrofitted with an "ID mask" ... that would show up on every page copied on that machine.

in any case, 370 virtual memory had not yet been announced ... but had been specified. a project was begun at cambridge to modify cp/67 to provide 370 virtual machines (as opposed to 360 virtual machines), including support for the 370 virtual memory architecture (as opposed to the 360/67 virtual memory architecture).

since

1) that was core operating system work and it was desirable to not take down the production time-sharing system ... so the modified cp/67 kernel development went on in a 360/67 virtual machine

2) there was a desire for security isolation from the rest of the time-sharing population ... so that work went on in a 360/67 virtual machine

Even when the work had been completed ... the modified cp/67 kernel (providing 370 virtual memory architecture support) continued to operate in a virtual machine (rather than on the real hardware), maintaining the isolation from the general time-sharing users.

Work then began on a modification to the cp/67 kernel so that it "ran" in a 370 virtual memory architecture (rather than a 360/67 virtual memory architecture) ... aka the kernel would format 370 virtual memory tables and issue 370 control instructions.

That 370 kernel work went on

a) in a virtual machine running under
b) the modified cp/67 that provided 370 virtual machines running in
c) a virtual machine under the unmodifed cp/67 running
d) on a real 360/67

when that kernel work was done, a copy of cms was brought up in that 370 virtual machine


cms in a 370 virtual machine running under
cp/67-I kernel running in 370 virtual memory running under
cp/67-H kernel running in 360/67 virtual memory running under
cp/67-L kernel running on the
    real 360/67
    (which was a bunch of 360 microcode on the
     real microcode engine)

a year after all the above was accomplished, the first engineering model of a 370 machine supporting 370 virtual memory became available. booting the modified "cp/67-i" kernel on that machine was a verification test for the hardware.

https://www.garlic.com/~lynn/subtopic.html#545tech

the other side was when the hardware isn't working correctly. this was the disk engineering lab (bldg. 14 ... and the disk product test lab in bldg. 15) where the production operating system had an MTBF of 15 minutes when a single (engineering) test cell was in operation. bldgs 14&15 had several dedicated mainframes of all the standard models that could be used for testing and verification of disk technology under development. As a result, operations in bldgs. 14&15 went on with dedicated, stand-alone (non-operating system) machine time allocated on a per test cell basis ... rotating between the different engineers.

the effort was then to make an i/o supervisor that would never fail ... regardless of whether the devices were working correctly or not ... with the objective that a dozen or more test cell operations could be supported concurrently on the same machine.

https://www.garlic.com/~lynn/subtopic.html#disk

in the security case ... the virtual machine environment was used to provide security isolation as well as development isolation. in this case, the modified i/o supervisor was to provide development isolation as well as error isolation (in some respects, integrity isolation could be considered a superset of fault isolation ... whether the fault is a security related thing or an error related thing).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Why did OSI fail compared with TCP-IP?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why did OSI fail compared with TCP-IP?
Newsgroups: comp.arch
Date: Mon, 03 Jun 2002 16:30:21 GMT
"del cecchi" writes:
TSlow was so much fun that Rochester wrote their own time sharing system, MTMT (Multiple Terminal Monitor Task), that ran as a long running batch job under MVS. Actually under OS/360, and it supported the whole lab on a couple of 360/65s. Even had its own editor. Supporting a few hundred terminals and a bunch of batch jobs on a 65 was pretty good, I would say. I don't think it was ever marketed however.

this is one of the reasons that, internally within ibm, the cern report to SHARE on the cms/tso bakeoff was stamped IBM Confidential Restricted (available on a need-to-know basis only).

the other similar effort was for VS1 ... something that was originally being called PCO (personal computing option) ... but later changed to vs/pc (virtual storage / personal computer) because pco happened to be the initials of some political party in europe.

in any case, for the first year or so ... there was a PCO performance modeling group ... that ran "accurate" performance models of what PCO was going to be able to do. The CMS group was then put thru huge resource gyrations doing real benchmarks attempting to show that it had performance similar to PCO's. It wasn't until there was actual PCO code running that they found out that the performance model had been predicting performance at least ten times better than what the actual PCO code was delivering.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Bettman Archive in Trouble

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Bettman Archive in Trouble
Newsgroups: alt.folklore.computers
Date: Wed, 05 Jun 2002 12:41:54 GMT
jmfbahciv writes:
That is one of the big reasons DEC was able to sell into an IBM shop.

the 4341 sold well head to head against DEC ... but it was also in competition with the high-end ibm stuff (effectively because of a marketing theme similar to some of the dec themes) ... as a result it ran into some huge internal politics.

now about protecting turf ... when my wife and I were doing HA/CMP and cluster scale-up ... one of the biggest arguments we got into was at a sigops conference with somebody entrenched in vax/clusters (and who worked for dec) whose position was (effectively) that lower end clusters would never be viable.
https://www.garlic.com/~lynn/subtopic.html#hacmp
https://www.garlic.com/~lynn/2001i.html#52 loosely-coupled, sysplex, cluster, supercomputer & electronic commerce

random departmental servers
https://www.garlic.com/~lynn/94.html#6 link indexes first
https://www.garlic.com/~lynn/96.html#16 middle layer
https://www.garlic.com/~lynn/96.html#33 Mainframes & Unix
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001m.html#43 FA: Early IBM Software and Reference Manuals
https://www.garlic.com/~lynn/2001m.html#44 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001m.html#56 Contiguous file system
https://www.garlic.com/~lynn/2001n.html#15 Replace SNA communication to host with something else
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2001n.html#34 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#2 The demise of compaq
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002b.html#4 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#37 Poor Man's clustering idea
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#0 Did Intel Bite Off More Than It Can Chew?
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002e.html#2 IBM's "old" boss speaks (was "new")
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002e.html#61 Computers in Science Fiction
https://www.garlic.com/~lynn/2002e.html#74 Computers in Science Fiction
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Bettman Archive in Trouble

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Bettman Archive in Trouble
Newsgroups: alt.folklore.computers
Date: Wed, 05 Jun 2002 17:43:42 GMT
jmfbahciv writes:
I was thinking about customers' reason for buying in the early 70s :-).

i remember in the early '70s getting called into customer situations involving some harris stuff & sigma7 stuff ... but I don't remember being called into customer situations involving pdp stuff from the early 70s.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Bettman Archive in Trouble

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Bettman Archive in Trouble
Newsgroups: alt.folklore.computers
Date: Thu, 06 Jun 2002 12:48:27 GMT
jmfbahciv writes:
Is it possible that those accounts were no longer interesting to IBM? And that the "reason" used by the customer or DEC's account rep was to rationalize the disinterest? I was just learning how the biz worked back then so the gossip could have been all wrong.

or it could have been more of the 1800 and then s/1 market place ... which i wasn't ever directly involved in

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Future architecture [was Re: Future micro-architecture: ]

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Future architecture [was Re: Future micro-architecture: ]
Newsgroups: comp.arch
Date: Sat, 08 Jun 2002 01:32:36 GMT
"Bill Todd" writes:
Not if you also want to use test-and-set for the kind of low-level MP interlocking it's commonly used for: the overhead of taking an interrupt and would be completely unacceptable (and completely unnecessary) there in the kernel.

charlie invented compare&swap (the mnemonic CAS is charlie's initials ... compare&swap was chosen to match his initials) after a lot of performance work on fine-grain multiprocessor locking on the 360/67. 360s had the test&set instruction ... which was used by the smp implementations on the 360/65 and 360/67.

the original justification included 1) performance improvement compared to test&set implementations and 2) some consideration of weak memory consistency except for memory needing strong consistency (aka test&set only sets the barrier ... it doesn't operate on the actual data that is nominally being manipulated inside the barrier).

a condition was put on getting compare&swap into the 370 architecture ... that uses be found that were not smp-specific ... which gave rise to the various multithreading scenarios ... aka operations in non-kernel code that were subject to interruption (whether or not they were running in an smp environment). The multithreaded scenarios were originally included in the "programming notes" in the 370 principles of operation ... optionally included at the end of each instruction definition/description. These "programming notes" have since been moved to the appendix.

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.2?DT=19970613131822
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.3?DT=19970613131822
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.4?DT=19970613131822
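the non-smp, multithreaded usage pattern from those programming notes is a fetch/compute/compare&swap retry loop. a minimal sketch in python (python has no hardware compare&swap, so the atomic primitive is simulated here with a lock — the point is the retry loop, which works whether the updating code is preempted by another processor or just interrupted):

```python
import threading

class Word:
    """Simulated storage word with an atomic compare-and-swap.
    The lock stands in for the hardware CS instruction's atomicity."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        # Atomically: if the word equals expected, store new and succeed;
        # otherwise fail and return the current value (as CS does via R1).
        with self._lock:
            if self._value == expected:
                self._value = new
                return True, expected
            return False, self._value

def add_to_counter(word, n):
    # The retry loop from the programming notes: fetch, compute, CS;
    # if another thread got in between, refetch and try again.
    ok, old = False, word.load()
    while not ok:
        ok, old = word.compare_and_swap(old, old + n)

counter = Word(0)
threads = [threading.Thread(
               target=lambda: [add_to_counter(counter, 1) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.load())  # 4000
```

no lock is held across the fetch and the compute — only the single compare&swap is atomic — which is why the pattern works for code subject to interruption.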

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

history of CMS

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of CMS
Newsgroups: alt.folklore.computers
Date: Sat, 08 Jun 2002 01:47:47 GMT
cstacy@grant.org (Christopher C. Stacy) writes:
What was the connection to Cambridge, England?

none that i know of.

cambridge scientific center was on the 4th floor, 545 tech sq, cambridge (same building as project mac & multics).

some number of the people had previously worked on the ctss time-sharing system on the 7094 (some amount of cms comes out of that ctss experience). melinda's paper gives a pretty good description; ... misc. other discussions
https://www.garlic.com/~lynn/subtopic.html#545tech

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Future architecture [was Re: Future micro-architecture: ]

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Future architecture [was Re: Future micro-architecture: ]
Newsgroups: comp.arch
Date: Sat, 08 Jun 2002 01:40:55 GMT
Anne & Lynn Wheeler writes:

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.2?DT=19970613131822
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.3?DT=19970613131822
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.4?DT=19970613131822


the rest of the refs in the appendix.
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6?DT=19970613131822
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.5?SHELF=&DT=19970613131822

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

history of CMS

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of CMS
Newsgroups: alt.folklore.computers
Date: Sat, 08 Jun 2002 13:01:43 GMT
lwinson@bbs.cpcn.com (lwin) writes:
Is this the same operating system under which PROFS (aka "office vision") now runs under today (with the weird 8+8+2 file names and xedit)?

BTW, they're gonna have to drag me kicking and screaming away from PROFS for my email. It works and works well, outlasting "groupwise".


the basis for PROFS email was something called VMSG ... the PROFS group got an early, limited function version of it ... and wrapped a bunch of menus and pfkey stuff around it ... and claimed they had written all the code. when the author of VMSG tried to make an issue of it ... they disputed his claims .... until he pointed out that his initials occurred in a control field of every PROFS email that ever existed.

random refs:
https://www.garlic.com/~lynn/99.html#35 why is there an "@" key?
https://www.garlic.com/~lynn/2000c.html#46 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2001j.html#35 Military Interest in Supercomputer AI
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#39 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#40 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#56 E-mail 30 years old this autumn
https://www.garlic.com/~lynn/2002f.html#14 Mail system scalability (Was: Re: Itanium troubles)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

history of CMS

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of CMS
Newsgroups: alt.folklore.computers
Date: Sat, 08 Jun 2002 13:15:10 GMT
rsteiner@visi.com (Richard Steiner) writes:
At NWA, I thought we ran PROFS (and subsequently OV) under VM.

the original virtual machine (virtual memory) hypervisor was CP/40 (control program, 360/40). the thing that ran in the virtual machine was called CMS (cambridge monitor system, whose user interface started out looking a lot like ctss).

CP/40 was ported to 360/67 when they became available and renamed CP/67.

When 370s came along CP/67 was renamed VM/370 and cambridge monitor system was renamed conversational monitor system. CMS was the primary human interactive interface that was used under VM. However, lots of other things also ran under VM ... MVT, MVS, MUSIC, VS1, etc.

as outlined in melinda's paper ... CP/40, CP/67, CMS etc were all done at the cambridge science center (4th floor, 545 tech. sq).

late in the cp/67 cycle, the group was split off from the science center.

At that time, the ibm boston programming center occupied the 3rd floor of 545 tech sq. They had done something called CPS ... or conversational programming system ... an interactive monitor that ran under OS/360 and provided an interpreted PL/I language facility (there was also special microcode for the 360/50 that CPS could use, which executed significantly faster). Jean Sammet and Nat Rochester were also members of the boston programming center.

The boston programming center effectively got shut down ... the CPS people merged into the CP/67 (and then renamed VM/370) development group, and various non-CPS people from boston programming center got attached to the cambridge science center. The vm/370 development group fairly quickly outgrew the space on the 3rd floor and moved everything out to a building at burlington mall.

random 545
https://www.garlic.com/~lynn/subtopic.html#545tech

misc. other stuff.
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/98.html#7 DOS is Stolen!
https://www.garlic.com/~lynn/99.html#179 S/360 history
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000b.html#55 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000d.html#37 S/360 development burnout?
https://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001b.html#42 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001m.html#49 TSS/360
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#27 moving on
https://www.garlic.com/~lynn/2002h.html#34 Computers in Science Fiction

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Java, C++ (was Re: Is HTML dead?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Java, C++ (was Re: Is HTML dead?)
Newsgroups: alt.folklore.computers
Date: Sat, 08 Jun 2002 13:35:01 GMT
"Ross Simpson" <rosssimpson@my_spammers_address(optusnet).com.au> writes:
I don't know anything about APL itself (in terms of program structure & appearence), but what I know is, like BASIC it was written on a powerful mainframe computer at the time to take advantage of the power of a mainframe, however unlike BASIC it is not supposed to be easy to learn (from what I hear).

APL has a lot of stuff that encapsulates arrays and vectors as well as bunch of other stuff.

early version done on the 7090 ... because mainframes were about the only kinds of computers around (although a 7090 has been compared to something more like a 286 or 386 in terms of power & resources, and its monitor was even simpler than dos).
https://www.garlic.com/~lynn/2001.html#2

falkoff and iverson then did apl\360 at the ibm philadelphia science center ... it was a self-contained interactive monitor with 16k-32k byte workspaces that were rolled in/out under the monitor's control ... typically running under os/360 mvt.

around '71, the cambridge science center converted apl\360 to run under cms ... as cms\apl. this opened things up with virtual memory and allowed up to 16mbyte workspaces ... which caused other problems. during the 70s apl was used extensively for all kinds of modeling applications ... including much of the stuff performed on spreadsheets today. it represented a challenge at CSC, which also provided a CP/67, CMS time-sharing service. The corporate business planners started using it from armonk .. and loaded lots of extremely sensitive business information on the cambridge system ... which also had ibm employees, mit researchers, mit students, bu students, etc.

in the mid-70s, the palo alto science center picked up cms\apl and modified it into apl\cms ... and did a microcode accelerator for the 370/145 (apl\cms on a 145 with the microcode ran nearly as fast as on a 168 without it). the big thing in apl\cms was the introduction of "shared variables" as a mechanism for interacting with the external world. cambridge had caused something of a bad feeling with falkoff and iverson by "polluting" the apl language with direct system calls (i/o read/writes, etc). PASC cleaned that up in apl\cms by migrating all external environmental interactions to the shared variable paradigm.

There was a precursor to the ibm/pc ... which offered Basic and APL ... where there was a 360 subset simulation in the machine ... and the APL was a stripped down version of the original os/360 apl\360.

The APL\CMS stuff was picked up by an official product group in STL from the palo alto science center ... and renamed APL2 ... in addition to getting a MVS port.

misc. apl
https://www.garlic.com/~lynn/subtopic.html#hone

pc precursor:
https://www.garlic.com/~lynn/2000.html#69 APL on PalmOS ???
https://www.garlic.com/~lynn/2000.html#70 APL on PalmOS ???
https://www.garlic.com/~lynn/2000d.html#15 APL version in IBM 5100 (Was: Resurrecting the IBM 1130)
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2000g.html#46 A new "Remember when?" period happening right now
https://www.garlic.com/~lynn/2001b.html#45 First OS?
https://www.garlic.com/~lynn/2001b.html#56 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2002b.html#39 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#47 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#64 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002f.html#44 Blade architectures

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Java, C++ (was Re: Is HTML dead?)

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Java, C++ (was Re: Is HTML dead?)
Newsgroups: alt.folklore.computers
Date: Sat, 08 Jun 2002 15:04:19 GMT
Anne & Lynn Wheeler writes:
The APL\CMS stuff was picked up by an official product group in STL from the palo alto science center ... and renamed APL2 ... in addition to getting a MVS port.

oops, i'm pretty sure that the STL product group initially renamed APL\CMS to APL\SV before subsequently renaming it APL2.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

history of CMS

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of CMS
Newsgroups: alt.folklore.computers
Date: Sat, 08 Jun 2002 15:17:12 GMT
"Russ Holsclaw" writes:
CMS communicated with the VM control program (CP) through a "secret handshake" in the form of a special privileged instruction called "Diagnose" (op code X'83'). The REAL Diagnose instruction (running in REAL Supervisor state) was a virulently nasty instruction that was considered so dangerous that it wasn't even assigned an assembler mnemonic. Running on the bare hardware, it performed diagnostic actions that were only of use to hardware diagnostic programs, like deliberately causing parity errors to test the parity-checking circuits, and making sure they triggered a machine-check interrupt ... stuff like that. The Diagnose instruction behaved completely differently on each model of the 360 and 370 CPU's, so no sane operating system could effectively use it, except for model-dependent diagnostics.

It was safe to issue Diagnose while running on a virtual machine, however, because it was a privileged instruction, so it would be safely trapped by the VM control program, which used it as a way of establishing an API between the virtual machine, and the real one.


original cms on cp/40 and cp/67 didn't use diagnose ... and in fact could run on the real hardware as well as under vm.

I had done a lot of CP/67 performance enhancements for running multiple virtual machines as well as optimizing the execution of guest virtual operating systems (aka os/360) as an undergraduate. random ref:
https://www.garlic.com/~lynn/94.html#18

i also (still an undergraduate) did a modification to the CMS disk i/o for "synchronous" disk operations. for the most part cms was single threaded; the default disk i/o sequence was always to start the disk i/o, wait, take the interrupt, restart, etc. that was a lot of wasteful execution under VM. I implemented a SIO "immediate operation" (CC=1, csw stored) i/o convention for the cms & cp interface ... that significantly reduced the overhead.
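the saving comes from how many privileged operations CP has to intercept and simulate per disk request. a purely conceptual sketch (not real CP/CMS code — the exact number and names of intercepted operations are illustrative):

```python
# Count how many times CP must intercept and simulate a privileged
# operation for one virtual-machine disk read, under each convention.

def disk_read_interrupt_driven(cp_intercepts):
    # conventional sequence: each step is a privileged op that traps to CP
    cp_intercepts.append("SIO")        # start the channel program
    cp_intercepts.append("WAIT")       # load wait-state PSW
    cp_intercepts.append("I/O intr")   # CP reflects the i/o interrupt
    cp_intercepts.append("ack/clear")  # clear interrupt status (illustrative)
    return "data"

def disk_read_synchronous(cp_intercepts):
    # SIO treated as an "immediate" operation: CP performs the whole i/o
    # and returns CC=1 with the CSW already stored -- one interception.
    cp_intercepts.append("SIO (CC=1, CSW stored)")
    return "data"

conventional, synchronous = [], []
disk_read_interrupt_driven(conventional)
disk_read_synchronous(synchronous)
print(len(conventional), len(synchronous))  # 4 1
```

same i/o, one trap into CP instead of several — which is where the reduced overhead comes from for a single-threaded guest that was going to wait anyway.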

The cambridge science people didn't like it because it "violated" the hardware architecture interface (what i did was close ... but no cigar). they came up with the observation that the x'83' diagnose was defined in the 360 hardware architecture as an instruction whose execution was model dependent. they then invented the concept of a virtual machine as a machine model (up until then there were 360/20, 360/30, 360/40, 360/50, 360/65, etc ... where the function performed by the diagnose instruction was specific to that machine model). The virtual machine x'83' diagnose instruction was then defined to have a load of subcode operations that selected different actual instruction functions (something like the b2 instruction introduced in 370, where the second byte selected among another 256 instruction op-codes).

This was released in CP/67 "3.1" ... at that time CMS on initial startup would determine whether it was running on a real machine or a virtual machine ... and either enable or disable the use of the x'83' mechanism. for the port to vm/370 (and the rename from cambridge monitor system to conversational monitor system) the ability for cms to run on the bare hardware (w/o the x'83' stuff) was removed.
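the subcode scheme is just one trapped opcode fanning out through a dispatch table. a hypothetical sketch (the subcode numbers and handler functions here are made up for illustration, not the actual VM diagnose code assignments):

```python
# One privileged opcode (x'83' diagnose); a subcode selects the actual
# function CP performs on behalf of the virtual machine.

def diag_pseudo_timer(state):
    # hypothetical handler: return timer info to the guest
    state["timer_read"] = True
    return 0

def diag_virtual_console(state):
    # hypothetical handler: perform a virtual console function
    state["console_io"] = True
    return 0

DIAGNOSE_SUBCODES = {
    0x0C: diag_pseudo_timer,      # invented subcode assignments
    0x08: diag_virtual_console,
}

def diagnose(subcode, state):
    handler = DIAGNOSE_SUBCODES.get(subcode)
    if handler is None:
        # undefined subcode: the guest sees a program exception
        raise ValueError("specification exception: unknown subcode")
    return handler(state)

vm_state = {}
diagnose(0x08, vm_state)
print(vm_state)  # {'console_io': True}
```

new guest/hypervisor functions are added by registering new subcodes, without consuming any more of the opcode space — the same trick the b2 second-byte scheme uses.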

random other refs:
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Sizing the application

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Sizing the application
Newsgroups: bit.listserv.ibm-main
Date: Sat, 08 Jun 2002 15:31:38 GMT
"John S. Giltner, Jr." writes:
First problem is that mainframe mips are not the same as RISC mips, which are not the same as PC mips. If you really want to try, you can take and compare your boxes mip rating to it MSU rating. See how many MSU CICS region is using and this can give you a rough idea of the mips.

Second problem is I/O rates.

I remember reading a artical about FLEX/ES, a s/390 hardware emulator that runs on Intel processors. They stated that on average 1 s/390 machine instruction translated to 17 Intel machine instructions. The smallest was 1 s/390 = 2 Intel, the largest was 1 s/390 = over 1,000 Intel. So you could say that for every s/390 mip you would need 17 Intel mips, that is just for CPU power, that does not include I/O. It also does not take into account memory managment (os/390 beats Windows hands down) and workload management (does Windows even have this), along with other advantages.


i think the average is close to what 360s & 370s used to get. nearly all the 360s and all the low-end and mid-range 370s were microcoded machines ... with "vertical microcode". The native microcode engine tended to be somewhat risc'y ... and the machines tended to average around 10 microcode instructions for every 360/370 instruction (aka if you had a 1mip 370 processor, that implied the native microcode engine was running around 10mips).
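the arithmetic in the paragraph above, spelled out:

```python
def native_engine_mips(target_mips, expansion_ratio):
    """MIPS the underlying engine must deliver to present a target
    architecture at a given instructions-per-instruction ratio."""
    return target_mips * expansion_ratio

# ~10 native microcode instructions per 370 instruction:
print(native_engine_mips(1.0, 10))  # 10.0 -- a 1-mip 370 needs a ~10-mip engine
# the FLEX/ES figure quoted above, ~17 intel instructions per s/390:
print(native_engine_mips(1.0, 17))  # 17.0
```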

One of the first big RISC projects was called Fort Knox ... where all the various microcoded engines in the corporation were going to be replaced with the 801 RISC processor (low & mid-range 370s, rochester s/38, as/400, etc). Fort Knox was eventually killed, in part because native 370 hardware chip implementations were starting to move down into the mid-range.

That appeared to free up some engineers, who left and went off to other companies to do RISC implementations. The 801 also evolved into ROMP ... which was going to be an OPD displaywriter follow-on ... which morphed into the UNIX PC/RT before delivery to customers ... giving rise to the RS/6000 RIOS ... and things sort of came full circle with the as/400 being moved to the power/pc platform.

there are one or two other intel based simulators out there besides flex/es.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

history of CMS

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of CMS
Newsgroups: alt.folklore.computers
Date: Sat, 08 Jun 2002 22:27:52 GMT
cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
Can I mention VISTA again at this point? I'm still not exactly sure about the connection, but it was claimed about 12 years ago that PROFS benefitted somewhat from VISTA's experience.

As a subnote, I preferred VISTA, IMHO its use of resources was quite a bit more frugal and its time-booking system was a lot better, although PROFS looked a bit nicer (sometimes)


vmsg was from the late 70s and initial profs shortly thereafter (summer 81). there is the whole ollie thing from 20 years ago about profs email being in archives. vmsg drew on experience from the use of rmsg on the internal network during much of the '70s. the problem with the profs version is that the profs group had scarfed an early, limited function version of the vmsg source ... and then the antagonism they generated effectively precluded them picking up the full function version later on.

on vmshare archives
http://vm.marist.edu/~vmshare

profs memo was created 12/11/81.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Bettman Archive in Trouble

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Bettman Archive in Trouble
Newsgroups: alt.folklore.computers
Date: Sat, 08 Jun 2002 22:35:33 GMT
jmfbahciv writes:
I've been trying to remember specific customers but my recall is awful these days. IIRC, universities fell in this category.

1800 and series/1 were ibm minicomputers. 1800 saw a lot of use in the 60s .. a lot in process control industry, oil industry, etc (i remember visiting amoco research in tulsa about the time they sort of upgraded from 1800 to vm on 370/135).

series/1 came out in the mid '70s and was also used extensively in the process control industry, but also saw heavy deployment in various communication areas. there were unix ports to the series/1, as well as things like ucla's distributed unix look-alike, locus.

there was an effort inside ibm to get the series/1 "peachtree" processor selected as the 3705 microprocessor instead of the uc.5.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

How does Mozilla 1.0 compare with Opera?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How does Mozilla 1.0 compare with Opera?
Newsgroups: netscape.public.mozilla.general
Date: Sat, 08 Jun 2002 23:28:32 GMT
i have some glossaries with lots & lots of hrefs
https://www.garlic.com/~lynn/secure.htm
https://www.garlic.com/~lynn/financial.htm

ie "marks" all URLs as read ... once the file containing them has been referenced in any way. however it moved well forward and backward in the same group of files.

netscape 4.7x ... had sporadic problems recognizing that movement was within the same file (and would reload)

opera 6 (6.0, 6.01, 6.02, 6.03) handled recognizing that it was moving around in the same file ... except for some sporadic problems when using the "back" button. it would sporadically go totally compute bound and lock up for anywhere from a minute to several minutes (800mhz processor)

mozilla 1.0 so far seems to do the best overall.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

history of CMS

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of CMS
Newsgroups: alt.folklore.computers
Date: Sun, 09 Jun 2002 16:44:39 GMT
mikekingston@cix.co.uk (Michael J Kingston) writes:
At IBM Hursley in 1964 CP/67 was installed, but I didn't have cause to use it. C stood for Cambridge Mass, though it wasn't until later that I learned that CP/67 came out of the IBM Scientific Center there. I believe that APLSV was another of their excellent products.

maybe 74?

when emea hdqtrs was moved from NY to paris ... I hand carried the HONE installation (it went into the new building at la defense). later there were large hone installations in uithoorn and havant.
https://www.garlic.com/~lynn/subtopic.html#hone

HONE was the system used world wide for sales, marketing, and field support. it started out on a cp/67 with extensive APL-based applications running under CMS. these were updated to vm/370 when available.

as in another thread, cambridge modified apl\360 into cms\apl (and caused quite a bit of upset by implementing direct system calls). the palo alto science center enhanced it into apl\cms ... and all the system call support was replaced with the shared variable paradigm. the STL product group eventually picked up apl\cms ... added an MVS port and called it aplsv ... later renamed and currently called apl2.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Are you really who you say you are?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Are you really who you say you are?
Newsgroups: sci.crypt
Date: Mon, 10 Jun 2002 03:45:57 GMT
"Anon E. Maus" writes:
Coming from me, you may laugh... hell I am. But the reason I feel I have to be Anon (AntiSpam) is one thing we are addressing in a new product (actually, a NextGen of an existing product) we are releasing soon.

So I have a question: What in your best estimation will be reasonable and good information to collect from subscribers to a secured internet email system to be able to say "I trust you are who you say you are"?

Some background: We have a number of examples from banking folks, securities, etc., making quite a list of items. I would like your input on what would be a reasonable tradeoff on items for identity verification. E.G., Email response, mother's maiden name, pass phrase hints, mailing address, dog's name, etc... We serve a very diverse range of clients around the world, from government and private sectors and individuals as well.

SSL and A digital ID from a CA for each subscriber is not an option, many subscribers are on wireless only, and are very transmission-cost sensitive. The system is installed and setup from a secure web site, then placed in a wireless environment, and all transmissions are via compressed (imploded) and encrypted (3DES) packets. A centralized hierarchy of servers handle the messages and key management on the internet.

TIA


note that X9.59 specifies a digital signature to authenticate the sender as well as authenticate that the message wasn't modified in transit.

various previous protocols targeted for the internet somewhat ignored the bandwidth requirement for payment type of environment ... transmitted lots of stuff while the transaction was still on the internet and then threw it all away when transitioning to the real financial network.

the requirement given the x9a10 working group was to preserve the integrity of the financial infrastructure for all payments in all environments. that implied an end-to-end authentication protocol that worked in all environments (non-internet, internet portion of a transaction, non-internet portion of transactions, etc). since a standard transaction size in the payment network is on the order of 60-80 bytes, meeting the integrity requirement at the same time as providing a rational payload increase was an interesting task.

in this scenario ... a public key can be registered with a financial institution ... in much the same manner that a public key is registered with a certification authority. however, instead of sending back a certificate ... nothing is sent back ... the certificate is just saved in the account record. a payment transaction then goes through with a digital signature ... but no certificate ... when the transaction gets to the financial institution, the certificate is retrieved from the account record and the associated public key is used to authenticate the transaction.

whatever proofing payload there is occurs at registration time .... not on every subsequent message. This is nearly identical to the proofing process used by the registration authority element of a certification authority ... but with the certificate eliminated.
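the register-once / authenticate-every-transaction flow can be sketched in a few lines of python. a real implementation would use an actual public-key signature algorithm (e.g. ecdsa); HMAC stands in for sign/verify here since the python standard library has no public-key signatures, and the account numbers, field names, and transaction layout are all illustrative, not the x9.59 wire format:

```python
import hmac, hashlib

# account records at the financial institution: the key material is
# bound to the account once, at registration, so no certificate ever
# travels with a transaction.
accounts = {}

def register(account_no, key):
    # done once, with whatever proofing the institution requires
    accounts[account_no] = {"key": key}

def sign_transaction(key, txn_bytes):
    # stand-in for the consumer's digital signature over the transaction
    return hmac.new(key, txn_bytes, hashlib.sha256).digest()

def authorize(account_no, txn_bytes, signature):
    # institution retrieves the registered key from the account record
    # and authenticates the transaction -- no certificate on the wire
    key = accounts[account_no]["key"]
    expected = hmac.new(key, txn_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

register("4005-1234", b"consumer-secret")
txn = b"PAY;amount=42.50;merchant=0731;seq=9"   # ~60-80 byte transaction
sig = sign_transaction(b"consumer-secret", txn)
print(authorize("4005-1234", txn, sig))          # True
print(authorize("4005-1234", txn + b"X", sig))   # False
```

the second check shows the in-transit-modification property: any change to the transaction bytes invalidates the signature, while the payload added per transaction is only the signature itself.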

an example showing the mapping of x9.59 to the iso 8583 payment network:
https://www.garlic.com/~lynn/8583flow.htm

various x9.59 discussions & pointers
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#privacy

some discussion applying the technique to other environments & transactions
https://www.garlic.com/~lynn/x959.html#aads
https://www.garlic.com/~lynn/subpubkey.html#radius

some discussion as related to SSL domain name certificates:
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

somewhat related discussions with regard to non-repudiation
https://www.garlic.com/~lynn/aadsm10.htm#cfppki13 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#cfppki15 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#cfppki18 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#paiin PAIIN security glossary & taxonomy
https://www.garlic.com/~lynn/aadsm10.htm#tamper Limitations of limitations on RE/tampering (was: Re: biometrics)
https://www.garlic.com/~lynn/aadsm10.htm#bio3 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#bio7 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#7 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#8 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#9 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#11 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#12 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#13 Words, Books, and Key Usage
https://www.garlic.com/~lynn/aadsm11.htm#14 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#15 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm3.htm#cstech4 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm5.htm#shock2 revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm5.htm#ocrp Online Certificate Revocation Protocol
https://www.garlic.com/~lynn/aadsm5.htm#spki2 Simple PKI
https://www.garlic.com/~lynn/aadsm6.htm#nonreput Sender and receiver non-repudiation
https://www.garlic.com/~lynn/aadsm6.htm#nonreput2 Sender and receiver non-repudiation
https://www.garlic.com/~lynn/aadsm6.htm#terror7 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm6.htm#terror10 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm7.htm#pcards4 FW: The end of P-Cards?
https://www.garlic.com/~lynn/aadsm7.htm#cryptofree Erst-Freedom: Sic Semper Political Cryptography
https://www.garlic.com/~lynn/aadsm7.htm#rubberhose Rubber hose attack
https://www.garlic.com/~lynn/aadsm8.htm#softpki8 Software for PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki9 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#softpki23 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#carnivore Shades of FV's Nathaniel Borenstein: Carnivore's "Magic Lantern"
https://www.garlic.com/~lynn/aadsm9.htm#pkcs12b A PKI Question: PKCS11-> PKCS12
https://www.garlic.com/~lynn/aadsmail.htm#complex AADS/CADS complexity issue
https://www.garlic.com/~lynn/aadsmore.htm#keytext proposed key usage text
https://www.garlic.com/~lynn/aepay7.htm#nonrep0 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep1 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep2 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep3 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep4 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep5 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep6 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/2000.html#57 RealNames hacked. Firewall issues.
https://www.garlic.com/~lynn/2001c.html#30 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#34 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#39 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#40 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#41 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#42 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#43 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#44 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#45 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#46 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#47 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#50 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#51 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#52 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#54 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#56 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#57 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#58 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#59 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#60 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#72 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#73 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001d.html#41 solicit advice on purchase of digital certificate
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001f.html#31 Remove the name from credit cards!
https://www.garlic.com/~lynn/2001g.html#1 distributed authentication
https://www.garlic.com/~lynn/2001g.html#11 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#38 distributed authentication
https://www.garlic.com/~lynn/2001g.html#61 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001h.html#7 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001i.html#16 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#36 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#57 E-commerce security????
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#49 Are client certificates really secure?
https://www.garlic.com/~lynn/2001j.html#52 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#1 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#34 A thought on passwords
https://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001m.html#43 FA: Early IBM Software and Reference Manuals
https://www.garlic.com/~lynn/2001n.html#71 Q: Buffer overflow
https://www.garlic.com/~lynn/2002c.html#7 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#15 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002d.html#16 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002d.html#41 Why?
https://www.garlic.com/~lynn/2002e.html#18 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002e.html#29 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#36 Crypting with Fingerprints ?
https://www.garlic.com/~lynn/2002e.html#58 O'Reilly C Book
https://www.garlic.com/~lynn/2002f.html#10 Least folklorish period in computing (was Re: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002f.html#23 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#45 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002g.html#37 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002g.html#69 Digital signature
https://www.garlic.com/~lynn/2002h.html#41 Biometric authentication for intranet websites?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

history of CMS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of CMS
Newsgroups: alt.folklore.computers
Date: Tue, 11 Jun 2002 04:37:23 GMT
"Russ Holsclaw" writes:
By the way, I recall that Fred Brooks commented, in The Mythical Man-Month, that he thought the 360 Principles of Operation manual was the finest piece of technical writing he had ever seen. He particularly singled out for praise the way it described not only the way properly-formed instructions would work, but exactly what would happen when the program did something illegal, as well as those special cases where the results were "unpredictable", usually because of a few rare model-dependencies. The absolute unambiguous clarity of this specification, I think, paved the way for making the Virtual Machine concept a workable reality.

note that the principles of operation was a subset of the larger architecture red book. by approx. 1970, the principles of operation (and the architecture red book) had been converted to CMS script (if you had the sequence of the dash-revision manuals ... you could tell when printing changed from traditional offset to 1403). the architecture red book was the internal hardware architecture document that included lots of detailed trade-off and implementation information not in the principles of operation ... as well as not-yet-announced and/or unreleased instructions. the principles of operation was the subset of sections cleared for open publication.

script was originally a "dot" runoff-like document formatting language from the mid-60s ... but was enhanced by G, M, & L with GML around 1970 (precursor to SGML, HTML, XML, etc).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

history of CMS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of CMS
Newsgroups: alt.folklore.computers
Date: Sat, 15 Jun 2002 17:09:03 GMT
jmfbahciv writes:
VM? WTF does virtual memory have to do with status of functional specs? What he described was the way they did their processes. We had groups who had rules that no code gets written until all specs were reviewed, revised, agreed to and turned inside out. That is precisely the reason it took DEC 8+ years to get reasonably working network out to the field. Not having this restrictive rule is precisely the reason TOPS-10 got a working ANF-10 out to the field in the mid-70s (rather than DECnet's mid-80s).

several things had virtual memory in the '60s ... the distinction with cp/40 and virtual memory was that it also implemented a virtual machine ... aka several copies of operating systems implemented to run on bare hardware could be run concurrently under cp in virtual machines on the same real hardware.

an enhanced version of this was when 370 virtual machines were implemented on cp/67 running on 360/67. operating systems then were developed, debugged and tested in 370 virtual machines before 370 hardware was available.

ref:
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

history of CMS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of CMS
Newsgroups: alt.folklore.computers
Date: Mon, 17 Jun 2002 06:42:06 GMT
jmfbahciv writes:
Yup. [emoticon removes very big e-foot from e-mouth] My gaffe has been pointed out. <snip ref>

/BAH


sorry ... for 10 days i'm 8hrs (16hrs?) or more out of sync with most of the postings on the list. partway back tomorrow.

i will be getting some silicon and it will be at an EAL5-high rating (ref AADS chip strawman on garlic web pages).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

history of CMS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of CMS
Newsgroups: alt.folklore.computers
Date: Tue, 18 Jun 2002 04:01:02 GMT
jmfbahciv writes:
Is this [new silicon] your favorite pastime?

nope, just an interesting thing to do. current pastime is organizing knowledge (sometimes including analysis of end-to-end business processes) ... like the pattern stuff i do for the IETF standards process ... part of it can be seen in the RFC index structure on garlic web pages. other parts are the merged glossary/taxonomy work for security, financial, etc (also on garlic web pages).

the silicon stuff was just part of end-to-end strong authentication for electronic business processes; it was sort of an outgrowth of the payment transaction work that my wife and I did with a small client/server startup in silicon valley responsible for SSL & HTTPS (sometimes referred to as electronic commerce :-) ).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Where did text file line ending characters begin?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where did text file line ending characters begin?
Newsgroups: alt.folklore.computers
Date: Thu, 20 Jun 2002 10:52:50 GMT
"Charlie Gibbs" writes:
This difference in record formats (fixed-length vs. variable-length delimited) is to me one of the real differences between mainframes and minicomputers. In fact, combined with the batch-vs.-interactive mindsets, it pretty well sums it up IMHO. (The two points are actually somewhat related.)

an implicit part of the batch-vs-interactive paradigm was that batch assumed that there was rarely or no human directly connected so that errors and exception cases had to be automagically handled.

there was much more of a tendency in the interactive environment of involving a human in the exception & error process.

a current situation can be seen in some of the dim/dark room web farms ... is a human needed for care & feeding of every box?

a large financial transaction operation a couple years ago attributed their 100 percent availability to
• ims hot-standby
• automated operator

ims hot-standby was a mainframe database replication technique.

there were a few things still left to the (human) operators on ibm mainframes ... but as other forms of errors and exceptions were either eliminated or given automated handling ... human (operator) mistakes became one of the leading causes of failures/outages.

much of the automated exception & error mitigation technology came from the significantly higher focus on the area in batch environments, evolving over 30 years or more.

it is possible to have all sorts of technology efforts to address exception & error mitigation. however, a true test of a market focus on the subject is if the information is openly audited and reported. at least in some segments of the mainframe industry ... not only is every error, exception, and fault recorded and reported ... on a per machine basis ... but there is a service that gathers all such data from a large segment of the installed customer base and publishes reports with statistics broken out by vendor.

there is the tale i've repeated about being contacted regarding a concern over one such report. a new machine was developed and a certain type of error was projected to occur 3-5 times over a period of a year for all customers (not avg. per machine per year ... but the total aggregate number of errors across all operating machines for a period of a year). the industry report showed that something like a total of 15 such errors were recorded across all machines for a period of a year. there was great concern that the total number was 15 rather than 3-5 and an investigation was launched.

It turns out that some software that I had been involved in many years earlier would perform "local channel i/o" extension simulation over telco lines. when the telco line had an uncorrectable error, the software simulated the report of this other type of hardware error (which basically resulted in higher level error handling retrying the operation). On further investigation it turned out the increased errors were in fact at installations running this software.

some past refs to the error reporting situation:
https://www.garlic.com/~lynn/94.html#24 CP spooling & programming technology
https://www.garlic.com/~lynn/96.html#27 Mainframes & Unix

random automated operator &/or ims hot-standby refs:
https://www.garlic.com/~lynn/99.html#71 High Availabilty on S/390
https://www.garlic.com/~lynn/99.html#77 Are mainframes relevant ??
https://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#107 Computer History
https://www.garlic.com/~lynn/99.html#128 Examples of non-relational databases
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000.html#13 Computer of the century
https://www.garlic.com/~lynn/2000.html#22 Computer of the century
https://www.garlic.com/~lynn/2000c.html#45 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#47 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000f.html#12 Amdahl Exits Mainframe Market
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#54 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001.html#43 Life as a programmer--1960, 1965?
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
https://www.garlic.com/~lynn/2001d.html#70 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001d.html#71 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#44 Where are IBM z390 SPECint2000 results?
https://www.garlic.com/~lynn/2001e.html#47 Where are IBM z390 SPECint2000 results?
https://www.garlic.com/~lynn/2001g.html#44 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001h.html#8 VM: checking some myths.
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#13 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#14 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001l.html#47 five-nines
https://www.garlic.com/~lynn/2001n.html#3 News IBM loses supercomputer crown
https://www.garlic.com/~lynn/2001n.html#47 Sysplex Info
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow
https://www.garlic.com/~lynn/2002e.html#68 Blade architectures

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Where did text file line ending characters begin?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where did text file line ending characters begin?
Newsgroups: alt.folklore.computers
Date: Thu, 20 Jun 2002 10:58:11 GMT
"Russell P. Holsclaw" writes:
On the issue of end-of-record indicators, it should be noted that on IBM mainframes there are no end-of-record delimiters at all. Fixed length records are simply concatenated together and distinguished purely by byte-count. Variable-length records are always simply preceded by a binary integer byte-count, usually a 16-bit integer with a system-defined maximum of 32767 bytes (the integer being treated as signed, mainly because the System/360 had no instructions for unsigned half-word operations). New-line control codes and such were only used with terminals. Even a printer, like the 1403, was a "unit record" device, that printed one line per "record", with vertical spacing being determined by a device command code that was not part of the datastream. Although there was a common convention of putting a "carriage control character" at the beginning of a print line, this was not sent as data to the printer, but was used to select the CCW (channel command word) command code used to print the line.

i frequently contend that the use of explicit lengths for variable length records results in a programming convention environment with a much lower incidence of buffer-overruns. conversely, the convention of implicit lengths in variable length records associated with most C programming environments has contributed greatly to the incidence of buffer-overrun errors (as well as the associated security exploits).

misc. past refs:
https://www.garlic.com/~lynn/99.html#85 Perfect Code
https://www.garlic.com/~lynn/99.html#163 IBM Assembler 101
https://www.garlic.com/~lynn/2000.html#25 Computer of the century
https://www.garlic.com/~lynn/2000b.html#17 ooh, a real flamewar :)
https://www.garlic.com/~lynn/2000b.html#22 ooh, a real flamewar :)
https://www.garlic.com/~lynn/2000c.html#40 Domainatrix - the final word
https://www.garlic.com/~lynn/2001b.html#47 what is interrupt mask register?
https://www.garlic.com/~lynn/2001b.html#58 Checkpoint better than PIX or vice versa???
https://www.garlic.com/~lynn/2001i.html#54 Computer security: The Future
https://www.garlic.com/~lynn/2001k.html#43 Why is UNIX semi-immune to viral infection?
https://www.garlic.com/~lynn/2001l.html#49 Virus propagation risks
https://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002e.html#58 O'Reilly C Book

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

time again

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: time again
Newsgroups: alt.folklore.computers
Date: Thu, 20 Jun 2002 11:11:18 GMT
"Rupert Pigott" <dark.try-eating-this.b00ng@btinternet.com> writes:
I reckon using clock() to measure the amount of work my process is doing would yield lots, at least the following insights :
1) Timeslicing costs
2) Scheduling algorithm holes
3) System call costs (depends on implementation)
4) Double word memory locked access costs
5) Jitter
6) The need for cycle accurate performance counters in the ISA. :P
7) I would like to try this on a freak machine like a Cyber205, or a Paragon. :)


a small counter-example ... working on dynamic adaptive and fairshare ... as well as page replacement "clock" algorithms in the 60s and 70s ... a situation arose in some environments with extremely high i/o rates on cache machines. the significant number of i/o interrupts resulted in effective cache thrashing ... with the constant switching back & forth between kernel i/o handling code and application code.

... so i produced some dynamic adaptive code (that was somewhat machine sensitive) that adjusted the "time-slice" interrupt and ran disabled for all I/O interrupts during application execution (when the mean-time between interrupts ... or the interrupt rate ... exceeded a threshold). the slight increase in i/o processing latency was more than gained back by significantly improved processing thruput ... both for the application and I/O. The application loaded cache and ran for longer periods of time. When control returned to the kernel (either because of time-slice end, or the application giving up control ... like its own i/o call) ... the kernel would check & "drain" all pending i/o interrupts. The "batching" of i/o interrupt processing resulted in the i/o interrupt handling having a much higher cache hit ratio ... which tended to more than offset any latency that occurred by delaying i/o interrupt processing.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

time again

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: time again
Newsgroups: alt.folklore.computers
Date: Fri, 21 Jun 2002 10:54:00 GMT
"George R. Gonzalez" writes:
In one program, I wanted to not waste too much time polling the serial port, as going into and out of the "anything there yet" system request was taking up huge amounts of time (not in great supply on a 5MHz microcoded machine). My bright idea was to call the system clock and not do the polling more often than every tenth of a second or so.

playing with a really advanced 3.5mhz chip (iso7816) ... that may get up to 15mhz before production (especially in combo iso7816 & iso14443 version).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

time again

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: time again
Newsgroups: alt.folklore.computers
Date: Fri, 21 Jun 2002 10:58:43 GMT
Anne & Lynn Wheeler writes:
playing with a really advanced 3.5mhz chip (iso7816) ... that may get up to 15mhz before production (especially in combo iso7816 & iso14443 version).

and in iso14443 mode there are no physical contacts, no battery, ... it just magically runs on power drawn from the air.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Q: Is there any interest for vintage Byte Magazines from 1983

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Q: Is there any interest for vintage Byte Magazines from 1983
to  1993
Newsgroups: alt.collecting.magazines,alt.marketplace.collectables,comp.sys.ibm.pc.classic,alt.folklore.computers
Date: Sat, 22 Jun 2002 18:46:01 GMT
Neil Franklin writes:
That is already happening.

- SCSI is going serial (sort of merging with Fibre Channel)
- IDE is going serial (SATA)
- Some high end servers already use serial interprocessor comms, like the SGI Origin machines
- Intels newer northbridge/southbridge chips communicate serially
- AMDs Hammer/Opteron Processor has no northbrigge, but drives memory direct and speaks to the rest of the world (other processors or southbridge) via 8 serial links.

- and of course over 10 years ago Transputer links :-)
[me looks over to drawer with Tek-4/8 clone of IMS-B004 board in it]


SCSI had previously gone serial over 10 years ago as SSA ... and the proposal on the table was to have SSA merge with FCS at that time ... supporting lower/fractional-speed copper serial in a compatible manner.

The other contender at the time (again over 10 years ago) was SCI ... which in addition to serial memory access ... also had a mapping for SCSI (SCI was used for memory in at least the convex exemplar, the DG machine, and the sequent machine).

there were also mappings for serial HiPPI.

random SSA & SCI refs:
https://www.garlic.com/~lynn/94.html#16 Dual-ported disks?
https://www.garlic.com/~lynn/94.html#17 Dual-ported disks?
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/96.html#8 Why Do Mainframes Exist ???
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/96.html#25 SGI O2 and Origin system announcements
https://www.garlic.com/~lynn/96.html#26 System/360 Model 30
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000f.html#31 OT?
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#46 Small IBM shops
https://www.garlic.com/~lynn/2001.html#63 Are the L1 and L2 caches flushed on a page fault ?
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#42 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001f.html#66 commodity storage servers
https://www.garlic.com/~lynn/2001j.html#12 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001k.html#22 ESCON Channel Limits
https://www.garlic.com/~lynn/2001m.html#25 ESCON Data Transfer Rate
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002g.html#10 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#33 ESCON Distance Limitations - Why ?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Sat, 22 Jun 2002 18:55:36 GMT
Brian Inglis writes:
There was a bunch of US academic and research systems running TCP/IP and some other protocols, talking to systems abroad, most of which did not run TCP/IP, through gateways. The changes he got passed effectively created the Internet as we know it today: an open, global, homogeneous, communication internetwork.

to some extent the players that got into the NSFNET1 & NSFNET2 backbones did so because of the significant unused dark fiber and potential bandwidth, as a way of promoting bandwidth-hungry applications. in part they supplied resources significantly in excess of what was covered by the terms of the NSF contracts ... this was possibly the most significant aspect of the evolution into the internet ... the industry/commercial desire to create demand for the dormant dark fiber and their (effective) subsidy of NSFNET1/NSFNET2 (and other tcp/ip operations).

as noted before ... one critical aspect that resulted in tcp/ip being successful was its support for heterogeneous internetworking. prior to the 1/1/83 switch-over ... the arpanet/internet technology lacked any "internet protocol" with support for heterogeneous internetworking ... which was part of the reason for its lack of more widespread deployment.

random posts on this from the past:
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000d.html#56 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#58 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#59 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#67 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000d.html#77 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#5 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#10 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#11 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#18 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#26 Al Gore, The Father of the Internet (hah!)
https://www.garlic.com/~lynn/2000e.html#28 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#38 I'll Be! Al Gore DID Invent the Internet After All ! NOT
https://www.garlic.com/~lynn/2000e.html#39 I'll Be! Al Gore DID Invent the Internet After All ! NOT
https://www.garlic.com/~lynn/2000f.html#44 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#45 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#46 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#47 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#49 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#50 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#51 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#17 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001h.html#74 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#49 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#10 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001m.html#48 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#51 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#52 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#5 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#12 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2002.html#16 index searching
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#4 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002b.html#37 Poor Man's clustering idea
https://www.garlic.com/~lynn/2002b.html#40 Poor Man's clustering idea
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002d.html#19 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#6 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2002e.html#61 Computers in Science Fiction
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#21 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#73 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002g.html#74 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#58 history of CMS

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Sat, 22 Jun 2002 19:16:30 GMT
Lars Poulsen writes:
Before the High-Performance Computing act, the Internet was a mostly government-funded network for universities and defense contractors. Using it for commercial business was downright illegal. As late as 1993 I remember having trouble routing e-mail from Denmark to Australia because I refused to sign the NSFNET AUP pledge. Apparently, there was no non-NSF link across the Pacific, and no direct link from Europe to Australia.

The Internet AS WE KNOW IT was CREATED by the change in funding rules. If you think of legislation as a programming language for social systems, it was an elegant hack: Instead of using NSF money to operate a backbone network, give the same money to the universities and tell them to buy the service on the commercial market.


note that while the backbone from the mid to late '80s was NSFNET1/NSFNET2 .... the amount of resources subsidized by commercial/industry sources was significantly in excess of the funding by NSF ... as was the subsidized operation of numerous regional academic networks. The distinction was that on the books specific portions were implemented and deployed as part of a gov. funded RFP ... even if what was implemented and deployed involved resources far in excess of what was actually covered in the gov. funding. The significant industry/commercial (effective) subsidy of the NSFNET backbone and other academic regional networks was in large part responsible for the transition of what had effectively been low-speed, shoe-string, academic/gov. operations to providing levels of service that started to become attractive to commercial entities.

In that sense the industry/commercial entities' subsidy of NSFNET and the regional networks was done on speculation that something would possibly evolve that would be attractive to general consumers. I would claim that those excess resources donated by industry/commercial interests were in large part responsible for creating an environment that resulted in the development of applications that would become of consumer and commercial interest.

By 1993, the industry/commercial interests had learned significant amounts from the foray into the NSFNET backbone and regional networks and were able to translate that experience directly into commercially delivered applications. Furthermore, since the industry/commercial interests had declared their experiment a success ... it was not likely they would continue their significant subsidy of the backbone and regionals, which would have resulted in the backbone and regionals returning to more of the shoe-string type operation of the early '80s. I don't think that any of the academic community would have been happy to return to that level of service ... and I'm sure that there weren't going to be gov. funds to even come close to filling the gap between what was directly allocated and what was being subsidized by industry.

I would claim that it was actually apparent by at least the fall interop '88 show ... where there was a significant presence of commercial and industry interests that were moving well beyond a strict focus on government and academic markets.

The other view of the '93 bill was the economic reality of the enormous gap between what NSF had been paying for and what was actually being delivered by industry/commercial for the backbone and academic regionals. Industry/commercial was actually starting to apply COTS to network service ... and the amount that NSF was funding would buy more as a COTS service than what the backbone could be (once industry/commercial subsidies were eliminated).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Sat, 22 Jun 2002 19:20:20 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Amazing what could be learned during a trip to Germany, isn't it? Now, if only North Americans could learn about rail travel - again.

i learned a little about networking trying to read email on a business trip to paris around 1973.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Sat, 22 Jun 2002 19:36:42 GMT
Anne & Lynn Wheeler writes:
The other view of the '93 bill was the economic reality of the enormous gap between what NSF had been paying for and what was actually being delivered by industry/commercial for the backbone and academic regionals. Industry/commercial was actually starting to apply COTS to network service ... and the amount that NSF was funding would buy more as a COTS service than what the backbone could be (once industry/commercial subsidies were eliminated).

the osi vis-a-vis tcp/ip thread:
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#21 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#22 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#24 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#26 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#28 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#29 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#31 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#34 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#35 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#45 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#46 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#49 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#50 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#74 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#14 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#16 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#22 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#47 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#51 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#52 Bettman Archive in Trouble

misc. past nsfnet, csnet, interop '88, threads
https://www.garlic.com/~lynn/94.html#34 Failover and MAC addresses (was: Re: Dual-p
https://www.garlic.com/~lynn/94.html#36 Failover and MAC addresses (was: Re: Dual-p
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/98.html#59 Ok Computer
https://www.garlic.com/~lynn/99.html#7 IBM S/360
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#37a Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#37b Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#38c Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#40 [netz] History and vision for the future of Internet - Public Question
https://www.garlic.com/~lynn/99.html#138 Dispute about Internet's origins
https://www.garlic.com/~lynn/99.html#146 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000.html#49 IBM RT PC (was Re: What does AT stand for ?)
https://www.garlic.com/~lynn/2000c.html#26 The first "internet" companies?
https://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#78 Free RT monitors/keyboards
https://www.garlic.com/~lynn/2000d.html#16 The author Ronda Hauben fights for our freedom.
https://www.garlic.com/~lynn/2000d.html#19 Comrade Ronda vs. the Capitalist Netmongers
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000d.html#56 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#58 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#59 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#71 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#72 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#73 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#74 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#77 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#5 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#10 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#11 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#18 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#28 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#29 Vint Cerf and Robert Kahn and their political opinions
https://www.garlic.com/~lynn/2000e.html#31 Cerf et.al. didn't agree with Gore's claim of initiative.
https://www.garlic.com/~lynn/2000f.html#44 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#47 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#50 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#51 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2001h.html#74 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#5 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#6 YKYGOW...
https://www.garlic.com/~lynn/2001m.html#48 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#5 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#14 Security glossary available
https://www.garlic.com/~lynn/2002.html#33 Buffer overflow
https://www.garlic.com/~lynn/2002e.html#6 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#45 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Signing with smart card

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Signing with smart card
Newsgroups: alt.technology.smartcards
Date: Sat, 22 Jun 2002 19:52:34 GMT
Maarten Bodewes writes:
See the reply from John Veldhuis (hi john). Apart from the things he mentions, make sure that the certificate use permits signing email, and make sure that your email address is in the certificate.

as an aside ... a private key (possibly in a hardware token) is used to sign. an example of such a digital signature might be fips186-2, ecdsa federal standard signature (also x9.62 financial standard signature).

for instance look at the internet xml signature RFC
3275 DS
(Extensible Markup Language) XML-Signature Syntax and Processing, Eastlake D., Reagle J., Solo D., 2002/03/14 (73pp) (.txt=164198) (Obsoletes 3075) (was draft-ietf-xmldsig-core-2-03.txt)

one way of selecting RFCs ....
https://www.garlic.com/~lynn/rfcietff.htm

and select Term (term->RFC#)

then select "XML" from the Acronym fastpath:
extended markup language (XML)
see also standard generalized markup language
3275 3236 3120 3076 3075 3023 3017 2807 2376

select 3275 in the above, which will bring the summary into the lower frame. then it is possible to select the ".txt=164198" field to retrieve the actual RFC.

==========================

In any case, a certificate doesn't actually sign anything. The private key signs something. Typically a signed (with a private key) registration form (that includes a copy of the public key) is sent to some certification authority. They check that the supplied public key can actually verify the supplied signature on the registration form. The certification authority then will do some other stuff and generate a "certificate" containing some information (including the supplied public key) which they sign and send back to you.

When certificate-based processing is involved ... you take something ... sign it ... and then package up the 1) "something", 2) the digital signature, and 3) the certificate into a transmission and send it off to somebody.

When the recipient gets your transmission ... they will typically go look for the public key of the certification authority that signed your certificate ... in order to verify that the certificate is valid; then they will take the public key from the certificate and verify that your signature is valid. As a result, the certificate isn't actually involved in the signing ... but in the signature validation.
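the three steps above can be sketched in a few lines of python. this is only an illustrative sketch: the asymmetric primitive here is textbook RSA with tiny toy numbers (so it fits in a few lines and runs stand-alone), where real deployments would use something like the FIPS 186-2 / X9.62 ECDSA mentioned earlier; all the names and the certificate encoding are made up for the example.

```python
import hashlib

def h(data: bytes, n: int) -> int:
    # hash the data down to an integer the toy modulus can handle
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(priv, data):              # priv = (d, n): only the signer holds d
    d, n = priv
    return pow(h(data, n), d, n)

def verify(pub, data, sig):        # pub = (e, n): anyone can check
    e, n = pub
    return pow(sig, e, n) == h(data, n)

# toy textbook-RSA keypairs -- never use numbers this small for real
ca_priv, ca_pub = (2753, 3233), (17, 3233)
my_priv, my_pub = (2011, 3127), (3, 3127)

# 1. the CA signs a "certificate" binding an identity to the supplied
#    public key (after checking the registration form's signature)
cert = {"subject": "lynn", "pubkey": my_pub}
cert_bytes = repr(sorted(cert.items())).encode()
cert_sig = sign(ca_priv, cert_bytes)

# 2. sender signs the "something" and packages up
#    (something, digital signature, certificate)
message = b"hello"
msg_sig = sign(my_priv, message)
transmission = (message, msg_sig, (cert, cert_sig))

# 3. recipient: validate the certificate with the CA's public key, then
#    validate the message signature with the key taken *from* the cert
msg, sig, (rcvd_cert, rcvd_cert_sig) = transmission
assert verify(ca_pub, repr(sorted(rcvd_cert.items())).encode(), rcvd_cert_sig)
sender_pub = rcvd_cert["pubkey"]
assert verify(sender_pub, msg, sig)
```

note that the certificate never signs anything here: it only carries the public key used in the last validation step.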

for some other signature validation software try
http://www.sourceforge.net/

and check the (fips186-2, x9.62) ecdsa signature verification package.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

history of CMS

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of CMS
Newsgroups: alt.folklore.computers
Date: Sun, 23 Jun 2002 12:47:44 GMT
Anne & Lynn Wheeler writes:
i will be getting some silicon and it will be at an EAL5-high rating (ref AADS chip strawman on garlic web pages).

thursday i gave a talk on the silicon at haw (aka adjunct bldg to ykt at tjw research). after the talk somebody mentioned that there is an upcoming reunion/celebration for people that worked on stretch. it is also mentioned on the web page:
http://www.brouhaha.com/~eric/retrocomputing/ibm/stretch/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Sun, 23 Jun 2002 20:30:35 GMT
Brian Inglis writes:
Without his funding changes, commercial companies would have created an Internet, separate from the US government funded network, probably based on ISO protocols funded by other governments. So maybe we should agree that Al Gore created the Internet "as we know it", companies actually created the Internet (they would have anyway), and some bodies did the grunt work to run on and hook up the nodes and got paid for it, but they did not create the Internet.

nope, by interop '88 commercial companies were well into tcp/ip hardware and software ... as well as subsidizing the nsfnet backbone and the regionals .... in excess of what the gov. was funding them. that commercial "subsidy" was what really created the internet ... in terms of significantly available bandwidth .... at least compared to the early to mid '80s "shoestring" operation that effectively relied almost totally on gov. funding. by the early '90s commercial interests had broadened into straight commercial tcp/ip offerings and were probably going to cease the "subsidy" of the gov. backed offerings.

there was some discussion in the "why did osi fail ... and tcp/ip succeed" thread .... about the shoestring level of operations prior to the NSFNET1 "backbone" ... which saw significant commercial/industry subsidy (significant additional resources, more than equal to what was directly funded by the gov.).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Mon, 24 Jun 2002 13:09:26 GMT
rogblake10@iname10.com (Roger Blake) writes:
(Of course the real problem with Algore is not whatever role he may have had regarding the destruction of the pre-commercial Internet, but that he is a dangerous far-left environazi wacko. Although I am no fan of Bush, just keeping us from signing onto Kyoto is good enough reason to keep Gore and his ilk out of power as far as I am concerned.)

there were two phases of the "pre-commercial" internet ... the earlier shoe-string part ... strictly funded by the gov. ... and the later NSFNET backbone & regionals that were heavily subsidized by commercial interests. most people refer to the period when there were significant available resources ... restricted to non-commercial use ... but in actuality heavily subsidized by commercial industry. it may have seemed idyllic but it was not a stable situation, since the commercial/industry interests were in it to eventually see a return on their investment.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Atomic operations redux

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Atomic operations redux
Newsgroups: comp.arch
Date: Mon, 24 Jun 2002 13:31:47 GMT
Emil Naepflein writes:
I also remember that in the old IBM /370 PoO there was code regarding manipulating a free list with CAS. But they used an additional counter in the header manipulated concurrently with one CAS to make it work.

at the same time CAS was doing CAS at 545 tech. sq .... there was also work going on at 545 tech sq on the kernel storage manager. There was work with first-fit, best-fit, etc ... and using the "SLT" RPQ instruction (search list hardware instruction) for the implementation (I believe the SLT instruction definition originally came from lincoln labs).

the problem was that the kernel storage manager was starting to exceed 20 percent of total kernel processor time (especially as other pathlengths in the kernel were being significantly reduced). The eventual solution was the invention of the subpool LIFO storage implementation. Common, short-lived, small storage sizes ... instead of being managed by address-ordered linked lists ... were moved to size-specific LIFO management. There were then various sorts of administrative functions to periodically clean up / re-organize subpools back into the traditional address-ordered lists.

I believe there is a systems journal article sometime in the 71-73 timeframe on the kernel subpool storage manager. The subpool changes had the advantage of reducing the time the kernel spent in the storage manager from 20-plus percent of total kernel time to less than 2-3 percent. The pathlength for determining whether it was a subpool request and then returning a subpool value was something like 14 instructions.

In any case, one of the design points for CAS's work on CAS was the kernel lifo subpool storage management.

Later, in the early '80s, the lifo subpool storage management was upgraded to be cache-line sensitive ... aka there was a big gain from minimizing multiple storage units in the same cache line being allocated to different processors.
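the subpool scheme described above can be sketched as size-bucketed LIFO free lists in front of a traditional general allocator. this is a toy single-threaded illustration of the idea only, not the actual CP/67-VM/370 code; the class and method names are made up, and the general-pool path is stubbed out.

```python
class SubpoolAllocator:
    """Common small sizes get per-size LIFO free lists: a hit is just a
    push or pop (a handful of instructions) instead of walking an
    address-ordered free chain with first-fit/best-fit."""

    def __init__(self, subpool_sizes):
        # one LIFO stack of free blocks per configured subpool size
        self.subpools = {size: [] for size in subpool_sizes}

    def alloc(self, size):
        stack = self.subpools.get(size)
        if stack:                              # fast path: LIFO pop
            return stack.pop()
        return self._general_alloc(size)       # slow path: general pool

    def free(self, block, size):
        if size in self.subpools:              # fast path: LIFO push
            self.subpools[size].append(block)
        else:
            self._general_free(block, size)

    def drain(self):
        """Periodic administrative cleanup: return subpool blocks to
        the traditional address-ordered general pool."""
        for size, stack in self.subpools.items():
            while stack:
                self._general_free(stack.pop(), size)

    # stand-ins for the traditional address-ordered storage manager
    def _general_alloc(self, size):
        return bytearray(size)

    def _general_free(self, block, size):
        pass
```

the LIFO order also helps locality: the most recently freed block (still likely warm in the cache) is the first one handed back out, which is in the same spirit as the later cache-line-sensitive upgrade.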

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM 2540 Card Read Punch - documentation wanted

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 2540 Card Read Punch - documentation wanted
Newsgroups: alt.folklore.computers
Date: Thu, 27 Jun 2002 14:51:23 GMT
Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
How reliable was the 2540? Did it require much maintenance?

i don't know about CE hardware-type maint. I do know when I was a student that they let me have the machine room from 8am sat. to 8am mon. and i would clean the tape drives and pull the 2540 apart and clean all the brushes when i first got there, and then once or twice more before i left (depending on how much i was using the tapes and reader/punch).

at first i would just come in and start doing the stuff i wanted to do ... but relatively quickly learned that a little care & feeding made things go much smoother.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

How secure is SSH ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How secure is SSH ?
Newsgroups: alt.certification.mcse,alt.computer.security,comp.dcom.vpn,comp.security.firewalls
Date: Thu, 27 Jun 2002 15:02:20 GMT
a new CERT advisory went out yesterday with regard to a vulnerability in the openssh challenge/response code.
Overview

There are two related vulnerabilities in the challenge response handling code in OpenSSH versions 2.3.1p1 through 3.3. They may allow a remote intruder to execute arbitrary code as the user running sshd (often root). The first vulnerability affects OpenSSH versions 2.9.9 through 3.3 that have the challenge response option enabled and that use SKEY or BSD_AUTH authentication. The second vulnerability affects PAM modules using interactive keyboard authentication in OpenSSH versions 2.3.1p1 through 3.3, regardless of the challenge response option setting. Additionally, a number of other possible security problems have been corrected in OpenSSH version 3.4.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

next, previous, index - home