List of Archived Posts

2000 Newsgroup Postings (09/12 - 10/13)

What good and old text formatter are there ?
What good and old text formatter are there ?
Ridiculous
Ridiculous
Ridiculous
Is Al Gore The Father of the Internet?^
Ridiculous
Ridiculous
Is a VAX a mainframe?
Checkpointing (was spice on clusters)
Is Al Gore The Father of the Internet?^
Is Al Gore The Father of the Internet?^
Restricted Y-series PL/1 manual? (was Re: Integer overflow exception)
internet preceeds Gore in office.
internet preceeds Gore in office.
internet preceeds Gore in office.
First OS with 'User' concept?
X.25 lost out to the Internet - Why?
Is Al Gore The Father of the Internet?^
Is Al Gore The Father of the Internet?^
Is Al Gore The Father of the Internet?^
Competitors to SABRE? Big Iron
Is a VAX a mainframe?
Is Tim Berners-Lee the inventor of the web?
older nic cards
Test and Set: Which architectures have indivisible instructions?
Al Gore, The Father of the Internet (hah!)
OCF, PC/SC and GOP
Is Al Gore The Father of the Internet?^
Vint Cerf and Robert Kahn and their political opinions
Is Tim Berners-Lee the inventor of the web?
Cerf et.al. didn't agree with Gore's claim of initiative.
Tektronics Storage Tube Terminals
War, Chaos, & Business
War, Chaos, & Business (web site), or Col John Boyd
War, Chaos, & Business (web site), or Col John Boyd
War, Chaos, & Business (web site), or Col John Boyd
FW: NEW IBM MAINFRAMES / OS / ETC.(HOT OFF THE PRESS)
I'll Be! Al Gore DID Invent the Internet After All ! NOT
I'll Be! Al Gore DID Invent the Internet After All ! NOT
Why trust root CAs ?
Why trust root CAs ?
IBM's Workplace OS (Was: .. Pink)
Why trust root CAs ?
Why trust root CAs ?
IBM's Workplace OS (Was: .. Pink)
Where are they now : Taligent and Pink
Why trust root CAs ?
Where are they now : Taligent and Pink
How did Oracle get started?
Why trust root CAs ?
Why trust root CAs ?
Why not an IBM zSeries workstation?
Why not an IBM zSeries workstation?
VLIW at IBM Research
Why not an IBM zSeries workstation?
Why not an IBM zSeries workstation?
Why not an IBM zSeries workstation?
Why not an IBM zSeries workstation?

What good and old text formatter are there ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What good and old text formatter are there ?
Newsgroups: alt.folklore.computers
Date: 12 Sep 2000 17:30:03 -0600
jones@cs.uiowa.edu (Douglas W. Jones,201H MLH,3193350740,3193382879) writes:
Groff is, of course, gnu's implementation of the UNIX standard troff, which is the CAT phototypesetter extension of nroff, which is the UNIX port of roff (from the GECOS system) which was a reimplementation of RUNOFF from CTSS. J. E. Saltzer wrote RUNOFF prior to 1965. M. D. McIlroy is responsible for both ROFF and NROFF. Does the source for any of these older systems still exist.

& Stu Madnick (MIT & CSC, 545 tech sq) did "script" for CMS ... which might be considered port of RUNOFF to CP/67/CMS ... this was about 1967 or so (some of the CTSS people showed up at Multics in 545 tech sq & other CTSS people showed up at CSC also 545 tech sq).

Then "G", "M", & "L" (also all at CSC, 545 tech sq) ... added GML to "script" a couple years later. This was later standardized as SGML. We have since seen it show up in HTML, XML, etc.

Claim has been made that the original CSC script was also ported to Tandy and misc. other PCs in the early '80s.

misc. refs (i think still good):
https://web.archive.org/web/20231001185033/http://www.sgmlsource.com/history/roots.htm
https://web.archive.org/web/20001201190700/http://www.sgmlsource.com/history/roots.htm

random refs:
https://www.garlic.com/~lynn/97.html#9
https://www.garlic.com/~lynn/99.html#42
https://www.garlic.com/~lynn/99.html#43
https://www.garlic.com/~lynn/99.html#67
https://www.garlic.com/~lynn/99.html#197

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

What good and old text formatter are there ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What good and old text formatter are there ?
Newsgroups: alt.folklore.computers
Date: 12 Sep 2000 19:41:01 -0600
SGML Users' Group History moved, updated URL
http://www.oasis-open.org/cover/sgmlhist0.html

that is in addition to Goldfarb's history at:
https://web.archive.org/web/20231001185033/http://www.sgmlsource.com/history/roots.htm
https://web.archive.org/web/20001201190700/http://www.sgmlsource.com/history/roots.htm

with respect to the 6670 output device (which recently showed up in a different thread in this same newsgroup), 3800 support had been put into "script" ... and was supported both with GML "tags" and "runoff?" tags for formatting. The 3800 support was then modified to support the 6670.

minor refs:
https://www.garlic.com/~lynn/2000d.html#81

correction in the above 6670 ref, OPD (? office products by whatever name) had added the computer interface; SJR had extended it for postscript & other types of support.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Ridiculous

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ridiculous
Newsgroups: comp.arch
Date: 16 Sep 2000 08:34:30 -0600
mash@mash.engr.sgi.com (John R. Mashey) writes:
I don't know any particular name for this, but it's hardly new: I used to run jobs on an IBM 360/67 that had regular main memory, plus a big chunk of (LC? Large Core Storage, I think) that was slower. I think I recall that some PDP-11s could have several different speeds of memories in the same machine.

Ampex sold 8msec memory add-ons for 370. Standard 360/65 memory was .750msec memory and you could get 1mbyte (2mbyte in a duplex, smp configuration). You could get an extra 8mbytes of ampex 8msec memory.

There were also 360/50 configurations (standard 2msec memory) that had Ampex add-on memory ... and possibly some 360/75 configurations also.

Some shops had software that would not only execute programs directly in the slower memory ... but would also do things like copy programs down to higher-speed memory before execution.

360/67s were 360/65s with virtual memory hardware added (8-entry fully associative table look-aside buffer and other stuff). For SMPs there were other differences between the 65 and the 67. The 65 duplex was basically two 65s that had their memory addressing combined and a couple other things. The 67 duplex had a "channel controller" which supported hardware configuration of the channels (i/o buses) and memory boxes ... along with a tri-ported memory bus (independent memory access for the two processors and i/o activity). The different memory organization added slightly to the memory access latency for cpu-intensive workloads. However, a combined cpu-intensive and i/o-intensive workload had higher thruput on a "half-duplex" 360/67 (duplex hardware configured to run as independent processors) than on a "simplex" 360/67.

There was also a custom triplex 360/67 (I think done for Lockheed on a government project) that had a special "channel controller" that was software configurable. In cases of various kinds of faults ... the kernel could re-configure the channel controller to fence off the faulty component.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Ridiculous

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ridiculous
Newsgroups: comp.arch
Date: 16 Sep 2000 08:40:51 -0600
Anne & Lynn Wheeler writes:
finger slip; 370->360
Ampex sold 8msec memory add-ons for 360. Standard 360/65 memory was


There was also a custom triplex 360/67 (I think done for Lockheed on a

the triplex 360/67 was completely different from the triplex 360/50 & 360/65 (90xx?) machines done for the FAA system.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Ridiculous

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ridiculous
Newsgroups: comp.arch
Date: 16 Sep 2000 14:38:32 -0600
mash@mash.engr.sgi.com (John R. Mashey) writes:
Do you remember: were the MP 360/65's truly hardware-symmetric (like the 360/67s)? I never used a /65, and I couldn't find 360/65 MP PMS Diagrams in Bell&Newell.

the standard 360/65MPs just had common memory addressing (with a little bit extra for signaling) ... but everything else was independent (i.e. standard single processor 65), like I/O.

In order to have common I/O capability, devices had to be "twin-tailed", i.e. each device (or the controller for the device) was connected with two different i/o channels (one for each processor). For devices that weren't "twin-tailed", I/O requests had to be queued for the specific processor that owned the I/O channel the device was connected to.
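The routing rule for non-"twin-tailed" devices can be sketched as follows (a minimal sketch with invented names and values, not OS/360 code): each device's channel is owned by one processor, and a request issued on the other processor has to be queued for the owner.

```c
/* Toy sketch of per-processor I/O queuing for devices that are not
   "twin-tailed": a device reachable over only one CPU's channel must
   have its I/O started by that CPU.  All names/values are invented. */
#include <assert.h>

#define NCPUS 2
#define NDEVS 4

/* which CPU owns the channel each device hangs off
   (-1 = twin-tailed: reachable from either CPU) */
static int dev_owner[NDEVS] = { 0, 1, -1, 1 };

static int pending[NCPUS];    /* queued I/O requests per CPU */

/* Return the CPU that will start the I/O; queue it if it isn't 'self'. */
int route_io(int self, int dev)
{
    int owner = dev_owner[dev];
    if (owner < 0 || owner == self)
        return self;          /* our channel (or twin-tailed): start it */
    pending[owner]++;         /* queue for the processor owning the channel */
    return owner;
}
```

A twin-tailed device takes either path; everything else funnels through the owning processor's queue.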

I believe the os/360 mp software relied primarily on a spin-lock on the kernel (supervisor state) code.

The 360/67 duplex channel controller gave each cpu access to all I/O channels in the configuration.

There was a lot of early fine-grain locking work done using the 360/67 duplex configuration at CSC ... culminating in the compare&swap work that eventually showed up in 370s.
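The compare&swap idea can be illustrated with modern C11 atomics (a sketch only; the original was a 370 instruction, not C): read the old value, compute the new one, and let compare-and-swap install it only if no other processor changed the location in the meantime, retrying otherwise.

```c
/* Sketch of the compare-and-swap retry loop (C11 atomics standing in
   for the 370 CS instruction): update a shared word without a lock. */
#include <stdatomic.h>

static _Atomic long counter = 0;

void atomic_add(long delta)
{
    long old = atomic_load(&counter);
    /* retry until the swap succeeds: on failure 'old' is reloaded
       with the value some other processor stored in the meantime */
    while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
        ;
}
```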

random refs:
https://www.garlic.com/~lynn/93.html#22
https://www.garlic.com/~lynn/94.html#02
https://www.garlic.com/~lynn/98.html#16
https://www.garlic.com/~lynn/99.html#88
https://www.garlic.com/~lynn/99.html#89
https://www.garlic.com/~lynn/99.html#102
https://www.garlic.com/~lynn/99.html#103
https://www.garlic.com/~lynn/99.html#139

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Is Al Gore The Father of the Internet?^

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 17 Sep 2000 12:45:13 -0600
I don't know any specific reference there. There is the stuff from 1994
https://www.garlic.com/~lynn/2000d.html#75

At the time of Interop '88 ... there was a lot of migration of tcp/ip into commercial sector and many of the networks had "acceptable use policies" that allowed various commercial activities.

In the era of NSFNET1 about that time (& Interop '88) .... there was the NSFNET1 contract funding the NSFNET1 backbone with a dozen or so sites ... but using lots of commercial products (i.e. the service was NSF funded, and used the contract to buy commercial products to implement the service). There was also a lot of commercially "donated" stuff for NSFNET1 (rumours that the value exceeded the funding from NSF). A complicating factor is that the commercial companies probably took a non-profit "donation" tax write-off for the stuff donated to NSFNET1. Possibly as much as anything else, NSFNET needed to have a "non-commercial" acceptable use policy in order to maintain non-profit tax-write-off status(?).

Many of the tcp/ip implementations & products in the Interop '88 & NSFNET1 era were based on BSD tcp/ip software. A lot of the consideration at the time was about the BSD code being free from the AT&T licensing (not government licensing). I remember the BSD "free" code ... which may or may not have had NSF &/or gov. funding (possibly also the Cornell domain name system stuff?, UofTenn SNMP stuff?, etc) .... being more along the lines of the GNU licensing issues, i.e. the base code is "free" univ. & gov. licensed stuff (developed with gov. &/or nonprofit funding support) .... you have to charge for some other added value ... like packaging & support.

At the time of Interop '88, commercial TCP/IP products and services were well established. One of the "big" items that I remember from Interop '88 was the guy that was responsible for a lot of the SNMP stuff done at a University was moving into commercializing/productizing the SNMP (not too long before this ... it wasn't clear whether the heavy weight network monitoring stuff would win out or whether SNMP would win out).

The commercializing issues regarding HPCC, NREN, etc were more like 6-7 years later.

random refs:
https://www.garlic.com/~lynn/2000d.html#77
https://www.garlic.com/~lynn/2000d.html#71
https://www.garlic.com/~lynn/2000d.html#72
https://www.garlic.com/~lynn/2000d.html#73
https://www.garlic.com/~lynn/2000d.html#74
https://www.garlic.com/~lynn/2000d.html#78
https://www.garlic.com/~lynn/2000d.html#79
https://www.garlic.com/~lynn/2000d.html#80
https://www.garlic.com/~lynn/internet.htm

copyright notice from some bsd tcp/ip software


/*
 * Copyright (c) 1983, 1986 Regents of the University of California.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms are permitted
 * provided that the above copyright notice and this paragraph are
 * duplicated in all such forms and that any documentation,
 * advertising materials, and other materials related to such
 * distribution and use acknowledge that the software was developed
 * by the University of California, Berkeley.  The name of the
 * University may not be used to endorse or promote products derived
 * from this software without specific prior written permission.
 * THIS SOFTWARE IS PROVIDED ''AS IS'' AND WITHOUT ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
 * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
 */


Interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Ridiculous

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ridiculous
Newsgroups: comp.arch,alt.folklore.computers
Date: 20 Sep 2000 23:15:44 -0600
Jan Vorbrueggen writes:
I witnessed two: turning VMS from the VAX-11/782 ASMP into a symmetric multiprocessor OS, and converting Solaris from single processor to single-lock SMP to fine-grain SMP. The first went pretty smoothly and quickly, the second was long-drawn and problematic (at least when watching as an interested bystander). I have always wondered why they were so different.

VAMPS was a five processor smp based on 370/125 ... that I worked on in the 1975 time-frame.

the 370/115 & 370/125 shared a common 9-position memory bus and common microcode engines for dedicated functions. In both systems, dedicated i/o functions (like the disk controller, communication controller, etc) were implemented on dedicated microcode engines and took up one of the nine positions. The 115 implemented the 370 instruction set on the same kind of microcode engine as was used for all the other dedicated tasks ... which provided about 80kips 370 (i.e. at about 10:1, the base engine was about 800kips). The 125 used all the same components as the 115 but used a faster microcode engine for the 370 instruction set, yielding about 120 kips 370 (at the 10:1 ratio, native engine about 1.2mips).

VAMPS was a 370/125 configuration that would use up to five of the nine positions for 125 370 engines (the basic 115 & 125 were 370 uniprocessors, even tho the underlying architecture was multiprocessor ... just with different engines for dedicated functions).

I worked on the VAMPS project concurrently with the ECPS effort ... various references:
https://www.garlic.com/~lynn/94.html#21
https://www.garlic.com/~lynn/94.html#27
https://www.garlic.com/~lynn/94.html#28
https://www.garlic.com/~lynn/2000.html#12
https://www.garlic.com/~lynn/2000c.html#50
https://www.garlic.com/~lynn/2000c.html#76

The SMP work used a single kernel lock ... but a number of kernel functions were migrated into microcode and in some cases offloaded onto dedicated engines (i.e. a lot of paging was offloaded to the disk controller engine). The optimization resulted in a situation where the majority of the workloads had a 90% non-kernel/10% kernel execution ratio. Dispatching of tasks and task queue management was dropped into the engine microcode and ran with "fine-grain" locking on all "370" processor microcode.

When a task required kernel services, an attempt was made to obtain the kernel lock; if the kernel lock was already held (by another processor), a super lightweight request was queued and the processor would look for other work to do.

From the 370 instruction set standpoint, the migration of dispatching into the processor "hardware" (actually microcode) resembled some of the i432 work done a number of years later. Misc. refs:
https://www.garlic.com/~lynn/2000c.html#68
https://www.garlic.com/~lynn/2000d.html#10
https://www.garlic.com/~lynn/2000d.html#11

Because of various optimizations and the offloading of specific functions to dedicated processors, in a fully configured five-processor system (five 370 engines out of a total of nine microcode engines), the aggregate time spent executing kernel instructions (bracketed by the kernel lock) was normally no more than 50% of a single processor. This mitigated the restriction that the single kernel lock limited aggregate kernel-activity thruput to no more than 100% of a single processor.

For various reasons the product never shipped to customers. However, the work was adapted to the standard 370 kernel to support 370/158 & 370/168 SMP multiprocessors. The standard kernel was slightly re-organized so that the parts of the kernel that had been dropped into the microcode (in the VAMPS design) were modified with fine-grain locking support. The remaining portion of the kernel was bracketed with a single (sub-)kernel lock. Code operating under fine-grain locking, when it encountered a situation that required transition to the portion of the kernel with the single lock, would attempt to obtain the lock. If the processor was unable to obtain the lock, it queued a super lightweight kernel request and attempted to find other work. The processor that held the kernel lock, when it finished its current task, would check (& dequeue/execute) pending kernel requests prior to releasing the kernel lock.

This was a significant enhancement over the earlier os/360 SMP work that used a single kernel "spin-lock" (i.e. a task running on a processor needing kernel services would enter a tight "spin-loop" for however long was necessary for the kernel to become available). The careful choice of kernel functions for fine-grain locking resulted in less than 10% of the kernel being modified for fine-grain locking, but that 10% represented 90% of the time spent executing in the kernel. Furthermore, rather than adopting the single kernel "spin-lock" convention that had been common up until then, the implementation would queue light-weight requests for kernel services (rather than spinning, waiting for those kernel services to become available).
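The "queue instead of spin" discipline described above can be sketched like this (invented names; a single-threaded illustration of the flow only, not the actual kernel code): a caller that finds the kernel lock held enqueues a lightweight request and goes off to find other work, and the lock holder drains the queue before releasing the lock.

```c
/* Sketch of the queued kernel-lock discipline: no spinning.  A real
   implementation would use atomic operations; this just shows the flow. */
#include <stdbool.h>

#define QMAX 16

static bool kernel_locked = false;
static void (*queue[QMAX])(void);      /* pending lightweight requests */
static int qhead, qtail;

static int kernel_work_done;           /* demo: count of requests run */
static void demo_req(void) { kernel_work_done++; }

/* Run the kernel request now if the lock is free, else queue it.
   Returns true if the request ran synchronously. */
bool kernel_call(void (*req)(void))
{
    if (!kernel_locked) {
        kernel_locked = true;
        req();                          /* run under the kernel lock */
        while (qhead != qtail)          /* drain requests queued meanwhile */
            queue[qhead++ % QMAX]();
        kernel_locked = false;
        return true;
    }
    queue[qtail++ % QMAX] = req;        /* lock holder will run it */
    return false;                       /* caller finds other work */
}
```

The contrast with a spin-lock is the `return false` path: instead of burning the processor in a tight loop, the request is handed to the current lock holder.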

The support was implemented on an existing kernel release & deployed on the internal HONE systems in fall '77 (supposedly at the time, the largest "single system image" configuration in the world, eight multiprocessor complexes all sharing the same disk pool), misc hone refs:
https://www.garlic.com/~lynn/2000.html#75
https://www.garlic.com/~lynn/2000c.html#30
https://www.garlic.com/~lynn/2000c.html#49

and then incorporated into a later release for customers in '78

misc/random refs:
https://www.garlic.com/~lynn/2000.html#78
https://www.garlic.com/~lynn/2000b.html#38
https://www.garlic.com/~lynn/2000b.html#65
https://www.garlic.com/~lynn/2000d.html#47
https://www.garlic.com/~lynn/2000d.html#82
https://www.garlic.com/~lynn/2000e.html#4

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Ridiculous

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ridiculous
Newsgroups: comp.arch,alt.folklore.computers
Date: 21 Sep 2000 07:35:14 -0600
plugh@NO.SPAM.PLEASE (Caveman) writes:
Egregiously switching CPUs causes a huge performance hit in most cases.

Most schedulers for MP systems since the DEC-10 and -20 and the GE 6[34]5 have tried to keep a thread on the same CPU unless some other systemwide priority took precedence.


in my previous posting on VAMPS and the support for switching CPUs on the 370/158 & 370/168: the VAMPS processor (370/125) was a non-cached machine, and switching processors 1) avoided spin-locks and 2) didn't have a cache-hit issue.

The 158 & 168 configurations mentioned were 32kbyte & 64kbyte cache, two-processor machines. Queuing a request for the processor that already had the kernel lock:

1) avoided spin-locks,

2) the high-use portions of the kernel had fine-grain locking,

3) tended to preserve "kernel" cache hits (i.e. interrupts, task switching, & kernel/non-kernel transitions all tended to create lots of cache misses because of locality transitions; if a processor already had the kernel lock, queueing a request for that processor tended to re-use kernel code already in its cache),

4) the kernel lock was against lower-usage, longer path length operations (which, once executed, would have preserved very few non-kernel cache lines),

5) processes that tended to make lower-usage, longer-path kernel calls tended to be making various i/o requests; the request queueing tended to migrate them to the same processor ... leaving processes that made few such calls on the other processor ... tending to improve cache-hit ratios on both processors (i.e. the processor switching implied by the queueing had a slight tendency to cluster processes with similar characteristics on the same processor; while at the "micro" level process switching would seem bad for cache hits, the clustering effect at the "macro" level actually improved overall cache hits and improved thruput, although this was somewhat a characteristic of the relative cache sizes and kernel pathlengths involved).

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Is a VAX a mainframe?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is a VAX a mainframe?
Newsgroups: alt.folklore.computers
Date: 22 Sep 2000 09:44:38 -0600
jmfbahciv writes:
Nitpik: Having all I/O paths shared by all CPUs is not a prerequisite to SMP. This is the second time I've seen this notion.

the IBM 360 SMPs had shared memory ... but not explicitly shared I/O channels. To simulate shared I/O channels, an installation would configure devices &/or control units with multiple connections to different channels (i.e. the channels were processor specific, but devices had multiple connections to different channels).

The one exception was the 360 model 67 SMPs, which had both shared memory and shared I/O implementations (the 67 was also the only 360 model that supported virtual memory).

The controller/device multiple connections was also what was used to implement IBM 360 clusters (i.e. multiple 360 processors, not sharing memory, but sharing I/O devices).

The IBM 370 line came out (eventually) with virtual memory as standard, but all of the IBM 370 SMP implementations had non-shared I/O channels. The 370 line also had asymmetric multiprocessing implementations: multiple processors sharing memory, but some processors with no I/O capability at all.

misc. refs from the Numa/SMP/etc discussion on comp.arch
https://www.garlic.com/~lynn/2000e.html#2
https://www.garlic.com/~lynn/2000e.html#4
https://www.garlic.com/~lynn/2000e.html#6
https://www.garlic.com/~lynn/2000e.html#7

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Checkpointing (was spice on clusters)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Checkpointing (was spice on clusters)
Newsgroups: comp.arch
Date: 22 Sep 2000 10:07:43 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
That was the situation in 1975, too :-)

Non-automatic checkpointing is far easier, and such methods have been partially successful. As you point out, I/O is a problem, but there are actually dozens of others even in this case - I/O is merely the most obvious.


prior to that, at least one of the time-sharing service bureaus with datacenters in boston and sanfran ... and world-wide customers ... did process migration ... sanfran (or boston) machines would shut down for preventive maintenance ... and the process, state, i/o, etc would not just be "checkpointed" (to disk) ... but the whole state, process, etc copied to the other datacenter (disk) ... and restarted.

Simpler variations were restarting on a machine in the same datacenter and/or on the same machine (after it had been brought back into service, also could be used for less planned outages, load balancing, etc).
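The checkpoint/restart shape described above amounts to serializing the whole process state and reading it back later, on the same machine or after shipping the file to another datacenter. A minimal sketch (invented struct and names; real migration also had to capture open I/O state):

```c
/* Toy checkpoint/restart: write a process's state out as a blob, then
   read it back and resume from where it left off.  Invented struct --
   not the service bureau's actual format. */
#include <stdio.h>
#include <string.h>

struct proc_state {
    long pc;          /* where to resume execution     */
    long regs[4];     /* register contents             */
    char cwd[64];     /* miscellaneous process context */
};

int checkpoint(const char *file, const struct proc_state *st)
{
    FILE *f = fopen(file, "wb");
    if (!f) return -1;
    size_t n = fwrite(st, sizeof *st, 1, f);   /* state -> disk */
    fclose(f);
    return n == 1 ? 0 : -1;
}

int restart(const char *file, struct proc_state *st)
{
    FILE *f = fopen(file, "rb");
    if (!f) return -1;
    size_t n = fread(st, sizeof *st, 1, f);    /* disk -> state */
    fclose(f);
    return n == 1 ? 0 : -1;
}
```

Shipping the checkpoint file to the other datacenter's disk before the `restart` call is what turned this from simple restart into migration.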

misc. refs:
https://www.garlic.com/~lynn/99.html#10
https://www.garlic.com/~lynn/2000.html#64

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Is Al Gore The Father of the Internet?^

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 22 Sep 2000 20:33:05 -0600
... misc refs from the '80s.

slightly related prior posts:
https://www.garlic.com/~lynn/2000d.html#70
https://www.garlic.com/~lynn/2000d.html#72
https://www.garlic.com/~lynn/2000d.html#73
https://www.garlic.com/~lynn/2000d.html#76

================================================================

misc. announcement

Date: 4 January 1988, 14:12:35 EST
To: distribution
Subject: NSFNET Technical Review Board Kickoff Meeting 1/7/88

On November 24th, 1987 the National Science Foundation announced that MERIT, supported by IBM and MCI, was selected to develop and operate the evolving NSF Network and gateways integrating 12 regional networks. The Computing Systems Department at IBM Research will design and develop many of the key software components for this project including the Nodal Switching System, the Network Management applications for NETVIEW and some of the Information Services Tools.

I am asking you to participate on an IBM NSFNET Technical Review Board. The purpose of this Board is to both review the technical direction of the work undertaken by IBM in support of the NSF Network, and ensure that this work is proceeding in the right direction. Your participation will also ensure that the work complements our strategic products and provides benefits to your organization. The NSFNET project provides us with an opportunity to assume leadership in national networking, and your participation on this Board will help achieve this goal.


... snip ... top of post, old email index, NSFNET email

=================================


         John Markoff, NY Times, 29 December 1988, page D1

In an article titled 'A Supercomputer in Every Pot' a proposal for a
nationwide 'data superhighway' is discussed.  The following points
are made:

- The network would link supercomputers at national laboratories (Princeton,
  NJ; College Park, MD; Ithaca, NY; Pittsburgh, PA; Chicago, Ill; Lincoln,
  Neb; Boulder, Colo; Salt Lake City, Utah; San Diego, CA; Palo Alto, CA;
  and Seattle, WA).

- This network would also be at the top of a hierarchy of a number of slower
  speed networks operated by a number of government agencies.

- Fiber-optic cable operated at 3 gigabits/sec.

- Previous attempt to meet the need is the two-year old NsfNet research network
  which links five of the supercomputer laboratories with 1.5 Mbs lines.

- Feeling among the academic community that federal funding and coordination
  are necessary; with it serving as a model for later commercial efforts.

- Federal legislation for initial financing and construction of a National
  Research Network introduced in October, 1988, by Senator Albert Gore.

- Five-year development and implementation period mentioned for protocols,
  hardware, etc.

- A proposal for a regional network linking Univ of Pa, Princeton, and IBM's
  Watson Research Labs (Hourglass Project) was mentioned as potentially
  providing a 'preview of some of the services' of the national network.

=================================================================================

Subject: Paving way for data 'highway'
Carl M Cannon, San Jose Mercury News, 17 Sep 89, pg 1E

National High-Performance Computer Technology Act of 1989
- US Senate Committee on Commerce, Science and Transportation
- $1.8 billion over next 5 years
 - Research and development of supercomputer hardware/software/networks
- Senator Albert Gore, Jr and 8 co-sponsors
   . "My bill will definitely get out of committee
... pass in this congress
... maybe this year."
- Also introduced in the House of Representatives
 - Newly declared allies in George Bush's
Office of Science and Technology Policy

Senator Albert Gore, Jr
- 10 years old when Senator Al Gore Sr planned the nation's interstate
highway system
 - "I was impressed ... I noticed how the ride from Carthage Tennessee
to Washington DC got shorter and shorter."
 - Al Gore Jr is the main proponent behind the US superhighway of tomorrow
. nationwide network
. transporting billions of pieces of data per second
. among scholars, students, scientists, and even children
 - "A fundamental change is taking place ...
no longer rely on ink and paper to record information ...
    we count on computer systems to record informational digitally."
- "The concept of a library will have to change with technology ...
new approach .... the 'digital library'"
- Coined the "computer superhighway" 9 years ago
  . 1986: shepherded legislation thru Congress while in the House of Rep
    - prepare a report on high-performance computers and networking
  . 1987: Report signed by William R Graham, Pres. Reagan's science adviser
    - Urged a coordinated, long-range strategy
    - support high-performance computer research
    - apply the research to the rest of the nation, including industry
  . 1989: White House Office of Science and Technology Policy
    - followup report was a minor blockbuster in technology policy
- Senators have received a series of 3 presentations
  . a primer on the power of supercomputers
  . "Wiring the World"; What's possible thru networks
  . "The freight that can be carried on this highway"
- Joe B Wyatt, Vanderbilt University chancellor
  . University librarian estimated the world's authors would produce
    1 million new titles per year by the year 2000
  . This is the "freight" a national supercomputer network would carry
  . Not Vanderbilt's existing library
  . "store and transport to those who need it"
- James H Billington, librarian of Congress
  . "88 million items in the Library of Congress"
  . "Largest collection of recorded information and knowledge ever
    assembled in one place here on Capitol Hill"
  . "The nation's most important single resource for the information age"
  . "establishment of a national research and education network would
    give an immense boost to the access of these materials"
  . "allow the LofC to provide much more of its unequaled data and
    resources than can now be obtained only by visiting Washington"
- John Seely Brown, Xerox PARC vice president
  . "power of information technology to help meet the demand for
    knowledge is unquestionable"
  . But knowledge workers are already overburdened by
    - information explosion
    - increasing complexity
    - ever-accelerating pace of change
- Where networks are already developed, the results can be stunning
  . US Geological Survey demonstrated
    - instantaneous combination of 15 types of electronic maps
    - helps municipalities figure out where they can safely authorize
      drilling wells for drinking water
    - Computations without the database or high-performance computers
      would be impossible

D. Allan Bromley, science adviser to George Bush
- recommended spending the same amount of money Gore is requesting
  . supercomputers and supercomputer networks
- Still not a formal budget proposal
- Four agencies are spending $500 million per year on computing research
  . Defense Advanced Research Projects Agency
  . Department of Energy
  . National Aeronautics and Space Administration
  . National Science Foundation
- Bromley's recommendation makes it an easier issue for Congress to support
  . Either Al Gore's bill, or individual appropriations to the agencies
  . Sends a signal to the agencies that they have allies for these projects

Stephen L Squires, DARPA chief scientist for Information, Science, Technology
- "There is a real urgency for this"
- US is in a critical stage in technology development
- "wait 2-3 years ... a lot of people will be starved for resources"
- "enormous demand, not just in computing, but in scientific fields"
- "People can see what would make their dreams come true"

=======================================================================

NATIONAL CENTER FOR SUPERCOMPUTING APPLICATIONS (NCSA) VISIT
Oct. 26, 1989

NCSA is a part of the University of Illinois and one of the
leading-edge Engineering and Scientific Applications centers. For
example, NCSA demonstrated Visualization at Siggraph'89
(Boston) across multiple hardware platforms (Cray-2, SUN,
Alliant FX-80, Ultranet, AT&T) and across long distances
(Urbana----Boston). NCSA work has been cited in numerous
proposals for technology advancement, such as Gore's 3-gigabit
"Data Highway" congressional bill.

Representing NCSA will be:

Dr. Larry Smarr        - Director of NCSA; Prof. of Physics & Astronomy
Dr. Karl-Heinz Winkler - Deputy Director for Science, Technology and
                         Education of NCSA; Professor of Physics,
                         Astronautical and Aeronautical Engineering,
                         Mechanical & Industrial Engineering
Dr. Melanie Loots      - Research Scientist of NCSA, Computational
                         Chemistry

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Is Al Gore The Father of the Internet?^

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 23 Sep 2000 10:43:41 -0600
toor@y1.jdyson.net (John S. Dyson) writes:
Nope. He advocated funding for NSFnet at the best... Funding isn't creation though... There still is no evidence (even per Anne+Lynn Wheeler) that he had an idea of what was going to happen.

I admit I hadn't paid any attention to any Gore activity at the time it was happening. I can find old references to what appear to be advanced technology bills starting in '88 and possibly not passing until the end of '91 (same bill 3 years later?).

This was during a period of "high technology" gov. sponsorship and churn. It was a time of gov. support for (univ) supercomputing centers (in some cases the money went for the buildings; there was some talk regarding the UC system that it went to the campus that was "due" the next new building), high-speed gigabit fiber networks, and HDTV.

NSFNET backbone (56kbit, T1, then T3) was coming at a time when there was a vast explosion in networking ... both non-profit & profit. The contribution of the prior NSF activity supporting CSNET & gov. support for various university TCP/IP activities in the early '80s ... along with the introduction of workstations & PCs ... fueled the explosion ... as well as the online service providers (the internet is as much a service issue for the public as it is a networking technology issue).

There was hope that the gov. support for high-end projects in HPCC (supercomputers), NREN (gigabit fiber networks), & HDTV (electronic component manufacturing) would have a trickle down effect into the US economy.

There, in fact, seemed to be the reverse happening. Once the low-end stuff (consumer online access) started hitting critical mass ... the commercial funding trickled "up" into the high-end stuff (at the time, more recognized in the HDTV area).

It isn't even clear how much of the NREN funding was actually spent (I remember various corporations commenting about being asked to donate commercial products to participate in the NIIT "test bed"). Also, HPCC supercomputers started to shift to being large arrays of workstation &/or PC processor chips (workstations & PCs enabled the consumer online market as well as providing the basis for today's supercomputers).

This was probably more recognized in the various HDTV gov. activities ... which appeared to be more slanted towards attempting to bias standards & regulations to help US corporations (as opposed to direct technology funding). The issue was that the TV market (at the time) was thousands of times bigger than the computer market ... and HDTV components would be as advanced as anything in the computer market ... whoever dominated the HDTV/TV market possibly would take over the whole electronics industry (computers, networking, components, etc).

At least in some areas, there started to be a shift from direct gov. technology funding (i.e. fund a university to write TCP/IP code) to targeted services using commercial products (aka a NSFNET backbone). It isn't to say that it was bad to try and continue (in the 80s/90s) the gov. funding of research & strategic technologies ... it was just that commercial market penetration (for some areas) had reached a point by the mid to late 80s where commercial & for-profit funding started to dominate (i.e. gov. funding has tended to be more productive in areas of pure research ... and not as productive later in the technology cycle, when it has started to be commercialized).

If there was any doubt at the time ... Interop '88 was a large commercial "internet" show ... where significant numbers of univ. researchers started showing up in commercial companies.
INTEROP 88: The 3rd TCP/IP Interoperability Conference and Exhibition will be held at the Santa Clara Convention Center and Doubletree Hotel from September 26 through 30th, 1988. The format is 2 days of tutorials followed by 3 days of technical session (16 in all). For the first time, there will also be an Interoperability exhibition where vendors will show TCP/IP systems on a "Show and Tel-Net" which additionally will be connected to the Internet.

A number of vendors, known as the "Netman" group will be demonstrating an experimental network management system based on the ISO CMIP/CMIS protocols.

For more information on the conference contact:

Advanced Computing Environments
480 San Antonio Road, Suite 100
Mountain View, CA 94040
(415) 941-3399


The show had four "backbone" networks with machines in many booths connected to two or more of the backbone networks. As the machines were starting to be connected on Sunday ... the networks started to crash and burn. Early Monday morning (the day the show started) the problem was identified.

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88

& as usual the random urls:
https://www.garlic.com/~lynn/94.html#34
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/99.html#40
https://www.garlic.com/~lynn/2000c.html#12
https://www.garlic.com/~lynn/2000c.html#21
https://www.garlic.com/~lynn/2000d.html#2
https://www.garlic.com/~lynn/2000d.html#3
https://www.garlic.com/~lynn/2000d.html#70
https://www.garlic.com/~lynn/2000d.html#71
https://www.garlic.com/~lynn/2000d.html#72
https://www.garlic.com/~lynn/2000d.html#73
https://www.garlic.com/~lynn/2000d.html#76
https://www.garlic.com/~lynn/2000d.html#77
https://www.garlic.com/~lynn/2000d.html#78
https://www.garlic.com/~lynn/2000e.html#5
https://www.garlic.com/~lynn/2000e.html#10
https://www.garlic.com/~lynn/internet.htm

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Restricted Y-series PL/1 manual? (was Re: Integer overflow exception)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Restricted Y-series PL/1 manual? (was Re: Integer overflow  exception)
Newsgroups: comp.arch
Date: 23 Sep 2000 16:36:00 -0600
"Rostyslaw J. Lewyckyj" writes:
Eric Smith wrote: > Why was the Y-series PL/1 manual "restricted"?

Probably to keep from having too many people from taking it as the real language spec, and then asking why various things were not implemented in the product compilers. I assume that Eric meant the language spec manual, rather than the compiler logic manual for one of the compilers. --Rostyk.


... there were restricted manuals for PL/1 when it was still in "beta" test ... i.e. hadn't been released as product yet. however most y-manuals were various kinds of (internal) system/program/compiler logic manuals.

I was at a university where IBM did a beta installation of PL/1 and there were all sorts of security(?) restrictions. All evidence of its existence was supposed to have been obliterated after the end of the trial period ... and there was some incident where there was suspicion that somebody at the university had made an (unauthorized) copy.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

internet preceeds Gore in office.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: internet preceeds Gore in office.
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 24 Sep 2000 09:02:21 -0600
toor@y1.jdyson.net (John S. Dyson) writes:
For 1985 timeframe or so, 2300 nodes was a very impressive number. That is before ANY of Gore's influence. Technology did make the internet more practical, and Gore did join in on the bandwagon. By taking credit for the internet, he is also leveraging technology by taking political advantage of the coincidence of technology and his tenure.

as an aside, the internal network hit 1000 nodes in '83 and 2000 nodes in '84. These were just about all multi-user time-sharing mainframe nodes. The internal network started at 545 tech. sq in the same era that the first nodes became operational on the arpanet ... and until the internet started seeing massive numbers of workstation & PC nodes ... consistently had more (mainframe) nodes than the internet (had total nodes).

One of the successes of the internal network was essentially a gateway layer at each node with heterogeneous networking support (from the start). While the original arpanet introduced packets ... it was a homogeneous network. It wasn't until the IP-layer came along that it got real gateways and heterogeneous network support ... which ... with BSD TCP/IP support ported to workstations and PCs ... allowed it to really start to take off.

misc: refs:
https://www.garlic.com/~lynn/99.html#39
https://www.garlic.com/~lynn/99.html#44
https://www.garlic.com/~lynn/99.html#112
https://www.garlic.com/~lynn/2000e.html#5

Internal Net posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

internet preceeds Gore in office.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: internet preceeds Gore in office.
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 25 Sep 2000 07:55:33 -0600
jmfbahciv writes:
IIRC, and my dates are real fuzzy these days, it was around that time that DEC started running out of node numbers (the field definition was too small). Their solution was to define an area number and tack it onto the node number. Out of that came our LAN development group (this is not a claim of creating the LAN).

Similar were the IBM HASP/JES systems that connected into the internal network (as end-nodes) .... where the internal network machines had embedded gateways (previously mentioned for heterogeneous support) that talked HASP/JES protocols and translated to the internal network protocol.

JES/NJE networking was introduced in the mid-70s with networking "node" definitions mapped into the HASP/JES 256-entry "pseudo-device" table (i.e. HASP originally defined pseudo reader/printer/punch devices with the table). A typical JES system might have 40-50 defined pseudo devices (for spool), leaving 200 or so entries available for defining network nodes.

The internal network was already larger than the JES/NJE limit by the time the original support was implemented. JES increased the number of definition slots to 999 (from 256) after the internal network had exceeded 1000 nodes.

The internal network gateways used an eight-byte alphanumeric field.

The internal network gateways were even used to interconnect different HASP/JES/NJE systems operating at different release levels. HASP/JES/NJE had bit-specific header fields that could vary from release to release (and frequently different releases wouldn't interoperate, and in some cases a JES/NJE at one release could cause the operating system in another machine running a different JES/NJE release to crash). The internal network gateways would have gateway support for all of the various HASP/JES/NJE protocol flavors and if necessary do the bit-field conversions from one release to a different release.

The early IBM HASP/JES methodology suffered from the same vision limitation as the early ARPANet work ... supporting homogeneous (bit-specific) networking w/o the concept of gateways. In contrast, the internal network methodology incorporated the concept of gateways from just about the original implementation.
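The flavor of that gateway bit-field conversion can be sketched in a few lines of modern Python. The header layouts below are invented for illustration (the real NJE headers and release-to-release differences were far more involved); the point is just that each gateway parses a neighbor's release-specific header into a canonical form and re-emits it in whatever dialect the next hop speaks:

```python
# Sketch of the gateway idea: canonicalize a release-specific header,
# then re-emit it for the next hop's release. Offsets/fields invented.

RELEASE_LAYOUTS = {
    # release -> (origin offset, destination offset, field length)
    "NJE_R1": (0, 8, 8),
    "NJE_R2": (4, 12, 8),   # pretend R2 added a 4-byte flag word up front
}

def parse_header(raw: bytes, release: str) -> dict:
    o, d, n = RELEASE_LAYOUTS[release]
    return {"origin": raw[o:o + n].decode("ascii").rstrip(),
            "dest":   raw[d:d + n].decode("ascii").rstrip()}

def emit_header(hdr: dict, release: str) -> bytes:
    o, d, n = RELEASE_LAYOUTS[release]
    buf = bytearray(max(o, d) + n)            # zero-filled frame
    buf[o:o + n] = hdr["origin"].ljust(n).encode("ascii")
    buf[d:d + n] = hdr["dest"].ljust(n).encode("ascii")
    return bytes(buf)

def gateway(raw: bytes, from_release: str, to_release: str) -> bytes:
    # The gateway speaks every dialect, so two nodes at different
    # releases never have to parse each other's raw bit layouts.
    return emit_header(parse_header(raw, from_release), to_release)
```

A node hands the gateway its native header, and the gateway re-frames it for the neighbor's release; two releases that otherwise wouldn't interoperate never see each other's raw bits.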

misc. ref:
https://www.garlic.com/~lynn/99.html#113

as an aside, my wife does have a patent (4195351, 3/25/1980) on some of the early (token passing) LAN technology
https://www.garlic.com/~lynn/2000c.html#53

Internal Net posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

internet preceeds Gore in office.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: internet preceeds Gore in office.
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 25 Sep 2000 10:11:24 -0600
some misc. other refs:
https://www.garlic.com/~lynn/97.html#27
https://www.garlic.com/~lynn/99.html#33
https://www.garlic.com/~lynn/2000b.html#61
https://www.garlic.com/~lynn/2000b.html#67

early internal network started out with cpremote on cp/67 ... at 545 tech. sq (including talking to 1130). one of the early distributed development projects was support and testing of relocation hardware on the 370/145.

370s were initially delivered w/o virtual addressing enabled (and in some cases even present). 145s had the hardware support ... even tho it wasn't enabled. there was even a flap about the "lights" on the 145 front panel (before announcement of relocation on 370 hardware). One of the lights was labeled DLAT ... (indicating address translation mode).

there was also a "pentagon papers" type flap where a document on address translation was leaked. there was a big internal investigation ... and after that all (internal) copiers were modified to have an identification installed (copies made would have the copier ID printed on each page copied).

In any case, there was a distributed project between the 145 plant in Endicott (NY) and 545 tech sq (Cambridge, Mass) ... using the network support. It included having a version of CP/67 (360/67) modified to simulate 370 relocation architecture (different from the 360/67 relocation architecture) on a 360/67. Then a different CP/67 was modified so that it "ran" using 370 relocation architecture (in the simulated 370 virtual machine running on a real 360/67).

This modified CP/67 was operational and running a year before there was real 370/145 hardware available. The modified CP/67 "I" ... was used as the initial test of the 370/145 when the hardware became available. The initial IPL/boot failed ... it turned out that the 145 engineers had implemented part of the relocation architecture wrong. The modified CP/67 was temporarily patched to correspond with the wrong hardware implementation until the engineers were able to correct the machine.

CPREMOTE was eventually renamed VNET and, after several enhancements and wide deployment internally, was made available to customers in the mid-70s (as part of a combined VNET & JES2/NJE offering).

Parts of this were also the basis for BITNET in North America and EARN in Europe (the number of nodes quoted for the internal network only included machines on the internal corporate network ... and none of the BITNET or EARN nodes).

Internal Net posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

First OS with 'User' concept?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First OS with 'User' concept?
Newsgroups: alt.folklore.computers
Date: Wed, 27 Sep 2000 15:05:57 GMT
Terry Kennedy writes:
Of course, with timesharing and multiple-stream batch processing, it becomes important to distinguish between different users running on the hardware at the same time, which is probably where your question actually starts.

i expect cp/40 ('65-'66) & cp/67 ('67) inherited their username & login from CTSS. The basic CP/67 control block that all per-user resources, etc., were hung off was the UTABLE (aka user table).

I'm slowly unpacking stuff that has been in storage for over a year. Somewhere in all the stuff is a CP/67 program logic manual (PLM).

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

X.25 lost out to the Internet - Why?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.25 lost out to the Internet - Why?
Newsgroups: alt.folklore.computers,comp.dcom.telecom.tech
Date: Wed, 27 Sep 2000 17:10:50 GMT
Lizard Blizzard writes:
When you say Worldcom, don't you mean the predecessor of Worldcom? I think Worldcom is a newer more recent name for the MCI and UUNet and whatever else it's now conglomerately called. Perhaps you meant MCI?

from 10/2/97 news
The telecom company may have started out as small fish, but it's been eating well, gobbling up about 50 companies in the past five years. Among its recent acquisitions were UUNET Technologies, a major Internet backbone company, as well as the networks formerly held by AOL and CompuServe. Now WorldCom has launched an unsolicited takeover bid for MCI, topping an offer from British Telecom. The deal would make it the nation's second largest telephone company, put it in control of about half the Internet backbone and shake things up on the consumer level.

from 9/17/96 news
Worldcom plus MFS equals global contender
=========================================

Two of the fastest-moving telephone operators in the world, MFS Communications and WorldCom, are to merge. The two are seen as exemplary operators who find ways of achieving their goals despite regulatory and other difficulties that other operators claim are insurmountable. Both have the further advantage of modern infrastructure.

WorldCom is the fourth largest long distance carrier in the USA (it bought out Wiltel in 1994) and is to pay a high price for MFS. It intends to swap 2.1 of its shares for every MFS share, which values MFS at around US$14 billion. MFS lost US$277 million last year and is unlikely to make a profit before 1998.

Over the next five years MFS intends to build its own fibre networks in 45 financial centres around the world to add to its existing networks in London, Paris, Frankfurt and Stockholm as well as networks in 49 US cities it has built since 1987.

WorldCom last year had revenues of US$3.4 billion and a net income of US$267.7 million. It chiefly leases bandwidth from other operators which for long distance is both plentiful and cheap. On the other hand, events in liberalised markets over the last few years have shown that real competitive telecoms provision needs alternative local loop infrastructure. The combination of WorldCom's long distance network and MFS' access networks should make MFS WorldCom a serious global contender.

The deal will take some months to complete so it would not be surprising if either party were made an offer they couldn't refuse by one of their slower-moving but larger fellow operators.


--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Is Al Gore The Father of the Internet?^

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: Wed, 27 Sep 2000 21:03:22 GMT
Anne & Lynn Wheeler writes:
... misc refs from the '80s.

slightly related prior posts:

https://www.garlic.com/~lynn/2000d.html#70
https://www.garlic.com/~lynn/2000d.html#72
https://www.garlic.com/~lynn/2000d.html#73
https://www.garlic.com/~lynn/2000d.html#76


&
https://www.garlic.com/~lynn/2000e.html#10
https://www.garlic.com/~lynn/2000e.html#11
https://www.garlic.com/~lynn/2000e.html#13

announcing the switch-over to TCP/IP and some early comments about the success of the switch. Note that the "IP" introduced internetworking and gateways ... easing the integration of a large number of different networks.

Date: 30 Dec 1982 14:45:34 EST (Thursday)
From: Nancy Mimno <mimno@Bbn-Unix>
Subject: Notice of TCP/IP Transition on ARPANET
To: csnet-liaisons at Udel-Relay
Cc: mimno at Bbn-Unix
Via: Bbn-Unix; 30 Dec 82 16:07-EST
Via: Udel-Relay; 30 Dec 82 13:15-PDT
Via: Rand-Relay; 30 Dec 82 16:30-EST

ARPANET Transition 1 January 1983
Possible Service Disruption
---------------------------------

Dear Liaison,

As many of you may be aware, the ARPANET has been going through the major transition of shifting the host-host level protocol from NCP (Network Control Protocol/Program) to TCP-IP (Transmission Control Protocol - Internet Protocol). These two host-host level protocols are completely different and are incompatible. This transition has been planned and carried out over the past several years, proceeding from initial test implementations through parallel operation over the last year, and culminating in a cutover to TCP-IP only 1 January 1983. DCA and DARPA have provided substantial support for TCP-IP development throughout this period and are committed to the cutover date.

The CSNET team has been doing all it can to facilitate its part in this transition. The change to TCP-IP is complete for all the CSNET host facilities that use the ARPANET: the CSNET relays at Delaware and Rand, the CSNET Service Host and Name Server at Wisconsin, the CSNET CIC at BBN, and the X.25 development system at Purdue. Some of these systems have been using TCP-IP for quite a while, and therefore we expect few problems. (Please note that we say "few", not "NO problems"!) Mail between Phonenet sites should not be affected by the ARPANET transition. However, mail between Phonenet sites and ARPANET sites (other than the CSNET facilities noted above) may be disrupted.

The transition requires a major change in each of the more than 250 hosts on the ARPANET; as might be expected, not all hosts will be ready on 1 January 1983. For CSNET, this means that disruption of mail communication will likely result between Phonenet users and some ARPANET users. Mail to/from some ARPANET hosts may be delayed; some host mail service may be unreliable; some hosts may be completely unreachable. Furthermore, for some ARPANET hosts this disruption may last a long time, until their TCP-IP implementations are up and working smoothly. While we cannot control the actions of ARPANET hosts, please let us know if we can assist with problems, particularly by clearing up any confusion. As always, we are or (617)497-2777.

Please pass this information on to your users.

Respectfully yours, Nancy Mimno CSNET CIC Liaison


... snip ...

================================================

some observations about the success of the switch-over

Date: 02/02/83 23:49:45
To: CSNET mailing list
Subject: CSNET headers, CSNET status

You may have noticed that since ARPANET switched to TCP/IP and the new version of software on top of it, message headers have become ridiculously long. Some of it is because of tracing information that has been added to facilitate error isolation and "authentication", and some of it I think is a bug (the relay adds a 'From' and a 'Date' header although there already are headers with that information in the message). This usually doesn't bother people on the ARPANET because they have smart mail reading programs that understand the headers and only display the relevant ones. I have proposed a mail reader/sender program that understands about ARPANET headers (RFC822) as a summer project, so maybe we will sometime enjoy the same priviledge.

The file CSNET STATUS1 on the CSNET disk (see instructions below for how to access it) contains some clarification of the problems that have been experienced with the TCP/IP conversion. Here is a summary:
- Nodes that don't yet talk TCP (but the old NCP) can be accessed through the UDel-Relay. So if you think you have problems reaching a node because of this, append @Udel-Relay to the ARPANET address.
- You can find out about the status of hosts (e.g., if they run TCP or not) by sending ANY MESSAGE to Status@UDel-Relay (capitalization is NOT significant).
- If your messages are undeliverable, you get a notice after two days, and your messages get returned after 4 days.
- Avoid using any of the fancy address forms allowed by the new header format (RFC822).
- The TCP transition was a lot more trouble than the ARPANET people had anticipated.


... snip ...

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Is Al Gore The Father of the Internet?^

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: Wed, 27 Sep 2000 21:05:21 GMT
Anne & Lynn Wheeler writes:
... misc refs from the '80s.

long re-post warning ... from later in the 80s

From: geoff@FERNWOOD.MPK.CA.US (the terminal of Geoff Goodfellow)
Newsgroups: comp.protocols.tcp-ip
Subject: THE INTERNET CRUCIBLE - Volume 1, Issue 1
Date: 1 Sep 89 03:12:06 GMT

THE CRUCIBLE                                              INTERNET EDITION
an eleemosynary publication of the                            August, 1989
Anterior Technology IN MODERATION NETWORK(tm)           Volume 1 : Issue 1

Geoff Goodfellow
                                 Moderator

In this issue:
A Critical Analysis of the Internet Management Situation

THE CRUCIBLE is an irregularly published, refereed periodical on
the Internet.  The charter of the Anterior Technology IN MODERATION
NETWORK is to provide the Internet and Usenet community with
useful, instructive and entertaining information which satisfies
commonly accepted standards of good taste and principled discourse.
All contributions and editorial comments to THE CRUCIBLE are
reviewed and published without attribution.  Cogent, cohesive,
objective, frank, polemic submissions are welcomed.

Mail contributions/editorial comments to:       crucible@fernwood.mpk.ca.us

--------------------------------------------------------------------------

         A Critical Analysis of the Internet Management Situation:
The Internet Lacks Governance

ABSTRACT

At its July 1989 meeting, the Internet Activities Board made some
modifications in the management structure for the Internet.  An
outline of the new IAB structure was distributed to the Internet
engineering community by Dr. Robert Braden, Executive Director.  In
part, the open letter stated:

"These changes resulted from an appreciation of our successes,
especially as reflected in the growth and vigor of the IETF, and
in rueful acknowledgment of our failures (which I will not
enumerate).  Many on these lists are concerned with making the
Internet architecture work in the real world."

In this first issue of THE INTERNET CRUCIBLE we will focus on the
failures and shortcomings in the Internet.  Failures contain the
lessons one often needs to achieve success.  Success rarely leads to a
search for new solutions.  Recommendations are made for short and long
term improvements to the Internet.

A Brief History of Networking

The Internet grew out of the early pioneering work on the ARPANET.  This
influence was more than technological, the Internet has also been
significantly influenced by the economic basis of the ARPANET.

The network resources of the ARPANET (and now Internet) are "free".
There are no charges based on usage (unless your Internet connection
is via an X.25 Public Data Network (PDN) in which case you're well
endowed, or better be).  Whether a site's Internet connection
transfers 1 packet/day or a 1M packets/day, the "cost" is the same.
Obviously, someone pays for the leased lines, router hardware, and the
like, but this "someone" is, by and large, not the same "someone" who
is sending the packets.

In the context of the Research ARPANET, the "free use" paradigm was an
appropriate strategy, and it has paid handsome dividends in the form
of developing leading edge packet switching technologies.
Unfortunately, there is a significant side-effect with both the
management and technical ramifications of the current Internet
paradigm: there is no accountability, in the formal sense of the word.

In terms of management, it is difficult to determine who exactly is
responsible for a particular component of the Internet.  From a
technical side, responsible engineering and efficiency has been
replaced by the purchase of T1 links.

Without an economic basis, further development of short-term Internet
technology has been skewed.  The most interesting innovations in
Internet engineering over the last five years have occurred in
resource poor, not resource rich, environments.

Some of the best known examples of innovative Internet efficiency
engineering are John Nagle's tiny-gram avoidance and ICMP
source-quench mechanisms documented in RFC896, Van Jacobsen's
slow-start algorithms and Phil Karn's retransmission timer method.

In the Nagle, Jacobsen and Karn environments, it was not possible or cost
effective to solve the performance and resource problems by simply adding
more bandwidth -- some innovative engineering had to be done.
Interestingly enough, their engineering had a dramatic impact on our
understanding of core Internet technology.

It should be noted that highly efficient networks are important when
dealing with technologies such as radio where there is a finite amount
of bandwidth/spectrum to be had.  As in the Nagle, Jacobson and Karn
cases, there are many environments where adding another T1 link can
not be used to solve the problem.  Unless innovation continues in
Internet technology, our less than optimal protocols will perform
poorly in bandwidth or resource constrained environments.

Developing at roughly the same time as Internet technology have been
the "cost-sensitive" technologies and services, such as the various
X.25-based PDNs and the UUCP and CSNET dial-up networks.  These
technologies are all based on the notion that bandwidth costs money
and the subscriber pays for the resources used.  This has the notable
effect of focusing innovation to control costs and maximize efficiency
of available resources and bandwidth.  Higher efficiency is achieved
by concentrating on sending the most amount of information through the
pipe in the most efficient manner thereby making the best use of
available bandwidth/cost ratio.

For example, bandwidth conservation in the UUCP dial-up network has
advanced by leaps and bounds in the modem market with the innovation
of Paul Baran's (the grandfather of packet switching technology)
company, Telebit, which manufactures a 19.2Kb dial-up modem especially
optimized for UUCP and other well-known transfer protocols.  For
another example, although strictly line-at-a-time terminal sessions
are less "user friendly" than character-oriented sessions, they make
for highly efficient use of X.25 PDN network resources with echoing
and editing performed locally on the PAD.
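The savings from local echo and editing on the PAD are easy to quantify with a back-of-the-envelope sketch (illustrative numbers and a hypothetical function, not measured traffic): character-at-a-time operation crosses the network twice per keystroke, while line mode ships one packet per completed line.

```python
# Rough packet-count comparison between character-at-a-time and
# line-at-a-time terminal sessions over an X.25 PDN.
def session_packets(lines, chars_per_line, char_mode):
    if char_mode:
        # each keystroke: one packet out, one remote-echo packet back
        return lines * chars_per_line * 2
    # line mode: one packet per finished line; echo/editing done on the PAD
    return lines

# A 100-line session at 60 characters per line:
#   character mode -> 12000 packets; line mode -> 100 packets.
```

On a network that bills per packet, that two-orders-of-magnitude difference is exactly the kind of cost pressure the essay argues drives efficiency engineering.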

While few would argue that X.25 and the dial-up CSNET and UUCP are
superior technologies, they have proved themselves both to spur
innovation and to be accountable.  The subscribers to such services
appreciate the cost of the services they use, and often such costs
form a well-known "line item" in the subscriber's annual budget.

Nevertheless, the Internet suite of protocols is eminently
successful, based solely on the sheer size and rate of growth of both
the Internet and the numerous private internets, both domestically and
internationally.  You can purchase internet technology with a major
credit card from a mail order catalog.  Internet technology has
achieved the promise of Open Systems, probably a decade before OSI
will be able to do so.

Failures of the Internet

The evolution and growth of Internet technology have provided the
basis for several failures.  We think it is important to examine
failures in detail, so as to learn from them.  History often tends to
repeat itself.

Failure 1:- Network Nonmanagement

The question of responsibility in today's proliferated Internet is
completely open.  For the last three years, the Internet has been
suffering from non-management.  While few would argue that a
centralized czar is necessary (or possible) for the Internet, the fact
remains there is little to be done today besides finger-pointing when
a problem arises.

In the NSFNET, MERIT is in charge of the backbone and each regional
network provider is responsible for its respective area.  However,
trying to debug a networking problem across lines of responsibility,
such as intermittent connectivity, is problematic at best.  Consider
three all too true refrains actually heard from NOC personnel at the
helm:

"You can't ftp from x to y?  Try again tomorrow, it will
probably work then."

"If you are not satisfied with the level of [network]
service you are receiving you may have it disconnected."

"The routers for network x are out of table space for routes,
which is why hosts on that network can't reach your new
(three-month old) network.  We don't know when the routers will
be upgraded, but it probably won't be for another year."

One might argue that the recent restructuring of the IAB may work
towards bringing the Internet under control and Dr. Vinton G. Cerf's
recent involvement is a step in the right direction.  Unfortunately,
from a historical perspective, the new IAB structure is not likely to
be successful in achieving a solution.  Now the IAB has two task
forces, the Internet Research Task Force (IRTF) and the Internet
Engineering Task Force (IETF).  The IRTF, responsible for long-term
Internet research, is largely composed of the various task forces
which used to sit at the IAB level.  The IETF, responsible for the
solution of short-term Internet problems, has retained its
composition.

The IETF is a voluntary organization and its members participate out
of self interest only.  The IETF has had past difficulties in solving
some of the Internet's problems (e.g., after well over a year it has
yet to produce RFCs for either a Point-To-Point Serial Line IP or
Network Management enhancements).  It is unlikely that the IETF
has the resources to mount a concerted attack against the problems of
today's ever expanding Internet.  As one IETF old-timer put it: "No
one's paid to go do these things, I don't see why they (the IETF
management) think they can tell us what to do" and "No one is paying
me, why should I be thinking about these things?"

Even if the IETF had the technical resources, many of the Internet's
problems are also due to lack of "hands on" management.  The IETF

o  Bites off more than it can chew;
o  Sometimes fails to understand a problem before making a solution;
o  Attempts to solve political/marketing problems with technical
solutions;
o  Has very little actual power.

The IETF has repeatedly demonstrated the lack of focus necessary to
complete engineering tasks in a timely fashion.  Further, the IRTF is
chartered to look at problems on the five-year horizon, so they are
out of the line of responsibility.  Finally, the IAB, per se, is not
situated to resolve these problems as they are inherent to the current
structure of nonaccountability.

During this crisis of non-management, the Internet has evolved into a
patch quilt of interconnected networks that depend on lots of
seat-of-the-pants flying to keep interoperating.  It is not an unusual
occurrence for an entire partition of the Internet to remain
disconnected for a week because the person responsible for a key
connection went on vacation and no one else knew how to fix it.  This
situation is but one example of an endemic problem of the global
Internet.

Failure 2:- Network Management

The current fury over network management protocols for TCP/IP is but a
microcosm of the greater Internet vs. OSI debate going on in the
marketplace.  While everyone in the market says they want OSI, anyone
planning on getting any work done today buys Internet technology.  So
it is with network management, the old IAB made the CMOT an Internet
standard despite the lack of a single implementation, while the only
non-proprietary network management protocol in use in the Internet is
the SNMP.  The dual network management standardization blessings will
no doubt have the effect of confusing end-users of Internet
technology--making it appear there are two choices for network
management, although only one choice, the SNMP, has been implemented.
The CMOT alternative is unimplemented, non-functional, or
non-interoperable.

To compound matters, after spending a year trying to achieve consensus
on the successor to the current Internet standard SMI/MIB, the MIB
working group was disbanded without ever producing anything: the
political climate prevented them from resolving the matter.  (Many
congratulatory notes were sent to the chair of the group thanking him
for his time.  This is an interesting new trend for the
Internet--congratulating ourselves on our failures.)

Since a common SMI/MIB could not be advanced, an attempt was made to
de-couple the SNMP and the CMOT (RFC1109).  The likely result of
RFC1109 will be that the SNMP camp will continue to refine their
experience towards workable network management systems, whilst the
CMOT camp will continue the never-ending journey of tracking OSI while
producing demo systems for trade show exhibitions.  Unfortunately the
end-user will remain ever confused because of the IAB's controversial
(and technically questionable) decision to elevate the CMOT prior to
implementation.

While the network management problem is probably too large for the
SNMP camp to solve by themselves they seem to be the only people who
are making any forward progress.

Failure 3:- Bandwidth Waste

Both the national and regional backbone providers are fascinated with
T1 (and now T3) as the solution towards resource problems.  T1/T3
seems to have become the Internet panacea of the late 80's.  You never
hear anything from the backbone providers about work being done to get
hosts to implement the latest performance/congestion refinements to
IP, TCP, or above.  Instead, you hear about additional T1 links and
plans for T3 links.  While T1 links certainly have more "sex and
sizzle" than efficient technology developments like slow-start,
tiny-gram avoidance and line mode telnet, the majority of users on the
Internet will probably get much more benefit from properly behaving
hosts running over a stable backbone than the current situation of
misbehaving and semi-behaved hosts over an intermittent catenet.

Failure 4:- Routing

The biggest problem with routing today is that we are still using
phase I (ARPANET) technology, namely EGP.  The EGP is playing the role
of routing glue in providing the coupling between the regional IGP and
the backbone routing information.  It was designed to only accommodate
a single point of attachment to the catenet (which was all DCA could
afford with the PSNs).  However with lower line costs, one can build a
reasonably inexpensive network using redundant links.  However the EGP
does not provide enough information nor does the model it is based
upon support multiple connections between autonomous systems.  Work is
progressing in the Interconnectivity WG of the IETF to replace EGP.
They are in the process of redefining the model to solve some of the
current needs.  BGP or the Border Gateway Protocol (RFC1105) is an
attempt to codify some of the ideas the group is working on.

Other problems with routing are caused by regionals wanting a backdoor
connection to another regional directly.  These connections require
some sort of interface between the two routing systems.  These
interfaces are built by hand to avoid routing loops.  Loops can be
caused when information sent into one regional network is sent back
towards the source.  If the source doesn't recognize the information
as its own, packets can flow until their time to live field expires.
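The role the time-to-live field plays in bounding such loops can be sketched in a few lines of Python (an illustrative toy, not from the original text): each router decrements the TTL, and when it reaches zero the packet is discarded, so a routing loop wastes bandwidth only for a bounded number of hops.

```python
# Follow a packet around a forwarding loop, decrementing TTL at each
# router, and return how many hops it takes before the packet is
# discarded.  `next_hop` maps each router to its (wrongly looping)
# choice of next router.
def forward_until_expired(ttl, next_hop, start):
    hops, router = 0, start
    while ttl > 0:
        ttl -= 1                  # each router decrements the TTL field
        hops += 1
        router = next_hop[router]
    return hops                   # TTL hit zero: packet dropped here

# A two-router loop: A forwards to B, B forwards back to A.
# With an initial TTL of 32 the packet is forwarded 32 times, then dropped.
```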

Routing problems are caused by the interior routing protocol or IGP.
This is the routing protocol which is used by the regionals to pass
information to and from its users.  The users themselves can use a
different IGP than the regional.  Depending on the number of
connections a user has to the regional network, routing loops can be
an issue.  Some regionals pass around information about all known
networks in the entire catenet to their users.  This information
deluge is a problem with some IGPs.  Newer IGPs such as the new OSPF
from the IETF and IGRP from cisco attempt to provide some information
hiding by adding hierarchy.  OSPF is the Internet's first attempt at
using a Dijkstra-type algorithm as an IGP.  BBN uses it to route
between their packet switch nodes below the 1822 or X.25 layer.

Unstable routing is caused by hardware or host software.  Older BSD
software sets the TTL field in the IP header to a small number.  The
Internet today is growing and its diameter has exceeded the software's
ability to reach the other side.  This problem is easily fixed by
knowledgeable systems people, but one must be aware of the problem
before it can be fixed.

Routing problems are also perceived when in fact a serial line problem
or hardware problem is the real cause.  If a serial line is
intermittent or quickly cycles from the up state into the down state
and back again, routing information will not be supplied in a uniform
or smooth manner.  Most current IGPs are Bellman-Ford based and employ
some stabilizing techniques to stem the flow of routing oscillations
due to "flapping" lines.  Often when a route to a network disappears,
it may take several seconds for it to reappear.  This can occur at the
source router, which waits for the route to "decay" from the system.
This pause should be short enough so that active connections persist
but long enough that all routers in the routing system "forget" about
routes to that network.  Older host software with over-active TCP
retransmission timers will time out connections instead of persevering
in the face of this problem.  Also routers, according to RFC1009, must
be able to send ICMP unreachables when a packet is sent to a route
which is not present in its routing database.  Some host products on
the market close down connections when a single ICMP unreachable is
received.  This bug flies in the face of the Internet parable "be
generous in what you accept and rigorous in what you send".
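One of the stabilizing techniques Bellman-Ford based IGPs use against flapping lines is a hold-down timer, which can be sketched as follows (a minimal illustration with hypothetical names; real IGPs combine this with split horizon and other damping):

```python
# After a route is lost, ignore re-advertisements of it for a fixed
# interval -- long enough for every router in the system to "forget"
# the route, but short enough that active TCP connections persist.
class HoldDown:
    def __init__(self, interval=60.0):
        self.interval = interval
        self.lost_at = {}               # network -> time the route was lost

    def route_lost(self, net, now):
        self.lost_at[net] = now         # start the hold-down clock

    def accept_update(self, net, now):
        """Should a newly advertised route to `net` be believed yet?"""
        lost = self.lost_at.get(net)
        if lost is None or now - lost >= self.interval:
            self.lost_at.pop(net, None)  # hold-down over; believe it
            return True
        return False                     # still in hold-down: ignore
```

A route lost at time 0 is ignored if re-advertised at time 30 but accepted again at time 60, while routes that never flapped are accepted immediately.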

Many of the perceived routing problems are really complex multiple
interactions of differing products.

Causes of the Failures

The Internet failures and shortcomings can be traced to several sources:

First and foremost, there is little or no incentive for efficiency
and/or economy in the current Internet.  As a direct result, the
resources of the Internet and its components are limited by factors
other than economics.  When resources wear thin, congestion and poor
performance result.  There is little to no incentive to make things
better: if 1 packet out of 10 gets through, things "sort of work".  It
would appear that Internet technology has found a loophole in the
"Tragedy of The Commons" allegory--things get progressively worse and
worse, but eventually something does get through.

The research community is interested in technology and not economics,
efficiency or free-markets.  While this tack has produced the Internet
suite of protocols, the de facto International Standard for Open
Systems, it has also created an atmosphere of intense in-breeding
which is overly sensitive to criticism and quite hardened against
outside influence.  Meanwhile, the outside world goes on about
developing economically viable and efficient networking technology
without the benefit of direct participation on the part of the
Internet.

The research community also appears to be spending a lot of its time
trying to hang onto the diminishing number of research dollars
available to it (one problem of being a successful researcher is
eventually your sponsors want you to be successful in other things).
Despite this, the research community actively shuns foreign technology
(e.g., OSI), but, inexplicably has not recently produced much
innovation in new Internet technology.  There is also a dearth of new
and nifty innovative applications on the Internet.  Business as usual
on the Internet is mostly FTP, SMTP and Telnet or Rlogin as it has
been for many years.  The most interesting example of a distributed
application on the Internet today is the Domain Name System, which is
essentially an administrative facility, not an end-user service.

The engineering community must receive equal blame in these matters.
While there have been some successes on the part of the engineering
community, such as those by Nagle, Jacobson and Karn mentioned above,
the output of the IETF, namely RFCs and corresponding implementations,
has been surprisingly low over its lifetime.

Finally, the Internet has become increasingly dependent on vendors for
providing implementations of Internet technology.  While this is no
doubt beneficial in the long-term, the vendor community, rather than
investing "real" resources when building these products, does little
more than shrink-wrap code written primarily by research assistants at
universities.  This has led to cataclysmic consequences (e.g., the
Internet worm incident, where Sendmail with "debug" command and all
was packaged and delivered to customers without proper consideration).
Of course, when problems are found and fixed (either by the vendor's
customers or software sources), the time to market with these fixes is
commonly a year or longer.  Thus, while vendors are vital to the
long-term success of Internet technology, they certainly don't receive
high marks in the short-term.

Recommendations

Short-term solutions (should happen by year's end):

In terms of hardware, the vendor community has advanced to the point
where the existing special-purpose technologies (Butterfly, NSSs) can
be replaced by off-the-shelf routers at far less cost and with
superior throughput and reliability.  Obvious candidates for upgrade
are both the NSFNET and ARPANET backbones.  Given the extended
unreliability of the mailbridges, the ARPA core is an immediate
candidate (even though the days of net 10 are numbered).

In terms of software, ALL devices in the Internet must be network
manageable.  This is becoming ever more critical when problems must be
resolved.  Since SNMP is the only open network management protocol
functioning in the Internet, all devices must support SNMP and the
Internet standard SMI and MIB.

Host implementations must be made to support the not-so-recent TCP
enhancements (e.g., those by Nagle, Jacobson and Karn) and the more
recent linemode TELNET.

The national and regional providers must coordinate to share network
management information and tools so that user problems can be dealt
with in a predictable and timely fashion.  Network management tools
are a big help, but without the proper personnel support above this,
the benefits can not be fully leveraged.

The Internet needs leadership and hands-on guidance.  No one is
seemingly in charge today, and the people who actually care about the
net are pressed into continually fighting the small, immediate
problems.

Long-term solutions:

To promote network efficiency and a free-market system for the
delivery of Internet services, it is proposed to switch the method by
which the network itself is supported.  Rather than a top-down
approach where the money goes from funding agencies to the national
backbone or regional providers, it is suggested the money go directly
to end-users (campuses) who can then select from among the network
service providers which among them best satisfies their needs and
costs.

This is a strict economic model: by playing with the full set of the
laws of economics, a lot of the second-order problems of the Internet,
both present and on the horizon, can be brought to heel.  The Internet
is no longer a research vehicle, it is a vibrant production facility.
It is time to acknowledge this by using a realistic economic model in
the delivery of Internet services to the community (member base).

When Internet sites can vote with their pocketbooks, some new
regionals will be formed; some, those which are non-performant or
uncompetitive, will go away; and, the existing successful ones will
grow.  The existing regionals will then be able to use their economic
power, as any consumer would, to ensure that the service providers
(e.g., the national backbone providers) offer responsive service at
reasonable prices.  "The Market" is a powerful forcing function: it
will be in the best interests of the national and regional providers
to innovate, so as to be more competitive.  Further, such a scheme
would also allow the traditional telecommunications providers a means
for becoming more involved in the Internet, thus allowing
cross-leverage of technologies and experience.

The transition from top-down to economic model must be handled
carefully, but this is exactly the kind of statesmanship that the
Internet should expect from its leadership.

-------

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Is Al Gore The Father of the Internet?^

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: Wed, 27 Sep 2000 21:07:13 GMT
Anne & Lynn Wheeler writes:
... misc refs from the '80s.

not quite from the 80s ... but Oct. 1990. The following is all the .com domains in the Oct. 1990 domain list
bhp.com.au cpg.com.au jsf.com.au miden.com.au misadl.com.au mits.com.au telecom.com.au netcom.ubc.ca inrs-telecom.uquebec.ca hsa.com.world.utoronto.ca 3com.com 3mail.3com.com esd.3com.com 3m.com a-t.com aaa.com aablue.com aai.com aati.com ab.com abb.com ims.abb.com abbott.com able.com acadch.com acc.com acci.com accurate.com accutec.com acd.com acs.com acw.com addamax.com adelie.com adi.com adobe.com ads.com advansoft.com advsci.com aer.com ag.com agfa.com ags.com aha.com aic.com aii.com aim.com ais.com ait.com aladdin.com alc.com alcoa.com al.alcoa.com algorists.com alisa.com alliant.com allied.com alphacdc.com alphalpha.com altos.com always.com amc.com amcc.com amd.com Amdahl.com ameritech.com amg.com amgen.com ami.com amis.com cadr.amis.com amix.com amoco.com nap.amoco.com trc.amoco.com amp.com ams.com amsinc.com mr.ams.com ana.com analytics.com andersen.com anzus.com aocgl.com apex.com apldbio.com apldmt.com apollo.com apple.com afsg.apple.com aux.apple.com cambridge.apple.com support.apple.com applicon.com applix.com aps.com arbortext.com arco.com ardent.com ariel.com arrent.com arris.com artel.com artisans.com asa.com asi.com ast.com astrosoft.com astute.com ateng.com athenanet.com atherton.com athsys.com ati.com atpal.com att.com astro.att.com homer.att.com attmail.com mercury.att.com astro.nj.att.com homer.nj.att.com mercury.nj.att.com phone.nj.att.com tempo.nj.att.com zoo.nj.att.com phone.att.com tempo.att.com uso.att.com zoo.att.com audiofax.com auratek.com aurora.com auspex.com auto-trol.com autodesk.com autoex.com autosys.com avatar.com avco.com aware.com axecore.com bally.com balr.com bang.com banyan.com barn.com barton.com basis.com bbc.com bbn.com bct.com bd.com bdb.com bdm.com bdt.com beartrack.com bec.com beckman.com bedrock.com bell-atl.com bellcore.com bae.bellcore.com base.bellcore.com cc.bellcore.com ctsd.bellcore.com ctt.bellcore.com wif.ctt.bellcore.com facs.bellcore.com garden.bellcore.com hmslab.bellcore.com leis.bellcore.com 
nps.bellcore.com osn.bellcore.com picst.bellcore.com bendix.com bfm.com bigsur.com binky.com biocad.com biogfx.com bioimage.com bis.com bison.com bitblocks.com bitstream.com bli.com bma.com boeing.com cfd.boeing.com bohica.com bony.com bostech.com boxhill.com bp.com brs.com bruker.com bse.com bsg.com bss.com bsw.com bti.com btr.com bull.com atpc.bull.com az05.bull.com cr.bull.com ladc.bull.com ma02.bull.com ma30.bull.com phx.bull.com pws.bull.com test.pws.bull.com uk22.bull.com bungi.com byte.com c3.com cac.com cadence.com cadnetix.com cadzooks.com cai.com calcomp.com camb.com camex.com capitol.com capmkt.com cayman.com ccc.com ccs.com ccur.com ocpt.ccur.com sdc.ccur.com tinton.ccur.com westford.ccur.com cdc.com ahse.cdc.com aqeng.cdc.com arh.cdc.com cdl.cdc.com cpg.cdc.com css.cdc.com dayfac.cdc.com dms.cdc.com eta.cdc.com iis.cdc.com svl.cdc.com teg.cdc.com udev.cdc.com cds.com ce.com ceco.com cei.com celestial.com cen.com centec.com cfc.com cfg.com cfi.com cgi.com challenger.com charcoal.com chestnut.com chevron.com chipcom.com chips.com chron.com ci.com cichlid.com cimage.com cims.com cimtek.com cinnet.com cirr.com cisco.com citib.com citicorp.com clarinet.com claris.com clarity.com clear.com clearpoint.com cli.com clsi.com cmc.com cmi.com cns.com coat.com code3.com codonics.com cogwheel.com coherent.com commodore.com comp.com compaq.com compass.com compmail.com compu.com compuserve.com compzrs.com comsat.com comsys.com conmicro.com consult.com consumers.com contel.com asc.contel.com asd.contel.com ctc.contel.com fss.contel.com imsd.contel.com iss.contel.com oasis.contel.com c2.oasis.contel.com irm.oasis.contel.com lab.oasis.contel.com kirk.lab.oasis.contel.com nestest3.oasis.contel.com ssar.oasis.contel.com sstest.oasis.contel.com wtp.contel.com control.com convergent.com convex.com coral.com corollary.com cos.com cpd.com cps.com cpu.com cray.com asso24.cray.com craycos.com crj.cray.com cri.com crlabs.com crsfld.com cs.com csc.com sd.csc.com starlab.csc.com 
csf.com csi.com csms.com csps.com cssinc.com cstreet.com cti.com ctron.com cts.com cummins.com cumulus.com cwi.com cyanamid.com cygnus.com darkside.com das.com data-io.com datacube.com datagram.com datapoint.com sat.datapoint.com davidsys.com dawntech.com dbaccess.com dbsoft.com ddi.com ddmi.com de.com dec.com dco.dec.com decision.com pa.dec.com deere.com dell.com delta.com deltam.com demott.com desktalk.com dg.com aos.dg.com apex.dg.com ceo.dg.com dle.dg.com irvine.dg.com mceo.dg.com oceo.dg.com rockville.dg.com rtp.dg.com sv.dg.com ts.dg.com webo.dg.com dhbrown.com dhl.com dialogic.com digibd.com dimed.com dimension.com dis.com disney.com dl.com dlogics.com dmc.com domain.com dorsai.com dpw.com dra.com draper.com drd.com dri.com dsa.com dsc.com dsi.com dss.com dtg.com dtseng.com dupont.com ei.dupont.com fsg.dupont.com wizards.dupont.com ecaard.com eci.com eda.com edg.com edge.com eds.com eeg.com efc.com efi.com eiffel.com eklektix.com ekrl.com elan.com electro.com elm.com em.com emulex.com encore.com marlboro.encore.com b7.marlboro.encore.com energetic.com entity.com epi.com epic.com episupport.com epoch.com equi.com ere-net.com ericsson.com es.com escom.com esi.com esl.com esri.com essnjay.com etg.com etinc.com everexn.com excelan.com execu.com expert.com eye.com eyring.com fac.com wdl.fac.com fai.com fairfax.com faxon.com fb.com fenris.com ferranti.com fibercom.com fibermux.com flood.com fluke.com flying-disk.com fmc.com ctc.fmc.com ford.com srl.ford.com fpr.com fps.com frame.com franklin.com franz.com frey.com frox.com fsc.com fsg.com ftp.com ether.ftp.com pro10.ftp.com pro4.ftp.com star1.ftp.com vines.ftp.com fubarsys.com futures.com fx.com gammalink.com gat.com gateway.com gaussian.com gca.com gcg.com gcm.com gd.com ge.com geac.com cho.ge.com crd.ge.com dab.ge.com dnet.ge.com lynn.ge.com med.ge.com nbc.ge.com gencorp.com gene.com genrad.com gensym.com geoworks.com syr.ge.com gist.com glaxo.com gli.com gmr.com gnail.com goldhill.com gordian.com gore.com 
gould.com grand.com graphon.com grebyn.com greystone.com grumbly.com grumman.com gryphon.com gsg.com gsidanet.com gsminc.com gss.com gtc.com gte.com gtech.com gtetele.com guru.com gvc.com hac.com 1e.hac.com autonet.hac.com cb.hac.com ccad.hac.com dnet.hac.com edsg.hac.com edsg1.hac.com edsg2.hac.com gm.hac.com gsg.hac.com gt.hac.com hls.hac.com hns.hac.com hops.hac.com hrl.hac.com hss.hac.com ieg.hac.com imagen.hac.com hackercorp.com msg.hac.com profs.hac.com rsg.hac.com scg.hac.com scg1.hac.com sixpack.hac.com hadron.com hamavnet.com harker.com harris-atd.com harris.com ccd.harris.com corp.harris.com csd.harris.com hdw.csd.harris.com mkt.csd.harris.com ssd.csd.harris.com epg.harris.com ess.harris.com r3com.ess.harris.com ic1z.harris.com lbp.harris.com sc.harris.com semi.harris.com daq.semi.harris.com decnet.semi.harris.com joisy.semi.harris.com mis.semi.harris.com mlb.semi.harris.com mtp.semi.harris.com nj.semi.harris.com rtp.semi.harris.com ss.harris.com harvardsq.com haus.com hawaii.com sets.hawaii.com hazeltine.com hcr.com herctec.com heurikon.com hewm.com hfconsulting.com hitachi.com hls.com hns.com hnsx.com hollander.com honda.com honeywell.com cfsat.honeywell.com cfsmo.honeywell.com crl.honeywell.com dasd.honeywell.com dsg.honeywell.com hi-csc.honeywell.com mavd.honeywell.com micro.honeywell.com src.honeywell.com ssavd.honeywell.com ssec.honeywell.com horizon.com hotline.com hp.com an.hp.com apollo.hp.com ch.apollo.hp.com aso.hp.com atl.hp.com avo.hp.com bbn.hp.com belgium.hp.com boi.hp.com bpo.hp.com bri.hp.com calgary.hp.com ce.hp.com cnd.hp.com col.hp.com corp.hp.com cos.hp.com ctgsc.hp.com cup.hp.com cv.hp.com desk.hp.com dtc.hp.com fc.hp.com gr.hp.com grenoble.hp.com gva.hp.com hpl.hp.com xpd.hpl.hp.com youdenpc.hpl.hp.com iag.hp.com lsid.hp.com lvld.hp.com mal.hp.com mayfield.hp.com mcm.hp.com msr.hp.com neth.hp.com nsr.hp.com pafc.hp.com ptp.hp.com rose.hp.com sc.hp.com scd.hp.com sdd.hp.com sde.hp.com sgp.hp.com sid.hp.com spd.hp.com spk.hp.com 
sqf.hp.com tky.hp.com uksr.hp.com vcd.hp.com wal.hp.com waterloo.hp.com yhp.hp.com hrb.com hri.com hsi.com htc.com hyperion.com i-sight.com i2wash.com iaps.com ibc.com ibm.com austin.ibm.com awdpa.ibm.com cambridge.ibm.com cary.ibm.com iinus1.ibm.com kingston.ibm.com fddi.kingston.ibm.com nic.kingston.ibm.com se.ibm.com ctp.se.ibm.com watson.ibm.com ibuki.com icad.com icc.com ice.com icg.com icl.com icom.com icon.com ics.com icus.com ide.com ids.com idsila.com idx.com ie.com iex.com ig.com iii.com ila.com dialnet.ila.com ileaf.com hq.ileaf.com imagen.com imagesys.com imatron.com imax.com imp.com imsl.com in.com incsys.com indcomp.com indetech.com indsys.com inference.com info.com infocomm.com informix.com infoserv.com ingr.com ingres.com ininx.com inland.com inmet.com inmos.com innovative.com inset.com instep.com integrity.com intek.com intel.com isc.intel.com iwarp.intel.com intellicorp.com com.intellicorp.com sc.intel.com intercon.com interlan.com unixlab.interlan.com interlink.com hq.interlink.com md.interlink.com intermec.com interop.com intuit.com intuitive.com ipoint.com iptech.com irby.com ironics.com irvine.com isa.com isc-br.com isc.com i88.isc.com ico.isc.com ima.isc.com ism.isc.com itx.isc.com iuk.isc.com ivy.isc.com sos.ivy.isc.com iscs.com isg.com isi.com island.com ispi.com isx.com itcmn.com itcorp.com ivc.com jar.com jbsys.com jbx.com jobsoft.com johngalt.com joiner.com jones.com jsoft.com jts.com jupiter.com jvc.com jyacc.com kai.com kaman.com kccs.com kenlaw.com kepler.com kesa.com kesmai.com kevex.com kew.com key.com kfw.com kinetics.com kithrup.com klgai.com kodak.com btc.kodak.com epps.kodak.com kohala.com kpc.com kroy.com ksr.com kurz-ai.com kyoto.com lachman.com lakesys.com lantron.com lawnet.com leedata.com legato.com leids.com lever.com lexicon.com lgc.com lia.com liant.com lighthouse.com lilink.com link.com litle.com litwin.com ljr.com llhs.com ln.com lockheed.com austin.lockheed.com decnet.lockheed.com gelac.lockheed.com laic.lockheed.com 
lasc-research.lockheed.com lasc.lockheed.com lisc.lockheed.com lmsc.lockheed.com profs.lockheed.com rscs.lockheed.com scf.lockheed.com space.lockheed.com stc.lockheed.com locus.com logicon.com logitech.com lorentzian.com lotus.com lrw.com ls.com lsil.com lsmi.com lti.com lucid.com barrnet.net.lucid.com lurnix.com lvsun.com lynx.com magic.com malamud.com marble.com bos.marble.com nyc.marble.com sjc.marble.com masa.com maspar.com mathworks.com matrix.com matrox.com maxim.com maya.com mbf.com mcc.com aca.mcc.com mccall.com cad.mcc.com cp.mcc.com cyc-west.mcc.com pkg.mcc.com sw.mcc.com mckee.com mckinsey.com mcs.com mdaeng.com mdc.com mdcbbs.com mdcgwy.mdc.com mdhc.mdc.com dnc.mdhc.mdc.com sap.mdc.com sl.mdc.com mdi.com medcom.com medicus.com medtronic.com megalo.com megasys.com meiko.com mentat.com mentor.com meridian.com meridiantc.com merit-tech.com merk.com meso.com metaphor.com metaware.com metrolink.com metron.com metter.com mga.com mgh.com mgi.com microplex.com microunity.com microvation.com microware.com migration.com mike.com mil3.com millipore.com mindcraft.com mips.com cdc.com.mips.com svl.cdc.com.mips.com mitech.com mitek.com mitel.com mks.com ml.com mmc.com airob.mmc.com bal.mmc.com cap.mmc.com den.mmc.com mml.mmc.com orl.mmc.com arpa.orl.mmc.com vbg.mmc.com mmdf.com mmm.com mmmg.com mmc.mmmg.com serc.mmm.com mmsac.com moldev.com monsanto.com moore.com mop.com morgan.com morningstar.com moscom.com mot.com aieg.mot.com comm.mot.com corp.mot.com csd.mot.com ecg.mot.com gss.mot.com mcd.mot.com austin.mcd.mot.com chg.mcd.mot.com phx.mcd.mot.com sjc.mcd.mot.com urbana.mcd.mot.com mcrc.mot.com rtsg.mot.com sps.mot.com mps.com mpx.com mq.com msc.com msdc.com msdrl.com mti.com mtxinu.com murphy.com mv.com myrias.com mystic.com naitc.com nas.com natinst.com nbi.com ncd.com ncr.com ncube.com ndl.com nds.com nec.com dl.nec.com bsd.dl.nec.com csl.dl.nec.com ssd.dl.nec.com sj.nec.com net.com netagw.com netcs.com netlabs.com netsys.com network.com netx.com neural.com 
next.com nirvonics.com nissan.com nixdorf.com nli.com nls.com nma.com northrop.com cirm.northrop.com dsd.northrop.com nad.northrop.com cam.nad.northrop.com eng.nad.northrop.com qa.nad.northrop.com nrtc.northrop.com novell.com npd.novell.com prv.npd.novell.com router.npd.novell.com slc.npd.novell.com prv.novell.com vms.novell.com wc.novell.com nrc.com nrc.com.nrc.com nsc.com nsi.com ntdms10.com nth.com nti.com nw.com nynexst.com mac.nynexst.com nyt.com nytel.com object.com objy.com ocs.com octel.com octopus.com odb.com odetics.com odi.com odyssey.com olivetti.com atc.olivetti.com orc.olivetti.com omnet.com omni.com omnicomp.com omron.com ontek.com ontologic.com optigfx.com opustech.com ora.com oracle.com ar.oracle.com at.oracle.com au.oracle.com be.oracle.com ca.oracle.com ch.oracle.com cl.oracle.com de.oracle.com dk.oracle.com es.oracle.com fi.oracle.com fr.oracle.com gr.oracle.com hk.oracle.com ie.oracle.com it.oracle.com jp.oracle.com kr.oracle.com mx.oracle.com my.oracle.com nl.oracle.com no.oracle.com nz.oracle.com pt.oracle.com se.oracle.com sg.oracle.com th.oracle.com uk.oracle.com us.oracle.com orainc.com oresoft.com ori-cal.com os.com osa.com osc.com ox.com aa.ox.com amex.ox.com cboe.ox.com dwl.ox.com oxford.com ny.ox.com oz.com pacbell.com dublin.pacbell.com fm.pacbell.com ns.pacbell.com orange.pacbell.com sf370.pacbell.com smds.pacbell.com srv.pacbell.com sysengr.pacbell.com pacer.com pacesetter.com pae.com panasonic.com ncrl.panasonic.com panda.com paradigm.com paradise.com paradyne.com paragon.com parallax.com parcplace.com parvenu.com pbi.com pcc.com pcr.com pcs.com pcssc.com pda.com pdi.com pdx.com pegasus.com pei.com peregrine.com performix.com pergamon.com perot.com persoft.com petra.com pharlap.com pharmacia.com pharos.com phase2.com philips.com picker.com pinnacle.com plexus.com plumhall.com plus5.com pm.com pmp.com point4.com polaroid.com polygen.com pooh.com portal.com posix.com postimage.com powerminds.com pra.com practic.com prc.com 
gis.prc.com prenhall.com prepnet.com pghsun.prepnet.com prime.com prisma.com prlgx.com probtek.com process.com progcons.com progress.com propress.com prospect.com protec.com proteon.com pc.proteon.com proto.com psg.com psi.com ca.psi.com reston.psi.com psitech.com troy.psi.com psocg.com psp.com pti.com ptl.com pubnet.com pubserv.com pw.com pwfl.com pyramid.com pzbaum.com qed.com qms.com qti.com quad.com qualcomm.com quantum.com quest.com quick.com quintus.com quorum.com quotron.com rabbit.com rac.com radius.com raidernet.com rasna.com rasterops.com rational.com ray.com rdc.com rdl.com ready.com eng.ready.com reasoning.com remtech.com resource.com resumix.com retix.com reuter.com rfengr.com rga.com ricoh.com risc.com anaheim.risc.com rjl.com robec.com rohmhaas.com rok.com las.aero.rok.com pro.aero.rok.com cadcam.rok.com cdc.rok.com cr.rok.com dallas.rok.com nb.rok.com roses.rok.com com.roses.rok.com ssme.rok.com rosemount.com rosetta.com ross.com rpal.com rpal.com.rpal.com elements.rpal.com narnia.rpal.com rsa.com rsi.com rss.com rsw.com rtech.com s3.com sabbagh.com saber.com sadtler.com sae.com sage.com sagepub.com saic.com ast.saic.com cseic.saic.com dayton.saic.com sasig.saic.com tucson.saic.com salestech.com samsung.com solbourne.com.samsung.com sanders.com sarnoff.com sas.com sbc.com texbell.sbc.com sbi.com sbiny.com sbs.com sc-scicon.com sca.com scc.com sccsi.com sceard.com sceptre.com sch.com sci.com sco.com scs.com sctc.com scubed.com scum.com sda.com sdi.com sea.com segue.com sei.com seiden.com semaphore.com sequent.com sequor.com sgi.com shearson.com shell.com shiva.com shownet.com si.com sia.com siemens.com hgs.siemens.com psi.siemens.com sc1.siemens.com sc2.siemens.com scr.siemens.com scscpg.siemens.com sead.siemens.com sgi.siemens.com shi.siemens.com sme.siemens.com sml.siemens.com sigma.com silma.com silvlis.com simpact.com sky.com slb.com asc.slb.com ase.slb.com asl.slb.com ate.slb.com bl.ate.slb.com sj.ate.slb.com uk.ate.slb.com cad.slb.com 
aa.cad.slb.com bl.cad.slb.com eps.slb.com geco.slb.com st.geco.slb.com hds.slb.com scr.slb.com sdr.slb.com sinet.slb.com sjo.sinet.slb.com slcs.slb.com lispm.slcs.slb.com smr.slb.com spar.slb.com spt.slb.com verisys.slb.com slc.com sli.com slv.com smc.com smithkline.com sms.com snide.com sns.com sobeco.com softest.com soi.com solbourne.com solutions.com sony.com sfc.sony.com smsc.sony.com sophia.com spacesoft.com sparc.com sparta.com spartacus.com spatial.com spdcc.com specialix.com spectra.com splat.com sprint.com sps.com sq.com sqnt.com squibb.com sr.com sra.com sri.com ai.sri.com cam.sri.com csl.sri.com ides.csl.sri.com css.sri.com erg.sri.com gec.sri.com isl.sri.com istc.sri.com itstd.sri.com nisc.sri.com wdc.sri.com ssc.com ssdl.com ssesco.com ssi.com ssssc.com ssw.com stardent.com stargate.com starnet.com starnine.com starstech.com statsci.com std.com steel.com stellar.com stephsf.com stepstone.com sterling.com stingray.com stortek.com stratus.com cac.stratus.com diag.stratus.com hw.stratus.com hwsrv.stratus.com mae.stratus.com mfg.stratus.com mis.stratus.com mktg.stratus.com pubs.stratus.com sqa.stratus.com sw.stratus.com swdc.stratus.com test.stratus.com ts.stratus.com stride.com sts.com stsusa.com stx.com su.com sumitomo.com sun.com sunquest.com sunrise.com sunriver.com surfsoft.com sware.com sybase.com symbolics.com scrc.symbolics.com symlab.com symult.com synaptics.com synchrods.com synnet.com synopsys.com synoptics.com synthesis.com syntrex.com sysadmin.com syssoft.com t3plus.com net.t3plus.com tandem.com dsg.tandem.com expand.tandem.com scpd.tandem.com transfer.tandem.com tandy.com tartan.com tasc.com taumet.com tcc.com tce.com tcs.com tcsc.com tdi.com teamtech.com techbook.com technicon.com tegra.com tek.com ens.tek.com teknowledge.com telebit.com telematics.com teleos.com telesoft.com tellabs.com teltone.com tera.com teramicro.com teraplex.com tetra.com tg.com tgs.com tgv.com thalatta.com think.com dialnet.think.com thomas.com thundercat.com 
thyrsus.com ti.com tic.com csc.ti.com tis.com ba.tis.com la.tis.com tmc.com tmmnet.com tnt.com toad.com tol-ed.com topologix.com touch.com trace.com transact.com transarc.com trav.com triangle.com tripos.com truevision.com trw.com bmd.trw.com coyote.trw.com decnet.trw.com demo.trw.com dsd.trw.com emc.trw.com etdesg.trw.com fp.trw.com inc.trw.com ind.trw.com ulana.ind.trw.com nafb.trw.com nba.trw.com rc.trw.com applenet.rc.trw.com decnet.rc.trw.com sdd.trw.com sedd.trw.com spf.trw.com stg.trw.com test.trw.com ulana.trw.com tsm.com tss.com ttank.com ttc.com tti.com twg.com twinsun.com twwells.com txt.com tymnet.com archer.tymnet.com tynan.com ub.com uci.com ueci.com ultimate.com ultra.com unicom.com uniface.com unifi.com unify.com unipress.com uniscope.com unisoft.com unisyn.com unisys.com adms-rad.unisys.com bluebell.unisys.com wwt.bluebell.unisys.com cam.unisys.com dc.cam.unisys.com nisd.cam.unisys.com sin.cam.unisys.com culv.unisys.com dev.unisys.com gvl.unisys.com isf.unisys.com ksp.unisys.com mcl.unisys.com prc.unisys.com reston.unisys.com rtc.reston.unisys.com stars.reston.unisys.com rmtc.unisys.com rosslyn.unisys.com stars.rosslyn.unisys.com rsvl.unisys.com slc.unisys.com sm.unisys.com sp.unisys.com elric.sp.unisys.com planet8.sp.unisys.com tredydev.unisys.com unocal.com ur-guh.com usi.com uss.com uswest.com utc.com utoday.com uucom.com uujobs.com val.com validgh.com vcg.com velvet.com verbatim.com verdix.com verifone.com veritas.com verity.com versatec.com versoft.com viar.com vicom.com vicor.com vicorp.com viewlogic.com vine.com visix.com visoftware.com vista.com vistanm.com vistatech.com visus.com vitalink.com vitec.com vmp.com volt.com vortex.com vpharm.com vpl.com vse.com vsg.com vsi.com wa.com waii.com wallich.com walney.com wang.com warburg.com wcscnet.com wdi.com wds.com wec.com weitek.com wellfleet.com wesco.com westmark.com wgate.com wingman.com wj.com wlk.com worlds.com worldspan.com wpg.com wri.com wrkgrp.com wrq.com wrs.com wsg.com ww.com wyse.com 
xanadu.com xavax.com xed.com xei.com xerox.com xeroxep.com europarc.xerox.com osbunorth.xerox.com parc.xerox.com wrc.xerox.com xait.xerox.com xsis.xerox.com xidak.com xlnt.com xopusw.com xtech.com xwind.com xylogics.com i.xylogics.com xyplex.com zds.com zehntel.com zipcode.com zone1.com zst.com incom.de softcom.dk dycom.fi salcom.fi intracom.gr sixcom.it oucom.osaka-u.ac.jp hanscom.af.mil centcom.mil eucom.mil j6.eucom.mil pacom.mil socom.mil parcom.nl casecom.co.nz telecom.co.nz corp.telecom.co.nz paccom.net.nz waikato.paccom.net.nz incomsec.org manchester-computing-centre.ac.uk cms.manchester-computing-centre.ac.uk graphics.manchester-computing-centre.ac.uk local-service.manchester-computing-centre.ac.uk british-telecom.co.uk te.british-telecom.co.uk national-computing-centre.co.uk

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Competitors to SABRE? Big Iron

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Competitors to SABRE?  Big Iron
Newsgroups: alt.folklore.computers
Date: Thu, 28 Sep 2000 04:24:03 GMT
"David C. Barber" writes:
One thing about SABRE, they were known for having the fastest big iron available. Though when they went to IIRC S/360 mod 91 and S/360 mod 195, I wonder how much they had to reprogram to use those machines properly.

Anyone know what the big reservation systems are running on today?


In the late '60s they moved to PARS ... which became ACP (airline control program) ... which morphed into TPF (transaction processing facility). Besides airlines, a number of financial operations are also implemented on TPF (see the IBM web pages below).

misc. ref:
https://www.garlic.com/~lynn/96.html#29
https://www.garlic.com/~lynn/99.html#24
https://www.garlic.com/~lynn/99.html#103
https://www.garlic.com/~lynn/99.html#152
http://www.sebek.co.uk/whatis.htm
https://web.archive.org/web/20010210031953/http://www.sebek.co.uk/whatis.htm
http://www.s390.ibm.com/products/tpf/tpflnks.htm
http://www.tpfug.org/
http://smtp.vsoftsys.com/Vssiprod.htm
https://web.archive.org/web/20010405175219/http://smtp.vsoftsys.com/Vssiprod.htm
http://www.webtravelnews.com/344.htm
http://www.Amdahl.com/doc/products/asg/links.htm

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Is a VAX a mainframe?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is a VAX a mainframe?
Newsgroups: alt.folklore.computers
Date: Thu, 28 Sep 2000 15:06:06 GMT
Lars Poulsen writes:
A cluster is quite a different beast from a multiprocessor. When we say multi-processor, we generally mean a tightly coupled system where two or more processors share the same physical memory, and in the classical case run a single operating system image, with each processor grabbing ready-to-run processes from the same scheduler run queue. Such systems have been around for many years, as you note in the paragraph that I have omitted. But a cluster is something else, and I first encountered it in VAX/VMS around 1984: A loosely coupled group of machines, running separate system images, but often sharing file systems through dual-access disk controllers.

the 360 disk controllers started out "twin-tailed". Various OS components took advantage of it for loosely-coupled (non-shared-memory) configurations. It was also used for OS/360 MP configurations, since the standard 360 SMP only had integrated shared memory; shared I/O had to be simulated by having devices connected to the 360 channels specific to each processor.

It was also used by PARS/ACP ... for the airline control program (recent discussion in this NG on sabre)

The disk controller in the 370 line (3830) expanded this support so that disks could be accessed via eight different paths. The 360 loosely-coupled support utilized reserve/release ... i.e. the whole device was reserved for a period and then released.

For ACP there was an enhancement to the 3830 disk controller that supported fine-grain locking semantics in the controller itself (early to mid-70s) ... supporting eight machine clusters.

For "HONE" (
https://www.garlic.com/~lynn/2000c.html#30 ) a simulated compare&swap CCW sequence was used. Standard disk I/O sequences were non-interruptable on a device. IBM "CKD" disk devices included I/O operations that compared equal/high/low/etc on disk and did conditional "branches" (TICs) in the I/O program (CCWs). HONE used the compare&swap sequence to build a large "single-system-image" complex in the '78 time-frame supporting the field (salesmen, support, etc).

When TPF (ACP follow-on) finally supported SMP multiprocessing, it broke its eight processor limitation. These days, TPF can be found with eight machine clusters where each machine might be a ten (or more) processor SMP (yielding 80 or more processors in the cluster).

In the late '70s, my wife also spent a year in POK in charge of mainframe "loosely-coupled" architecture. In the late '80s, my wife and I ran a skunk-works that resulted in the HA/CMP (High Availability Cluster Multi-Processing) product (
https://www.garlic.com/~lynn/95.html#13 )

misc. refs:
https://www.garlic.com/~lynn/2000e.html#21
https://www.garlic.com/~lynn/2000e.html#6
https://www.garlic.com/~lynn/2000c.html#21
https://www.garlic.com/~lynn/2000c.html#30
https://www.garlic.com/~lynn/2000c.html#45
https://www.garlic.com/~lynn/2000c.html#47
https://www.garlic.com/~lynn/2000b.html#38
https://www.garlic.com/~lynn/2000.html#31
https://www.garlic.com/~lynn/99.html#71
https://www.garlic.com/~lynn/99.html#88
https://www.garlic.com/~lynn/99.html#89
https://www.garlic.com/~lynn/99.html#149
https://www.garlic.com/~lynn/98.html#35a
https://www.garlic.com/~lynn/96.html#15
https://www.garlic.com/~lynn/93.html#26

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Is Tim Berners-Lee the inventor of the web?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Tim Berners-Lee the inventor of the web?
Newsgroups: alt.folklore.computers
Date: Fri, 29 Sep 2000 15:27:02 GMT
jones@cs.uiowa.edu (Douglas W. Jones,201H MLH,3193350740,3193382879) writes:
So, he invented the web, but without him, the web would have happened anyway, except without homage to SGML. Perhaps we'd all be writing our web pages in a TeX or Troff based enhancement of gopher's index notation.

and its precursor GML ...originally implemented in CMS' "script" text processor ... and as an aside, note that CERN (where he worked) was a long-time CMS shop ...since the early '70s (i.e. extensive use of GML in the shop for possibly 20 years or more at the time the work happened)

misc. ref:
https://www.garlic.com/~lynn/2000e.html#0
https://www.garlic.com/~lynn/2000e.html#1
https://www.garlic.com/~lynn/2000d.html#30
https://www.garlic.com/~lynn/99.html#28
https://www.garlic.com/~lynn/99.html#197

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

older nic cards

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: older nic cards
Newsgroups: alt.folklore.computers
Date: Fri, 29 Sep 2000 17:46:57 GMT
guy-jin writes:
instead of the usual cable (bnc?) or RJ-45 plugs, it has what looks kinda like a serial port, but larger, and female. it has 15 pins in a D formation. i have been told that this is an 'aui' interface, but i want to make sure. where would one find an adapter from this kind of connector to rj-45?

aui ... "thicknet", original ethernet ... until thinnet/bnc came along ... then in the mid to late '80s twisted pair started appearing (first time I used it was with synoptics twisted pair support).

i actually have some aui/bnc & aui/rj45 adapters lying around. The problem I found with some of the AUI adapters on some AUI machines is that the flange & male pins were not deep enough to mate with the female plug on some machines (i.e. the female plug is slightly recessed and the body of the adapter makes contact with the surrounding housing, preventing the plug from being fully inserted). I'm not familiar with that particular laptop ... so can't say if it would be a problem or not. I know that I had the problem on various PCs and on an SGI Indy.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Test and Set: Which architectures have indivisible instructions?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Test and Set: Which architectures have indivisible instructions?
Newsgroups: comp.arch,comp.sys.m68k
Date: Sat, 30 Sep 2000 13:19:10 GMT
nygren writes:
I had a couple of questions about hardware support for shared memory multi-processing:

1) Which processors have an indivisible instruction like the 680X0's test and set (TAS)?


360s had T&S (360 mnemonic TS) in the '60s ... at least for the 360/65 & 360/67 SMPs. Charlie did extensive work on fine-grain locking for the SMP version of CP/67 on the 360/67 and came up with the idea of compare&swap in the early '70s (the name compare&swap was selected because CAS are Charlie's initials).

There was push back getting CAS into the 370 architecture ... the challenge from the people that "owned" the 370 architecture was to formulate programming models where CAS was used in single processor environments. A description of using CAS for multi-threaded applications running enabled for interrupts on single processor machines was eventually devised. The architecture group expanded CAS to two instructions ... one that operated on a single word and one that operated on a double word ... and changed the actual mnemonics to CS & CDS (compare&swap and compare double & swap). The two instructions were added to the 370 architecture in the early '70s.

programming examples for multithreaded operation
http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/A%2e6%2e2?SHELF=

current instruction description
http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/7%2e5%2e23

misc. refs:
https://www.garlic.com/~lynn/93.html#9
https://www.garlic.com/~lynn/93.html#14
https://www.garlic.com/~lynn/93.html#22
https://www.garlic.com/~lynn/99.html#88
https://www.garlic.com/~lynn/99.html#89

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Al Gore, The Father of the Internet (hah!)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore, The Father of the Internet (hah!)
Newsgroups: alt.folklore.computers,talk.politics.misc,alt.fan.rush-limbaugh
Date: Sat, 30 Sep 2000 13:32:29 GMT
djim55@datasync.com (D.J.) writes:
Hmmm. I wonder if a PC computer can do ocean current modeling? Doubtful actually.

example of large array of workstation processors providing teraflop for various modeling efforts
https://www.garlic.com/~lynn/2000c.html#77
https://www.garlic.com/~lynn/2000d.html#2
https://www.garlic.com/~lynn/2000d.html#3

many existing "supercomputers" are workstation or PC processor chips hooked together in large arrays ... using a variety of interconnect.

random other ref:
https://www.garlic.com/~lynn/2000e.html#11
It isn't even clear how much of the NREN funding was actually spent (I remember various corporations commenting about being asked to donate commercial products to participate in the NIIT "test bed"). Also, HPCC supercomputers started to shift to being large arrays of workstation &/or PC processor chips (workstations & PCs enabled the consumer online market as well as providing the basis for today's supercomputers).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

OCF, PC/SC and GOP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OCF, PC/SC and GOP
Newsgroups: alt.technology.smartcards
Date: Sat, 30 Sep 2000 15:21:07 GMT
stevei_2000 writes:
Scott ... I don't know whether you know the history of Unix or not ... the whole idea of Unix was to have a standard operating system so that you could write portable code in C over any hardware running Unix ... As you know, Unix was split by the IBM alliance into multiple flavours...

i think there was a split between AT&T Unix and BSD Unix ...

IBM commissioned a PC Unix based on AT&T System III early on for the IBM/PC. IBM then did an AT&T S5V2/3(?) with lots of modifications for the PC/RT (aka AIX2) and also "AOS" (BSD 4.3) for the same machine. IBM also offered a "Locus" (UCLA) port to the 370 and PS/2 (i.e. AIX/370 and AIX/PS2).

Then there was work by SUN and AT&T to merge AT&T Unix and BSD Unix for System V Release 4. Then there was the OSF work (HP, DEC, IBM, etc, including picking up some stuff from the CMU Mach & UCLA Locus work).

misc. ref:
http://www.ee.ic.ac.uk/docs/software/unix/begin/appendix/history.html
http://www.be.daemonnews.org/199909/usenix-kirk.html
https://web.archive.org/web/20010222211622/http://www.be.daemonnews.org/199909/usenix-kirk.html
http://www.byte.com/art/9410/sec8/art3.htm
http://www.albion.com/security/intro-2.html
http://www.wordesign.com/unix/coniglio.htm
https://web.archive.org/web/20010222205957/http://www.wordesign.com/unix/coniglio.htm
http://www.fnal.gov/docs/UNIX/unix_at_fermilab/htmldoc/rev1997/uatf-5.html
http://www.sci.hkbu.edu.hk/comp-course/unix/

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Is Al Gore The Father of the Internet?^

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: Sat, 30 Sep 2000 15:41:01 GMT
Anne & Lynn Wheeler writes:
Many of the tcp/ip implementations & products in the Interop '88 & NSFNET1 era were based on BSD tcp/ip software. A lot of the consideration at the time was about the BSD code being free from the AT&T licensing (not government licensing). I remember the BSD "free"

much of the internet/tcp/ip software deployed during the 80s & 90s was based on the BSD network code.

Little snippet that I ran across regarding the gov/arpa involvement in supporting the dominant infrastructure deployed for the internet
He said (paraphrased) that every DARPA meeting ended up the same, with the Military coming in and giving CSRG (at UCB, the group that worked on BSD) a stern warning that they were to work on the Operating System, and that BBN will work on the networking. Every time, Bob Fabry, then the adviser of CSRG, would "Yes them to death" and they'd go off and just continue the way they were going. Much to the frustration of the DARPA advisory board.

... also
The next Part in the Saga of CSRG is very important, as it led up to the lawsuit and the creation of the "future" BSD Lite. The release of the Networking Release 1, under what is now known as the Berkeley license, because of the need to separate the networking code from AT&T owned code. After that, the base system was still developed, The VM

ref:
http://www.be.daemonnews.org/199909/usenix-kirk.html
https://web.archive.org/web/20010222211622/http://www.be.daemonnews.org/199909/usenix-kirk.html

Interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Vint Cerf and Robert Kahn and their political opinions

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vint Cerf and Robert Kahn and their political opinions
Newsgroups: alt.folklore.computers
Date: Sat, 30 Sep 2000 19:28:47 GMT
"Joel C. Ewing" writes:
DARPA, NSF, and DoE were all involved, by Congressional mandate & funding. But the real impetus that enabled the explosive growth into what is today known as the Internet was the opening up of the network to private for-profit companies in 1991. Because the network was

NSFNET backbone had an AUP (acceptable use policy) that didn't permit for-profit messages. For-profit companies could send messages over NSFNET ... the messages just couldn't be associated with for-profit activities.

There were big portions of the internet in the '80s that didn't have the same commercial AUP restrictions as the NSFNET backbone with regard to commercial uses.

I've conjectured that the NSFNET backbone's status as a non-profit entity not carrying "for profit" messages ... was as much an issue of commercial companies being able to supply stuff to NSFNET and take a tax write-off on the donation.

There was lots of stuff that DARPA, NSF, DOD, etc had been funding that was in fact being used for commercial purposes (despite statements like: NYSERNet, Inc. recognizes as acceptable all forms of data communications across its network, except where federal subsidy of connections may require limitations).

The BSD networking code ... which was used as a base for a large part of all the internet software delivered on workstations & PCs ... and which did as much as anything to fuel the internet ... is an example.

Not only that ... CSRG was being explicitly directed by DARPA not to be doing networking software. If they had followed DARPA's direction, I believe the explosive growth in the number of workstations & PCs on the internet in the '80s & '90s would have been significantly curtailed.

An issue with regard to the BSD networking code ... wasn't whether it had been done by a non-profit university using darpa &/or other gov. funding ... but whether there was any AT&T UNIX licensing issues.

misc. refs:
http://www.be.daemonnews.org/199909/usenix-kirk.html
https://web.archive.org/web/20010222211622/http://www.be.daemonnews.org/199909/usenix-kirk.html
https://www.garlic.com/~lynn/2000c.html#26
https://www.garlic.com/~lynn/2000e.html#5

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Is Tim Berners-Lee the inventor of the web?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Tim Berners-Lee the inventor of the web?
Newsgroups: alt.folklore.computers
Date: Sun, 01 Oct 2000 22:16:43 GMT
Howard S Shubs writes:
And e-mail dates from when? How about the first actual file transfer, not necessarily meaning ftp?

1) Stipulated that an inter-net is a network of networks, argument?
2) Stipulated that the networks may be of different types.
3) So an inter-net which is made up of heterogeneous networks or even of different types of machines, would still be an internet.
4) "Internet" is not a trade mark, as far as I know.
5) "World Wide Web" is not a trade mark, as far as I know.
6) TCP/IP is not required for a network.
7) Therefore, requiring that -the- internet -has- to have TCP/IP to exist is faulty.
8) Therefore, the internet existed before 1983.

I'd love to read of information disproving this.


networking existed prior to 1983.

big thing that tcp/ip brought was IP ... or "internet protocol" ... along with gateways to interconnect multiple different networks.

About the time of the conversion to tcp/ip ... the arpanet had about 200 nodes in a pretty homogeneous network ... and the internal network had nearly 1000 mainframe network nodes. During nearly the entire life of the arpanet ... the internal network was larger than the arpanet. One of the characteristics of the internal network (in contrast to the ARPANET) ... was that effectively every network node implemented a gateway function (just about from its start) ... allowing a diversity of different networking nodes to be attached to the internal network (drastically simplifying attachment of additional machines & growing the configurations).

Several factors contributed to the explosion in the "internet" after the 1983 conversion:

1) support of internetworking protocol
2) emerging local area network technology that could be integrated into the "internet" via the IP protocol layer
3) relatively powerful computing devices at inexpensive prices in the form of workstations and PCs
4) BSD (and other) tcp/ip (like MIT PC) support that was relatively easily ported to a variety of workstations and PCs.

The internal network of mainframe machines had grown to 2000 nodes by 1984 ... but with the availability of all those LAN-based workstations and PCs that had internetworking access to the internet ... the number of internet nodes passed the number of internal network mainframe nodes sometime in the mid-80s (i.e. the number of LAN connected PCs and workstations as internet nodes exceeded the number of internal network mainframe nodes).

The conversion from a homogeneous networking environment to a heterogeneous networking environment with gateways ... and supporting the emerging LAN technology as part of that heterogeneity ... was one of the most significant things for the "network" in the '80s.

Given the limited size of the ARPANET and the requirement for relatively homogeneous operation at the protocol level (i.e. no "internetworking" capability) ... the internal network with its gateway function in just about every major node ... more closely represented a world-wide "internet" ... than the pre-83 ARPANET ever did.

misc refs:
https://www.garlic.com/~lynn/99.html#112
https://www.garlic.com/~lynn/2000e.html#13
https://www.garlic.com/~lynn/2000e.html#14
https://www.garlic.com/~lynn/2000e.html#18
https://www.garlic.com/~lynn/2000e.html#19
https://www.garlic.com/~lynn/2000e.html#23
https://www.garlic.com/~lynn/2000e.html#26
https://www.garlic.com/~lynn/2000e.html#28
https://www.garlic.com/~lynn/2000e.html#29
https://www.garlic.com/~lynn/internet.htm

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key https://www.garlic.com/~lynn/

Cerf et.al. didn't agree with Gore's claim of initiative.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cerf et.al. didn't agree with Gore's claim of initiative.
Newsgroups: alt.fan.rush-limbaugh,seattle.politics,alt.politics,alt.folklore.computers
Date: Wed, 04 Oct 2000 14:26:58 GMT
"Mike" writes:
If Bush wanted to "roadblock" algore, he would have vetoed the bill. Asking algore to justify the bill (if your partisan spin is correct) is not "roadblocking" it.

BTW, Clave claims Bush merely "rubberstamped" the bill. You two are so busy agreeing with each other you failed to notice you contradicted each other. Apparently your "well accepted fact" is not known by your compatriots.


note news reporting from the period:
https://www.garlic.com/~lynn/2000d.html#70

appears that both Bush and Gore were proposing some forms of advanced funding ... and they appear to have needed to work out various differences.

Also with respect to previous posts about congress removing the "for profit" restriction on the internet ... in actuality the only part of the internet that appears to have had that restriction was the limited number of NSFNET backbone nodes. The other networks' acceptable use policies primarily reference such restrictions in the context that data might have to flow thru such a restricted barrier when travelling to other (inter)networks.

given that many of these networks heavily involved non-profit institutions, universities, government funding ... etc ... there seems to have been something unique about the federal funding for these limited number of NSFNET backbone nodes ... possibly even a congressional mandate by whoever in congress was backing NSFNET. If that is true ... then any case of congress removing restrictions ... was likely a case of congress giveth and congress taketh away (i.e. they may not have been doing something so noble ... but cleaning up a mess that they themselves may have caused).

misc. other refs:
https://www.garlic.com/~lynn/2000e.html#19
https://www.garlic.com/~lynn/2000e.html#26
https://www.garlic.com/~lynn/2000e.html#28
https://www.garlic.com/~lynn/2000e.html#29

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

Tektronics Storage Tube Terminals

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tektronics Storage Tube Terminals
Newsgroups: alt.folklore.computers
Date: Wed, 04 Oct 2000 14:30:37 GMT
"David C. Barber" writes:
I remember using some Tektronics raster storage tube graphic terminals in the mid to later 70s. They had a green screen that an image could be "written" into that would glow brightly until "erased". There was also a "write-through" mode that wasn't permanent. When used as an ASCII terminal,

IBM also had a special version of them that attached as a "co-screen" to a 3270 terminal ... that effectively allowed a "cheap" 2250/3250 ... and would run at channel speeds (or at least 3270 controller speeds, 640kbytes/sec ... that is bytes not bits).

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key https://www.garlic.com/~lynn/

War, Chaos, & Business

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: War, Chaos, & Business
Newsgroups: alt.folklore.military
Date: Thu, 05 Oct 2000 03:54:41 GMT
I admit to being quite biased, and I was very pleased when the following URLs were forwarded to me:
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/
http://www.defense-and-society.org/FCS_Folder/boyd_thesis.htm
https://web.archive.org/web/20010722090426/http://www.defense-and-society.org/FCS_Folder/boyd_thesis.htm

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

War, Chaos, & Business (web site), or Col John Boyd

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: War, Chaos, & Business (web site), or Col John Boyd
Newsgroups: alt.folklore.military
Date: Thu, 05 Oct 2000 14:23:37 GMT
Colin Campbell writes:
So?

on the web site is a pointer to the Time article, from when the Time article appeared (in the early '80s):

US Defence Spending: Are Billions being Wasted:
http://www.infowar.com/iwftp/cspinney/spinney.htm
https://web.archive.org/web/20010122080100/http://infowar.com/iwftp/cspinney/spinney.htm

A call to Chuck suggested that John should be called.

It was my introduction to John Boyd and his Patterns of Conflict and Organic Design for Command and Control. A later copy of Organic Design for Command and Control is also at:
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/

At the time of desert storm, US News & World Report carried an article on John ... and the "fight to change how america fights" ... making reference to the majors and colonels of the time as john's "jedi knights". A more detailed description

"Thinking like marines"

is also on the above web site.

There are several short synopses of John ... the following written by Dr. Hammond, Director of the "Center for Strategy and Technology" at the Air War College.
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/

from the above reference ... a quote from Krulak, commandant of the marine corps
As I write this, my mind wanders back to that morning in February, 1991, when the military might of the United States sliced violently into the Iraqi positions in Kuwait. Bludgeoned from the air nearly round the clock for six weeks, paralyzed by the speed and ferocity of the attack, the Iraqi army collapsed morally and intellectually under the onslaught of American and Coalition forces. John Boyd was an architect of that victory as surely as if he'd commanded a fighter wing or a maneuver division in the desert. His thinking, his theories, his larger than life influence, were there with us in Desert Storm. He must have been proud of what his efforts wrought.

...
http://www.au.af.mil/au/awc/awcgate/awccsat.htm

also on the web site a pointer to
http://www.defense-and-society.org/FCS_Folder/boyd_thesis.htm
https://web.archive.org/web/20010722090426/http://www.defense-and-society.org/FCS_Folder/boyd_thesis.htm

"From Air Force Figher Pilot to Marine Corps Warfighting: Colonel John Boyd, His Theories on War, and their Unexpected Legacy"

as well as a wealth of other material on Col. John Boyd.

.......

my own reference to john (from the past)
https://www.garlic.com/~lynn/94.html#8

& a post in this ng not too long ago:
https://www.garlic.com/~lynn/2000c.html#85

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

War, Chaos, & Business (web site), or Col John Boyd

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: War, Chaos, & Business (web site), or Col John Boyd
Newsgroups: alt.folklore.military
Date: Thu, 05 Oct 2000 15:44:55 GMT
misc. other
http://www.belisarius.com/modern_business_strategy/mie/mie_33.htm
https://web.archive.org/web/20020217191358/http://belisarius.com/modern_business_strategy/moore/mie_33.htm
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/

"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997

From the dedication of Boyd Hall, United States Air Force Weapons School, Nellis Air Force Base, Nevada. 17 September 1999


& ...
http://www.defense-and-society.org/FCS_Folder/comments/c199.htm
https://web.archive.org/web/20010412225142/http://www.defense-and-society.org/FCS_Folder/comments/c199.htm

other posts & URLs (from around the web) mentioning Boyd and/or OODA-loops
https://www.garlic.com/~lynn/subboyd.html

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

War, Chaos, & Business (web site), or Col John Boyd

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: War, Chaos, & Business (web site), or Col John Boyd
Newsgroups: alt.folklore.military
Date: Thu, 05 Oct 2000 18:05:53 GMT
velovich@aol.com.CanDo (V-Man) writes:
Lynn Wrote:
>Colin Campbell writes:
>>So?
>
>on the web site a pointer to the time article, when the time article
>appeard (in the early '80s):

Uhm, Lynn, since you missed it, this group is NOT for current policy issues. We might get on that topic on occaisnion, but you really want to go over to Sci.Military.Moderated - it's far more topical there.


subjects from '83 (& later) are precluded here as "current"

is the following from pre-83 ... permissible?
http://www.defense-and-society.org/FCS_Folder/comments/c199.htm
https://web.archive.org/web/20010412225142/http://www.defense-and-society.org/FCS_Folder/comments/c199.htm

Much to the dismay of the autocrats at Wright-Pat, the Mad Major's theory of energy-maneuverability (E-M) turned out to be a stunning success. It provided a universal language for translating tactics into engineering specifications and vice versa and revolutionized the way we look at tactics and design fighter airplanes.

Boyd used it to explain why the modern F-4 Phantom performed so poorly when fighting obsolete MiG-17s in Vietnam and went on to devise new tactics for the Phantom, whereupon Air Force pilots began to shoot down more MiGs.

He used it to re-design the F-15, changing it from an 80,000-pound, swing-wing, sluggish behemoth, to a 40,000-pound fixed-wing, high-performance, maneuvering fighter. His crowning glory was his use of the theory to evolve the lightweight fighters that eventually became the YF-16 and YF-17 prototype and then to insist that the winner be chosen in the competitive market of a free-play flyoff.


--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

FW: NEW IBM MAINFRAMES / OS / ETC.(HOT OFF THE PRESS)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FW: NEW IBM MAINFRAMES / OS / ETC.(HOT OFF THE PRESS)
Newsgroups: bit.listserv.ibm-main
Date: Thu, 05 Oct 2000 23:11:38 GMT
bblack@FDRINNOVATION.COM (Bruce Black) writes:
Seymore or someone, refresh my memory (another senior moment, more like a senior day), did AOS become VS1 or VS2, none of the above, all of the above, or what?

AOS became SVS ... misc. refs:
https://www.garlic.com/~lynn/2000c.html#34

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

I'll Be! Al Gore DID Invent the Internet After All ! NOT

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I'll Be! Al Gore DID Invent the Internet After All ! NOT
Newsgroups: alt.fan.rush-limbaugh,alt.folklore.urban,seattle.politics,alt.politics,alt.folklore.computers
Date: Sat, 07 Oct 2000 01:55:04 GMT
korpela@ellie.ssl.berkeley.edu (Eric J. Korpela) writes:
AOL and Compuserve connected to the Internet because that's what their users wanted. At that point, and before, the internet was seen as a threat. Once users started jumping ship to ISPs, connectivity was their only means of survival. They tried to fight to protect their client businesses by charging exhorbitant internet access fees on top of their user fees. That just annoyed their rapidly departing users even more. Only when they were reduced to being ISPs themselves did the trend reverse itself.

some of the small to mid-size online service providers ... including places like online banking (in the '80s & early '90s) ... have made the statement that in the "old" days they would have 40-50 different software versions of their online support ... for different versions of operating systems, different versions of modems, etc. ... & that this represented a huge expense with little or no benefit.

The advent of the internet service providers and interoperable internet allowed them to essentially eliminate all that overhead, expense and trouble (along with significant customer call center overhead).

As customers found that more and more of their online service providers were moving to interoperable, generic support that met all of their requirements ... it represented quite a consumer convenience (it wasn't just AOL, Prodigy, Compuserve providing online service ... but little & middle tier guys, including bunches of business specific places like both consumer and commercial banking).

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

I'll Be! Al Gore DID Invent the Internet After All ! NOT

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: I'll Be! Al Gore DID Invent the Internet After All ! NOT
Newsgroups: alt.fan.rush-limbaugh,alt.folklore.urban,seattle.politics,alt.politics,alt.folklore.computers
Date: Sat, 07 Oct 2000 14:26:36 GMT
"Mike" writes:
The internet of the 80's could not remotely handle the volume today, either. The internet hardware & software was continually upgraded to handle the increasing volume, and there's no reason to believe that the same would not have happend to FidoNET.

in the case of usenet ... it is mostly a broadcast protocol. In '93, pagesat had a full usenet feed over satellite running at 9600 baud (continuously, 24hrs/day, 7days/week). I did a lot of fixes for the dos & unix drivers ... and co-authored an article for boardwatch magazine (the article had a picture of me standing in front of the R/O dish). During the period they upgraded to 19.2k ... in anticipation of increased bandwidth requirements.

for non-broadcast technology ... a large part of the bandwidth is the store & forward point-to-point bandwidth simulating broadcast (i.e. incoming bandwidth to one node ... and the outgoing bandwidth from that node to all the nodes it forwards to, aggregated over every node that does forwarding).

while usenet bandwidth has increased since '93 ... the use of internet point-to-point store&forward to simulate broadcast requires significantly more aggregate bandwidth than the basic broadcast bandwidth requirements of the data being distributed.
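The store & forward aggregate-bandwidth point can be sketched with a toy calculation (the feed size and node count below are invented for illustration; real usenet feeds and topologies varied):

```python
# Toy comparison: true broadcast vs. point-to-point store & forward
# delivering the same feed to every receiving node. Feed size and
# node count are invented for illustration.

FEED_MBYTES = 100            # hypothetical daily feed size

def broadcast_aggregate(feed_mbytes):
    # satellite broadcast: transmitted once, any number of dishes listen
    return feed_mbytes

def store_and_forward_aggregate(feed_mbytes, receiving_nodes):
    # point-to-point store & forward: each receiving node is sent its
    # own copy over some link, so in aggregate the feed crosses the
    # network once per receiving node
    return feed_mbytes * receiving_nodes

print(broadcast_aggregate(FEED_MBYTES))                # 100
print(store_and_forward_aggregate(FEED_MBYTES, 1000))  # 100000
```

The gap only widens as the number of receiving nodes grows, while the broadcast cost stays constant.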

random refs:
https://www.garlic.com/~lynn/2000.html#38

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Sun, 08 Oct 2000 13:19:47 GMT
andrew writes:
OK, so you're off to do some e-shopping. You click on the padlock and it says "this certificate belongs to bogus.com" and "this certificate was issued by snakeoil CA" (no I don't mean the CA generated by OpenSSL, I mean one of the "normal" ones like verisign or thawte...).

as an aside ... the content validation done by a client on a server SSL certificate is to compare the server's domain name with the domain name in the server SSL certificate. One of the claimed justifications for this feature is weaknesses in the domain name infrastructure.

When a server registers for a domain name SSL certificate ... the CA has to authenticate the domain name request with the domain name infrastructure (as the authoritative source for domain names) as to the owner of the domain name (i.e. does the requester of the certificate actually have rights to the domain name being specified).

In other words, the CA infrastructure is dependent on the same domain name infrastructure that is supposedly the thing that the whole process is attempting to fix.

Now one of the methods to improve the integrity of the domain name system (so that CA's can rely on them ... and minimize things like domain name hijacking ... where I could hijack a domain name and then obtain a certificate for that domain name) is to register public key along with the domain name.

However, if public keys are registered (along with the domain name), the existing domain name infrastructure could return the public key in addition to other information.

This creates something of a catch-22 for the CA infrastructure ... fixing domain name integrity (with public keys) so that CAs can rely on the domain name infrastructure as the authoritative source for domain names ... also creates the avenue for making the domain name certificates redundant and superfluous.
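The catch-22 can be sketched with a toy lookup table (the names and record types here are invented for illustration; this is not real DNS or DNSSEC behavior):

```python
# Toy model of the catch-22: once a public key is registered with the
# domain name entry, a lookup can return the key directly, and a
# CA-issued certificate binding the same name to the same key adds
# nothing. Names and record types are invented; not real DNS/DNSSEC.

registry = {
    "example.com": {"A": "192.0.2.10", "KEY": "server-public-key"},
}

def resolve(name, record_types=("A",)):
    # clients ask only for the record types they want, so the extra
    # bits flow only when a key is actually requested
    entry = registry[name]
    return {rt: entry[rt] for rt in record_types if rt in entry}

print(resolve("example.com"))                # {'A': '192.0.2.10'}
print(resolve("example.com", ("A", "KEY")))  # address plus the key
```

With the key coming back alongside the address from the authoritative source, there is nothing left for a separate domain name certificate to attest to.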

random refs:
https://www.garlic.com/~lynn/aepay5.htm#rfc2931
https://www.garlic.com/~lynn/aepay5.htm#rfc2915
https://www.garlic.com/~lynn/aepay4.htm
https://www.garlic.com/~lynn/aadsmore.htm#client1
https://www.garlic.com/~lynn/aadsmore.htm#client2
https://www.garlic.com/~lynn/aadsmore.htm#client3
https://www.garlic.com/~lynn/aadsmore.htm#client4

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Mon, 09 Oct 2000 02:41:53 GMT
Daniel James writes:
If the signatures were to be checked only by the bank then you would be quite right. The advantage of using certificates is that the trader can verify the payment instruction before sending it to the bank. A trader would certainly want to do this before issuing a signed receipt/order confirmation.

You're quite right to point out the existence of failure modes - compromise of a bank's CA certificate would be disastrous - but these are technical problems that can be overcome. Consider that banks already have to handle secret keys whose compromise would be very expensive to their business (e.g. PIN verification keys for ATM cards) and that they have equipment and procedures in place to do this.


a big part of certificates was to provide a means of authentication in an offline environment between parties w/o a prior relationship (need some authoritative 3rd party that both parties trust to attest to something ... analogous to letters of credit in the days of sailing ships).

in retail business with consumer ... having consumer "identity" certificates ... creates privacy issues.

in retail business with consumer ... and doing electronic financial transactions ... the transactions are done online ... and for the most part a merchant doesn't really care who you are ... they just care that the bank guarantees that the merchant will get paid (the bank cares that the authorized entity for an account is, in fact, the person originating the transactions).

for the most part the account operation ... the part that allows a transaction to be executed ... and the PIN environment ... used for authenticating a transaction ... have an extensive integrated data processing system, extensive integrated business continuity operation, triple redundancy in multiple physical locations, etc.

Currently, most CAs represent independent data processing operations ... which can represent expense duplication.

Various financial operations have done relying-party-only certificates, which address both privacy concerns and liability concerns. Effectively, the certificate contains the account number.

Such a mechanism, integrated with standard account management, has an account owner sign & send a public key to a financial institution's RA. The RA validates some stuff, has the financial institution create a certificate, stores the original in the account record, and returns a copy of the certificate to the key/account owner.

The account owner originates a transaction, signs it, appends the signature and certificate, and sends it on its way to the financial institution. The financial institution extracts the account number from the transaction, reads the account record, and gets enough information from the account record (including the original of the certificate) to authenticate and authorize the transaction.

Given that the financial institution needs to read the account record to obtain meaningful information (including the certificate original), the account owner can do some transaction payload optimization and compress the certificate copy that is appended to the transaction. Any field that is in both the copy of the certificate and the original of the certificate (stored in the account record) can be elided from the appended copy.

Since every field in the copy of the certificate is also in the original of the certificate, it is possible to compress the certificate appended to the transaction to zero bytes. This can be significant when an uncompressed certificate is 10-50 times larger than the financial transaction it is appended to.

Since the financial institution finds that the account owner is always able to compress the attached certificate to zero bytes (for attaching to transactions), the financial institution, when returning the certificate to the entity registering their public key, does some pre-optimization and returns a pre-compressed zero-byte certificate (saving the account owner the overhead of having to do it on every transaction).
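The compression argument reduces to simple set arithmetic, sketched below (the field names are invented; a real relying-party-only certificate would carry more fields, but the logic is the same):

```python
# Sketch of the zero-byte compression argument. Field names are
# invented; the point is the set arithmetic: any field present in both
# the appended copy and the original on file at the financial
# institution can be elided from the copy.

original_cert = {              # stored in the account record
    "account": "12345678",
    "public_key": "pk-bytes",
    "issuer": "example-bank",
}
cert_copy = dict(original_cert)   # copy the account owner would append

# fields the copy must actually carry = fields not already on file
must_send = {k: v for k, v in cert_copy.items()
             if original_cert.get(k) != v}

print(must_send)   # {} -- the appended certificate compresses to zero bytes
```

Since the copy is by construction a subset of the original, the set of fields that must actually travel with the transaction is always empty.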

misc. refs:
https://www.garlic.com/~lynn/ansiepay.htm#aadsnew2

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

IBM's Workplace OS (Was: .. Pink)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's Workplace OS (Was: .. Pink)
Newsgroups: alt.folklore.computers
Date: Mon, 09 Oct 2000 12:35:07 GMT
Burkhard Dietrich Burow writes:
Long ago I attended a 'Workplace microkernel' presentation. While some in the audience could appreciate the technology, no one could figure out why anyone would want it. e.g. IIRC, a main message was that one could run Unix on an IBM mainframe. i.e. >10 times the Sun/SGI/.. cost with no obvious benefit.

then there was the precursor, SAA ... where you could run all your PC programs on an IBM mainframe and use your PC as a mainframe output terminal.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Mon, 09 Oct 2000 12:42:04 GMT
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
There's promise there, but also problems. I've not been keeping up, but I understand that one problem is that they've not figured out how to sign all of the RR's in .com before it's time to sign them all again. It takes time to sign 30,000,000 records with a public key. Another problem is that adding signatures make packets on the wire a lot bigger.

DNS does allow for different types of requests with different types of information returned. for part of the issue, clients can selectively request keys & signatures (like they do today in SSL). the increase in bits on the wire has got to be significantly less than with the current SSL setup.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Mon, 09 Oct 2000 13:02:34 GMT
Daniel James writes:
Then, of course, only the bank can verify the signature. I think it's important that other parties involved in the transaction can verify the customer's signature, and so the full certificate will have to be sent with each transaction, and the full certification chain (and revocation lists) must be available to all parties. I agree that adds somewhat to the volume of comms trafic for each transaction, but it does buy useful security.

the other parties in a retail transaction want to know what the bank has to say about the financial transaction between the account owner and the bank. The fact that the other parties even see the transaction is an artifact of the current POS online environment (the online terminal happens to be at the merchant site). That transaction flow is likely to remain for some time ... but with all parties having online capability ... there is nothing to prevent the transaction flow from reflecting the actuality of the parties involved (i.e. the retail transaction is an instruction from the consumer to their bank; that other parties even see the transaction is a left-over artifact of the current world).

Because of an artifact of existing POS online technology ... the other parties are eavesdropping on something between the consumer and the consumer's financial institution. Institutionalizing that eavesdropping would make it worse. In at least some of the bank card world there are even association guidelines about the merchant not doing anything more than certain minimums with the transaction (in part to address privacy issues).

Furthermore, in the current world, financial institutions tend to want to see all transactions against their accounts (& not have things like fraud attempts being truncated).

As stated previously, the merchant wants to know that the consumer's bank will pay ... the merchant getting a transaction from the consumer's bank indicating that they will get their money satisfies that requirement. Institutionalizing all the other parties' eavesdropping on consumers' instructions to their financial institution also aggravates privacy issues (i.e. just because an implementation artifact has other people's messages flowing thru something that I have access to ... doesn't mean that I should be monitoring it).

Now, there can be a case that the return transaction from the financial institution to the merchant be signed and carry a certificate ... but the current bank->merchant infrastructure ... to some extent, operates within the bounds of a trusted network ... alleviating much of the authentication requirement that occurs in an untrusted network environment.

There can also be other transactions that might need authentication, (and could benefit from public key authentication) but the specific discussion was about retail transactions where the consumer is sending directions to their financial institution.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

IBM's Workplace OS (Was: .. Pink)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's Workplace OS (Was: .. Pink)
Newsgroups: alt.folklore.computers
Date: Mon, 09 Oct 2000 14:28:01 GMT
Anne & Lynn Wheeler writes:
then there was the precursor, SAA ... where you could run all your PC programs on an IBM mainframe and use your PC as a mainframe output terminal.

In the initial application migration from mainframe to PCs, much of the business world somewhat recognized the business continuity requirement to manage the corporate data/assets.

For instance ... a lot of business modeling had migrated from APL (and other kinds of) business models to PC spreadsheets. A lot more of the business people could participate in doing what-if questions. However, having valuable, unsecured, un-backed-up corporate assets (the data) on PCs opened a huge business risk (some relatively recent studies have indicated that 50% of businesses that have not backed up business critical data, when a disk goes bad, go bankrupt within the first month ... for instance, lose the billing file and not be able to send out bills ... which has a horrible downside effect on cash flow).

The disk storage division attempted to address that with mainframe products that provided fast, efficient, cost-effective management of corporate-asset PC-resident data (back on the mainframe). However, in a turf war with the communication division ... they lost. As a result, the design point, price/performance & paradigm for mainframe support of PCs was essentially relegated to treating the PCs as display devices.

With that restriction ... to keep the mainframe "in the game", pretty much required trying to migrate the execution of PC applications back to the mainframe.

SAA was heavily tied into that.

Another aspect is that the communication division had a host-centric, point-to-point communication operation w/o a networking layer. Because of the lack of a networking layer, even the LAN products were deployed as large bridged structures (no network layer, no routing).

I got in deep dudu when I first did a presentation to the IS-managers of a large corporate client where I presented real networking, multi-segment, routed LANs, high-speed host interconnect and the first 3-layer architecture.

The internal SAA and LAN groups came down hard. It was the place in life for PCs to be treated as display terminals; since there was such low inter-PC traffic and such low mainframe<->PC traffic, having 300 (or more) PCs sharing a common 16mbit/sec bandwidth was perfectly acceptable.

16mbit/sec T/R was obviously better than 10mbit ethernet (although there were a number of studies showing it was possible to get more effective thruput out of ethernet than 16mbit T/R), and since nobody used the bandwidth anyway, having 300 PCs sharing the same 16mbits wasn't a problem (compared to ten routed ethernet segments with only 30 PCs each sharing 10mbit ... each segment actually having more effective throughput).
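A back-of-envelope version of that comparison (effective-throughput differences and protocol overhead are ignored here; the numbers are only illustrative):

```python
# Toy comparison from the text: 300 PCs on one shared, bridged
# 16mbit/sec token-ring vs. the same 300 PCs split across ten routed
# 10mbit/sec ethernet segments of 30 PCs each. Ignores protocol
# overhead and effective-throughput differences between the media.

def per_pc_share(media_mbits, pcs_on_segment):
    # average bandwidth available per PC on a shared segment
    return media_mbits / pcs_on_segment

tr_share = per_pc_share(16, 300)      # single bridged token-ring
enet_share = per_pc_share(10, 30)     # one of ten routed segments
enet_aggregate = 10 * 10              # total mbits across ten segments

print(round(tr_share, 3))    # 0.053 mbit/sec per PC
print(round(enet_share, 3))  # 0.333 mbit/sec per PC
print(enet_aggregate)        # 100 mbits aggregate vs. 16 on the ring
```

Even before counting effective-throughput differences, the routed configuration gives each PC several times the average share and the site several times the aggregate bandwidth.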

The idea of 3-tier & significant mainframe<->PC bandwidth was a real threat to the SAA group ... because many of the things that SAA wanted to migrate off PCs to the mainframe would instead go to the servers in the middle (and/or applications stayed on the PC and used the middle layer and the host backend as disk farm).

postings in reply to question on origins of middleware
https://www.garlic.com/~lynn/96.html#16
https://www.garlic.com/~lynn/96.html#17

random refs:
https://www.garlic.com/~lynn/96.html#14
https://www.garlic.com/~lynn/98.html#50
https://www.garlic.com/~lynn/99.html#36
https://www.garlic.com/~lynn/99.html#38
https://www.garlic.com/~lynn/99.html#123
https://www.garlic.com/~lynn/99.html#124
https://www.garlic.com/~lynn/99.html#201
https://www.garlic.com/~lynn/99.html#202
https://www.garlic.com/~lynn/2000.html#75

misc. post from 1994 ...
https://www.garlic.com/~lynn/94.html#33b

Speaking of HSDT, in the mid-80s we were contracting for equipment for parts of the HSDT project from some companies in Japan. On the Friday before I was to leave for a meeting, somebody (in the US) announced a new newsgroup on high-speed data communication including in the posting the following definitions:


low-speed               <9.6kbits
medium-speed            19.2kbits
high-speed              56kbits
very high-speed         1.5mbits

On Monday morning on the wall of a conference room in Japan was:

low-speed               <20mbits
medium-speed            100mbits
high-speed              200-300mbits
very high-speed         >600mbits

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Where are they now : Taligent and Pink

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where are they now : Taligent and Pink
Newsgroups: alt.folklore.computers
Date: Mon, 09 Oct 2000 15:15:33 GMT
Tom Van Vleck writes:
The business model for all this was never completely clear, and in the summer of 1995, upper management quit en masse. IBM

probably spring '95, i did a one-week JAD with a dozen or so taligent people on the use of taligent for business critical applications. there were extensive classes/frameworks for GUI & client/server support, but various critical pieces were missing.

The net of the JAD was about a 30% hit to the taligent base (I think two new frameworks plus hits to the existing frameworks) to support business critical applications.

Taligent was also going thru rapid maturity (outside of the personal computing, GUI paradigm) ... a sample business application required 3500 classes in taligent and only 700 classes in a more mature object product targeted for the business environment.

i think that shortly after taligent vacated their building ... sun java group moved in.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Mon, 09 Oct 2000 15:57:33 GMT
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
There's promise there, but also problems. I've not been keeping up, but I understand that one problem is that they've not figured out how to sign all of the RR's in .com before it's time to sign them all again. It takes time to sign 30,000,000 records with a public key. Another problem is that adding signatures make packets on the wire a lot bigger.

... also there are a number of different places/transaction where public key & digital signatures might benefit domain name infrastructure.

at its simplest the example was registering public keys at the same time as the domain name was registered. this would benefit the SSL domain name certificates ... since domain name infrastructure could require that changes to domain name info has to be signed ... making domain name hijacking much harder (i.e. i can't hijack a domain name and then get a valid ssl certificate for that domain).

on the other hand, with such keys registered as part of the domain name entries ... clients could optionally request the key be returned in addition to the host->ip-number. using that public key as part of server authentication and SSL session setup would reduce the bits on the wire compared to the existing client/server SSL handshaking.
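
the registered-key idea could be sketched roughly as follows ... toy only (textbook RSA with tiny primes; the registry contents, names, and numbers are all hypothetical, and real deployments would use full-size keys and a real protocol):

```python
# toy sketch (NOT real DNSSEC/TLS): textbook RSA with tiny primes,
# illustrating a public key registered alongside a domain name and
# returned with the host->ip-number lookup. registry contents, the
# challenge, and the key sizes are all hypothetical.
import hashlib

N, E, D = 3233, 17, 2753        # toy RSA keypair (p=61, q=53)

def digest(msg: bytes) -> int:
    # hash reduced mod N (toy only; real keys are far larger)
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes) -> int:
    return pow(digest(msg), D, N)

def verify(msg: bytes, sig: int, e: int, n: int) -> bool:
    return pow(sig, e, n) == digest(msg)

# hypothetical registry: public key recorded when the name is registered
registry = {"example.com": ("10.0.0.1", (N, E))}

# client asks for the key in addition to host->ip-number
ip, (n, e) = registry["example.com"]

# server proves possession of the private key by signing a client
# challenge ... no certificate needs to go over the wire
challenge = b"client-nonce-1234"
assert verify(challenge, sign(challenge), e, n)
```

i.e. the key arrives with the name lookup, so there is no separate certificate to fetch, parse, or validate.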

... i.e. the solution to improving integrity for domain name SSL certificates is also a solution for making domain name SSL certificates obsolete, redundant and superfluous.

--
Anne & Lynn Wheeler | lynn@garlic.com https://www.garlic.com/~lynn/

Where are they now : Taligent and Pink

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where are they now : Taligent and Pink
Newsgroups: alt.folklore.computers
Date: Tue, 10 Oct 2000 02:27:57 GMT
another object operating system in the valley ... was sun's dolphin ... java got a lot from it.

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

How did Oracle get started?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How did Oracle get started?
Newsgroups: alt.folklore.computers
Date: Tue, 10 Oct 2000 03:40:22 GMT
Joe Morris writes:
Oracle is one company I don't know about the early days. Was it started as a university research project or something? It was in the 1970's so I'm guessing it was originally coded for mainframes. True?

i didn't pay much attention back then ... there was the original system/r at san jose research in the mid to late 70s, then ingres w/stonebraker et al at berkeley ... and then, out of ingres, custom hardware w/britton-lee. I have some recollection of epstein leaving britton-lee and they hired a person that i was doing some work with in san jose to replace him. epstein, i believe, was involved w/teradata between britton-lee and sybase.

I was involved in some of the technology transfer from san jose to endicott for sql/ds. baker then did a lot of the technology transfer from endicott back to STL for DB2. baker then showed up at oracle (but i don't remember the dates for that).

I do have recollection of Nippon Steel announcing they were buying Oracle in the late '80s ... and then Oracle canceling the deal ... I believe after a really good quarter.

somewhere in all that were also informix, tandem, etc.

They were all unix platforms with some VMS and misc. other platforms thrown in.

misc refs:
http://www.mcjones.org/System_R/mrds.html
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Teradata.html
http://www.mcjones.org/System_R/other.html
http://epoch.cs.berkeley.edu:8000/redbook/lec1.html
http://www.dbmsmag.com/9609d13.html
http://infoboerse.doag.de/mirror/frank/faqora.htm
https://web.archive.org/web/20020215000745/http://infoboerse.doag.de/mirror/frank/faqora.htm
http://www.pointbase.com/about/management_team.html
https://web.archive.org/web/20010401020537/http://www.pointbase.com/about/management_team.html

note that one of the other web/online oracle histories ... says oracle wasn't available until 1984 on the ibm mainframe (under vm/370 ... the same platform that system/r was developed on).

the following is from one of the above urls.

What is Oracle's history?
1977 Relational Software Inc. (currently Oracle Corporation) established

1978 Oracle V1 ran on PDP-11 under RSX, 128 KB max memory. Written in assembly language. Implementation separated Oracle code and user code. Oracle V1 was never officially released.

1980 Oracle V2 released on DEC PDP-11 machine. Still written in PDP-11 assembly language, but now ran under Vax/VMS.

1982 Oracle V3 released, Oracle became the first DBMS to run on mainframes, minicomputers, and PC's. First release to employ transactional processing. Oracle V3's server code was written in C.

1983 Relational Software Inc. changed its name to Oracle Corporation.

1984 Oracle V4 released, introduced read consistency, was ported to multiple platforms, first interoperability between PC and server.

1986 Oracle V5 released. Featured true client/server, VAX-cluster support, and distributed queries. (first DBMS with distributed capabilities).


--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Tue, 10 Oct 2000 16:20:10 GMT
Daniel James writes:
Most of the certificates in use today associate a public key with an identity that is expressed in terms of a person's name and address. X.509 rather presupposes that's the sort of thing people will want to do when it sets out the fields that can exist in a DName. It doesn't really have to be that way - a certificate needs /something/ to identify the owner, but that something doesn't have to contain a name and address as long as it's unique to

the majority of certificate use today is the SSL domain name server certificate (aka HTTPS or secure web) ... possibly >95% of all instances where a certificate is used to authenticate anything ... and possibly 99.99999999% of the cases where a client is authenticating a server.

The SSL domain name server certificate associates the public key and the host name or domain name. The client checks the name in the certificate against the web address.

The authoritative resource for domain name ownership is the domain name infrastructure ... which CA's have to rely on when authenticating a request for a SSL domain name server certificate.
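
the client-side check described above amounts to little more than a string match ... a simplified sketch (real TLS stacks implement more rules than this, e.g. multiple names per certificate, and the rules here are a common convention, not any one browser's exact behavior):

```python
# simplified sketch of the browser's check: compare the host name in
# the web address against the name in the server's certificate,
# including the common single-label wildcard form.
def cert_name_matches(cert_name: str, host: str) -> bool:
    cert_name, host = cert_name.lower(), host.lower()
    if cert_name == host:
        return True
    # "*.example.com" matches "www.example.com" but not "example.com"
    # and not "a.b.example.com" (the wildcard covers one label only)
    if cert_name.startswith("*."):
        suffix = cert_name[1:]               # ".example.com"
        return (host.endswith(suffix)
                and host.count(".") == cert_name.count("."))
    return False
```

note that nothing in the check says anything about who operates the server ... only that the certificate name and the typed-in address agree.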

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

Why trust root CAs ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why trust root CAs ?
Newsgroups: sci.crypt
Date: Wed, 11 Oct 2000 13:29:19 GMT
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
In other words, since when is a DUNS number a proof of identity, honesty, financial stability, or anything else?

... D&B ... several years ago i was doing some consulting under a registered DBA name (but wasn't in D&B). the company i was signing a new contract with had a process that included a D&B check. since this was a two-person operation ... we didn't at the time have a D&B listing ... but D&B called us up ... gave us a D&B number and took down our information over the phone. This information they provided back to the company we were signing a contract with (they may have done something else also ... but if they did, i saw no evidence of it).

with regard to a domain name ... i can register a DBA and open a checking account with that DBA, get D&B registration ... hijack a domain name and provide all information to the CA that correctly validates (i.e. the domain name validates with the domain name infrastructure ... and all the other information provided also validates).

in the ssl domain name server certificate case ... all the client is doing is checking that the web address they are using and the domain name in the certificate match.

if there is any additional information in a certificate & it doesn't correspond with what a client might expect, oh well ... out of the millions of people that might do an SSL operation with the server, those who also actually physically look at any other information that may be part of an ssl domain name server certificate are possibly countable on fingers & toes.

a CA can authenticate stuff it has direct knowledge of and for the rest relies on authoritative sources for that information (like the domain name infrastructure as the authoritative source for domain name ownership).

also with regard to DBAs ... in the past i've purchased computer equipment with a bank card and later got the statement ... the legal DBA name on the statement of the business I bought the equipment from bore no relation to the name of the store that I bought the equipment from. I did call the store and confirm their legal DBA name.

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

Why not an IBM zSeries workstation?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why not an IBM zSeries workstation?
Newsgroups: comp.arch
Date: Thu, 12 Oct 2000 01:55:11 GMT
hack@watson.ibm.com (hack) writes:
for a bit-mapped display, mouse, full-duplex keyboard and Ethernet. I still have one in my office. The performance indicators are: 150Kips (370), 900K memory, 64M disk -- and yet it had subsecond interactive response time. It was very much faster than the XT/370 that came out a bit later (even though the latter had a whopping 4M of memory, if I remember correctly).

the xt/370 prior to customer ship was at 384kbytes ... with some difficulty it was expanded to 512kbytes before shipping to customers (for awhile I was being blamed for holding up customer ship by several months).

a problem was that paging was a cross-processor call to the 8088, which would typically do i/o to a 100ms/access xt hard disk ... and there were a lot of things that would result in lots of paging ... although not as many as in 384k (where a lot of measurements showed extensive page thrashing).

cms file access would also be a cross-processor call to the 8088 which then did an i/o to the (same) 100ms/access xt hard disk.
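
a back-of-envelope sketch of why that hurts ... even a modest page-fault rate dominates run time when each fault costs a cross-processor call plus a 100ms disk access (the fault rate and instruction rate below are made-up illustration numbers, not measured xt/370 figures):

```python
# effective throughput when every page fault stalls the 370 engine
# for a cross-processor call plus an xt disk access. all the input
# numbers in the example are hypothetical.
def effective_mips(raw_mips: float, faults_per_sec: float,
                   fault_service_ms: float) -> float:
    # fraction of each wall-clock second lost to fault service
    stall = faults_per_sec * (fault_service_ms / 1000.0)
    stall = min(stall, 1.0)                 # fully saturated paging
    return raw_mips * (1.0 - stall)

# e.g. a hypothetical 0.1 MIPS engine taking 5 faults/sec at 100ms
# each stalls 0.5s of every second ... half the throughput is gone
half = effective_mips(0.1, 5, 100.0)        # 0.05 effective MIPS
```

at thrashing rates (10 faults/sec at 100ms each) the processor spends essentially all its time waiting on the xt disk.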

the xt/370 (& at/370) were co-processor cards in the PC.

there was a different machine out of POK that was a separate box with an adapter card and a big thick cable from the PC to the separate box. It had 4mbytes of memory and faster processor.

random refs:
https://www.garlic.com/~lynn/96.html#23
https://www.garlic.com/~lynn/2000.html#5
https://www.garlic.com/~lynn/2000.html#29

in the past, i received email to the effect that the "POK" machine in the following list was actually one of the separate 4mbyte memory boxes ... even tho it was listed as a 4341.
https://www.garlic.com/~lynn/99.html#110

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

Why not an IBM zSeries workstation?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why not an IBM zSeries workstation?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 13 Oct 2000 15:08:16 GMT
Anne & Lynn Wheeler writes:
in the past, i received email to the effect that the "POK" machine in the following list was actually one of the separate 4mbyte memory boxes ... even tho it was listed as a 4341.

https://www.garlic.com/~lynn/99.html#110


a recent thread in alt.folklore.computers on tektronix storage tube terminals ... I believe the same group that did the 4mbyte 370 machine in a box attached to a pc ... also did the 3270ga ... a tektronix storage tube attached to a 3270 running at channel (or at least 3270 controller) speed.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

VLIW at IBM Research

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VLIW at IBM Research
Newsgroups: bit.listserv.ibm-main
Date: Fri, 13 Oct 2000 15:22:40 GMT
bblack@FDRINNOVATION.COM (Bruce Black) writes:
"Edward J. Finnell, III" wrote:

>FWIW....interesting to say the least.

"Fascinating, Captain. They are using technology unknown to the Federation"


the low end 370 machines tended to be microprocessors that implemented the 370 instruction set in microcode (i.e. potentially 10:1 microprocessor instructions per 370 instruction), like the 115, 125, 135, 145, etc.

the high-end machines were horizontal microcode machines ... which instead of being rated in microprocessor instructions per 370 instruction, were more a case of avg. machine cycles per 370 instruction (since a horizontal instruction could be doing multiple things concurrently). For instance, one of the enhancements going from the 165 to the 168 was that the avg. cycles per instruction dropped from about 2.1 cycles/instruction to 1.6 cycles/instruction (i.e. improved implementation &/or better overlap of 370 operations).
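
the 165->168 change can be expressed as relative throughput at a fixed machine cycle time, since instructions/sec = (cycles/sec) / (cycles/instruction):

```python
# relative throughput from a cycles-per-instruction (CPI) improvement
# at the same machine cycle time ... dropping CPI from 2.1 to 1.6 is
# roughly a 31% gain before any cycle-time change.
def speedup(old_cpi: float, new_cpi: float) -> float:
    return old_cpi / new_cpi

ratio = round(speedup(2.1, 1.6), 2)         # 1.31
```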

The 3830 disk controller was also a horizontal microcode engine ... while the 3880 disk controller used a vertical microcode engine (jib-prime) for control functions and special hardware for data movement.

random refs:
https://www.garlic.com/~lynn/95.html#3
https://www.garlic.com/~lynn/2000b.html#38
https://www.garlic.com/~lynn/2000c.html#75
https://www.garlic.com/~lynn/2000e.html#6

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why not an IBM zSeries workstation?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why not an IBM zSeries workstation?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 13 Oct 2000 19:34:51 GMT
glass2 writes:
I'm a little confused by your reference to the '4mbyte 370 machine in a box' phrase. I'm not aware of such a machine (although, I'll readily confess that it's possible one existed without my knowledge).

I'm aware of the XT/370, which was a partial implementation of a S/370 architecture machine on a card-set which plugged into a XT. This contained 512K of real storage, and I think it had a 4M virtual storage size.

Then, there was the AT/370, which was a similar card-set which plugged into an AT. I'm a little fuzzy on the memory sizes that this had.

Following this (if you'll discount the A74 internal project), there was the 7437. This was a full S/370 implementation, with 16M of real memory, that was in a box that cabled to an adapter card in a PS/2.


there was an A74 370 box done by a group in POK ... which eventually morphed into the 7437. A74 was Beausoleil's department in POK.

I had both A74 and a couple XT/370s all running in the same office at one time.

The A74 had about a 350kips (370) processor.

Compared to the XT/370 (which used a modified 68k) it was much closer to 370. I provided the changes to the pc/370 kernel for it to run on the A74.

It had differences from 370 ... like 4k keys instead of 2k keys ... more like 3081 XA.

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why not an IBM zSeries workstation?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why not an IBM zSeries workstation?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 13 Oct 2000 19:39:17 GMT


Date: 88/06/22  18:51:005
To: Distribution

A74 Users:

I am happy to announce that the A74/370 processor has finally been
approved for special bid orders as the "IBM 7437 VM/SP Technical
Workstation". This approval was delivered by Ed Kfoury, President of SPD
on Wednesday, June 8th.

Those of you who are familiar with the earlier version of the A74/370
should note that the special bid version of the unit has the following
features:

o  Attachment to PS/2 Models 60, 70, and 80
o  16-bit interface to the PS/2
o  16 MB real memory with ECC
o  All S/370 I/O instructions are supported
o  Full VM/SP Release 5
o  PER
o  VM Assist
o  Hardware Time of Day Clock
o  Writable Control Store
o  PS/2 5080 Adapter (optional)
o  MYTE 3270 Emulation and Host Attach (DFT, SNA, Bisync)
o  5 to 7 times faster Transparent File Access to Host Minidisks
o  CE support for field repair

The performance of the 7437 processor itself has not changed, but the
increased speed of the PS/2 and VM Assist microcode combine to deliver
better overall system throughput.  When coupled with an optional 5080
adapter for the PS/2, the workstation offers exceptional processing
capability for engineering applications like CADES, CATIA, CADAM, CBDS
and GPG.  Please contact Gary Smith (GSMITH at PLKSB) for more
information.

I would like to thank all of you who have helped in one way or another
to make this product available to our customers.  This is an exciting
time for IBM as it shows that we are serious about our 370 customers.
We now provide them with a common architecture solution for their
workstation and mainframe requirements.

W. F. Beausoleil

... snip ... top of post, old email index

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why not an IBM zSeries workstation?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why not an IBM zSeries workstation?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 14 Oct 2000 04:57:11 GMT
jmaynard@thebrain.conmicro.cx (Jay Maynard) writes:
On Fri, 13 Oct 2000 19:34:51 GMT, Anne & Lynn Wheeler wrote:
It had differences from 370 ... like 4k keys instead of 2k keys
... more like 3081 XA.

I thought the storage key 4K-byte feature was standard on the 4341 and 303x machines, too...and that was part (most?) of the reason you needed to run MVS/SP on them.


the 3033 had cross-memory services and some other enhancements (the s/370 extended facility) ... and needed mvs/sp to support cross-memory services

the stuff was retrofitted to the 4341 for a large client that wanted to run mvs/sp on a couple hundred 4341s.

part of the problem for actual thruput of the cross-memory hardware on the 3033 (compared to a software implementation) was that it significantly increased strain on the translation lookaside buffer (TLB) ... which could even result in degraded performance.
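
a simple model of the TLB effect ... average translation cost as a function of hit ratio, showing how a feature that strains the TLB can eat its own hardware advantage (the cycle counts are hypothetical illustration values, not 3033 measurements):

```python
# average address-translation cost given a TLB hit ratio: a hit is
# cheap, a miss pays for a table walk. cycle counts are hypothetical.
def avg_translation_cycles(hit_ratio: float, hit_cost: float,
                           miss_cost: float) -> float:
    return hit_ratio * hit_cost + (1.0 - hit_ratio) * miss_cost

# e.g. a 1-cycle TLB hit and a 30-cycle table walk on a miss:
good = round(avg_translation_cycles(0.99, 1, 30), 2)   # 1.29 cycles
bad  = round(avg_translation_cycles(0.90, 1, 30), 1)   # 3.9 cycles
```

i.e. shaving the hit ratio from 99% to 90% roughly triples the average translation cost ... which is the kind of second-order effect that can wipe out the gain from moving a service into hardware.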

misc. refs:
https://www.garlic.com/~lynn/98.html#11
https://www.garlic.com/~lynn/2000c.html#35
https://www.garlic.com/~lynn/2000c.html#83
https://www.garlic.com/~lynn/2000c.html#84
http://www.hpl.hp.com/features/bill_worley_interview.html
https://web.archive.org/web/20000816002838/http://www.hpl.hp.com/features/bill_worley_interview.html

"3033" extensions:
http://publib.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/DA1A7002/7.5.6
http://publib.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/ID4T0U01/6.4.85

misc quotes from (F.2 Comparison of Facilities between System/370 and 370-XA)
http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/F%2e2
The following items, which are part of the basic computing function in System/370, are not provided in 370-XA: BC mode, interval timer, and 2K-byte protection blocks.

Only single-key 4K-byte protection blocks are provided, but the storage-key-exception control is not.

The 370-XA translation provides only the 4K-byte page size and only the 1M-byte segment size.


also ...
http://publib.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/DA1A7002/7.5.62
http://publib.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/EZ9A5003/7.4.75.1

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

Why not an IBM zSeries workstation?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why not an IBM zSeries workstation?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 14 Oct 2000 05:15:59 GMT
also misc. "3033" entries from
http://www.isham-research.demon.co.uk/chrono.txt

IBM 3033          77-03 78-03 12  VERY LARGE S/370+EF INSTRUCTIONS
AMH V7            77-03 78-09 18  AMDAHL RESP. TO 3033 (1.5-1.7) V6
IBM 3033MP        78-03 79-09 18  MULTIPROCESSOR OF 3033
IBM 3033AP        79-01 80-02 13  ATTACHED PROCESSOR OF 3033 (3042)
IBM 3033          79-11 79-11 00  -15% PURCHASE PRICE CUT
IBM 3033N         79-11 80-01 04  DEGRADED 3033, 3.9MIPS
IBM 3033AP        80-06 80-08 02  3033 ATTACHED PROCESSOR
IBM 3033          80-06 81-10 16  Ext. Addr.=32MB REAL ADDR.;MP ONLY
IBM D.Addr.Sp.    80-06 81-06 12  Dual Address Space for 3033
IBM 3033XF        80-06 81-06 12  OPTIONAL HW/FW PERF. ENHANCE FOR MVS/SP
IBM 3033 24MB     80-11 81-11 12  24MB REAL MEM. FOR 3033UP, AP
IBM 3033S         80-11 81-01 02  2.2MIPS, DEGRADED 3033 (ENTRY 3033 MODEL)
IBM 3033N UPGR.   80-11 80-11 00  9%-14% PERF. IMPROVE, NO CHARGE
IBM 3033 PRICE    81-10           10% IN US, 12-20% EUROPE PURCH. ONLY
IBM 3033S PERF.   81-10 82-06 08  NO-CHARGE PERF. BOOST BY 8%-10%
IBM 3033          82-03           16% PUR.PRICE CUT, -14%Mem.Price($31K/MB)
IBM 3033          82-03           3033 Placed on LIMITED-NEW PRODUCTION

--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/

