From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What good and old text formatter are there ?
Newsgroups: alt.folklore.computers
Date: 12 Sep 2000 17:30:03 -0600

jones@cs.uiowa.edu (Douglas W. Jones,201H MLH,3193350740,3193382879) writes:
Then "G". "M", & "L" (also all at CSC, 545 tech sq) ... added GML to "script" a couple years later. This was standardized later as SGML. We have since seen it show up in HTML, XML, etc.
Claim has been made that the original CSC script was also ported to Tandy and misc. other PCs in the early '80s.
misc. refs (i think still good):
https://web.archive.org/web/20231001185033/http://www.sgmlsource.com/history/roots.htm
https://web.archive.org/web/20001201190700/http://www.sgmlsource.com/history/roots.htm
random refs:
https://www.garlic.com/~lynn/97.html#9
https://www.garlic.com/~lynn/99.html#42
https://www.garlic.com/~lynn/99.html#43
https://www.garlic.com/~lynn/99.html#67
https://www.garlic.com/~lynn/99.html#197
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What good and old text formatter are there ?
Newsgroups: alt.folklore.computers
Date: 12 Sep 2000 19:41:01 -0600

SGML Users' Group History moved, updated URL
that is in addition to Goldfarb's history at:
https://web.archive.org/web/20231001185033/http://www.sgmlsource.com/history/roots.htm
https://web.archive.org/web/20001201190700/http://www.sgmlsource.com/history/roots.htm
with respect to the 6670 output device (which recently showed up in a different thread in this same newsgroup), 3800 support had been put into "script" ... and was supported both with GML "tags" and "runoff?" tags for formatting. The 3800 support was then modified to support the 6670.
minor refs:
https://www.garlic.com/~lynn/2000d.html#81
correction to the above 6670 ref: OPD (? office products division, by whatever name) had added the computer interface; SJR had extended it for postscript & other types of support.
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ridiculous
Newsgroups: comp.arch
Date: 16 Sep 2000 08:34:30 -0600

mash@mash.engr.sgi.com (John R. Mashey) writes:
There were also 360/50 configurations (standard 2 microsecond memory) that had Ampex add-on memory ... and possibly some 360/75 configurations as well.
Some shops had software that would not only execute programs directly in the slower memory ... but would also do things like copy programs down to higher speed memory before execution.
360/67s were 360/65s with virtual memory hardware added (8-entry fully associative table look-aside buffer and other stuff). For SMPs there were other differences between the 65 and the 67. The 65 duplex was basically two 65s that had their memory addressing combined and a couple other things. The 67 duplex had a "channel controller" which supported hardware configuration of the channels (i/o buses) and memory boxes ... along with a tri-ported memory bus (independent memory access for the two processors and i/o activity). The different memory organization added slightly to the memory access latency for cpu-intensive workloads. However, a combined cpu-intensive and i/o-intensive workload had higher thruput on a "half-duplex" 360/67 (duplex hardware configured to run as independent processors) than on a "simplex" 360/67.
There was also a custom triplex 360/67 (I think done for Lockheed on a government project) that had a special "channel controller" that was software configurable. In cases of various kinds of faults ... the kernel could re-configure the channel controller to fence off the faulty component.
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ridiculous
Newsgroups: comp.arch
Date: 16 Sep 2000 08:40:51 -0600

Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ridiculous
Newsgroups: comp.arch
Date: 16 Sep 2000 14:38:32 -0600

mash@mash.engr.sgi.com (John R. Mashey) writes:
In order to have common I/O capability, devices had to be "twin-tailed", i.e. each device (or the controller for the device) was connected with two different i/o channels (one for each processor). For devices that weren't "twin-tailed", I/O requests had to be queued for the specific processor that owned the I/O channel the device was connected to.
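As a rough illustration of that routing decision (hypothetical structures and helper names, nothing resembling actual OS/360 code), a minimal C sketch might look like:

/* illustrative sketch only -- hypothetical structures, not OS/360 code */
#include <stdio.h>

struct device {
    const char *name;
    int owner_cpu;      /* processor whose channel the device hangs off */
    int twin_tailed;    /* 1 if also reachable from the other processor's channel */
};

static void start_io(int cpu, struct device *d)
{
    printf("CPU%d: start I/O to %s\n", cpu, d->name);
}

static void queue_for_cpu(int cpu, struct device *d)
{
    printf("queue request for %s on CPU%d\n", d->name, cpu);
}

void issue_io(int this_cpu, struct device *d)
{
    if (d->twin_tailed || d->owner_cpu == this_cpu)
        start_io(this_cpu, d);           /* either processor can drive a twin-tailed device */
    else
        queue_for_cpu(d->owner_cpu, d);  /* single-tailed: only the owning processor's channel reaches it */
}

int main(void)
{
    struct device shared = { "shared-disk", 0, 1 };
    struct device solo   = { "solo-tape",   0, 0 };

    issue_io(1, &shared);   /* started directly from CPU 1 */
    issue_io(1, &solo);     /* must be queued for CPU 0 */
    return 0;
}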
I believe the os/360 mp software relied primarily on a spin-lock on the kernel (supervisor state) code.
The 360/67 duplex channel controller gave each cpu access to all I/O channels in the configuration.
There was a lot of early fine-grain locking work done using the 360/67 duplex configuration at CSC ... culminating in the compare&swap work that eventually showed up in 370s.
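For readers who haven't bumped into it, here is a minimal sketch of the compare&swap locking idea, using C11 atomics purely as a stand-in for the 370 CS instruction (illustrative only, not CP/67 or OS/360 code):

/* minimal sketch -- C11 atomics standing in for the 370 CS instruction */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int lockword = 0;     /* 0 = free, 1 = held */

void spin_lock(void)
{
    int expected;
    do {
        expected = 0;               /* the swap succeeds only if lockword is still 0 */
    } while (!atomic_compare_exchange_weak(&lockword, &expected, 1));
}

void spin_unlock(void)
{
    atomic_store(&lockword, 0);
}

int main(void)
{
    spin_lock();
    printf("lockword = %d (held)\n", atomic_load(&lockword));
    spin_unlock();
    printf("lockword = %d (free)\n", atomic_load(&lockword));
    return 0;
}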
random refs:
https://www.garlic.com/~lynn/93.html#22
https://www.garlic.com/~lynn/94.html#02
https://www.garlic.com/~lynn/98.html#16
https://www.garlic.com/~lynn/99.html#88
https://www.garlic.com/~lynn/99.html#89
https://www.garlic.com/~lynn/99.html#102
https://www.garlic.com/~lynn/99.html#103
https://www.garlic.com/~lynn/99.html#139
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 17 Sep 2000 12:45:13 -0600

I don't know any specific reference there. There is the stuff from 1994
At the time of Interop '88 ... there was a lot of migration of tcp/ip into the commercial sector and many of the networks had "acceptable use policies" that allowed various commercial activities.
In the era of NSFNET1 about that time (& Interop '88) .... there was the NSFNET1 contract funding the NSFNET1 backbone with a dozen or so sites ... but using lots of commercial products (i.e. the service was NSF funded, and used the contract to buy commercial products to implement the service). There was also a lot of commercially "donated" stuff for NSFNET1 (rumours that the value exceeded the funding from NSF). A complicating factor is that the commercial companies probably took a non-profit "donation" tax write-off for the stuff donated to NSFNET1. Possibly as much as anything else, NSFNET needed to have a "non-commercial" acceptable use policy in order to maintain the non-profit tax-write-off status(?).
Many of the tcp/ip implementations & products in the Interop '88 & NSFNET1 era were based on BSD tcp/ip software. A lot of the consideration at the time was about the BSD code being free from the AT&T licensing (not government licensing). I remember the BSD "free" code ... which may or may not have had NSF &/or gov. funding (possibly also the Cornell domain name system stuff?, UofTenn SNMP stuff?, etc) .... was more along the lines of the GNU licensing issues, i.e. the base code is "free" univ & gov. licensed stuff (developed with gov. &/or nonprofit funding support) .... you have to charge for some other added value ... like packaging & support.
At the time of Interop '88, commercial TCP/IP products and services were well established. One of the "big" items that I remember from Interop '88 was that the guy responsible for a lot of the SNMP stuff done at a university was moving into commercializing/productizing SNMP (not too long before this ... it wasn't clear whether the heavyweight network monitoring stuff would win out or whether SNMP would win out).
The commercializing issues regarding HPCC, NREN, etc were more like 6-7 years later.
random refs:
https://www.garlic.com/~lynn/2000d.html#77
https://www.garlic.com/~lynn/2000d.html#71
https://www.garlic.com/~lynn/2000d.html#72
https://www.garlic.com/~lynn/2000d.html#73
https://www.garlic.com/~lynn/2000d.html#74
https://www.garlic.com/~lynn/2000d.html#78
https://www.garlic.com/~lynn/2000d.html#79
https://www.garlic.com/~lynn/2000d.html#80
https://www.garlic.com/~lynn/internet.htm
copyright notice from some bsd tcp/ip software
/ Copyright (c) 1983, 1986 Regents of the University of California. All rights reserved. Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph are duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use acknowledge that the software was developed by the University of California, Berkeley. The name of the University may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED ''AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE. /
Interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ridiculous
Newsgroups: comp.arch,alt.folklore.computers
Date: 20 Sep 2000 23:15:44 -0600

Jan Vorbrueggen writes:
the 370/115 & 370/125 shared a common 9-position memory bus and common microcode engines for dedicated functions. In both systems, dedicated i/o functions (like disk controller, communication controller, etc) were implemented on a dedicated microcode engine and took up one of the nine positions. The 115 implemented the 370 instruction set on the same microcode engine as was used for all the other dedicated tasks ... which provided about 80kips 370 (i.e. at about 10:1, the base engine was about 800kips). The 125 used all the same components as the 115 but used a faster microcode engine for the 370 instruction set, yielding about 120kips 370 (at the 10:1 ratio, native engine about 1.2mips).
VAMPS was a 370/125 configuration that would use up to five of the nine positions for 125 370 engines (the basic 115 & 125 were 370 uniprocessors, even tho the underlying architecture was multiprocessor ... just with different engines for dedicated functions).
I worked on the VAMPS project concurrently with the ECPS effort ... various
references:
https://www.garlic.com/~lynn/94.html#21
https://www.garlic.com/~lynn/94.html#27
https://www.garlic.com/~lynn/94.html#28
https://www.garlic.com/~lynn/2000.html#12
https://www.garlic.com/~lynn/2000c.html#50
https://www.garlic.com/~lynn/2000c.html#76
The SMP work used a single kernel lock ... but a number of kernel functions were migrated into microcode and in some cases offloaded onto dedicated engines (i.e. a lot of paging was offloaded to the disk controller engine). The optimization resulted in a situation where the majority of the workloads had a 90% non-kernel/10% kernel execution ratio. Dispatching of tasks and task queue management was dropped into the engine microcode and ran with "fine-grain" locking on all "370" processor microcode.
When a task required kernel services, an attempt was made to obtain the kernel lock; if the kernel lock was already held (by another processor), a super lightweight request was queued and the processor would look for other work to do.
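A toy C sketch of that requester-side logic (hypothetical routine names; the actual VAMPS support was in microcode) might look something like:

/* toy sketch -- hypothetical names, not the actual VAMPS microcode */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int kernel_lock = 0;   /* 0 = free, 1 = held */

static void do_kernel_service(const char *what)       { printf("kernel service: %s\n", what); }
static void enqueue_kernel_request(const char *what)   { printf("queued for lock holder: %s\n", what); }
static void dispatch_other_work(void)                  { printf("looking for other work\n"); }

void request_kernel_service(const char *what)
{
    int expected = 0;
    if (atomic_compare_exchange_strong(&kernel_lock, &expected, 1)) {
        do_kernel_service(what);          /* got the lock: perform the service */
        atomic_store(&kernel_lock, 0);    /* release (a real holder would drain queued
                                             requests first -- see the sketch further down) */
    } else {
        enqueue_kernel_request(what);     /* lock busy: hand the work to the holder */
        dispatch_other_work();            /* keep this processor doing useful work, no spinning */
    }
}

int main(void)
{
    request_kernel_service("page I/O");
    return 0;
}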
From the 370 instruction set standpoint, the migration of dispatching into the processor "hardware" (actually microcode) resembled some of the i432 work done a number of years later. Misc. refs:
https://www.garlic.com/~lynn/2000c.html#68
https://www.garlic.com/~lynn/2000d.html#10
https://www.garlic.com/~lynn/2000d.html#11
Because of various optimizations and offloading of specific functions to dedicated processors, in a fully configured five-processor system (five 370 engines, nine microcode engines total), the normal aggregate time spent executing kernel instructions (bracketed by the kernel lock) was normally no more than 50% of a single processor. This mitigated the restriction that the single kernel lock limited the aggregate kernel activity thruput to no more than 100% of a single processor.
For various reasons the product never shipped to customers. However, the work was adapted to a standard 370 kernel to support 370/158 & 370/168 SMP multiprocessors. The standard kernel was slightly re-organized so that the parts of the kernel that had been dropped into the microcode (in the VAMPS design) were modified with fine-grain locking support. The remaining portion of the kernel was bracketed with a single (sub-)kernel lock. Code operating under fine-grain locking, when it encountered a situation that required transition to the portion of the kernel with the single lock, would attempt to obtain that lock. If the processor was unable to obtain the lock, it queued a super lightweight kernel request and attempted to find other work. The processor that held the kernel lock, when it finished its current task, would check (& dequeue/execute) pending kernel requests prior to releasing the kernel lock.
This was a significant enhancement over the earlier os/360 SMP work that used a single kernel "spin-lock" (i.e. a task running on a processor needing kernel services would enter a tight "spin-loop" for however long was necessary for the kernel to become available). The careful choice of the kernel functions for fine-grain locking resulted in less than 10% of the kernel being modified for fine-grain locking, but that 10% represented 90% of the time spent executing in the kernel. Furthermore, rather than adopting the single kernel "spin-lock" convention that had been common up until then, the implementation would queue light-weight requests for kernel services (rather than spinning, waiting for those kernel services to become available).
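Sketching the lock-holder side of that protocol (again hypothetical names and C11 atomics, not the shipped VM/370 code): before giving up the (sub-)kernel lock, the holder drains any requests that other processors queued while the lock was held:

/* sketch of the lock-holder side -- hypothetical, not the shipped code */
#include <stdatomic.h>
#include <stdio.h>
#include <stddef.h>

struct kreq {
    struct kreq *next;
    void (*fn)(void *);
    void *arg;
};

static atomic_int kernel_lock = 0;            /* 0 = free, 1 = held */
static struct kreq *_Atomic pending = NULL;   /* requests queued by processors that found the lock busy */

void release_kernel_lock(void)
{
    struct kreq *r;

    /* drain queued requests before releasing; re-check so a request queued
       during the drain isn't stranded until the next lock holder comes along */
    while ((r = atomic_exchange(&pending, NULL)) != NULL) {
        while (r != NULL) {
            struct kreq *next = r->next;
            r->fn(r->arg);                    /* run the deferred kernel service */
            r = next;
        }
    }
    atomic_store(&kernel_lock, 0);            /* now give the lock up */
}

static void show(void *arg) { printf("deferred kernel service: %s\n", (const char *)arg); }

int main(void)
{
    struct kreq req = { NULL, show, "close spool file" };
    atomic_store(&kernel_lock, 1);            /* pretend we hold the lock */
    atomic_store(&pending, &req);             /* pretend another processor queued a request */
    release_kernel_lock();
    return 0;
}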
The changes were implemented on an existing kernel release &
deployed on the internal HONE systems in fall '77 (supposedly at the
time, the largest "single system image" configuration in the world,
eight multiprocessor complexes all sharing the same disk pool), misc
hone refs:
https://www.garlic.com/~lynn/2000.html#75
https://www.garlic.com/~lynn/2000c.html#30
https://www.garlic.com/~lynn/2000c.html#49
and then incorporated into a later release for customers in '78
misc/random refs:
https://www.garlic.com/~lynn/2000.html#78
https://www.garlic.com/~lynn/2000b.html#38
https://www.garlic.com/~lynn/2000b.html#65
https://www.garlic.com/~lynn/2000d.html#47
https://www.garlic.com/~lynn/2000d.html#82
https://www.garlic.com/~lynn/2000e.html#4
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ridiculous
Newsgroups: comp.arch,alt.folklore.computers
Date: 21 Sep 2000 07:35:14 -0600

plugh@NO.SPAM.PLEASE (Caveman) writes:
The 158 & 168 configurations mentioned were 32kbyte & 64kbyte cache, two-processor machines. Queuing a request for the processor that already held the kernel lock:
1) avoided the spin lock,
2) the high-use portions of the kernel had fine-grain locking,
3) it tended to preserve "kernel" cache hits (i.e. interrupts, task switching, & kernel/non-kernel transitions all tended to create lots of cache misses because of locality transitions; if a processor already had the kernel lock, queueing a request for that processor tended to re-use kernel code already in the cache),
4) the kernel lock was against lower-usage, longer path length operations (which, once executed, would have preserved very few non-kernel cache lines),
5) processes that tended to make lower-usage, longer-path kernel calls tended to be making various i/o requests; the request queueing tended to migrate them to the same processor ... leaving processes that made few such calls on the other processor ... tending to improve cache-hit ratios on both processors (i.e. the processor switching implied by the queueing had a slight tendency to cluster processes with similar characteristics on the same processor; while at the "micro" level process switching would seem bad for cache hits, the clustering effect at the "macro" level actually improved overall cache hits and improved thruput, although that was somewhat a characteristic of the relative cache sizes and kernel pathlengths involved).
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is a VAX a mainframe?
Newsgroups: alt.folklore.computers
Date: 22 Sep 2000 09:44:38 -0600

jmfbahciv writes:
The one exception was the 360 model 67 SMPs, which had both shared memory and shared I/O implementations (the 67 was also the only 360 model that supported virtual memory).
The controller/device multiple connections was also what was used to implement IBM 360 clusters (i.e. multiple 360 processors, not sharing memory, but sharing I/O devices).
The IBM 370 line came out (eventually) with virtual memory as standard, but all of the IBM 370 SMP implementations had non-shared I/O channels. The 370 line also had asymmetric multiprocessing implementations: multiple processors sharing memory, but some processors with no I/O capability at all.
misc. refs from the Numa/SMP/etc discussion on comp.arch
https://www.garlic.com/~lynn/2000e.html#2
https://www.garlic.com/~lynn/2000e.html#4
https://www.garlic.com/~lynn/2000e.html#6
https://www.garlic.com/~lynn/2000e.html#7
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Checkpointing (was spice on clusters)
Newsgroups: comp.arch
Date: 22 Sep 2000 10:07:43 -0600

nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Simpler variations were restarting on a machine in the same datacenter and/or on the same machine (after it had been brought back into service); this could also be used for less-planned outages, load balancing, etc.
misc. refs:
https://www.garlic.com/~lynn/99.html#10
https://www.garlic.com/~lynn/2000.html#64
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 22 Sep 2000 20:33:05 -0600

... misc refs from the '80s.
slightly related prior posts:
https://www.garlic.com/~lynn/2000d.html#70
https://www.garlic.com/~lynn/2000d.html#72
https://www.garlic.com/~lynn/2000d.html#73
https://www.garlic.com/~lynn/2000d.html#76
================================================================
misc. announcement
Date: 4 January 1988, 14:12:35 EST
To: distribution
Subject: NSFNET Technical Review Board Kickoff Meeting 1/7/88
On November 24th, 1987 the National Science Foundation announced that
MERIT, supported by IBM and MCI was selected to develop and operate
the evolving NSF Network and gateways integrating 12 regional
networks. The Computing Systems Department at IBM Research will
design and develop many of the key software components for this
project including the Nodal Switching System, the Network Management
applications for NETVIEW and some of the Information Services Tools.
I am asking you to participate on an IBM NSFNET Technical Review
Board. The purpose of this Board is to both review the technical
direction of the work undertaken by IBM in support of the NSF Network,
and ensure that this work is proceeding in the right direction. Your
participation will also ensure that the work complements our strategic
products and provides benefits to your organization. The NSFNET
project provides us with an opportunity to assume leadership in
national networking, and your participation on this Board will help
achieve this goal.
... snip ... top of post, old email index, NSFNET email
=================================
John Markoff, NY Times, 29 December 1988, page D1

In an article titled 'A Supercomputer in Every Pot' a proposal for a nationwide 'data superhighway' is discussed. The following points are made:
- The network would link supercomputers at national laboratories (Princeton, NJ; College Park, MD; Ithaca, NY; Pittsburgh, PA; Chicago, Ill; Lincoln, Neb; Boulder, Colo; Salt Lake City, Utah; San Diego, CA; Palo Alto, CA; and Seattle, WA).
- This network would also be at the top of a hierarchy of a number of slower speed networks operated by a number of government agencies.
- Fiber-optic cable operated at 3 gigabits/sec.
- Previous attempt to meet the need is the two-year-old NSFNET research network which links five of the supercomputer laboratories with 1.5 Mbit/sec lines.
- Feeling among the academic community that federal funding and coordination are necessary, with it serving as a model for later commercial efforts.
- Federal legislation for initial financing and construction of a National Research Network introduced in October 1988 by Senator Albert Gore.
- Five-year development and implementation period mentioned for protocols, hardware, etc.
- A proposal for a regional network linking Univ of Pa, Princeton, and IBM's Watson Research Labs (Hourglass Project) was mentioned as potentially providing a 'preview of some of the services' of the national network.
=================================================================================
Subject: Paving way for data 'highway'
Carl M Cannon, San Jose Mercury News, 17 Sep 89, pg 1E

National High-Performance Computer Technology Act of 1989
- US Senate Committee on Commerce, Science and Transportation
- $1.8 billion over next 5 years
- Research and development of supercomputer hardware/software/networks
- Senator Albert Gore, Jr and 8 co-sponsors
  . "My bill will definitely get out of committee ... pass in this congress ... maybe this year."
- Also introduced in the House of Representatives
- Newly declared allies in George Bush's Office of Science and Technology Policy

Senator Albert Gore, Jr
- 10 years old when Senator Al Gore Sr planned the nation's interstate highway system
- "I was impressed ... I noticed how the ride from Carthage Tennessee to Washington DC got shorter and shorter."
- Al Gore Jr is the main proponent behind the US superhighway of tomorrow
  . nationwide network
  . transporting billions of pieces of data per second
  . among scholars, students, scientists, and even children
- "A fundamental change is taking place ... no longer rely on ink and paper to record information ... we count on computer systems to record information digitally."
- "The concept of a library will have to change with technology ... new approach .... the 'digital library'"
- Coined the "computer superhighway" 9 years ago
  . 1986: shepherded legislation thru Congress while in the House of Rep - prepare a report on hi-performance computers and networking
  . 1987: Report signed by William R Graham, Pres Reagan's science adviser
    - Urged a coordinated, long-range strategy
    - support high performance computer research
    - apply the research to the rest of the nation, including industry
  . 1989: White House Office of Science and Technology Policy - followup report was a minor blockbuster in technology policy
- Senators have received a series of 3 presentations
  . a primer on the power of supercomputers
  . "Wiring the World"; What's possible thru networks
  . "The freight that can be carried on this highway"
- Joe B Wyatt, Vanderbilt University chancellor
  . University librarian estimated the world's authors would produce 1 million new titles per year by the year 2000
  . This is the "freight" a national supercomputer network would carry
  . Not Vanderbilt's existing library
  . "store and transport to those who need it"
- James H Billington, librarian of Congress
  . "88 million items in the Library of Congress"
  . "Largest collection of recorded information and knowledge ever assembled in one place here on Capitol Hill"
  . "The nation's most important single resource for the information age"
  . "establishment of a national research and education network would give an immense boost to the access of these materials"
  . "allow the LofC to provide much more of its unequaled data and resources than can now be obtained only by visiting Washington"
- John Seely Brown, Xerox PARC vice president
  . "power of information technology to help meet the demand for knowledge is unquestionable"
  . But knowledge workers are already overburdened by
    - information explosion
    - increasing complexity
    - ever-accelerating pace of change
- Where networks are already developed, the results can be stunning
  . US Geological Survey demonstrated
    - instantaneous combination of 15 types of electronic maps
    - helps municipalities figure out where they can safely authorize drilling wells for drinking water
    - Computations without the database or high-performance computers would be impossible
D. Allan Bromley, science adviser to George Bush
- recommended spending the same amount of money Gore is requesting
  . supercomputers and supercomputer networks
- Still not a formal budget proposal
- Four agencies are spending $500 million per year on computing research
  . Defense Advanced Research Projects Agency
  . Department of Energy
  . National Aeronautics and Space Administration
  . National Science Foundation
- Bromley's recommendation makes it an easier issue for Congress to support
  . Either Al Gore's bill, or individual appropriations to the agencies
  . Sends a signal to the agencies that they have allies for these projects

Stephen L Squires, DARPA chief scientist for Information, Science, Technology
- "There is a real urgency for this"
- US is in a critical stage in technology development
- "wait 2-3 years ... a lot of people will be starved for resources"
- "enormous demand, not just in computing, but in scientific fields"
- "People can see what would make their dreams come true"
=======================================================================
NATIONAL CENTER FOR SUPERCOMPUTING APPLICATIONS (NCSA) VISIT, Oct. 26, 1989

NCSA is part of the University of Illinois and one of the leading-edge engineering and scientific applications centers. For example, NCSA demonstrated visualization at Siggraph '89 (Boston) across multiple hardware platforms (Cray-2, SUN, Alliant FX-80, Ultranet, AT&T) and across long distances (Urbana-Boston). NCSA work has been cited in numerous proposals of technology advancement such as Gore's 3-gigabit "Data Highway" congressional bill.

Representing NCSA will be:
Dr. Larry Smarr - Director of NCSA; Prof. of Physics & Astronomy
Dr. Karl-Heinz Winkler - Deputy Director for Science, Technology and Education of NCSA; Professor of Physics, Astronautical and Aeronautical Engineering, Mechanical & Industrial Engineering
Dr. Melanie Loots - Research Scientist of NCSA, Computational Chemistry
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 23 Sep 2000 10:43:41 -0600

toor@y1.jdyson.net (John S. Dyson) writes:
This was during a period of "high technology" gov. sponsorship and churn. It was a time of gov. supporting (univ) supercomputing centers (in some cases the money went for the buildings; there was some talk regarding the UC system that it went to the campus that was "due" the next new building), high-speed gigabit fiber networks, and HDTV.
NSFNET backbone (56kbit, T1, then T3) was coming at a time when there was a vast explosion in networking ... both non-profit & profit. The contribution of the prior NSF activity supporting CSNET & gov. support for various university TCP/IP activities in the early '80s ... along with the introduction of workstations & PCs ... fueled the explosion ... as well as the online service providers (the internet is as much a service issue for the public as it is a networking technology issue).
There was hope that the gov. support for high-end projects in HPCC (supercomputers), NREN (gigabit fiber networks), & HDTV (electronic component manufacturing) would have a trickle down effect into the US economy.
In fact, the reverse seemed to be happening. Once the low-end stuff (consumer online access) started hitting critical mass ... the commercial funding trickled "up" into the high-end stuff (at the time, more recognized in the HDTV area).
It isn't even clear how much of the NREN funding was actually spent (I remember various corporations commenting about being asked to donate commercial products to participate in the NIIT "test bed"). Also, HPCC supercomputers started to shift to being large arrays of workstation &/or PC processor chips (workstations & PCs enabled the consumer online market as well as providing the basis for today's supercomputers).
This was probably more recognized in the various HDTV gov. activities ... which appeared to be more slanted towards attempting to bias standards & regulations to help US corporations (as opposed to direct technology funding). The issue was that the TV market (at the time) was thousands of times bigger than the computer market ... and HDTV components would be as advanced as anything in the computer market ... whoever dominated the HDTV/TV market possibly would take over the whole electronics industry (computers, networking, components, etc).
At least in some areas, there started to be a shift from direct gov. technology funding (i.e. fund a university to write TCP/IP code) to targeted services using commercial products (aka the NSFNET backbone). It isn't to say that it was bad to try and continue (in the 80s/90s) the gov. funding of research & strategic technologies ... it was just that commercial market penetration (for some areas) had reached a point by the mid to late 80s where commercial & for-profit funding started to dominate (i.e. gov. funding has tended to be more productive in areas of pure research ... and not as productive later in the technology cycle when it has started to be commercialized).
If there was any doubt at the time ... Interop '88 was a large
commercial "internet" show ... where significant numbers of univ
researchers started showing up in commercial companies.
INTEROP 88: The 3rd TCP/IP Interoperability Conference and Exhibition
will be held at the Santa Clara Convention Center and Doubletree Hotel
from September 26 through 30th, 1988. The format is 2 days of
tutorials followed by 3 days of technical session (16 in all). For
the first time, there will also be an Interoperability exhibition
where vendors will show TCP/IP systems on a "Show and Tel-Net" which
additionally will be connected to the Internet.
A number of vendors, known as the "Netman" group will be demonstrating
an experimental network management system based on the ISO CMIP/CMIS
protocols.
For more information on the conference contact:
Advanced Computing Environments
480 San Antonio Road, Suite 100
Mountain View, CA 94040
(415) 941-3399
The show had four "backbone" networks with machines in many booths
connected to two or more of the backbone networks. As the machines
were starting to be connected on Sunday ... the networks started to
crash and burn. Early Monday morning (the day the show started) the
problem was identified.
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
& as usual the random urls:
https://www.garlic.com/~lynn/94.html#34
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/99.html#40
https://www.garlic.com/~lynn/2000c.html#12
https://www.garlic.com/~lynn/2000c.html#21
https://www.garlic.com/~lynn/2000d.html#2
https://www.garlic.com/~lynn/2000d.html#3
https://www.garlic.com/~lynn/2000d.html#70
https://www.garlic.com/~lynn/2000d.html#71
https://www.garlic.com/~lynn/2000d.html#72
https://www.garlic.com/~lynn/2000d.html#73
https://www.garlic.com/~lynn/2000d.html#76
https://www.garlic.com/~lynn/2000d.html#77
https://www.garlic.com/~lynn/2000d.html#78
https://www.garlic.com/~lynn/2000e.html#5
https://www.garlic.com/~lynn/2000e.html#10
https://www.garlic.com/~lynn/internet.htm
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Restricted Y-series PL/1 manual? (was Re: Integer overflow exception)
Newsgroups: comp.arch
Date: 23 Sep 2000 16:36:00 -0600

"Rostyslaw J. Lewyckyj" writes:
I was at a university where IBM did a beta installation of PL/1 and there were all sorts of security(?) restrictions. All evidence of its existence was supposed to have been obliterated after the end of the trial period ... and there was some incident where there was suspicion that somebody at the university had made an (unauthorized) copy.
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: internet preceeds Gore in office.
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 24 Sep 2000 09:02:21 -0600

toor@y1.jdyson.net (John S. Dyson) writes:
One of the successes of the internal network was essentially a gateway layer at each node with heterogeneous networking support (from the start). While the original arpanet introduced packets ... it was a homogeneous network. It wasn't until the IP-layer came along that it got real gateways and heterogeneous network support ... which ... with BSD TCP/IP support ported to workstations and PCs ... allowed it to really start to take off.
misc: refs:
https://www.garlic.com/~lynn/99.html#39
https://www.garlic.com/~lynn/99.html#44
https://www.garlic.com/~lynn/99.html#112
https://www.garlic.com/~lynn/2000e.html#5
Internal Net posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: internet preceeds Gore in office.
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 25 Sep 2000 07:55:33 -0600

jmfbahciv writes:
JES/NJE networking was introduced in the mid-70s with networking "node" definitions mapped into the HASP/JES 256-entry "pseudo-device" table (i.e. HASP originally defined pseudo reader/printer/punch devices with the table). A typical JES system might have 40-50 defined pseudo devices (for spool), leaving 200 or so entries available for defining network nodes.
The internal network was already larger than the JES/NJE limit by the time the original support was implemented. JES increased the number of definition slots to 999 (from 256) after the internal network had exceeded 1000 nodes.
The internal network gateways used an eight-byte alphanumeric field.
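Purely as illustration (hypothetical C structures, not the real HASP/JES control blocks), the difference between indexing nodes through the shared 256-entry pseudo-device table and naming them with an eight-byte field looks roughly like this:

/* illustration only -- hypothetical structures, not actual HASP/JES control blocks */
#include <stdio.h>
#include <string.h>

/* HASP/JES style: node definitions share the 256-entry pseudo-device table
   with the spool pseudo reader/printer/punch devices (one-byte index) */
struct jes_entry {
    unsigned char in_use;
    unsigned char is_node;   /* node definition vs. spool pseudo device */
    char          name[8];
};
static struct jes_entry jes_table[256];

/* internal-network style: a node is identified by an 8-character name, so the
   number of nodes isn't tied to a one-byte table index */
struct net_node {
    char name[8];            /* blank-padded, e.g. "CAMBRIDG" */
};

int main(void)
{
    int spool_devices = 45;  /* typical 40-50 pseudo devices defined for spool */
    int total = (int)(sizeof jes_table / sizeof jes_table[0]);

    printf("slots left for JES node definitions: %d\n", total - spool_devices);

    struct net_node n;
    memcpy(n.name, "CAMBRIDG", 8);
    printf("internal network node: %.8s\n", n.name);
    return 0;
}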
The internal network gateways were even used to interconnect different HASP/JES/NJE systems operating at different release levels. HASP/JES/NJE had bit-specific header fields that could vary from release to release (and frequently different releases wouldn't interoperate, and in some cases a JES/NJE at one release could cause the operating system in another machine running a different JES/NJE release to crash). The internal network gateways would have gateway support for all of the various HASP/JES/NJE protocol flavors and if necessary do the bit-field conversions from one release to a different release.
The early IBM HASP/JES methodology suffered from the same vision limitation as the early ARPANet work ... supporting homogeneous (bit-specific) networking w/o the concept of gateways. In contrast, the internal network methodology incorporated the concept of gateways from just about the original implementation.
misc. ref:
https://www.garlic.com/~lynn/99.html#113
as an aside, my wife does have a patent (4195351, 3/25/1980) on some
of the early (token passing) LAN technology
https://www.garlic.com/~lynn/2000c.html#53
Internal Net posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: internet preceeds Gore in office.
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: 25 Sep 2000 10:11:24 -0600

some misc. other refs:
early internal network started out with cpremote on cp/67 ... at 545 tech. sq (including talking to 1130). one of the early distributed development projects was support and testing of relocation hardware on the 370/145.
370s were initially delivered w/o virtual addressing enabled (and in some cases even present). 145s had the hardware support ... even tho it wasn't enabled. there was even a flap about the "lights" on the 145 front panel (before the announcement of relocation on 370 hardware). One of the lights was labeled "dlat" ... (indicating address translation mode).
there was also a "pentagon papers" type flap where a document on address translation was leaked. there was a big internal investigation ... and after that all (internal) copiers were modified to have an identification installed (copies made would have the copier ID printed on each page copied).
In any case, there was a distributed project between the 145 plant in Endicott (NY) and 545 tech sq (cambridge, mass) ... using the network support. It included having a version of CP/67 (360/67) modified to simulate the 370 relocation architecture (different from the 360/67 relocation architecture) on a 360/67. Then a different CP/67 was modified so that it "ran" using the 370 relocation architecture (in the simulated 370 virtual machine running on a real 360/67).
This modified CP/67 was operational and running a year before there was real 370/145 hardware available. The modified CP/67 "I" ... was used as the initial test of the 370/145 when the hardware became available. The initial IPL/boot failed ... it turned out that the 145 engineers had implemented part of the relocation architecture wrong. The modified CP/67 was temporarily patched to correspond with the wrong hardware implementation until the engineers were able to correct the machine.
CPREMOTE was eventually renamed VNET and, after several enhancements and wide deployment internally, was made available to customers in the mid-70s (as part of a combined VNET & JES2/NJE offering).
Parts of this were also the basis for BITNET in north america and EARN in europe (the number of nodes quoted for the internal network ... only included those machines on the internal corporate network ... and none of the BITNET or EARN nodes).
Internal Net posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First OS with 'User' concept?
Newsgroups: alt.folklore.computers
Date: Wed, 27 Sep 2000 15:05:57 GMT

Terry Kennedy writes:
I'm slowly unpacking stuff that has been in storage for over a year. Somewhere in all the stuff is a CP/67 program logic manual (PLM).
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.25 lost out to the Internet - Why?
Newsgroups: alt.folklore.computers,comp.dcom.telecom.tech
Date: Wed, 27 Sep 2000 17:10:50 GMT

Lizard Blizzard writes:
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: Wed, 27 Sep 2000 21:03:22 GMT

Anne & Lynn Wheeler writes:
announcing the switch-over to TCP/IP and some early comments about the
success of the switch. Note that the "IP" introduced internetworking
and gateways ... easing the integration of a large number of different
networks.
Date: 30 Dec 1982 14:45:34 EST (Thursday)
From: Nancy Mimno <mimno@Bbn-Unix>
Subject: Notice of TCP/IP Transition on ARPANET
To: csnet-liaisons at Udel-Relay
Cc: mimno at Bbn-Unix
Via: Bbn-Unix; 30 Dec 82 16:07-EST
Via: Udel-Relay; 30 Dec 82 13:15-PDT
Via: Rand-Relay; 30 Dec 82 16:30-EST
ARPANET Transition 1 January 1983
Possible Service Disruption
---------------------------------
Dear Liaison,
As many of you may be aware, the ARPANET has been going through
the major transition of shifting the host-host level protocol
from NCP (Network Control Protocol/Program) to TCP-IP
(Transmission Control Protocol - Internet Protocol). These two
host-host level protocols are completely different and are
incompatible. This transition has been planned and carried out
over the past several years, proceeding from initial test
implementations through parallel operation over the last year,
and culminating in a cutover to TCP-IP only 1 January 1983. DCA
and DARPA have provided substantial support for TCP-IP
development throughout this period and are committed to the
cutover date.
The CSNET team has been doing all it can to facilitate its part
in this transition. The change to TCP-IP is complete for all the
CSNET host facilities that use the ARPANET: the CSNET relays at
Delaware and Rand, the CSNET Service Host and Name Server at
Wisconsin, the CSNET CIC at BBN, and the X.25 development system
at Purdue. Some of these systems have been using TCP-IP for
quite a while, and therefore we expect few problems. (Please
note that we say "few", not "NO problems"!) Mail between Phonenet
sites should not be affected by the ARPANET transition. However,
mail between Phonenet sites and ARPANET sites (other than the
CSNET facilities noted above) may be disrupted.
The transition requires a major change in each of the more
than 250 hosts on the ARPANET; as might be expected, not all
hosts will be ready on 1 January 1983. For CSNET, this means
that disruption of mail communication will likely result between
Phonenet users and some ARPANET users. Mail to/from some ARPANET
hosts may be delayed; some host mail service may be unreliable;
some hosts may be completely unreachable. Furthermore, for some
ARPANET hosts this disruption may last a long time, until their
TCP-IP implementations are up and working smoothly. While we
cannot control the actions of ARPANET hosts, please let us know
if we can assist with problems, particularly by clearing up any
confusion. As always, we are or (617)497-2777.
Please pass this information on to your users.
Respectfully yours,
Nancy Mimno
CSNET CIC Liaison
... snip ... top of post, old email index
================================================
================================================
some observations about the success of the switch-over
Date: 02/02/83 23:49:45
To: CSNET mailing list
Subject: CSNET headers, CSNET status
You may have noticed that since ARPANET switched to TCP/IP and the
new version of software on top of it, message headers have become
ridiculously long. Some of it is because of tracing information
that has been added to facilitate error isolation and "authentication",
and some of it I think is a bug (the relay adds a 'From' and a 'Date'
header although there already are headers with that information in
the message). This usually doesn't bother people on the ARPANET
because they have smart mail reading programs that understand the
headers and only display the relevant ones. I have proposed a
mail reader/sender program that understands about ARPANET headers
(RFC822) as a summer project, so maybe we will sometime enjoy the
same priviledge.
The file CSNET STATUS1 on the CSNET disk (see instructions below
for how to access it) contains some clarification of the problems
that have been experienced with the TCP/IP conversion. Here is a
summary:
- Nodes that don't yet talk TCP (but the old NCP) can be accessed
through the UDel-Relay. So if you think you have problems reaching
a node because of this, append @Udel-Relay to the ARPANET address.
- You can find out about the status of hosts (e.g., if they run
TCP or not) by sending ANY MESSAGE to Status@UDel-Relay (capitalization
is NOT significant).
- If your messages are undeliverable, you get a notice after two days,
and your messages get returned after 4 days.
- Avoid using any of the fancy address forms allowed by the new
header format (RFC822).
- The TCP transition was a lot more trouble than the ARPANET people had
anticipated.
... snip ... top of post, old email index
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Al Gore The Father of the Internet?^
Newsgroups: alt.folklore.computers,talk.politics.misc
Date: Wed, 27 Sep 2000 21:05:21 GMT

Anne & Lynn Wheeler writes:
From: geoff@FERNWOOD.MPK.CA.US (the terminal of Geoff Goodfellow)
Newsgroups: comp.protocols.tcp-ip
Subject: THE INTERNET CRUCIBLE - Volume 1, Issue 1
Date: 1 Sep 89 03:12:06 GMT

THE CRUCIBLE                                      INTERNET EDITION
an eleemosynary publication of the                August, 1989
Anterior Technology IN MODERATION NETWORK(tm)     Volume 1 : Issue 1
Geoff Goodfellow
Moderator

In this issue: A Critical Analysis of the Internet Management Situation

THE CRUCIBLE is an irregularly published, refereed periodical on the Internet. The charter of the Anterior Technology IN MODERATION NETWORK is to provide the Internet and Usenet community with useful, instructive and entertaining information which satisfies commonly accepted standards of good taste and principled discourse. All contributions and editorial comments to THE CRUCIBLE are reviewed and published without attribution. Cogent, cohesive, objective, frank, polemic submissions are welcomed. Mail contributions/editorial comments to: crucible@fernwood.mpk.ca.us

--------------------------------------------------------------------------

A Critical Analysis of the Internet Management Situation: The Internet Lacks Governance

ABSTRACT

At its July 1989 meeting, the Internet Activities Board made some modifications in the management structure for the Internet. An outline of the new IAB structure was distributed to the Internet engineering community by Dr. Robert Braden, Executive Director. In part, the open letter stated: "These changes resulted from an appreciation of our successes, especially as reflected in the growth and vigor of the IETF, and in rueful acknowledgment of our failures (which I will not enumerate). Many on these lists are concerned with making the Internet architecture work in the real world." In this first issue of THE INTERNET CRUCIBLE we will focus on the failures and shortcomings in the Internet. Failures contain the lessons one often needs to achieve success. Success rarely leads to a search for new solutions. Recommendations are made for short and long term improvements to the Internet.

A Brief History of Networking

The Internet grew out of the early pioneering work on the ARPANET. This influence was more than technological, the Internet has also been significantly influenced by the economic basis of the ARPANET. The network resources of the ARPANET (and now Internet) are "free". There are no charges based on usage (unless your Internet connection is via an X.25 Public Data Network (PDN) in which case you're well endowed, or better be). Whether a site's Internet connection transfers 1 packet/day or a 1M packets/day, the "cost" is the same. Obviously, someone pays for the leased lines, router hardware, and the like, but this "someone" is, by and large, not the same "someone" who is sending the packets. In the context of the Research ARPANET, the "free use" paradigm was an appropriate strategy, and it has paid handsome dividends in the form of developing leading edge packet switching technologies. Unfortunately, there is a significant side-effect with both the management and technical ramifications of the current Internet paradigm: there is no accountability, in the formal sense of the word. In terms of management, it is difficult to determine who exactly is responsible for a particular component of the Internet. From a technical side, responsible engineering and efficiency has been replaced by the purchase of T1 links. Without an economic basis, further development of short-term Internet technology has been skewed.
The most interesting innovations in Internet engineering over the last five years have occurred in resource poor, not resource rich, environments. Some of the best known examples of innovative Internet efficiency engineering are John Nagle's tiny-gram avoidance and ICMP source-quench mechanisms documented in RFC896, Van Jacobsen's slow-start algorithms and Phil Karn's retransmission timer method. In the Nagle, Jacobsen and Karn environments, it was not possible or cost effective to solve the performance and resource problems by simply adding more bandwidth -- some innovative engineering had to be done. Interestingly enough, their engineering had a dramatic impact on our understanding of core Internet technology. It should be noted that highly efficient networks are important when dealing with technologies such as radio where there is a finite amount of bandwidth/spectrum to be had. As in the Nagle, Jacobsen and Karn cases, there are many environments where adding another T1 link can not be used to solve the problem. Unless innovation continues in Internet technology, our less than optimal protocols will perform poorly in bandwidth or resource constrained environments. Developing at roughly the same time as Internet technology have been the "cost-sensitive" technologies and services, such as the various X.25-based PDNs, the UUCP and CSNET dial-up networks. These technologies are all based on the notion that bandwidth costs money and the subscriber pays for the resources used. This has the notable effect of focusing innovation to control costs and maximize efficiency of available resources and bandwidth. Higher efficiency is achieved by concentrating on sending the most amount of information through the pipe in the most efficient manner thereby making the best use of available bandwidth/cost ratio. For example, bandwidth conservation in the UUCP dial-up network has multiplied by leaps and bounds in the modem market with the innovation of Paul Baran's (the grandfather of packet switching technology) company, Telebit, which manufactures a 19.2KB dial-up modem especially optimized for UUCP and other well known transfer protocols. For another example, although strictly line-at-a-time terminal sessions are less "user friendly" than character-oriented sessions, they make for highly efficient use of X.25 PDN network resources with echoing and editing performed locally on the PAD. While few would argue the superiority of X.25 and dial-up CSNET and UUCP, these technologies have proved themselves both to spur innovation and to be accountable. The subscribers to such services appreciate the cost of the services they use, and often such costs form a well-known "line item" in the subscriber's annual budget. Nevertheless, the Internet suite of protocols are eminently successful, based solely on the sheer size and rate of growth of both the Internet and the numerous private internets, both domestically and internationally. You can purchase internet technology with a major credit card from a mail order catalog. Internet technology has achieved the promise of Open Systems, probably a decade before OSI will be able to do so. Failures of the Internet The evolution and growth of Internet technology have provided the basis for several failures. We think it is important to examine failures in detail, so as to learn from them. History often tends to repeat itself. Failure 1:- Network Nonmanagement The question of responsibility in todays proliferated Internet is completely open. 
For the last three years, the Internet has been suffering from non-management. While few would argue that a centralized czar is necessary (or possible) for the Internet, the fact remains there is little to be done today besides finger-pointing when a problem arises. In the NSFNET, MERIT is in charge of the backbone and each regional network provider is responsible for its respective area. However, trying to debug a networking problem across lines of responsibility, such as intermittent connectivity, is problematic at best. Consider three all too true refrains actually heard from NOC personal at the helm: "You can't ftp from x to y? Try again tomorrow, it will probably work then." "If you are not satisfied with the level of [network] service you are receiving you may have it disconnected." "The routers for network x are out of table space for routes, which is why hosts on that network can't reach your new (three-month old) network. We don't know when the routers will be upgraded, but it probably won't be for another year." One might argue that the recent restructuring of the IAB may work towards bringing the Internet under control and Dr. Vinton G. Cerf's recent involvement is a step in the right direction. Unfortunately, from a historical perspective, the new IAB structure is not likely to be successful in achieving a solution. Now the IAB has two task forces, the Internet Research Task Force (IRTF) and the Internet Engineering Task Force (IETF). The IRTF, responsible for long-term Internet research, is largely composed of the various task forces which used to sit at the IAB level. The IETF, responsible for the solution of short-term Internet problems, has retained its composition. The IETF is a voluntary organization and its members participate out of self interest only. The IETF has had past difficulties in solving some of the Internet's problems (i.e., it has taken the IETF well over a year to not yet produce RFCs for either a Point-To-Point Serial Line IP or Network Management enhancements). It is unlikely that the IETF has the resources to mount a concerted attack against the problems of today's ever expanding Internet. As one IETF old-timer put it: "No one's paid to go do these things, I don't see why they (the IETF management) think they can tell us what to do" and "No one is paying me, why should I be thinking about the these things?" Even if the IETF had the technical resources, many of the Internet's problems are also due to lack of "hands on" management. The IETF o Bites off more than it can chew; o Sometimes fails to understand a problem before making a solution; o Attempts to solve political/marketing problems with technical solutions; o Has very little actual power. The IETF has repeatedly demonstrated the lack of focus necessary to complete engineering tasks in a timely fashion. Further, the IRTF is chartered to look at problems on the five-year horizon, so they are out of the line of responsibility. Finally, the IAB, per se, is not situated to resolve these problems as they are inherent to the current structure of nonaccountability. During this crisis of non-management, the Internet has evolved into a patch quilt of interconnected networks that depend on lots of seat-of-the-pants flying to keep interoperating. It is not an unusual occurrence for an entire partition of the Internet to remain disconnected for a week because the person responsible for a key connection went on vacation and no one else knew how to fix it. 
This situation is but one example of an endemic problem of the global Internet. Failure 2:- Network Management The current fury over network management protocols for TCP/IP is but a microcosm of the greater Internet vs. OSI debate going on in the marketplace. While everyone in the market says they want OSI, anyone planning on getting any work done today buys Internet technology. So it is with network management, the old IAB made the CMOT an Internet standard despite the lack of a single implementation, while the only non-proprietary network management protocol in use in the Internet is the SNMP. The dual network management standardization blessings will no doubt have the effect of confusing end-users of Internet technology--making it appear there are two choices for network management, although only one choice, the SNMP has been implemented. The CMOT choice isn't implemented, doesn't work, or isn't interoperable. To compound matters, after spending a year trying to achieve consensus on the successor to the current Internet standard SMI/MIB, the MIB working group was disbanded without ever producing anything: the political climate prevented them from resolving the matter. (Many congratulatory notes were sent to the chair of the group thanking him for his time. This is an interesting new trend for the Internet--congratulating ourselves on our failures.) Since a common SMI/MIB could not be advanced, an attempt was made to de-couple the SNMP and the CMOT (RFC1109). The likely result of RFC1109 will be that the SNMP camp will continue to refine their experience towards workable network management systems, whilst the CMOT camp will continue the never-ending journey of tracking OSI while producing demo systems for trade shows exhibitions. Unfortunately the end-user will remain ever confused because of the IAB's controversial (and technically questionable) decision to elevate the CMOT prior to implementation. While the network management problem is probably too large for the SNMP camp to solve by themselves they seem to be the only people who are making any forward progress. Failure 3:- Bandwidth Waste Both the national and regional backbone providers are fascinated with T1 (and now T3) as the solution towards resource problems. T1/T3 seems to have become the Internet panacea of the late 80's. You never hear anything from the backbone providers about work being done to get hosts to implement the latest performance/congestion refinements to IP, TCP, or above. Instead, you hear about additional T1 links and plans for T3 links. While T1 links certainly have more "sex and sizzle" than efficient technology developments like slow-start, tiny gram avoidance and line mode telnet, the majority of users on the Internet will probably get much more benefit from properly behaving hosts running over a stable backbone than the current situation of misbehaving and semi-behaved hosts over an intermittent catenet. Failure 4:- Routing The biggest problem with routing today is that we are still using phase I (ARPANET) technology, namely EGP. The EGP is playing the role of routing glue in providing the coupling between the regional IGP and the backbone routing information. It was designed to only accommodate a single point of attachment to the catenet (which was all DCA could afford with the PSNs). However with lower line costs, one can build a reasonably inexpensive network using redundant links. 
However, the EGP does not provide enough information, nor does the model it is based upon support multiple connections between autonomous systems. Work is progressing in the Interconnectivity WG of the IETF to replace EGP. They are in the process of redefining the model to solve some of the current needs. BGP, or the Border Gateway Protocol (RFC1105), is an attempt to codify some of the ideas the group is working on.

Other problems with routing are caused by regionals wanting a backdoor connection directly to another regional. These connections require some sort of interface between the two routing systems. These interfaces are built by hand to avoid routing loops. Loops can be caused when information sent into one regional network is sent back towards the source. If the source doesn't recognize the information as its own, packets can flow until their time-to-live field expires.

Routing problems are also caused by the interior routing protocol, or IGP. This is the routing protocol which is used by the regionals to pass information to and from their users. The users themselves can use a different IGP than the regional. Depending on the number of connections a user has to the regional network, routing loops can be an issue. Some regionals pass around information about all known networks in the entire catenet to their users. This information deluge is a problem with some IGPs. Newer IGPs such as the new OSPF from the IETF and IGRP from cisco attempt to provide some information hiding by adding hierarchy. OSPF is the Internet's first attempt at using a Dijkstra-type algorithm as an IGP. BBN uses it to route between their packet switch nodes below the 1822 or X.25 layer.

Unstable routing is caused by hardware or host software. Older BSD software sets the TTL field in the IP header to a small number. The Internet today is growing and its diameter has exceeded the software's ability to reach the other side. This problem is easily fixed by knowledgeable systems people, but one must be aware of the problem before one can fix it.

Routing problems are also perceived when in fact a serial line problem or hardware problem is the real cause. If a serial line is intermittent or quickly cycles from the up state into the down state and back again, routing information will not be supplied in a uniform or smooth manner. Most current IGPs are Bellman-Ford based and employ some stabilizing techniques to stem the flow of routing oscillations due to "flapping" lines. Often when a route to a network disappears, it may take several seconds for it to reappear. This can occur at the source router, which waits for the route to "decay" from the system. This pause should be short enough that active connections persist but long enough that all routers in the routing system "forget" about routes to that network. Older host software with over-active TCP retransmission timers will time out connections instead of persevering in the face of this problem. Also, routers, according to RFC1009, must be able to send ICMP unreachables when a packet is sent to a route which is not present in the routing database. Some host products on the market close down connections when a single ICMP unreachable is received. This bug flies in the face of the Internet parable "be generous in what you accept and rigorous in what you send". Many of the perceived routing problems are really complex multiple interactions of differing products.
Causes of the Failures

The Internet failures and shortcomings can be traced to several sources:

First and foremost, there is little or no incentive for efficiency and/or economy in the current Internet. As a direct result, the resources of the Internet and its components are limited by factors other than economics. When resources wear thin, congestion and poor performance result. There is little to no incentive to make things better; if 1 packet out of 10 gets through, things "sort of work". It would appear that Internet technology has found a loophole in the "Tragedy of the Commons" allegory--things get progressively worse and worse, but eventually something does get through.

The research community is interested in technology and not economics, efficiency or free markets. While this tack has produced the Internet suite of protocols, the de facto International Standard for Open Systems, it has also created an atmosphere of intense in-breeding which is overly sensitive to criticism and quite hardened against outside influence. Meanwhile, the outside world goes on about developing economically viable and efficient networking technology without the benefit of direct participation on the part of the Internet. The research community also appears to be spending a lot of its time trying to hang onto the diminishing number of research dollars available to it (one problem of being a successful researcher is that eventually your sponsors want you to be successful in other things). Despite this, the research community actively shuns foreign technology (e.g., OSI), but, inexplicably, has not recently produced much innovation in new Internet technology.

There is also a dearth of new and nifty innovative applications on the Internet. Business as usual on the Internet is mostly FTP, SMTP and Telnet or Rlogin, as it has been for many years. The most interesting example of a distributed application on the Internet today is the Domain Name System, which is essentially an administrative facility, not an end-user service.

The engineering community must receive equal blame in these matters. While there have been some successes on the part of the engineering community, such as those by Nagle, Jacobson and Karn mentioned above, the output of the IETF, namely RFCs and corresponding implementations, has been surprisingly low over its lifetime.

Finally, the Internet has become increasingly dependent on vendors for providing implementations of Internet technology. While this is no doubt beneficial in the long term, the vendor community, rather than investing "real" resources when building these products, does little more than shrink-wrap code written primarily by research assistants at universities. This has led to cataclysmic consequences (e.g., the Internet worm incident, where Sendmail, "debug" command and all, was packaged and delivered to customers without proper consideration). Of course, when problems are found and fixed (either by the vendor's customers or software sources), the time to market with these fixes is commonly a year or longer. Thus, while vendors are vital to the long-term success of Internet technology, they certainly don't receive high marks in the short term.

Recommendations

Short-term solutions (should happen by year's end):

In terms of hardware, the vendor community has advanced to the point where the existing special-purpose technologies (Butterfly, NSSs) can be replaced by off-the-shelf routers at far less cost and with superior throughput and reliability.
Obvious candidates for upgrade are both the NSFNET and ARPANET backbones. Given the extended unreliability of the mailbridges, the ARPA core is an immediate candidate (even though the days of net 10 are numbered).

In terms of software, ALL devices in the Internet must be network manageable. This is becoming ever more critical as problems must be resolved. Since SNMP is the only open network management protocol functioning in the Internet, all devices must support SNMP and the Internet standard SMI and MIB. Host implementations must be made to support the not-so-recent TCP enhancements (e.g., those by Nagle, Jacobson and Karn) and the more recent linemode TELNET.

The national and regional providers must coordinate to share network management information and tools so that user problems can be dealt with in a predictable and timely fashion. Network management tools are a big help, but without the proper personnel support above this, the benefits cannot be fully leveraged.

The Internet needs leadership and hands-on guidance. No one is seemingly in charge today, and the people who actually care about the net are pressed into continually fighting the small, immediate problems.

Long-term solutions:

To promote network efficiency and a free-market system for the delivery of Internet services, it is proposed to switch the method by which the network itself is supported. Rather than a top-down approach where the money goes from funding agencies to the national backbone or regional providers, it is suggested the money go directly to end-users (campuses), who can then select from among the network service providers which among them best satisfies their needs and costs. This is a strict economic model: by playing with the full set of the laws of economics, a lot of the second-order problems of the Internet, both present and on the horizon, can be brought to heel.

The Internet is no longer a research vehicle; it is a vibrant production facility. It is time to acknowledge this by using a realistic economic model in the delivery of Internet services to the community (member base). When Internet sites can vote with their pocketbooks, some new regionals will be formed; some, those which are non-performant or uncompetitive, will go away; and the existing successful ones will grow. The existing regionals will then be able to use their economic power, as any consumer would, to ensure that the service providers (e.g., the national backbone providers) offer responsive service at reasonable prices. "The Market" is a powerful forcing function: it will be in the best interests of the national and regional providers to innovate, so as to be more competitive. Further, such a scheme would also allow the traditional telecommunications providers a means for becoming more involved in the Internet, thus allowing cross-leverage of technologies and experience. The transition from a top-down to an economic model must be handled carefully, but this is exactly the kind of statesmanship that the Internet should expect from its leadership.

---------
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Is Al Gore The Father of the Internet?^ Newsgroups: alt.folklore.computers,talk.politics.misc Date: Wed, 27 Sep 2000 21:07:13 GMTAnne & Lynn Wheeler writes:
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Competitors to SABRE? Big Iron Newsgroups: alt.folklore.computers Date: Thu, 28 Sep 2000 04:24:03 GMT"David C. Barber" writes:
misc. ref:
https://www.garlic.com/~lynn/96.html#29
https://www.garlic.com/~lynn/99.html#24
https://www.garlic.com/~lynn/99.html#103
https://www.garlic.com/~lynn/99.html#152
http://www.sebek.co.uk/whatis.htm
https://web.archive.org/web/20010210031953/http://www.sebek.co.uk/whatis.htm
http://www.s390.ibm.com/products/tpf/tpflnks.htm
http://www.tpfug.org/
http://smtp.vsoftsys.com/Vssiprod.htm
https://web.archive.org/web/20010405175219/http://smtp.vsoftsys.com/Vssiprod.htm
http://www.webtravelnews.com/344.htm
http://www.Amdahl.com/doc/products/asg/links.htm
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Is a VAX a mainframe? Newsgroups: alt.folklore.computers Date: Thu, 28 Sep 2000 15:06:06 GMTLars Poulsen writes:
It was also used by PARS/ACP ... for the airline control program (recent discussion in this NG on sabre)
The disk controller in the 370 line (3830) expanded this support so that disks could be accessed via eight different paths. The 360 loosely-coupled support utilized reserve/release ... i.e. the whole device was reserved for a period and then released.
For ACP there was an enhancement to the 3830 disk controller that supported fine-grain locking semantics in the controller itself (early to mid-70s) ... supporting eight machine clusters.
For "HONE" (
https://www.garlic.com/~lynn/2000c.html#30 ) a simulated
compare&swap CCW sequence was used. Standard disk I/O sequences were
non-interruptable on a device. IBM "CKD" disk devices included I/O
operations that compared equal/high/low/etc on disk and did
conditional "branches" (TICs) in the I/O program (CCWs). HONE used the
compare&swap sequence to build a large "single-system-image" complex in
the '78 time-frame supporting the field (salesmen, support, etc).
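a rough sketch in C (purely a model of the semantics, not actual CKD channel programming, and the field contents are made up) of what the simulated compare&swap CCW sequence buys you: because the whole search / conditional-TIC / write chain runs against the device without interruption, the test and the update behave as one atomic step as seen by every machine sharing the disk:

#include <stdio.h>
#include <string.h>

struct record { char data[16]; };            /* shared record out on the disk */

/* one non-interruptible "channel program": compare, then conditionally write */
static int disk_compare_and_swap(struct record *shared,
                                 const char *expected, const char *new_value)
{
    if (memcmp(shared->data, expected, sizeof shared->data) != 0)
        return 0;                            /* compare failed: TIC around the write */
    memcpy(shared->data, new_value, sizeof shared->data);
    return 1;                                /* compare succeeded: write executed */
}

int main(void)
{
    struct record lockword = { "FREE           " };

    if (disk_compare_and_swap(&lockword, "FREE           ", "NODE-A         "))
        printf("got the lock: %.16s\n", lockword.data);
    else
        printf("lock already held: %.16s\n", lockword.data);
    return 0;
}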
When TPF (ACP follow-on) finally supported SMP multiprocessing, it broke its eight-processor limitation. These days, TPF can be found with eight-machine clusters where each machine might be a ten-way (or more) SMP (yielding 80 or more processors in the cluster).
In the late '70s, my wife also spent a year in POK in charge of
mainframe "loosely-coupled" architecture. In the late '80s, my
wife and I ran a skunk-works that resulted in the HA/CMP
(High Availability Cluster Multi-Processing) product
(
https://www.garlic.com/~lynn/95.html#13 )
misc. refs:
https://www.garlic.com/~lynn/2000e.html#21
https://www.garlic.com/~lynn/2000e.html#6
https://www.garlic.com/~lynn/2000c.html#21
https://www.garlic.com/~lynn/2000c.html#30
https://www.garlic.com/~lynn/2000c.html#45
https://www.garlic.com/~lynn/2000c.html#47
https://www.garlic.com/~lynn/2000b.html#38
https://www.garlic.com/~lynn/2000.html#31
https://www.garlic.com/~lynn/99.html#71
https://www.garlic.com/~lynn/99.html#88
https://www.garlic.com/~lynn/99.html#89
https://www.garlic.com/~lynn/99.html#149
https://www.garlic.com/~lynn/98.html#35a
https://www.garlic.com/~lynn/96.html#15
https://www.garlic.com/~lynn/93.html#26
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Is Tim Berners-Lee the inventor of the web? Newsgroups: alt.folklore.computers Date: Fri, 29 Sep 2000 15:27:02 GMTjones@cs.uiowa.edu (Douglas W. Jones,201H MLH,3193350740,3193382879) writes:
misc. ref:
https://www.garlic.com/~lynn/2000e.html#0
https://www.garlic.com/~lynn/2000e.html#1
https://www.garlic.com/~lynn/2000d.html#30
https://www.garlic.com/~lynn/99.html#28
https://www.garlic.com/~lynn/99.html#197
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: older nic cards Newsgroups: alt.folklore.computers Date: Fri, 29 Sep 2000 17:46:57 GMTguy-jin writes:
i actually have some aui/bnc & aui/rj45 adapters laying around. The problem I found with some of the AUI adapters on some AUI machines is that the flange & male pins were not deep enuf to mate with the female plug on some machines (i.e. the female plug is slightly recessed and the body of the adapter makes contact with the surrounding housing preventing the plug from being fully inserted). I'm not familiar with that particular laptop ... so can't say if it would be a problem or not. I know that I had the problem on various PCs and on a SGI Indy.
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Test and Set: Which architectures have indivisible instructions? Newsgroups: comp.arch,comp.sys.m68k Date: Sat, 30 Sep 2000 13:19:10 GMTnygren writes:
There was push back getting CAS into the 370 architecture ... the challenge by the people who "owned" the 370 architecture was to formulate programming models where CAS was used in single-processor environments. A description of using CAS for multi-threaded applications running enabled for interrupts on single-processor machines was eventually devised. The architecture group expanded CAS to two instructions ... one that operated on a single word and one that operated on a double word ... and changed the actual mnemonics to CS & CDS (compare&swap and compare double & swap). The two instructions were added to the 370 architecture in the early '70s.
programming examples for multithreaded operation
http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/A%2e6%2e2?SHELF=
current instruction description
http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/7%2e5%2e23
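for comparison, a minimal sketch of the same programming model in C11 atomics (not 370 assembler, and not how the cited examples are written): an update loop that is safe whether the interference comes from another processor or from another thread dispatched after an interrupt on the same processor:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long counter;

static void add_to_counter(unsigned long n)
{
    unsigned long old = atomic_load(&counter);
    /* retry until no other thread/processor changed the word in between;
       on failure the compare-exchange refreshes 'old' with the current value */
    while (!atomic_compare_exchange_weak(&counter, &old, old + n))
        ;
}

int main(void)
{
    add_to_counter(5);
    add_to_counter(7);
    printf("counter = %lu\n", atomic_load(&counter));
    return 0;
}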
misc. refs:
https://www.garlic.com/~lynn/93.html#9
https://www.garlic.com/~lynn/93.html#14
https://www.garlic.com/~lynn/93.html#22
https://www.garlic.com/~lynn/99.html#88
https://www.garlic.com/~lynn/99.html#89
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Al Gore, The Father of the Internet (hah!) Newsgroups: alt.folklore.computers,talk.politics.misc,alt.fan.rush-limbaugh Date: Sat, 30 Sep 2000 13:32:29 GMTdjim55@datasync.com (D.J.) writes:
many existing "supercomputers" are workstation or PC processor chips hooked together in large arrays ... using a variety of interconnect.
random other ref:
https://www.garlic.com/~lynn/2000e.html#11
It isn't even clear how much of the NREN funding was actually spent (I
remember various corporations commenting about being asked to donate
commercial products to participate in NIIT "test bed"). Also, HPCC
supercomputers started to shift to being large arrays of workstation
&/or PC processor chips (workstations & PCs enabled the consumer
online market as well as providing the basis for today's supercomputers).
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: OCF, PC/SC and GOP Newsgroups: alt.technology.smartcards Date: Sat, 30 Sep 2000 15:21:07 GMTstevei_2000 writes:
IBM commissioned a PC unix based on AT&T System III early on for the IBM/PC. IBM then did an AT&T S5V2/3(?) with lots of modifications for the PC/RT (aka AIX2) and also "AOS" (BSD4.3) for the same machine. IBM also offered a "Locus" (UCLA) port to 370 and PS2 (i.e. AIX/370 and AIX/PS2).
Then there was work by SUN and AT&T to merge AT&T Unix and BSD unix for System V Release 4. Then there was the OSF work (HP, DEC, IBM, etc, including picking up some stuff from CMU Mach & UCLA Locus work).
misc. ref:
http://www.ee.ic.ac.uk/docs/software/unix/begin/appendix/history.html
http://www.be.daemonnews.org/199909/usenix-kirk.html
https://web.archive.org/web/20010222211622/http://www.be.daemonnews.org/199909/usenix-kirk.html
http://www.byte.com/art/9410/sec8/art3.htm
http://www.albion.com/security/intro-2.html
http://www.wordesign.com/unix/coniglio.htm
https://web.archive.org/web/20010222205957/http://www.wordesign.com/unix/coniglio.htm
http://www.fnal.gov/docs/UNIX/unix_at_fermilab/htmldoc/rev1997/uatf-5.html
http://www.sci.hkbu.edu.hk/comp-course/unix/
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Is Al Gore The Father of the Internet?^ Newsgroups: alt.folklore.computers,talk.politics.misc Date: Sat, 30 Sep 2000 15:41:01 GMTAnne & Lynn Wheeler writes:
Little snippet that I ran across regarding the gov/arpa involvement in
supporting the dominant infrastructure deployed for the internet
He said (paraphrased) that every DARPA meeting ended up the same, with
the Military coming in and giving CSRG (at UCB, the group that worked
on BSD) a stern warning that they were to work on the Operating
System, and that BBN will work on the networking. Every time, Bob
Fabry, then the adviser of CSRG, would "Yes them to death" and they'd
go off and just continue the way they were going. Much to the
frustration of the DARPA advisory board.
... also
The next Part in the Saga of CSRG is very important, as it led up to
the lawsuit and the creation of the "future" BSD Lite. The release of
the Networking Release 1, under what is now known as the Berkeley
license, because of the need to separate the networking code from AT&T
owned code. After that, the base system was still developed, The VM
ref:
http://www.be.daemonnews.org/199909/usenix-kirk.html
https://web.archive.org/web/20010222211622/http://www.be.daemonnews.org/199909/usenix-kirk.html
Interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Vint Cerf and Robert Kahn and their political opinions Newsgroups: alt.folklore.computers Date: Sat, 30 Sep 2000 19:28:47 GMT"Joel C. Ewing" writes:
There were big portions of the internet in the '80s that didn't have the same commercial AUP restrictions as the NSFNET backbone with regard to commercial uses.
I've conjectured that the NSFNET backbone status as a non-profit entity and not carrying "for profit" messages ... was as much an issue with commercial companies being able to supply stuff to NSFNET and take a tax write-off on the donation.
There was lots of stuff that DARPA, NSF, DOD, etc had been funding that were in fact being used for commercial purposes (despite statements like: NYSERNet, Inc. recognizes as acceptable all forms of data communications across its network, except where federal subsidy of connections may require limitations)
The BSD networking code ... which was used as a base for a large part of all the internet software delivered on workstations & PCs ... and which did as much as anything to fuel the internet ... is an example.
Not only that ... but CSRG was being explicitly directed by DARPA not to be doing networking software; i.e., if they had followed DARPA's direction, I believe the explosive growth in the number of workstations & PCs on the internet in the '80s & '90s would have been significantly curtailed.
An issue with regard to the BSD networking code ... wasn't whether it had been done by a non-profit university using darpa &/or other gov. funding ... but whether there were any AT&T UNIX licensing issues.
misc. refs:
http://www.be.daemonnews.org/199909/usenix-kirk.html
https://web.archive.org/web/20010222211622/http://www.be.daemonnews.org/199909/usenix-kirk.html
https://www.garlic.com/~lynn/2000c.html#26
https://www.garlic.com/~lynn/2000e.html#5
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Is Tim Berners-Lee the inventor of the web? Newsgroups: alt.folklore.computers Date: Sun, 01 Oct 2000 22:16:43 GMTHoward S Shubs writes:
big thing that tcp/ip brought was IP ... or "internet protocol" ... along with gateways to interconnect multiple different networks.
About the time of the conversion to tcp/ip ... arpanet had about 200 nodes in a pretty homogeneous network ... and the internal network had nearly 1000 mainframe network nodes. During nearly the entire life of the arpanet ... the internal network was larger than the arpanet. One of the characteristics of the internal network (in contrast to the ARPANET) ... was that effectively every network node implemented a gateway function (just about from its start) ... allowing a diversity of different networking nodes to be attached to the internal network (drastically simplifying attachment of additional machines & growing the configurations).
Several factors contributed to the explosion in the "internet" after the 1983 conversion:
1) support of internetworking protocol
2) appearing local area network technology that could be integrated
into the "internet" via the IP protocol layer
3) relatively powerful computing devices at inexpensive
prices in the form of workstations and PCs
4) BSD (and other) tcp/ip (like MIT PC) support that was relatively
easily ported to a variety of workstations and PCs.
The internal network of mainframe machines had grown to 2000 nodes by 1984 ... but with the availability of all those LAN-based workstations and PCs that had internetworking access to the internet ... the number of internet nodes passed the number of internal network mainframe nodes sometime in the mid-80s (i.e. the number of LAN connected PCs and workstations as internet nodes exceeded the number of internal network mainframe nodes).
The conversion from a homogeneous networking environment to a heterogeneous networking environment with gateways ... and supporting the emerging LAN technology as part of the heterogenity was one of the most significant things for the "network" in the 80s.
Given the limited size of the ARPANET and the requirement for relatively homogeneous operation at the protocol level (i.e. no "internetworking" capability) ... the internal network with its gateway function in just about every major node ... more closely represented a world-wide "internet" ... than the pre-83 ARPANET ever did.
misc refs:
https://www.garlic.com/~lynn/99.html#112
https://www.garlic.com/~lynn/2000e.html#13
https://www.garlic.com/~lynn/2000e.html#14
https://www.garlic.com/~lynn/2000e.html#18
https://www.garlic.com/~lynn/2000e.html#19
https://www.garlic.com/~lynn/2000e.html#23
https://www.garlic.com/~lynn/2000e.html#26
https://www.garlic.com/~lynn/2000e.html#28
https://www.garlic.com/~lynn/2000e.html#29
https://www.garlic.com/~lynn/internet.htm
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Cerf et.al. didn't agree with Gore's claim of initiative. Newsgroups: alt.fan.rush-limbaugh,seattle.politics,alt.politics,alt.folklore.computers Date: Wed, 04 Oct 2000 14:26:58 GMT"Mike" writes:
appears that both Bush and Gore were proposing some forms of advanced funding ... and they appear to have needed to work out various differences.
Also with respect to previous posts about congress removing the "for profit" restriction on the internet ... in actuality the only part of the internet that appears to have had that restriction was the limited number of NSFNET backbone nodes. The other network acceptable use policies primarily make reference to such restrictions in the context that data might have to flow thru such a restricted barrier when travelling to other (inter)networks.
given that many of these networks heavily involved non-profit institutions, universities, government funding ... etc ... there seems to have been something unique about the federal funding for this limited number of NSFNET backbone nodes ... possibly even a congressional mandate by whoever in congress was backing NSFNET. If that is true ... then it is likely any case of congress removing restrictions ... was a case of congress giveth and congress taketh away (i.e. they may not have been doing something so much that was really noble ... but cleaning up a mess that they themselves may have caused).
misc. other refs:
https://www.garlic.com/~lynn/2000e.html#19
https://www.garlic.com/~lynn/2000e.html#26
https://www.garlic.com/~lynn/2000e.html#28
https://www.garlic.com/~lynn/2000e.html#29
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tektronics Storage Tube Terminals Newsgroups: alt.folklore.computers Date: Wed, 04 Oct 2000 14:30:37 GMT"David C. Barber" writes:
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: War, Chaos, & Business Newsgroups: alt.folklore.military Date: Thu, 05 Oct 2000 03:54:41 GMTI admit to being quite biased and I was very pleased when the following URLs were forwarded to me:
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: War, Chaos, & Business (web site), or Col John Boyd Newsgroups: alt.folklore.military Date: Thu, 05 Oct 2000 14:23:37 GMTColin Campbell writes:
US Defence Spending: Are Billions being Wasted:
http://www.infowar.com/iwftp/cspinney/spinney.htm
https://web.archive.org/web/20010122080100/http://infowar.com/iwftp/cspinney/spinney.htm
A call to Chuck suggested that John should be called.
It was my introduction to John Boyd and his Patterns of Conflict and
Organic Design for Command and Control. A later copy of Organic
Design for Command and Control is also available at:
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/
At the time of desert storm, US News & World Report carried an article on John ... and the "fight to change how america fights" ... referring to the majors and colonels at the time as John's "jedi knights". A more detailed description
"Thinking like marines"
is also on the above web site.
There are several short synopses of John ... the following written by
Dr. Hammond, Director of the "Center for Strategy and Technology" at
the Air War College.
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/
from the above reference ... a quote from Krulak, commandant of the
marine corps
As I write this, my mind wanders back to that morning
in February, 1991, when the military might of the
United States sliced violently into the Iraqi positions in
Kuwait. Bludgeoned from the air nearly round the clock
for six weeks, paralyzed by the speed and ferocity of
the attack, the Iraqi army collapsed morally and
intellectually under the onslaught of American and
Coalition forces. John Boyd was an architect of that
victory as surely as if he'd commanded a fighter wing
or a maneuver division in the desert. His thinking, his
theories, his larger than life influence, were there with
us in Desert Storm. He must have been proud of what
his efforts wrought.
...>
http://www.au.af.mil/au/awc/awcgate/awccsat.htm
also on the web site a pointer to
http://www.defense-and-society.org/FCS_Folder/boyd_thesis.htm
https://web.archive.org/web/20010722090426/http://www.defense-and-society.org/FCS_Folder/boyd_thesis.htm
"From Air Force Figher Pilot to Marine Corps Warfighting: Colonel John Boyd, His Theories on War, and their Unexpected Legacy"
as well as a wealth of other material on Col. John Boyd.
.......
my own reference to john (from the past)
https://www.garlic.com/~lynn/94.html#8
& a post in this ng not too long ago:
https://www.garlic.com/~lynn/2000c.html#85
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: War, Chaos, & Business (web site), or Col John Boyd Newsgroups: alt.folklore.military Date: Thu, 05 Oct 2000 15:44:55 GMTmisc. other
"There are two career paths in front of you, and you have to choose
which path you will follow. One path leads to promotions, titles, and
positions of distinction.... The other path leads to doing things that are
truly significant for the Air Force, but the rewards will quite often be a
kick in the stomach because you may have to cross swords with the party
line on occasion. You can't go down both paths, you have to choose. Do
you want to be a man of distinction or do you want to do things that
really influence the shape of the Air Force? To be or to do, that is the
question." Colonel John R. Boyd, USAF 1927-1997
From the dedication of Boyd Hall, United States Air Force Weapons
School, Nellis Air Force Base, Nevada. 17 September 1999
& ...
http://www.defense-and-society.org/FCS_Folder/comments/c199.htm
https://web.archive.org/web/20010412225142/http://www.defense-and-society.org/FCS_Folder/comments/c199.htm
other posts & URLs (from around the web) mentioning Boyd and/or
OODA-loops
https://www.garlic.com/~lynn/subboyd.html
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: War, Chaos, & Business (web site), or Col John Boyd Newsgroups: alt.folklore.military Date: Thu, 05 Oct 2000 18:05:53 GMTvelovich@aol.com.CanDo (V-Man) writes:
is the following from pre-83 ... permissable?
http://www.defense-and-society.org/FCS_Folder/comments/c199.htm
https://web.archive.org/web/20010412225142/http://www.defense-and-society.org/FCS_Folder/comments/c199.htm
Much to the dismay of the autocrats at Wright-Pat, the Mad Major's
theory of energy-maneuverability (E-M) turned out to be a stunning
success. It provided a universal language for translating tactics into
engineering specifications and vice versa and revolutionized the way
we look at tactics and design fighter airplanes.
Boyd used it to explain why the modern F-4 Phantom performed so poorly
when fighting obsolete MiG-17s in Vietnam and went on to devise new
tactics for the Phantom, whereupon Air Force pilots began to shoot
down more MiGs.
He used it to re-design the F-15, changing it from an 80,000-pound,
swing-wing, sluggish behemoth, to a 40,000-pound fixed-wing,
high-performance, maneuvering fighter. His crowning glory was his use
of the theory to evolve the lightweight fighters that eventually
became the YF-16 and YF-17 prototypes and then to insist that the
winner be chosen in the competitive market of a free-play flyoff.
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: FW: NEW IBM MAINFRAMES / OS / ETC.(HOT OFF THE PRESS) Newsgroups: bit.listserv.ibm-main Date: Thu, 05 Oct 2000 23:11:38 GMTbblack@FDRINNOVATION.COM (Bruce Black) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: I'll Be! Al Gore DID Invent the Internet After All ! NOT Newsgroups: alt.fan.rush-limbaugh,alt.folklore.urban,seattle.politics,alt.politics,alt.folklore.computers Date: Sat, 07 Oct 2000 01:55:04 GMTkorpela@ellie.ssl.berkeley.edu (Eric J. Korpela) writes:
The advent of the internet service providers and interoperable internet allowed them to essentially eliminate all that overhead, expense and trouble (along with significant customer call center overhead).
As customers found that more and more of their online service providers were moving to interoperable, generic support that met all of their requirements ... it represented quite a consumer convenience (it wasn't just AOL, Prodigy, Compuserve providing online service ... but little & middle-tier guys including bunches of business-specific places like both consumer and commercial banking).
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: I'll Be! Al Gore DID Invent the Internet After All ! NOT Newsgroups: alt.fan.rush-limbaugh,alt.folklore.urban,seattle.politics,alt.politics,alt.folklore.computers Date: Sat, 07 Oct 2000 14:26:36 GMT"Mike" writes:
for non-broadcast technology ... a large part of the bandwidth is the store & forward point-to-point bandwidth simulating broadcast (i.e. incoming bandwidth to one node ... and the outgoing bandwidth from that node to all the nodes it forwards it to, aggregated for every node that does forwarding).
while usenet bandwidth has increased from '93 ... the use of internet point-to-point store&forward to simulate broadcast requires significantly more aggregate bandwidth than the basic broadcast bandwidth requirements of the data being distributed.
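back-of-the-envelope illustration (the numbers here are strictly made up, not usenet measurements): if every site receives the full feed over point-to-point links, the aggregate bandwidth grows with the number of sites & redundant feeds, where a true broadcast medium would carry the feed once:

#include <stdio.h>

int main(void)
{
    double feed_mb_per_day = 1000.0;   /* assumed size of the daily feed */
    double sites           = 10000.0;  /* assumed number of sites carrying it */
    double feeds_per_site  = 1.5;      /* assumed incoming feeds per site (redundancy) */

    double aggregate = feed_mb_per_day * sites * feeds_per_site;
    printf("broadcast:         ~%.0f MB/day carried once\n", feed_mb_per_day);
    printf("store-and-forward: ~%.0f MB/day aggregated over all links\n", aggregate);
    return 0;
}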
random refs:
https://www.garlic.com/~lynn/2000.html#38
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why trust root CAs ? Newsgroups: sci.crypt Date: Sun, 08 Oct 2000 13:19:47 GMTandrew writes:
When a server registers for a domain name SSL certificate ... the CA has to authenticate the domain name request with the domain name infrastructure (as the authoritative source for domain names) as to the owner of the domain name (i.e. does the requester of the certificate actually have rights to the domain name being specified).
In other words, the CA infrastructure is dependent on the same domain name infrastructure that is supposedly the thing that the whole process is attempting to fix.
Now one of the methods to improve the integrity of the domain name system (so that CA's can rely on it ... and minimize things like domain name hijacking ... where I could hijack a domain name and then obtain a certificate for that domain name) is to register a public key along with the domain name.
However, if public keys are registered (along with the domain name), the existing domain name infrastructure could return the public key in addition to other information.
This creates something of a catch-22 for the CA infrastructure ... fixing domain name integrity (with public keys) so that CAs can rely on the domain name infrastructure as the authoritative source for domain names ... also creates an avenue for making domain name certificates redundant and superfluous.
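a tiny sketch of the catch-22 (the lookup function below is a made-up stand-in, not any real resolver API): once the registry that the CA already has to trust can hand back a public key with the host lookup, the client has the name/key binding without ever touching a certificate:

#include <stdio.h>
#include <string.h>

struct lookup { char ip[16]; char pubkey[64]; };

/* hypothetical stand-in for a domain name infrastructure that also
   returns the public key registered along with the domain name */
static struct lookup dns_lookup(const char *name)
{
    struct lookup r;
    (void)name;
    strcpy(r.ip, "192.0.2.10");
    strcpy(r.pubkey, "key-registered-with-the-domain-name");
    return r;
}

int main(void)
{
    struct lookup r = dns_lookup("www.example.com");

    /* today a CA-signed certificate supplies this binding ... but the CA
       validated that binding against the same registry queried here */
    printf("connect to %s and authenticate with \"%s\"\n", r.ip, r.pubkey);
    return 0;
}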
random refs:
https://www.garlic.com/~lynn/aepay5.htm#rfc2931
https://www.garlic.com/~lynn/aepay5.htm#rfc2915
https://www.garlic.com/~lynn/aepay4.htm
https://www.garlic.com/~lynn/aadsmore.htm#client1
https://www.garlic.com/~lynn/aadsmore.htm#client2
https://www.garlic.com/~lynn/aadsmore.htm#client3
https://www.garlic.com/~lynn/aadsmore.htm#client4
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why trust root CAs ? Newsgroups: sci.crypt Date: Mon, 09 Oct 2000 02:41:53 GMTDaniel James writes:
in retail business with consumer ... having consumer "identity" certificates ... creates privacy issues.
in retail business with consumer ... and doing electronic financial transactions ... the transactions are done online ... and for the most part a merchant doesn't really care who you are ... they just care that the bank guarantees that the merchant will get paid (the bank cares that the authorized entity for an account is, in fact, the person originating the transactions).
for the most part the account operation ... the part that allows a transaction to be executed ... and the PIN environment ... used for authenticating a transaction ... have an extensive integrated data processing system, extensive integrated business continuity operation, triple redundancy in multiple physical locations, etc.
Currently, most CAs represent independent data processing operations ... which can represent expense duplication.
Various financial operations have done relying-party-only certificates, which address both privacy concerns and liability concerns. Effectively, the certificate contains the account number.
Such a mechanism, integrated with standard account management, has an account owner sign & send a public key to a financial institution's RA. The RA validates some stuff, and has the financial institution create a certificate, stores the original in the account record and returns a copy of the certificate to the key/account owner.
The account owner originates a transaction, signs it, appends the signature and certificate, and sends it on its way to the financial institution. The financial institution extracts the account number from the transaction, reads the account record, and gets enough information from the account record (including the original of the certificate) to authenticate and authorize the transaction.
Given that the financial institution needs to read the account record to obtain meaningful information (including the certificate original), the account owner can do some transaction payload optimization and compress the certificate copy that is appended to the transaction. Any field that is in both the copy of the certificate and the original of the certificate (stored in the account record) can be compressed out of the appended copy.
Since every field in the copy of the certificate is also in the original of the certificate, it is possible to compress the certificate appended to the transaction to zero bytes. This can be significant when an uncompressed certificate is 10-50 times larger than the financial transaction it is appended to.
Since the financial institution is finding that the account owner is always able to compress the attached certificate to zero bytes (for attaching to transactions), the financial institution, when returning the certificate to the entity registering their public key, does some pre-optimization and returns a pre-compressed zero-byte certificate (saving the overhead of having the account owner do it on every transaction).
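toy illustration of the compression argument (this is not any real certificate encoding ... just the set-difference logic): any field of the appended copy already present in the original kept with the account record can be dropped, and since the original holds every field, the copy shrinks to zero bytes:

#include <stdio.h>
#include <string.h>

#define NFIELDS 3

static const char *original[NFIELDS] = {      /* stored in the account record */
    "account=12345", "pubkey=...", "issuer=bank"
};

/* bytes of the appended copy that still have to be transmitted */
static size_t compressed_size(const char *copy[], int n)
{
    size_t bytes = 0;
    for (int i = 0; i < n; i++) {
        int found = 0;
        for (int j = 0; j < NFIELDS; j++)
            if (strcmp(copy[i], original[j]) == 0) { found = 1; break; }
        if (!found)
            bytes += strlen(copy[i]);         /* only fields the bank doesn't already have */
    }
    return bytes;
}

int main(void)
{
    const char *copy[NFIELDS] = { "account=12345", "pubkey=...", "issuer=bank" };
    printf("appended certificate compresses to %zu bytes\n", compressed_size(copy, NFIELDS));
    return 0;
}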
misc. refs:
https://www.garlic.com/~lynn/ansiepay.htm#aadsnew2
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's Workplace OS (Was: .. Pink) Newsgroups: alt.folklore.computers Date: Mon, 09 Oct 2000 12:35:07 GMTBurkhard Dietrich Burow writes:
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why trust root CAs ? Newsgroups: sci.crypt Date: Mon, 09 Oct 2000 12:42:04 GMTvjs@calcite.rhyolite.com (Vernon Schryver) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why trust root CAs ? Newsgroups: sci.crypt Date: Mon, 09 Oct 2000 13:02:34 GMTDaniel James writes:
Because of an artifact of existing POS online technology ... the other parties are eavesdropping on something between the consumer and the consumer's financial institution. Institutionalizing that eavesdropping would make it worse. In at least some of the bank card world there are even association guidelines about the merchant not doing anything more than certain minimums with the transaction (in part to address privacy issues).
Furthermore, in the current world, financial institutions tend to want to see all transactions against their accounts (& not having things like fraud attempts being truncated).
As stated previously, the merchant wants to know that the consumer's bank will pay ... the merchant getting a transaction from the consumer's bank indicating that they will get their money satisfies that requirement. Institutionalizing all the other parties' eavesdropping on consumers' instructions to their financial institution also aggravates privacy issues (i.e. just because an implementation artifact has other people's messages flowing thru something that I have access to ... doesn't mean that i should be monitoring it).
Now, there can be a case that the return transaction from the financial institution to the merchant be signed and carry a certificate ... but the current bank->merchant infrastructure ... to some extent, operates within the bounds of a trusted network ... alleviating much of the authentication requirement that occurs in an untrusted network environment.
There can also be other transactions that might need authentication, (and could benefit from public key authentication) but the specific discussion was about retail transactions where the consumer is sending directions to their financial institution.
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's Workplace OS (Was: .. Pink) Newsgroups: alt.folklore.computers Date: Mon, 09 Oct 2000 14:28:01 GMTAnne & Lynn Wheeler writes:
For instance ... a lot of business modeling had migrated from APL (and other kinds of) business models to PC spreadsheets. A lot more of the business people could participate in doing what-if questions. However, having valuable unsecured, unbacked-up corporate assets (the data) on PCs opened a huge business risk (some relatively recent studies have indicated that 50% of businesses that have not backed up business-critical data, when the disk goes bad, go bankrupt within the first month ... for instance, losing the billing file and not being able to send out bills ... which has a horrible downside effect on cash flow).
The disk storage division attempted to address that with mainframe products that provided fast, efficient, cost-effective management of corporate-asset PC-resident data (back on the mainframe). However, in a turf war with the communication division ... they lost. As a result, the design point, price/performance & paradigm for mainframe support of PCs was essentially relegated to treating the PCs as display devices.
With that restriction ... to keep the mainframe "in the game", pretty much required trying to migrate the execution of PC applications back to the mainframe.
SAA was heavily tied into that.
Another aspect is that the communication division ... had a host-centric, point-to-point communication operation w/o a networking layer. Because of the lack of a networking layer, even the LAN products were deployed as large bridged structures (no network layer, no routing).
I got in deep dudu when I first did a presentation to the IS-managers of a large corporate client where I presented real networking, multi-segment, routed LANs, high-speed host interconnect and the first 3-layer architecture.
The internal SAA and LAN groups came down hard. It was the place in life for PCs to be treated as display terminals. Since there was such low inter-PC traffic and such low mainframe<->PC traffic, having 300 (or more) PCs sharing a common 16mbit/sec bandwidth was perfectly acceptable.
16mbit/sec T/R was obviously better than 10mbit ethernet (although there were a number of studies that showed it was possible to get more effective thruput out of ethernet than 16mbit T/R), and since nobody used the bandwidth anyway, having 300 PCs sharing the same 16mbits wasn't a problem (compared to ten routed ethernet segments with only 30 PCs sharing 10mbit ... each segment actually having more effective throughput).
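the arithmetic behind that comparison, using the figures from the paragraph above (and ignoring protocol efficiency, which the cited studies argue actually favors ethernet):

#include <stdio.h>

int main(void)
{
    double tr_per_station   = 16.0 / 300.0;   /* one 16mbit ring shared by 300 PCs */
    double enet_per_station = 10.0 / 30.0;    /* each routed 10mbit segment, 30 PCs */
    double enet_aggregate   = 10 * 10.0;      /* ten segments */

    printf("shared token ring: %.3f mbit/sec per station, 16 mbit aggregate\n",
           tr_per_station);
    printf("routed ethernets:  %.3f mbit/sec per station, %.0f mbit aggregate\n",
           enet_per_station, enet_aggregate);
    return 0;
}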
The idea of 3-tier & significant mainframe<->PC bandwidth was a real threat to the SAA group ... because many of the things that SAA wanted to migrate off PCs to the mainframe would instead go to the servers in the middle (and/or applications stayed on the PC and used the middle layer and the host backend as a disk farm).
postings in reply to question on origins of middleware
https://www.garlic.com/~lynn/96.html#16
https://www.garlic.com/~lynn/96.html#17
random refs:
https://www.garlic.com/~lynn/96.html#14
https://www.garlic.com/~lynn/98.html#50
https://www.garlic.com/~lynn/99.html#36
https://www.garlic.com/~lynn/99.html#38
https://www.garlic.com/~lynn/99.html#123
https://www.garlic.com/~lynn/99.html#124
https://www.garlic.com/~lynn/99.html#201
https://www.garlic.com/~lynn/99.html#202
https://www.garlic.com/~lynn/2000.html#75
misc. post from 1994 ...
https://www.garlic.com/~lynn/94.html#33b
Speaking of HSDT, in the mid-80s we were contracting for equipment for parts of the HSDT project from some companies in Japan. On the Friday before I was to leave for a meeting, somebody (in the US) announced a new newsgroup on high-speed data communication including in the posting the following definitions:
low-speed         <9.6kbits
medium-speed      19.2kbits
high-speed        56kbits
very high-speed   1.5mbits

On Monday morning on the wall of a conference room in Japan was:

low-speed         <20mbits
medium-speed      100mbits
high-speed        200-300mbits
very high-speed   >600mbits

--
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Where are they now : Taligent and Pink Newsgroups: alt.folklore.computers Date: Mon, 09 Oct 2000 15:15:33 GMTTom Van Vleck writes:
The net of the JAD was about a 30% hit to the taligent base (I think two new frameworks plus hits to the existing frameworks) to support business critical applications.
Taligent was also going thru rapid maturity (outside of the personal computing, GUI paradigm) ... a sample business application required 3500 classes in taligent and only 700 classes in a more mature object product targeted for the business environment.
i think that shortly after taligent vacated their building ... sun java group moved in.
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why trust root CAs ? Newsgroups: sci.crypt Date: Mon, 09 Oct 2000 15:57:33 GMTvjs@calcite.rhyolite.com (Vernon Schryver) writes:
at its simplest the example was registering public keys at the same time as the domain name was registered. this would benefit the SSL domain name certificates ... since the domain name infrastructure could require that changes to domain name info have to be signed ... making domain name hijacking much harder (i.e. i can't hijack a domain name and then get a valid ssl certificate for that domain).
on the other hand, with such keys registered as part of the domain name entries ... clients could optionally request the key be returned in addition to host->ip-number. using that public key as part of server authentication and SSL session setup would reduce the bits on the wire compared to the existing client/server SSL handshaking.
... i.e. the solution to improving integrity for domain name SSL certificates is also a solution for making domain name SSL certificates obsolete, redundant and superfluous.
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Where are they now : Taligent and Pink Newsgroups: alt.folklore.computers Date: Tue, 10 Oct 2000 02:27:57 GMTanother object operating system in the valley ... was sun's dolphin ... java got a lot from it.
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: How did Oracle get started? Newsgroups: alt.folklore.computers Date: Tue, 10 Oct 2000 03:40:22 GMTJoe Morris writes:
I was involved in some of the technology transfer from san jose to endicott for sql/ds. baker then did a lot of the technology transfer from endicott back to STL for DB2. baker then showed up at oracle (but i don't remember the dates for that).
I do have recollection of Nippon Steel announcing they were buying Oracle in the late '80s ... and then Oracle canceling the deal ... I believe after a really good quarter.
somewhere in all that informix, tandem, etc.
They were all unix platforms with some VMS and misc. other platforms thrown in.
misc refs:
http://www.mcjones.org/System_R/mrds.html
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Teradata.html
http://www.mcjones.org/System_R/other.html
http://epoch.cs.berkeley.edu:8000/redbook/lec1.html
http://www.dbmsmag.com/9609d13.html
http://infoboerse.doag.de/mirror/frank/faqora.htm
https://web.archive.org/web/20020215000745/http://infoboerse.doag.de/mirror/frank/faqora.htm
http://www.pointbase.com/about/management_team.html
https://web.archive.org/web/20010401020537/http://www.pointbase.com/about/management_team.html
note in one of the other web/online oracle histories ... it says oracle wasn't available until 1984 on ibm mainframe (under vm/370 ... same platform that system/r was developed on).
following from one of the above URLs:
What is Oracle's history?
1977 Relational Software Inc. (currently Oracle Corporation)
established
1978 Oracle V1 ran on PDP-11 under RSX, 128 KB max
memory. Written in assembly language. Implementation separated
Oracle code and user code. Oracle V1 was never officially
released.
1980 Oracle V2 released on DEC PDP-11 machine. Still
written in PDP-11 assembly language, but now ran under Vax/VMS.
1982 Oracle V3 released, Oracle became the first DBMS to run on
mainframes, minicomputers, and PC's. First release to employ
transactional processing. Oracle V3's server code was written in
C.
1983 Relational Software Inc. changed its name to Oracle
Corporation.
1984 Oracle V4 released, introduced read consistency, was ported to
multiple platforms, first interoperability between PC and
server.
1986 Oracle V5 released. Featured true client/server, VAX-cluster
support, and distributed queries. (first DBMS with distributed
capabilities).
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why trust root CAs ? Newsgroups: sci.crypt Date: Tue, 10 Oct 2000 16:20:10 GMTDaniel James writes:
The SSL domain name server certificate associates the public key and the host name or domain name. The client checks the name in the certificate against the web address.
The authoritative resource for domain name ownership is the domain name infrastructure ... which CA's have to rely on when authenticating a request for a SSL domain name server certificate.
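a minimal sketch of that client-side check (no wildcard handling, no TLS details ... just the comparison itself): the only thing binding the certificate to the session is that the name inside it matches the host part of the web address:

#include <stdio.h>
#include <strings.h>

/* DNS names compare case-insensitively */
static int names_match(const char *cert_name, const char *url_host)
{
    return strcasecmp(cert_name, url_host) == 0;
}

int main(void)
{
    printf("%d\n", names_match("www.example.com", "WWW.EXAMPLE.COM"));  /* 1 */
    printf("%d\n", names_match("www.example.com", "www.evil.test"));    /* 0 */
    return 0;
}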
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why trust root CAs ? Newsgroups: sci.crypt Date: Wed, 11 Oct 2000 13:29:19 GMTvjs@calcite.rhyolite.com (Vernon Schryver) writes:
with regard to a domain name ... i can register a DBA and open a checking account with that DBA, get D&B registration ... hijack a domain name and provide all information to the CA that correctly validates (i.e. the domain name validates with the domain name infrastructure ... and all the other information provided also validates).
in the ssl domain name server certificate case ... all the client is doing is checking that the web address they are using and the domain name in the certificate match.
if there is any additional information in a certificate & it doesn't correspond with what a client might expect, oh well ... out of the millions of people that might do an SSL operation with the server, the number who also actually physically look at any other information that may be part of a ssl domain name server certificate is possibly countable on fingers & toes.
a CA can authenticate stuff it has direct knowledge of and for the rest relies on authoritative sources for that information (like the domain name infrastructure as the authoritative source for domain name ownership).
also as to regard to DBAs ... in the past i've purchased computer equipment with a bank card and later got the statement ... the legal DBA name on the statement of the business I bought the equipment from ... bore no correlation with the name of the store that I bought the equipment from. I did call the store and confirm their legal DBA name.
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why not an IBM zSeries workstation? Newsgroups: comp.arch Date: Thu, 12 Oct 2000 01:55:11 GMThack@watson.ibm.com (hack) writes:
a problem was that paging was a cross-processor call to the 8088 which would typically do i/o to a 100ms/access xt hard disk ... and there were a lot of things that would result in lots of paging ... although not as many as in 384k (where a lot of measurements showed extensive page thrashing).
cms file access would also be a cross-processor call to the 8088 which then did an i/o to the (same) 100ms/access xt hard disk.
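rough arithmetic on why that hurt (assumed fault rates, not measurements): with each page fault shipped to the 8088 and serviced by a ~100ms/access xt hard disk, even a modest fault rate leaves the 370 side spending most of its elapsed time waiting:

#include <stdio.h>

int main(void)
{
    double access_ms = 100.0;                 /* xt hard disk, per the text */
    double rates[] = { 1.0, 5.0, 10.0 };      /* assumed page faults per second */

    for (int i = 0; i < 3; i++) {
        double waiting = rates[i] * access_ms / 1000.0;   /* fraction of each second */
        printf("%4.0f faults/sec -> %3.0f%% of elapsed time in paging i/o\n",
               rates[i], waiting * 100.0);
    }
    return 0;
}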
the xt/370 (& at/370) were co-processor cards in the PC.
there was a different machine out of POK that was a separate box with an adapter card and a big thick cable from the PC to the separate box. It had 4mbytes of memory and a faster processor.
random refs:
https://www.garlic.com/~lynn/96.html#23
https://www.garlic.com/~lynn/2000.html#5
https://www.garlic.com/~lynn/2000.html#29
in the past, i received email to the effect that the "POK" machine in
the following list was actually one of the separate 4mbyte memory
boxes ... even tho it was listed as a 4341.
https://www.garlic.com/~lynn/99.html#110
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why not an IBM zSeries workstation? Newsgroups: comp.arch,alt.folklore.computers Date: Fri, 13 Oct 2000 15:08:16 GMTAnne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: VLIW at IBM Research Newsgroups: bit.listserv.ibm-main Date: Fri, 13 Oct 2000 15:22:40 GMTbblack@FDRINNOVATION.COM (Bruce Black) writes:
the high-end machines were horizontal microcode machines ... which, instead of being rated in microprocessor instructions per 370 instruction, were more a case of avg. machine cycles per 370 instruction (since a horizontal instruction could be doing multiple things concurrently). For instance, one of the enhancements going from 165 to 168 was that the avg. cycles per instruction dropped from about 2.1 cycles/instruction to 1.6 cycles/instruction (i.e. improved implementation &/or better overlap of 370 operations).
The 3830 disk controller was also horizontal microcode engine ... while the 3880 disk controller was a vertical microcode engine (jib-prime) for control functions and special hardware for data movement.
random refs:
https://www.garlic.com/~lynn/95.html#3
https://www.garlic.com/~lynn/2000b.html#38
https://www.garlic.com/~lynn/2000c.html#75
https://www.garlic.com/~lynn/2000e.html#6
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why not an IBM zSeries workstation? Newsgroups: comp.arch,alt.folklore.computers Date: Fri, 13 Oct 2000 19:34:51 GMTglass2 writes:
I had both A74 and a couple XT/370s all running in the same office at one time.
The A74 had about a 350 KIPS (370) processor.
Compared to the XT/370 (which used a modified 68k) it was much closer to 370. I provided the changes to the pc/370 kernel for it to run on the A74.
It had differences from 370 ... like 4k keys instead of 2k keys ... more like 3081 XA.
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why not an IBM zSeries workstation? Newsgroups: comp.arch,alt.folklore.computers Date: Fri, 13 Oct 2000 19:39:17 GMT
Date: 88/06/22 18:51:005
To: Distribution

A74 Users:

I am happy to announce that the A74/370 processor has finally been approved for special bid orders as the "IBM 7437 VM/SP Technical Workstation". This approval was delivered by Ed Kfoury, President of SPD on Wednesday, June 8th. Those of you who are familiar with the earlier version of the A74/370 should note that the special bid version of the unit has the following features:

o Attachment to PS/2 Models 60, 70, and 80
o 16-bit interface to the PS/2
o 16 MB real memory with ECC
o All S/370 I/O instructions are supported
o Full VM/SP Release 5
o PER
o VM Assist
o Hardware Time of Day Clock
o Writable Control Store
o PS/2 5080 Adapter (optional)
o MYTE 3270 Emulation and Host Attach (DFT, SNA, Bisync)
o 5 to 7 times faster Transparent File Access to Host Minidisks
o CE support for field repair

The performance of the 7437 processor itself has not changed, but the increased speed of the PS/2 and VM Assist microcode combine to deliver better overall system throughput. When coupled with an optional 5080 adapter for the PS/2, the workstation offers exceptional processing capability for engineering applications like CADES, CATIA, CADAM, CBDS and GPG. Please contact Gary Smith (GSMITH at PLKSB) for more information.

I would like to thank all of you who have helped in one way or another to make this product available to our customers. This is an exciting time for IBM as it shows that we are serious about our 370 customers. We now provide them with a common architecture solution for their workstation and mainframe requirements.

W. F. Beausoleil

... snip ... top of post, old email index
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why not an IBM zSeries workstation? Newsgroups: comp.arch,alt.folklore.computers Date: Sat, 14 Oct 2000 04:57:11 GMTjmaynard@thebrain.conmicro.cx (Jay Maynard) writes:
the stuff was retrofitted to 4341 for a large client who wanted to run mvs/sp on a couple hundred 4341s.
part of the problem for actual thruput with the cross-memory hardware on the 3033 (compared to a software implementation) was that it caused significantly increased strain on the table lookaside buffer (TLB), which could possibly even result in degraded performance.
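a toy illustration of the effect (my sketch, not how the 3033 TLB was actually organized; the sizes and reference pattern are invented): spreading references across more address spaces in a fixed-size TLB drives the miss rate up:

# toy model: a small fully-associative TLB with LRU replacement
# sizes and the reference pattern below are invented for illustration only
from collections import OrderedDict
import random

def tlb_miss_rate(n_spaces, pages_per_space=16, tlb_entries=32, refs=20000, seed=1):
    """simulate random page touches spread across n_spaces address spaces"""
    random.seed(seed)
    tlb = OrderedDict()               # (space, page) -> translation, kept in LRU order
    misses = 0
    for _ in range(refs):
        key = (random.randrange(n_spaces), random.randrange(pages_per_space))
        if key in tlb:
            tlb.move_to_end(key)      # refresh LRU position on a hit
        else:
            misses += 1
            tlb[key] = True
            if len(tlb) > tlb_entries:
                tlb.popitem(last=False)   # evict least-recently-used entry
    return misses / refs

for n in (1, 2, 4, 8):
    print(f"{n} address space(s): miss rate {tlb_miss_rate(n):.1%}")
# with one or two spaces the working set fits in the TLB; with more, misses climb sharply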
misc. refs:
https://www.garlic.com/~lynn/98.html#11
https://www.garlic.com/~lynn/2000c.html#35
https://www.garlic.com/~lynn/2000c.html#83
https://www.garlic.com/~lynn/2000c.html#84
http://www.hpl.hp.com/features/bill_worley_interview.html
https://web.archive.org/web/20000816002838/http://www.hpl.hp.com/features/bill_worley_interview.html
"3033" extensions:
http://publib.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/DA1A7002/7.5.6
http://publib.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/ID4T0U01/6.4.85
misc quotes from (F.2 Comparison of Facilities between System/370 and
370-XA)
http://www.s390.ibm.com:80/bookmgr-cgi/bookmgr.cmd/BOOKS/DZ9AR004/F%2e2
The following items, which are part of the basic computing function in
System/370, are not provided in 370-XA: BC mode, interval timer, and
2K-byte protection blocks.
Only single-key 4K-byte protection blocks are provided, but the
storage-key-exception control is not.
The 370-XA translation provides only the 4K-byte page size and only
the 1M-byte segment size.
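a small sketch of the arithmetic behind those quoted differences (my illustration, not taken from the manual): how a virtual address splits under 1M-byte segments and 4K-byte pages, and how many protection keys are needed to cover storage at 2K-byte versus 4K-byte granularity:

# illustration of the quoted 370-XA facts (my arithmetic, not from the manual)

PAGE = 4 * 1024           # 4K-byte pages: 12-bit byte offset
SEG  = 1024 * 1024        # 1M-byte segments: 256 pages each, 8-bit page index

def split_address(addr):
    """break a virtual address into (segment index, page index, byte offset)"""
    return addr // SEG, (addr % SEG) // PAGE, addr % PAGE

print(split_address(0x0123456))    # -> (1, 35, 0x456), printed as (1, 35, 1110)

def keys_needed(storage_bytes, block):
    """number of storage keys needed to cover storage at a given block size"""
    return storage_bytes // block

MB = 1024 * 1024
print(keys_needed(16 * MB, 2048))  # 8192 keys with S/370 2K-byte blocks
print(keys_needed(16 * MB, 4096))  # 4096 keys with 370-XA 4K-byte blocks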
also ...
http://publib.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/DA1A7002/7.5.62
http://publib.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/EZ9A5003/7.4.75.1
--
Anne & Lynn Wheeler | lynn@garlic.com
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why not an IBM zSeries workstation? Newsgroups: comp.arch,alt.folklore.computers Date: Sat, 14 Oct 2000 05:15:59 GMTalso misc. "3033" entries from
IBM 3033          77-03  78-03  12  VERY LARGE S/370+EF INSTRUCTIONS
AMH V7            77-03  78-09  18  AMDAHL RESP. TO 3033 (1.5-1.7) V6
IBM 3033MP        78-03  79-09  18  MULTIPROCESSOR OF 3033
IBM 3033AP        79-01  80-02  13  ATTACHED PROCESSOR OF 3033 (3042)
IBM 3033          79-11  79-11  00  -15% PURCHASE PRICE CUT
IBM 3033N         79-11  80-01  04  DEGRADED 3033, 3.9MIPS
IBM 3033AP        80-06  80-08  02  3033 ATTACHED PROCESSOR
IBM 3033          80-06  81-10  16  Ext. Addr.=32MB REAL ADDR.;MP ONLY
IBM D.Addr.Sp.    80-06  81-06  12  Dual Address Space for 3033
IBM 3033XF        80-06  81-06  12  OPTIONAL HW/FW PERF. ENHANCE FOR MVS/SP
IBM 3033 24MB     80-11  81-11  12  24MB REAL MEM. FOR 3033UP, AP
IBM 3033S         80-11  81-01  02  2.2MIPS, DEGRADED 3033 (ENTRY 3033 MODEL)
IBM 3033N UPGR.   80-11  80-11  00  9%-14% PERF. IMPROVE, NO CHARGE
IBM 3033 PRICE    81-10             10% IN US, 12-20% EUROPE PURCH. ONLY
IBM 3033S PERF.   81-10  82-06  08  NO-CHARGE PERF. BOOST BY 8%-10%
IBM 3033          82-03             16% PUR.PRICE CUT, -14%Mem.Price($31K/MB)
IBM 3033          82-03             3033 Placed on LIMITED-NEW PRODUCTION
--