From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Fri, 11 Feb 2011 18:08:56 -0500
re:
also from long ago and far away ... other HPO stuff that got fiddled
Date: Sat, 11 Jan 1986 15:26:02 EST
From: melinda
To: wheeler
Subject: From VMSHARE....
<<< PROB HPOGRIND - 48 lines, 0 append(s) >>>
HPO 3.4 allows a user to run away with the CPU
One of the reasons we were always happy to pay to get a Wheeler
scheduler, beginning way back in the PRPQ days, was that it did
such a good job of protecting other users from a CPU hog.
Indeed, several times a year we would have a user panic because
he had just discovered that his computer account was overdrawn
by several thousand dollars. The scenario was always the same.
He had invoked a program or EXEC he was working on; his terminal
had gone dead, so he had gone home for the night. A couple of
days later, he tried to logon again, found himself still logged
on, and asked the operators to force him. That's when he found
he had no money left. Then he would come to us. We'd tell him
about loops, ask him not to do that again, and give him his
money back.
The interesting part of all this is that the Wheeler Scheduler
had been doing such a good job of protecting the system from
the looping user, that nobody had noticed him. The scheduler
just kept him in the background absorbing the spare cycles, but
didn't let him use the cycles somebody else wanted.
This is not at all the way the HPO 3.4 scheduler works, however.
In the year we've been running it, we have seen numerous cases in
which one or two heavy CPU users severely degraded the performance
of the entire system.
These people are not paging heavily and are not doing a lot of
I/O. (VM has never done a real good job of containing users who
put excessive loads on memory/paging or I/O.) They are using
CPU only and generally have very small working sets. Typically,
their TVRATIO's are 1.0.
And the HPO 3.4 scheduler lets a single such user have as much as
90% of one processor in the middle of the afternoon, when there
are plenty of other users who need (and deserve) some of those
cycles.
I'm rather at a loss to figure out how to approach IBM on this
problem. I don't want to be told that the scheduler is working
as designed. Does anybody have any suggestions? Also, do other
people see this problem?
... snip ...
full item
http://vm.marist.edu/~vmshare/browse.cgi?fn=HPOGRIND&ft=PROB
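The behavior described in the VMSHARE item ... a looping user absorbing only otherwise-idle cycles ... is roughly what any decay-usage fair-share dispatcher gives you: recently consumed CPU counts against a user and decays over time, so the hog only wins the processor when nobody else wants it. A minimal sketch of that idea (invented parameters and structure, not the actual scheduler code):

    # toy decay-usage dispatcher: each user's recently consumed CPU
    # decays exponentially; the runnable user with the lowest decayed
    # usage gets the processor.  a looping user accumulates usage and
    # so only wins when nobody else wants to run ... absorbing spare
    # cycles without degrading interactive users.
    DECAY = 0.9     # per-interval decay of accumulated usage (invented)
    SLICE = 1.0     # CPU charged per dispatch interval

    class User:
        def __init__(self, name, wants_cpu):
            self.name = name
            self.usage = 0.0            # decayed CPU consumption
            self.wants_cpu = wants_cpu  # callable: wants to run at t?

    def dispatch(users, intervals):
        for t in range(intervals):
            runnable = [u for u in users if u.wants_cpu(t)]
            if runnable:
                winner = min(runnable, key=lambda u: u.usage)
                winner.usage += SLICE
                print(f"t={t:2d} dispatch {winner.name}")
            for u in users:
                u.usage *= DECAY        # everyone's history fades

    looper = User("looper", lambda t: True)        # CPU hog in a loop
    casual = User("casual", lambda t: t % 4 == 0)  # occasional work
    dispatch([casual, looper], 12)

Running it shows the looper getting every interval the casual user doesn't want ... and losing every interval the casual user does.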
after leaving cambridge for san jose research (latter part of the 70s) ... I
did a lot of work on heavy i/o users ... "scheduling to the bottleneck"
... but none of that work was shipped in product. some recent posts
mentioning systems becoming more & more i/o constrained (as improvement
in disk performance lagged other system components)
https://www.garlic.com/~lynn/2011.html#35 CKD DASD
https://www.garlic.com/~lynn/2011.html#61 Speed of Old Hard Disks
also referring to this item in these old posts (67/3081k comparison)
https://www.garlic.com/~lynn/93.html#31
https://www.garlic.com/~lynn/95.html#10
https://www.garlic.com/~lynn/98.html#46
https://www.garlic.com/~lynn/99.html#4
--
virtualization experience starting Jan1968, online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler) Date: 11 Feb, 2011 Subject: IBM Future System Blog: IBM Historic Computing
didn't realize ... but there is now also a wiki page for Future System
it references sowa web page mentioned upthread as well as this book review:
Broken Promises: An unconventional view of what went wrong at IBM
http://gdrean.perso.sfr.fr/papers/promises.html
Not referenced on the wiki page ... but FS is also discussed in this article:
The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
a few past posts in this thread:
https://www.garlic.com/~lynn/2010q.html#32 IBM Future System
https://www.garlic.com/~lynn/2010q.html#64 IBM Future System
https://www.garlic.com/~lynn/2011.html#14 IBM Future System
https://www.garlic.com/~lynn/2011.html#18 IBM Future System
https://www.garlic.com/~lynn/2011.html#20 IBM Future System
https://www.garlic.com/~lynn/2011b.html#72 IBM Future System
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Other early NSFNET backbone Newsgroups: alt.folklore.computers Date: Fri, 11 Feb 2011 22:18:11 -0500
re:
I asked Melinda about possibly PUCC participating as part of some
proposed NSFNET backbone activity ... this was in the face of nearly
constant opposition from internal politics ... later reference
https://www.garlic.com/~lynn/2006s.html#email870515
also mentioned here:
http://lnkd.in/JRVnGk
Date: Mon, 7 Apr 1986 12:20:32 EST
From: melinda
To: wheeler
Hi, Lynn, Sorry not to have gotten back to you sooner -- we were out
of town when your mail came.
The Cyber 205 is not yet at the John Von Neumann Center here, because
the Consortium's building is not yet finished. It is running
Consortium work, but is still at CDC in Arden Hills, Minnesota. It is
scheduled to be moved to Princeton in June.
In the meantime, two VAX 8600's that will be its frontends are sitting
in the PUCC machine room, and the supercomputer staff is working on
getting the software for them up. It is hoped that by May there will
be a configuration in place that will allow people to start testing
out their communications.
Here, as we understand it, is the ultimate configuration:
character image (doesn't survive well):
[flattened network diagram: an Ethernet backbone tying together a FUZZBALL gateway (56kb line to NSFNET), the PUCC VAX750 (Ultrix) with an Ethernet link to the PUCC 3081, a VITALINK satellite gateway to Arizona and Colorado, VAX750 (Ultrix) gateways with T1 lines to Rutgers, PSU, MIT, NYU, etc., and the two 8600's (VMS/Wollongong WIN/VX) fronting the 205]

Right now, the 205 and a pair of 8600's are at Arden Hills. The 8600's in the PUCC machine room aren't really talking to anything else. The VAX750 in the PUCC machine room has a (not yet stable) Ethernet link to the PUCC 3081, but is not communicating with the 8600's yet.
hung user reference
https://www.garlic.com/~lynn/2011b.html#61 VM13025 ... zombie/hung users
and these old emails
https://www.garlic.com/~lynn/2011b.html#email860217
https://www.garlic.com/~lynn/2011b.html#email860217b
for other topic drift
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
from long ago and far away ... TYMSHARE is being bought by M/D. I got
brought in to evaluate TYMSHARE's GNOSIS as part of its spin-off as
KEYKOS. Also got some IBM interviews for Doug Engelbart (who was at
TYMSHARE at the time).
Date: Tue, 20 Aug 1985 11:15:06 EDT
From: melinda
To: wheeler
Lynn, as you may have heard, we are about to move VMSHARE and
PCSHARE from TYMSHARE to McGill University. So that you and
the McGill people won't have Customs hassles, it's been decided
that I will make your monthly tapes of the conferences from the
copy of the databases that I keep on my system. So, I need a
mailing address for you.
Do you need/want the index files? There is a public domain
program (written by Arty Ecock, of CUNY) that can be used to
search the indices without requiring the conferencing system.
Regards,
Melinda Varian
... snip ...
past email mentioning vmshare
https://www.garlic.com/~lynn/lhwemail.html#vmshare
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Sat, 12 Feb 2011 08:19:54 -0500
Charles Richmond <frizzle@tx.rr.com> writes:
original cp67 kernel (brought out to the univ. jan68) was more than a (card) box ... but fit in a tray.
at that time (jan68), the cp67 group didn't quite trust the cms filesystem. distribution was cp67 source (& assembled source "txt" decks) on OS/360 tape ... and assembled on os/360. punch the individual module "txt" decks, use a magic marker to draw a diagonal stripe across the top of each deck and write the module name. individual txt decks were arranged in a tray in the appropriate order and a "BPS" loader was placed at the front of the whole thing.
load the whole deck in 2540 card reader, dial in the card reader on the front console and hit the "IPL" button. BPS loader would read all the cards into memory and transfer to the appropriate place ... "LDT" card pointing to savecp. savecp would find the correct disk location and write the memory image to disk with appropriate IPL records. Then cp67 system could be booted/IPLed from disk.
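For flavor, a toy of that deck-to-disk flow (record formats invented for illustration; not the actual BPS loader or savecp):

    # toy of the flow: "TXT" records carry (load address, bytes); the
    # loader places them in a memory image; the "LDT" record supplies
    # the entry point; savecp then writes the image out as a bootable
    # kernel file that can be IPLed from disk.
    memory = bytearray(256 * 1024)          # pretend 256K real storage

    txt_deck = [                            # (load_address, object bytes)
        (0x1000, bytes.fromhex("05C0")),    # e.g. one module's cards
        (0x2000, bytes.fromhex("90EC")),    # e.g. another module
    ]

    for addr, data in txt_deck:             # the BPS loader's job
        memory[addr:addr + len(data)] = data

    entry = 0x1000                          # from the LDT card

    with open("cp67.kernel", "wb") as disk: # savecp's job
        disk.write(entry.to_bytes(4, "big"))
        disk.write(memory)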
Source changes could be made to an individual module's assemble file, that module re-assembled and the assembler output TXT deck punched. Repeat the magic-marker operation (diagonal stripe and module name across the top of the deck) ... find the corresponding old deck in the card tray (from the information written across the top of each deck) and replace the cards.
Another approach was to write the BPS loader and all the TXT deck images to tape ... and IPL the tape (instead of the 2540 card reader) to create a new cp67 kernel.
A little later in the year, the cp67 group grew confident enough in the CMS filesystem ... that cp67 source was moved to CMS ... and started using the CMS UPDATE command for source updates ... instead of directly editing the base assemble source, edit an "UPDATE" file ... and use the UPDATE command to change the assemble source creating a temporary/working assemble file, which was then assembled.
An update deck would have "./" control statements that replaced, inserted, or deleted records in the original. It used sequence numbers in cols. 73-80 of the original source. The replaced/inserted new source (from the update deck) had to be completely generated manually (including the "new" sequence numbers in cols. 73-80). I was making so many source code changes to CP67 that I wrote a preprocessor to the update command ... which added a "$" to the end of the "./" control statements ... that defined the automatic generation of sequence numbers in the new/replaced source records. The preprocessor would read the new "update" file ... process any "$" fields, appropriately generating the sequence numbers ... and output a temporary "update" file which was then fed to the UPDATE command ... which generated a temporary "assemble" file that was fed to the assembler (a sketch of the mechanics follows).
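A sketch of those "./" mechanics (simplified control-card syntax, only R/I/D; an illustration of the mechanism, not the real UPDATE command):

    # control cards key off the sequence numbers punched in cols 73-80
    # of the base source.  R(eplace) and D(elete) name a range of
    # sequence numbers; I(nsert) names the record to insert after.
    def seq(card):                  # sequence number field, cols 73-80
        return card[72:80].strip()

    def apply_update(base, update):
        out, i, j = [], 0, 0
        while j < len(update):
            card = update[j]; j += 1
            if card.startswith('./'):
                parts = card.split()
                op, s1 = parts[1].upper(), parts[2]
                s2 = parts[3] if len(parts) > 3 else s1
                while i < len(base) and seq(base[i]) != s1:
                    out.append(base[i]); i += 1   # copy untouched records
                if op == 'I':
                    out.append(base[i]); i += 1   # keep s1, insert after it
                else:                             # R or D: drop s1..s2
                    while i < len(base):
                        done = seq(base[i]) == s2
                        i += 1
                        if done: break
            else:
                out.append(card)    # replacement/insertion source text
        out.extend(base[i:])
        return out

    base = [f"{'LINE ' + n:<72}{n:>8}" for n in
            ("00010000", "00020000", "00030000")]
    upd = ["./ R 00020000",
           "LINE 00020000 CHANGED".ljust(72) + "00020001"]
    for rec in apply_update(base, upd):
        print(rec.rstrip())

The "$" preprocessor then amounts to generating those cols. 73-80 numbers for the new records automatically.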
Later, after I joined the science center, a multi-level source update procedure was created. Update files now had a "prefix" filetype of "UPDG" (with the remaining four characters in the filetype used for specifying an update level). The "$" preprocessor was invoked, which generated a "UPDT" temporary file (with the same suffix) ... which was then applied to the assemble source to generate a temporary assemble file. This process then iterated (for each update file for that particular source routine) ... but with UPDATE applying the source update to the most recently generated temporary assemble file (the output then replaced the temporary assemble file).
Eventually the CMS editor was enhanced to have an "update" mode; rather than directly creating the UPDG file ... it was possible to edit a source file ... and the CMS editor would "save" all source changes made as an update file.
from long ago and far away:
Date: Sun, 8 Sep 1985 14:10:41 EDT
From: melinda
To: wheeler
Lynn, I was truly touched by your having spent part of your Saturday
morning loading up those CP-67 EXECs for me. It was extraordinarily
thoughtful of you and has helped me answer almost all of my questions
about the CP-67 implementation.
I have been working my way through the EXECs and believe that I have
them all deciphered now. I was somewhat surprised to see how much of
the function was already in place by the Summer of 1970. In
particular, I hadn't expected to find that the update logs were being
put at the beginning of the textfiles. That has always seemed to me
to be one of the most ingenious aspects of the entire scheme, so I
wouldn't have been surprised if it hadn't been thought of right away.
One thing I can't determine from reading the EXECs is whether the
loader was including those update logs in the loadmaps. Do you
recall?
Of the function that we now associate with the CTL option of UPDATE,
the only substantial piece I see no sign of in those EXECs is the use
of auxfiles. Even in the UPAX EXEC from late January, 1971, it is
clear that all of the updates listed in the control files were
expected to be simple updates, rather than auxfiles. I know, however,
that auxfiles were fully implemented by VM/370 Release 1. I have a
First Edition of the "VM/370 Command Language User's Guide" (November,
1972) that describes them. The control file syntax at that point was
updlevel upid AUX
Do you have any memories of the origin of auxfiles?
Thank you again,
Melinda
... snip ...
also posted/discussed in this post:
https://www.garlic.com/~lynn/2006w.html#42 vmshare
and as mentioned in these posts
https://www.garlic.com/~lynn/2011b.html#39 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
https://www.garlic.com/~lynn/2011b.html#89 If IBM Hadn't Bet the Company
it was before the Almaden tape library operations problem, where random tapes were being mounted as "scratch" (and written over).
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Sat, 12 Feb 2011 08:41:50 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
after I joined the science center ... misc. past posts mentioning
545tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech
the standard new cp67 process involved writing a new "kernel" card-deck image to tape. I enhanced the process to also add additional files on the tape (behind the bootable card-deck image) ... everything that was needed to generate that card-deck image ... all the source, all the source changes ... all the processes and procedures.
I periodically archived some of these tapes ... and besides straight backup/archive tapes ... I took several of these tapes when I transferred to SJR on the west coast. Over the years, I would copy the tapes to newer technology (800bpi to 6250bpi, etc). It was from these tapes that I retrieved a lot of stuff for Melinda ... and it was these tapes (even some replicated) that were "lost" in the Almaden tape library operations glitch.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler) Date: 12 Feb, 2011 Subject: Early NSFNET backbone Blog: Old Geek Registry
re:
recent post in a.f.c. with description from princeton
https://www.garlic.com/~lynn/2011c.html#email860407
and then slightly earlier:
Date: 1 October 1985, 11:04:17 EDT
To: wheeler
Lynn, I am working with XXXXXX to respond to Princeton/Wisconsin/
Delaware request for equipment with which to experiment with a micro
backbone for BITNET for access to supercomputers. It sounds like what
you mentioned to YYYYYY recently that you are proposing to NSF.
What is your proposal? Who are you working with in ACIS HQ? Who are
you working with in NSF? NSF is granting 500K to the above schools for
1 year to do the work. ACIS is trying to respond ASAP to Princeton's
request for equipment. Please give me a call and let's discuss what
you are doing. I have a lot more confidence in your plan than in what
these 3 schools are trying to accomplish over the next year. They are
expecting the money momentarily but have no firm plan of who is doing
what to whom.
It's good to talk with you again. Looking forward to talking with you
on phone.
... snip ...
past posts mentioning bitnet (&/or EARN)
https://www.garlic.com/~lynn/subnetwork.html#bitnet
past posts mentioning nsfnet
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Other early NSFNET backbone Newsgroups: alt.folklore.computers Date: Sat, 12 Feb 2011 10:06:28 -0500
re:
and more from long ago and far away
Date: 04/25/85 07:55:35
From: wheeler
re: foils; fyi; hsdt009 is what has been given to NSF to tie together
all the super computer centers. Have given it to UofC system to tie
together all their campuses. Will be giving it to national center for
atmospheric research (NCAR) in boulder on monday.
pam004 is the talk i'm giving at toronto (IBM SHARE) on weds
... snip ...
HSDT is the high-speed data transport project I was running ...
some past posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
"PAM" is the paged mapped filesystem that I originally did on
cp67 and then ported to vm370 ... some past posts
https://www.garlic.com/~lynn/submain.html#mmap
and a few minutes later (the same day)
Date: 04/25/85 08:05:10
From: wheeler
re: hsdt; thot you may be interested in some of the stuff. will
probably be over in europe last week in june thru most of july. Have
pitched hsdt to NSF for tieing together all the super computer centers
and have gotten very favorable response (bandwidth is about 20* the
alternatives for similar costs). Have &/will be pitching to several
research/university systems. Have pitched to the Univ. of Cal.
systems, will be pitching it to national center for atmospheric
research before heading to toronto to give pam pitch at the interim
share.
Plan on pitching to the european univ. network when I'm over and
several other organizations.
re: rmn; looks like very good possibility of building a 4 processor
proto-type using A74s (about 350kips) next year ... romans probably
aren't going to be available until late 87 unless we build a fire
under them and accelerate the schedule. Have pitched it to both scd
and spd executives.
... snip ...
"rmn" refers to generalized processor clusters ... significant amount
of effort is physical packaging large numbers in racks (and how to
handle heat and air flow). mentioned in this post
https://www.garlic.com/~lynn/2004m.html#17
and these emails:
https://www.garlic.com/~lynn/2011b.html#email850314
https://www.garlic.com/~lynn/2011b.html#email870315
having to get a substitute for the HSDT presentation to the director of NSF
because of a processor cluster meeting in YKT:
https://www.garlic.com/~lynn/2007d.html#email850315
misc. other old email mentioning director of NSF
https://www.garlic.com/~lynn/2006w.html#email850607
https://www.garlic.com/~lynn/2006t.html#email860407
https://www.garlic.com/~lynn/2006s.html#email860417
https://www.garlic.com/~lynn/2007.html#email860428b
A74 was a 370 "desktop" done by dept. A74 in POK:
https://www.garlic.com/~lynn/2000e.html#email880622
later processor clusters activity part of ha/cmp product
https://www.garlic.com/~lynn/lhwemail.html#medusa
and the next day:
Date: 04/26/85 10:51:49
From: wheeler
re: network; i would like to place hsdt009 script on the network disk
... it is foils which describe the high-speed data transport adtech
project. Unfortunately, it can't be labeled ibm internal use only
since the foils are part of information that is being presented
outside of ibm ... including (among others) NSF as means for tieing
together all the super computer centers, Univ. of Cal. for tieing
together all campuses, NCAR for connecting 20+ universities who are
using the national center for atmospheric research, etc.
In fact, NSF has expressed interest in actively participating in the
adtech project. There is also some number of other documentation files
which (among other things) contains a detailed analysis of several
RSCS performance bottlenecks and proposed solutions (although some of
the information has already been extracted in forums on VMPCD and
IBMVM).
... snip ...
I had been blamed for online computer conferencing on the internal
network in the late 70s and early 80s ... misc. past internal
network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
folklore is that when the executive committee was told about computer conferencing (and the internal network), 5of6 wanted to fire me. There were then a lot of corporate investigations and taskforces looking at the phenomena ... one of the results was "official" support for (VM-based) computer conferencing. The first such conference was "IBMVM" ... but eventually others were spawned by different organizations on a number of subjects. VMPCD was a VM performance conference sponsored by Endicott. There were also networking online conferences (NETWORK, NETWRKNG, etc) sponsored by the communication group.
Some conferences had a requirement that topics be classified "Internal Use Only" (or sometimes higher) ... however, presentations made to external organizations couldn't really be classified "Internal Use Only".
Past references to somebody in the communication group that wanted to
start a new topic on "high-speed" ... and their definition of
"high-speed" (56kbit) and "very high-speed" (T1/1.5mbit)
https://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2003m.html#59 SR 15,15
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005n.html#25 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005r.html#9 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006s.html#50 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2007p.html#64 Damn
https://www.garlic.com/~lynn/2007q.html#45 Are there tasks that don't play by WLM's rules
https://www.garlic.com/~lynn/2008e.html#45 1975 movie "Three Days of the Condor" tech stuff
https://www.garlic.com/~lynn/2008h.html#31 VTAM R.I.P. -- SNATAM anyone?
https://www.garlic.com/~lynn/2008i.html#99 We're losing the battle
https://www.garlic.com/~lynn/2008p.html#12 Discussions areas, private message silos, and how far we've come since 199x
https://www.garlic.com/~lynn/2008p.html#13 "Telecommunications" from '85
https://www.garlic.com/~lynn/2009g.html#14 Top 10 Cybersecurity Threats for 2009, will they cause creation of highly-secure Corporate-wide Intranets?
https://www.garlic.com/~lynn/2009g.html#72 Mainframe articles
https://www.garlic.com/~lynn/2009l.html#7 VTAM security issue
https://www.garlic.com/~lynn/2009l.html#24 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
https://www.garlic.com/~lynn/2009l.html#44 SNA: conflicting opinions
https://www.garlic.com/~lynn/2009p.html#59 MasPar compiler and simulator
https://www.garlic.com/~lynn/2010e.html#11 Crazed idea: SDSF for z/Linux
https://www.garlic.com/~lynn/2010i.html#69 Favourite computer history books?
https://www.garlic.com/~lynn/2010o.html#6 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2010o.html#57 So why doesn't the mainstream IT press seem to get the IBM mainframe?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: RISC versus CISC Newsgroups: comp.arch Date: Sat, 12 Feb 2011 17:56:37 -0500
early 70s IBM had the "future system" effort to completely replace 360/370 ... with complex hardware to address all sorts of issues:
above references this item
http://www.jfsowa.com/computer/memo125.htm
some more information here:
https://people.computing.clemson.edu/~mark/fs.html
this has quote about trying to compete with the clone controller
vendors:
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
from above:
IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead that
the competition would never be able to keep up, and to have such a high
level of integration that it would be impossible for competitors to
follow a compatible niche strategy. However, the project failed because
the objectives were too ambitious for the available technology. Many of
the ideas that were developed were nevertheless adapted for later
generations. Once IBM had acknowledged this failure, it launched its
'box strategy', which called for competitiveness with all the different
types of compatible sub-systems. But this proved to be difficult because
of IBM's cost structure and its R&D spending, and the strategy only
resulted in a partial narrowing of the price gap between IBM and its
rivals.
... snip ...
Other references are that during the FS period ... all sorts of internal
efforts (viewed as possibly competitive) were killed off ... including
370 hardware&software products (since FS was going to completely replace
360/370) ... which allowed (370) clone processor vendors to gain a market
foothold. Then, after the FS demise, there was a mad rush to replenish the
370 software&hardware product pipeline. misc. past posts mentioning
FS
https://www.garlic.com/~lynn/submain.html#futuresys
There have been a number of articles that the corporation lived under the dark shadow of the FS failure for decades (deeply affecting its internal culture).
I've periodically claimed that the example of FS motivated John Cocke to go to
the exact opposite extreme for 801/risc in the mid-70s.
http://www-03.ibm.com/press/us/en/pressrelease/22052.wss
wiki page:
https://en.wikipedia.org/wiki/John_Cocke
misc. past emails mentioning 801, iliad, romp, rios, power, etc
https://www.garlic.com/~lynn/lhwemail.html#801
The corporation had a large number of different microprocessors ... developed for controllers, engines used in low-end & mid-range 370s, and various other machines (series/1, 8100, system/7, etc). In the late 70s there was an effort to converge all of these microprocessors on 801. In the early 80s, several of these efforts floundered and some number of the engineers left and showed up on risc efforts at other vendors.
There is folklore that after the FS demise, some number of participants retreated to Rochester and did the S/38 with some number of FS features. Then the S/38 follow-on (AS/400) was one of the efforts that was to have one of these 801 micro-engines. That effort floundered (also) and there was a quick effort to do a CISC engine. Then a decade later, AS/400 finally did migrate to 801 (power/pc).
There was a presentation by the i432 group at the annual Asilomar SIGOPS ... which claimed that a major problem with i432 was that it was a) complex and b) in silicon; all "fixes" required brand new silicon.
I had done a multiprocessor machine design in the mid-70s (never announced or shipped) that was basically 370 with some advanced features somewhat akin to some of the things in i432 ... but it was a heavily microcoded engine ... and fixes were a new microcode floppy disk.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The first personal computer (PC) Newsgroups: alt.folklore.computers Date: Sat, 12 Feb 2011 22:15:50 -0500
Charles Richmond <frizzle@tx.rr.com> writes:
more than a decade later somebody at stanford was doing a phd on global LRU ... and there was lots of resistance from some factions of the academic community to awarding the phd. at asilomar sigops (14-16dec81) ... jim gray asked me if i could provide some input (the phd candidate was a co-worker at tandem).
i was having my own problems ... having been blamed for online computer
conferencing on the internal network in the late 70s & early 80s ... i
was under all sorts of restrictions ... took me almost a year to get
approval to respond to jim's request ... even tho it primarily involved
work i had done as an undergraduate in the 60s. the response that i was
allowed to send ... nearly a year after the original request
https://www.garlic.com/~lynn/2006w.html#email821019
in this post
https://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?
part of the response involved pointing to some work done on cp67 at the grenoble science center in the early 70s ... for a local LRU ... and published in cacm. at the time, grenoble had a 1mbyte real storage (155 pageable pages after fixed storage requirements) 360/67 running cp67 with 35 users and subsecond trivial response. The cambridge science center had a very similar cp67 with a very similar cms workload on 768kbytes (104 pageable pages after fixed storage requirements) with my global LRU implementation ... and had similar subsecond response running 70-80 users (grenoble had 50 percent more pageable pages and half the users).
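for reference, global LRU is usually approximated with a "clock": a single hand sweeps all page frames system-wide (rather than per-user local pools), clearing hardware reference bits and stealing the first frame found unreferenced. a minimal sketch of that selection (illustration only, not the cp67 code):

    # one hand sweeps ALL frames in the system (global LRU); frames
    # referenced since the last sweep get one more pass of grace; the
    # first frame with its bit already clear is the replacement victim.
    class Frame:
        def __init__(self, page):
            self.page = page
            self.referenced = True      # set by hardware on each touch

    def clock_steal(frames, hand):
        """Return (victim_index, new_hand_position)."""
        while True:
            f = frames[hand]
            if f.referenced:
                f.referenced = False    # clear and move on
                hand = (hand + 1) % len(frames)
            else:
                return hand, (hand + 1) % len(frames)

    frames = [Frame(p) for p in range(8)]
    frames[3].referenced = False        # untouched since last sweep
    victim, hand = clock_steal(frames, 0)
    print("steal frame", victim)        # -> 3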
misc. past posts mentioning cambridge science center:
https://www.garlic.com/~lynn/subtopic.html#545tech
misc. past posts mentioning replacement algorithms and paging:
https://www.garlic.com/~lynn/subtopic.html#wsclock
misc. past posts mentioning internal network:
https://www.garlic.com/~lynn/subnetwork.html#internalnet
misc. past posts mentioning grenoble science center:
https://www.garlic.com/~lynn/93.html#7 HELP: Algorithm for Working Sets (Virtual Memory)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
https://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
https://www.garlic.com/~lynn/2006i.html#31 virtual memory
https://www.garlic.com/~lynn/2006l.html#14 virtual memory
https://www.garlic.com/~lynn/2006o.html#11 Article on Painted Post, NY
https://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
https://www.garlic.com/~lynn/2007s.html#5 Poster of computer hardware events?
https://www.garlic.com/~lynn/2007u.html#79 IBM Floating-point myths
https://www.garlic.com/~lynn/2007v.html#32 MTS memories
https://www.garlic.com/~lynn/2008c.html#65 No Glory for the PDP-15
https://www.garlic.com/~lynn/2008h.html#70 New test attempt
https://www.garlic.com/~lynn/2008h.html#79 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008r.html#21 What if the computers went back to the '70s too?
https://www.garlic.com/~lynn/2009l.html#12 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
https://www.garlic.com/~lynn/2009r.html#54 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2010f.html#85 16:32 far pointers in OpenWatcom C/C++
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Sun, 13 Feb 2011 10:18:24 -0500
note that IBM nearly bet the company again a decade later with "Future System" (comments that if any other vendor had a failure the magnitude of FS, it would no longer be in business). misc. past posts mentioning FS
recent post in comp.arch
https://www.garlic.com/~lynn/2011c.html#7 RISC versus CISC
FS wiki
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
above references:
Broken Promises: An unconventional view of what went wrong at IBM
http://gdrean.perso.sfr.fr/papers/promises.html
and
http://www.jfsowa.com/computer/memo125.htm
FS is also mentioned here:
The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
and here:
https://people.computing.clemson.edu/~mark/fs.html
I may have established the tone for the rest of my career at the company by ridiculing the FS effort ... drawing comparisons with a cult movie that had been playing down at central sq. (and claiming some stuff I already had running was better than what they were blue-skying in various vaporware documents).
recent Future System thread:
https://www.garlic.com/~lynn/2011.html#14 IBM Future System
https://www.garlic.com/~lynn/2011.html#18 IBM Future System
https://www.garlic.com/~lynn/2011.html#20 IBM Future System
https://www.garlic.com/~lynn/2011b.html#72 IBM Future System
https://www.garlic.com/~lynn/2011c.html#1 IBM Future System
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The first personal computer (PC) Newsgroups: alt.usage.english, alt.folklore.computers Date: Sun, 13 Feb 2011 10:28:30 -0500
Joe Thompson <spam+@orion-com.com> writes:
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The first personal computer (PC) Newsgroups: alt.usage.english, alt.folklore.computers Date: Sun, 13 Feb 2011 13:54:27 -0500
tony cooper <tony_cooper213@earthlink.net> writes:
we have a plan with a fixed number of "call" minutes ... but per-item charges on non-voice use (text). we started getting so many spam text messages ... which show up on the bill ... we finally had to ask the service provider to block incoming text messages. they claimed the only option/feature they supported was to turn off all incoming and outgoing non-voice calls (which we finally had to do).
we still get some SPAM voice calls ... at least some seem to originate from call centers outside the US ... which may be beyond the do-not-call legislation. they may also be leveraging VOIP to minimize their costs.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Sun, 13 Feb 2011 13:55:48 -0500
re:
Future System descriptions talk about its one-level-store design ... but don't mention that large sections of the design/architecture were vaporware ... a lot of description that managed to have very little actual content.
I had watched the effort over the years of trying to get TSS/360 (with its one-level-store) up and running on the univ. 360/67 (tss/360 was the "official" virtual memory operating system for the 360/67) ... and then cp67/cms being able to run massive rings around tss/360 (on effectively identical benchmarks, especially after I had rewritten a lot of the cp67 code).
While TSS/360 had done a much better job with address constants in
application execution images (compared to os/360 ... whose conventions
were used heavily by cms) ... there were still large portions of the
tss/360 one-level-store that were poorly implemented ... especially
from a thruput standpoint ... somewhat analogous to recent global
vis-a-vis local LRU ... recent discussion/post:
https://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)
In any case, the Future System one-level-store appeared to have been
heavily influenced by the TSS/360 effort ... *AND* to have learned
nothing from that effort. After joining the science center and doing
paged mapped filesystem for cp67/cms ... I tried to avoid a whole
bunch of things that I saw done wrong in TSS/360. misc. past posts
mentioning paged mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap
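as a rough modern analogue: a paged-mapped filesystem brings file contents into the address space through the paging subsystem instead of through explicit read I/O ... which is approximately what memory-mapping does today. a sketch (file name invented):

    import mmap

    path = "module.image"                   # hypothetical file
    with open(path, "wb") as f:             # make a demo file
        f.write(b"executable image" + bytes(4096))

    with open(path, "rb") as f:
        m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        print(m[:16])       # touching the mapping faults pages in;
        m.close()           # no explicit read() of the contents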
old email about moving bunch of stuff from cp67 to vm370
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430
and past posts mentioning struggling with the os/360 address constant
paradigm for paged mapped executables
https://www.garlic.com/~lynn/submain.html#adcon
recent email mentioning doing paged mapped filesystem
presentation at SHARE (in the mid-80s)
https://www.garlic.com/~lynn/2011c.html#email850425
in this post
https://www.garlic.com/~lynn/2011c.html#6 Other early NSFNET backbone
above also references doing all this HSDT stuff
https://www.garlic.com/~lynn/subnetwork.html#hsdt
... some of it in conjunction with what was to become NSFNET backbone ... as well as processor cluster stuff (I had to get a substitute to present to director NSF on the HSDT stuff ... because I got preempted for processor cluster meeting in YKT).
I've periodically claimed that the reason that NSFNET backbone RFP
went out specifying T1 ... because I already had T1 operational in
HSDT and was constantly pitching it for NSFNET. Recent reference in
this post
https://www.garlic.com/~lynn/2011c.html#2 Other early NSFNET backbone
with old email from Princeton showing how they expected their
links to be put in
https://www.garlic.com/~lynn/2011c.html#email860407
However, there was lots of internal political pressure that prevented
us from bidding on NSFNET backbone (overcoming even lobbying by
director of NSF ... and statements that what HSDT already had running
was at least five yrs ahead of all bid submissions for NSFNET
backbone). Past post
https://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET
about some of the internal SNA/VTAM misinformation that was
going around the company (at highest levels):
https://www.garlic.com/~lynn/2006w.html#email870109
in fact, the "winning" bid for NSFNET backbone RFP ... didn't actually
install T1 links ... but put in 440kbit/sec links ... and then
somewhat trying to meet the letter of the RFP ... put in T1 trunks
with telco multiplexors (multiple 440kbit/sec links over T1 trunks).
misc. past posts mentioning NSFNET backbone
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The first personal computer (PC) Newsgroups: alt.usage.english, alt.folklore.computers Date: Sun, 13 Feb 2011 14:34:23 -0500
greenaum@yahoo.co.uk (greenaum) writes:
... replicated until full container. there was a story that the
optimization work that they did on modular containers in their super
datacenters ... resulted in them having about 1/3rd the cost/mip(flop)
... of what you would pay a name-brand vendor. this includes doing some
very detailed studies on disk reliability and MTBF ... a couple past
posts on subject:
https://www.garlic.com/~lynn/2007h.html#13 Question on DASD Hardware
https://www.garlic.com/~lynn/2007j.html#10 Disc Drives
https://www.garlic.com/~lynn/2007j.html#40 Disc Drives
there is also stories of entities renting time from the big cloud vendors for SHA1 (secure hash) breaking (password cracking, and other similar things).
misc. past posts mentioning super datacenters:
https://www.garlic.com/~lynn/2006q.html#43 21st century pyramids--super datacenters
https://www.garlic.com/~lynn/2008n.html#68 VMware Chief Says the OS Is History
https://www.garlic.com/~lynn/2008n.html#79 Google Data Centers 'The Most Efficient In The World'
https://www.garlic.com/~lynn/2009m.html#81 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2009o.html#64 The new coin of the NSA is also the new coin of the economy
https://www.garlic.com/~lynn/2010e.html#78 Entry point for a Mainframe?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Sun, 13 Feb 2011 18:38:01 -0500
Quadibloc <jsavard@ecn.ab.ca> writes:
it went into S/38 ... one of the issues was that a lot of shortcuts, and lack of thruput, that would have been critical at the high-end ... were much less of an issue at the S/38 end of the market.
one of the s/38 simplifications was that all disks were treated as a common pool, with record allocation scattered across the whole disk pool. this had downsides for both thruput and availability (somewhat referred to in the followup post regarding one-level-store from tss/360 ... apparently neither FS nor S/38 learned anything from that experience ... and the work i did on the paged mapped filesystem).
after transferring to san jose research ... i was allowed to play
disk engineer (across the street) ... among other things ... some past
posts
https://www.garlic.com/~lynn/subtopic.html#disk
and one of the engineers there got a patent in the 70s ... on what would come to be referred to as RAID.
The S/38 problem with the common pool ... was that the whole, complete infrastructure had to be backed up as a single entity (all disks) ... and then if any single disk failed ... a whole complete restore had to take place (in some cases claimed to take a day or more).
In any case, that single-disk-failure vulnerability ... taking out the whole infrastructure and requiring a whole-infrastructure restore ... is claimed to be the motivator for S/38 being an early RAID adopter (masking single disk failures ... since scatter allocation resulted in a single failure taking out the whole infrastructure ... an approach that doesn't scale at all). A back-of-envelope version of the availability point is sketched below.
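a back-of-envelope version of the availability point (illustrative numbers only, not S/38 specs):

    # if every object is scattered across all N drives, ANY single
    # drive failure takes down the whole store ... so time-to-first-
    # failure of the pool is roughly the drive MTBF divided by N.
    drive_mtbf = 100_000                 # assumed hours per drive

    for n in (1, 4, 8, 16):
        print(f"{n:2d} drives: pool MTBF ~ {drive_mtbf / n:,.0f} hours")
    # RAID masks the single-drive failure ... the claimed motivation
    # for S/38 adopting it early.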
In the comp.arch risc/cisc reference ... there is discussion that as/400 is the s/38 follow-on and that as/400 was originally going to be one of the 801/risc implementations (a corporate effort from the late 70s to converge the large number of different microprocessors onto a common 801 platform) ... effectively all of those floundered and as/400 had a crash program to do a cisc chip (although as/400 eventually did move to 801 power/pc a decade later).
misc. past posts mentioning having done paged mapped filesystem for
cp67/cms originally in the early 70s.
https://www.garlic.com/~lynn/submain.html#mmap
recent post mentioning FS
https://www.garlic.com/~lynn/2010q.html#32 IBM Future System
with trivia that my brother was regional marketing rep for apple (largest physical area in conus) ... and worked out being able to dial into apple corporate hdqtrs to check on build&ship schedules ... which turned out to be a s/38.
misc. past posts mentioning 801, risc, iliad, romp, rios, power,
power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Sun, 13 Feb 2011 20:52:25 -0500
re:
wiki for s/38
https://en.wikipedia.org/wiki/IBM_System/38
wiki for as/400
https://en.wikipedia.org/wiki/IBM_System_i
from long ago and far away ("merge" of s/36 & s/38 is referring to
as/400)
Date: 08/12/86 10:21:35
From: somebody in Rochester
To: wheeler
Subject: network/broadband distribution
Lynn,
I am running an ad hoc group here in Rochester trying to incorporate
a network/broadband technology into our 1Q88 processor (a new product
line which will merge the S/36 and S/38 products). We are involved in
an effort (from Yorktown Research). Our intent is to use
network/broadbands to move our licensed products from the PIDs of the
world to the customer. Likewise, I would like to offer
network/broadband distribution as a functional offering to our
customer set for data networking capability. I am trying to track down
every person I can find who is doing network/broadband work so that I
don't reinvent any wheels and also so that I don't blindly head down
the wrong path. xxxxxx in Corporate Internal Telecommunications told
me that you were involved in network/broadband work in some
fashion. Can you enlighten me and perhaps give me more to go on?
... snip ...
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Other early NSFNET backbone Newsgroups: alt.folklore.computers Date: Sun, 13 Feb 2011 21:27:31 -0500
a little more topic drift, not sna/vtam mis-information ... some actual data ... from long ago and far away:
That summer was an extended whirlwind tour of numerous places in Europe (including the La Gaude lab). The 3725 was the biggest and fastest ... and a full-duplex T1 (1.5mbit) would be 1.5mbit concurrently in both directions, aka 3mbit/sec aggregate (European "T1" is 2mbit/sec full-duplex or 4mbit/sec aggregate; the 3725's 1.7mbit limit wouldn't even handle a single direction of a European "T1").
Possibly because of that limitation (and in support of various SNA/VTAM misinformation), the communication group did a report for corporate executives claiming that customers wouldn't be needing T1 before sometime well into the 90s. 37xx controllers supported something called a "fat pipe", which simulated the operation of a single faster link using a group of parallel 56kbit links. The group surveyed customer fat pipe use and found some number of two-link fat pipes, with customer counts declining as the number of parallel links increased ... and nearly no customer use of fat pipes with more than five 56kbit/sec links (280kbit/sec aggregate; full-duplex would be 560kbit/sec).
What they overlooked (possibly purposefully) was that a) somewhere around 5 or 6 56kbit links had the same aggregate cost/tariff as a single T1 (1.5mbit) link, and b) a trivial survey at the same time turned up 200 mainframe customers with T1 links using non-IBM controller products (the fat pipe survey possibly being self-justifying, since IBM didn't have products that could support a full T1). The arithmetic is sketched below.
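a quick worked version of the tariff point (link rates only; the 5-6 links-per-T1 tariff figure is from the text above):

    link = 56                  # kbit/sec per 56kb link
    t1 = 1544                  # kbit/sec T1 line rate (1536 payload)
    print("5-link fat pipe:", 5 * link, "kbit/sec aggregate")
    print("one T1 carries :", round(t1 / link, 1), "links' worth")

i.e. for roughly the price of a five-link fat pipe (280kbit/sec), a single T1 delivers about 27 56kbit links' worth of bandwidth.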
another look at 3725 in this presentation about using S/1 to emulate
37xx/ncp (comparing effective thruput of the two products):
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
"configurator" in the above refers to the official IBM sales&marketing support tool on the HONE system for 3725.
misc. past posts mentioning fat pipe survey:
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2002j.html#67 Total Computing Power
https://www.garlic.com/~lynn/2003m.html#28 SR 15,15
https://www.garlic.com/~lynn/2003m.html#59 SR 15,15
https://www.garlic.com/~lynn/2004g.html#37 network history
https://www.garlic.com/~lynn/2004l.html#7 Xah Lee's Unixism
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET
https://www.garlic.com/~lynn/2008e.html#45 1975 movie "Three Days of the Condor" tech stuff
https://www.garlic.com/~lynn/2008s.html#19 Nerdy networking kid crashes the party
https://www.garlic.com/~lynn/2009l.html#24 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
https://www.garlic.com/~lynn/2009l.html#44 SNA: conflicting opinions
https://www.garlic.com/~lynn/2010e.html#80 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010e.html#83 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010i.html#69 Favourite computer history books?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Mon, 14 Feb 2011 07:58:43 -0500
hancock4 writes:
one of the final nails in the FS coffin was some analysis by the Houston Scientific Center
Eastern ran its "System One" ACP airline res system on a 370/195. The Houston Scientific Center did some FS analysis claiming that if ACP were run on an FS machine built out of the fastest then-available circuitry (370/195 technology), it would have the throughput of a 370/145 (a factor of 20-30 times slowdown). This had to do with the high-level hardware abstraction and multiple levels of indirection.
reference with some FS hardware discussion
http://www.jfsowa.com/computer/memo125.htm
the above references that the 3081 was built using FS hardware with 370 microcode ... and that it had an enormous amount of circuitry (increasing manufacturing cost) and was significantly slow for the amount of circuitry (compared to the clone processor competition).
some recent posts mentioning 3081:
https://www.garlic.com/~lynn/2011b.html#49 vm/370 3081
https://www.garlic.com/~lynn/2011b.html#62 vm/370 3081
https://www.garlic.com/~lynn/2011b.html#68 vm/370 3081
https://www.garlic.com/~lynn/2011b.html#70 vm/370 3081
one of the above has old email mentioning that ACP running on a 3081D (just using one of the processors, since ACP didn't have multiprocessor support) was 20% slower than running on a 3033. Now the 3033 was claimed to be a 4.5mip machine (50% faster than the 3mip 168-3) and each 3081D processor was claimed to be 5mip. Later the 3081K (essentially double the cache size of the 3081D) was tested, claiming 7mip per processor, but ACP ran at approx. the same speed on one 3081K processor as on the 3033 (again, the 2nd 3081 processor was idle/unused). The arithmetic is worked below.
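worked out with the numbers in the post (claimed MIPS vs observed ACP thruput relative to the 3033):

    mips_3033 = 4.5
    claimed  = {"3081D": 5.0, "3081K": 7.0}  # per-processor claims
    relative = {"3081D": 0.8, "3081K": 1.0}  # ACP thruput vs 3033

    for m in ("3081D", "3081K"):
        eff = relative[m] * mips_3033        # "3033-equivalent" MIPS
        print(f"{m}: claimed {claimed[m]} MIPS, effective ~{eff:.2f} "
              f"({eff / claimed[m]:.0%} of claim on this workload)")

i.e. on this workload a 3081D processor delivered about 3.6 "3033-equivalent" MIPS against a 5 MIPS claim, and a 3081K about 4.5 against 7.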
in any case, at the low/entry business computer level ... s/38 could get away with the various throughput issues.
as mentioned in the posts and references ... the initial AS/400 was designed to be a converged s/36 & s/38 (using a rapidly designed cisc processor ... after the 801/risc/iliad effort floundered). A decade later AS/400 was migrated to 801/risc (power/pc).
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Mon, 14 Feb 2011 08:00:42 -0500
Quadibloc <jsavard@ecn.ab.ca> writes:
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Mon, 14 Feb 2011 08:23:54 -0500
Charles Richmond <frizzle@tx.rr.com> writes:
There was another episode a couple years later ... I was in the process
of shipping my resource manager (much of it was stuff from cp67 that had
been dropped in the initial morph to vm370) ... discussed in some more
detail in this recent long-winded post (linkedin z/VM group)
https://www.garlic.com/~lynn/2011b.html#61 VM13025 ... zombie/hung users
there was one of the biggest, "true blue" commercial accounts not far from boston ... which I periodically dropped by ... I knew several of the local branch office people as well as people on the account. About the time of the "resource manager" ... the branch manager had done/said something that horribly offended the customer. In response, the customer was going to be the first "true blue" account to install a large clone processor (there had been several installs at educational accounts, but so far none at big commercial "true blue" accounts).
I was asked to go sit onsite at the customer account for six months ... appearing as if I was convincing the customer that IBM was better than the clone competition. I was familiar with the situation and knew that the customer was going to install the clone processor regardless of anything I did (it would go into a huge datacenter and might even be difficult to find in the wash of all the "blue processors").
I was told that I needed to do it for the CEO; the local branch manager was his good friend (and crewed on the CEO's sailboat) ... and being the first with a clone processor to blemish his record would taint his career forever. My presence was needed to try and obfuscate it being a technical issue ... and direct attention away from the branch manager. I was finally told that if I didn't do it, I wouldn't have a career and could say goodbye to promotions and raises (I wasn't a team player if I did *NOT* take the bullet for the branch manager).
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Mon, 14 Feb 2011 09:32:07 -0500
re:
starting in the late 70s and extending through the first half of the
80s ... 43xx/mid-range was showing up with clusters (besting 3033 &
3081 in aggregate performance & price/performance) as well as at the
leading edge of distributed computing (large customers with 43xx
orders of several hundred at a time). Also, 43xx (& vax) had dropped
price/performance in the mid-range market below some threshold and
they were selling in new markets (large numbers of one or two machine
orders). by the mid-80s both the 43xx & vax numbers were starting to
drop off as workstations & large PCs were starting to take over those
markets (cluster, distributed, and single new business). some old
43xx email
https://www.garlic.com/~lynn/lhwemail.html#43xx
old post with decade of vax sales, sliced in diced in various ways:
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
It was in the mid-80s when the corporate executives started predicting that company gross would double from $60B to $120B ... and started a big expansion of mainframe manufacturing capacity (including the enormous bldg. 50 on the san jose plant site to "double" disk manufacturing). However, at the same time there was an enormous amount of information that computing was becoming increasingly commoditized (cluster, distributed, moving into the lower-end ... and moving to workstations and large PCs) ... and the mainframe business was heading in exactly the opposite direction from that predicted by the top executives. It was relatively trivial to show a spreadsheet that the company was heading into the red (it didn't seem to matter, since I had already been told I didn't have a career).
Roll forward a few years, and in 1992 the company does go into the
red. We depart that year ... not so much because of the company going
into the red ... but more because the cluster scale-up work got
transferred, announced as supercomputer, and we were told we couldn't
work on anything with more than four processors. some old email
https://www.garlic.com/~lynn/lhwemail.html#medusa
and recent posts mentioning mid-80s processor cluster work:
https://www.garlic.com/~lynn/2011c.html#6 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
In the departing executive interview ... one of the comments was
that they could have forgiven me for being wrong, but they
were never going to forgive me for being right. misc. past
posts mentioning the departing executive interview quote:
https://www.garlic.com/~lynn/2002q.html#16 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2004k.html#14 I am an ageing techy, expert on everything. Let me explain the
https://www.garlic.com/~lynn/2007.html#26 MS to world: Stop sending money, we have enough - was Re: Most ... can't run Vista
https://www.garlic.com/~lynn/2008c.html#34 was: 1975 movie "Three Days of the Condor" tech stuff
https://www.garlic.com/~lynn/2009g.html#56 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009h.html#74 My Vintage Dream PC
https://www.garlic.com/~lynn/2009k.html#73 And, 40 years of IBM midrange
https://www.garlic.com/~lynn/2009r.html#6 Have you ever though about taking a sabbatical?
https://www.garlic.com/~lynn/2010f.html#20 Would you fight?
https://www.garlic.com/~lynn/2010f.html#58 Handling multicore CPUs; what the competition is thinking
https://www.garlic.com/~lynn/2010o.html#47 origin of 'fields'?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Mon, 14 Feb 2011 10:37:40 -0500
re:
oh, accelerating the downturn in the late 80s ... was the stranglehold that sna/vtam & the communication group had on the datacenter. this shows up in the late '80s with a senior disk engineer getting a talk scheduled at the annual, internal, world-wide communication group conference ... and opening the talk with the statement that the communication group was going to be responsible for the demise of the disk division. the issue was that the communication group's stranglehold on the datacenter was isolating it from the emerging distributed computing environment.
users were getting fed-up with the limited bandwidth and capability available for accessing data in the datacenter ... and as a result there was lots of data fleeing the datacenter to more distributed computing friendly platforms (resulting in big downturn in datacenter mainframe disk sales & revenue).
the disk division had developed some number of products to address all the issues regarding working in a distributed environment ... but the communication group was able to block nearly all the efforts ... since the communication group had corporate strategic responsibility for everything that crossed the datacenter walls ... *AND* the communication group was staunchly protecting its terminal emulation install base.
misc. past posts mentioning the communication group terminal emulation
efforts (and some additional references to the talk that opened
with reference to the demise of the disk division)
https://www.garlic.com/~lynn/subnetwork.html#emulation
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Mon, 14 Feb 2011 10:48:05 -0500Quadibloc <jsavard@ecn.ab.ca> writes:
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Mon, 14 Feb 2011 11:20:36 -0500Anne & Lynn Wheeler <lynn@garlic.com> writes:
one could even claim that the adoption of computers for commercial (tab)
dataprocessing went a whole lot better than how the communication group
handled distributed computing moving in on its terminal emulation
install base
https://www.garlic.com/~lynn/2011c.html#21 If IBM Hadn't Bet the Company
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Mon, 14 Feb 2011 15:48:25 -0500Anne & Lynn Wheeler <lynn@garlic.com> writes:
IBM Watson's Ancestors: A Look at Supercomputers of the Past
http://www.pcworld.com/article/219577/ibm_watsons_ancestors_a_look_at_supercomputers_of_the_past.html
more detailed list:
https://en.wikipedia.org/wiki/Supercomputer
A senior corporate executive had been the sponsor of the Kingston
supercomputing effort ... besides supposedly doing their own design,
there was also heavy funding for Steve Chen's SSI. That executive
retires at the end of Oct91, which resulted in a review of a number of
efforts, including IBM Kingston. After the Kingston review, an effort
was launched looking around the company for something that could be
used as a supercomputer ... and it found the cluster scale-up stuff
(above referenced post about the Jan92 meeting in Ellison's conference room):
https://en.wikipedia.org/wiki/IBM_Scalable_POWERparallel
Steve Chen (computer engineer)
https://en.wikipedia.org/wiki/Steve_Chen_(computer_engineer)
Sequent eventually acquires Chen's business and Steve Chen
becomes CTO at Sequent; in the late 90s we did some consulting
for Steve (at Sequent, before it was bought by IBM)
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
IBM buys Sequent ... but the effort suffers a fate somewhat akin to what happened at IBM Kingston ... the executive sponsoring the activity retires (see description in above wiki).
Project Monterey
https://en.wikipedia.org/wiki/Project_Monterey
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Mon, 14 Feb 2011 16:42:50 -0500Joe Thompson <spam+@orion-com.com> writes:
before IBM bought Sequent
https://www.garlic.com/~lynn/2011c.html#24 If IBM Hadn't Bet the Company
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Mon, 14 Feb 2011 17:18:15 -0500hancock4 writes:
370/195 was peak 10mips for codes that stayed in the pipeline
... however most branch instructions would "drain" the pipeline ...
most common code would only keep the pipeline half full, for thruput
of 5mips. This was the motivation for an internal effort to do a
multi-threaded 370/195 ... it basically looked like a two-processor
370/195 multiprocessor ... but with only a single (shared) pipeline
... the concept was that the two threads would each keep the pipeline
half full ... for a totally full pipeline operating at peak/full
10mips. This never got announced/shipped. However, at least one of the
people from the YKT cp67 "G" effort went to work on it ... mentioned in this email
https://www.garlic.com/~lynn/2011b.html#email800117
in this post
https://www.garlic.com/~lynn/2011b.html#72 IBM Future System
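a minimal back-of-envelope sketch (python, using the round numbers from the post, not measurements) of the threading arithmetic above: one thread keeps the pipeline about half full, two threads sharing the single pipeline could fill it:

  PEAK_MIPS = 10.0        # pipeline running completely full
  FILL_PER_THREAD = 0.5   # branches drain the pipeline about half the time

  def thruput(n_threads):
      # threads interleave in the one shared pipeline, capped at peak
      return min(n_threads * FILL_PER_THREAD, 1.0) * PEAK_MIPS

  print(thruput(1))   # 5.0  ... typical single-thread code
  print(thruput(2))   # 10.0 ... both threads together fill the pipeline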
After the FS demise ... in the mad rush to get products back into the product pipeline ... the 303x line was done in parallel with 370/xa (& 3081).
3031 & 3032 were essentially repackaged 158-3 & 168-3. The 3033 started out being the 168 wiring diagram mapped to 20% faster chips. The chips also had ten times the circuits ... which initially went unused (leaving the 3033 just 20% faster than the 168-3) ... some last minute optimization leveraging the extra onchip logic got it up to 50% faster than the 168-3 (or 4.5mips).
The claim was that 3081D was two 5mip processors (aggregate 10mips),
but as mentioned in this email
https://www.garlic.com/~lynn/2011b.html#email820820
in this post
https://www.garlic.com/~lynn/2011b.html#62 vm/370 3081
ACP on (a single processor of) the 3081D ran 20% slower than on a 3033. The 3081K was then introduced, claimed to be two 7mip processors (aggregate 14mips) ... the primary difference being doubled processor cache size ... and ACP on (a single processor of) the 3081K ran only 5% faster than on a 3033 (effective thruput of the two-processor 3081K might be closer to 1.5 times a single-processor 3033, or around 7mips aggregate instead of 2*7mips).
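restating the 3081 numbers as a quick sketch (figures straight from the post; the mips values are the post's round numbers, not official ratings):

  MIPS_3033 = 4.5                         # 3033 after the last-minute optimization
  per_proc_3081k = 1.05 * MIPS_3033       # ACP on one 3081K processor: only 5% over 3033
  claimed_aggregate = 2 * 7.0             # claimed: two 7mip processors
  effective_aggregate = 1.5 * MIPS_3033   # two-processor thruput closer to 1.5x 3033
  print(per_proc_3081k, claimed_aggregate, effective_aggregate)
  # ~4.7mips per processor, 14mips claimed, ~6.75 ("around 7mips") effective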
Most codes would run slightly faster at 5mips on the 370/195 than they ran on the 4.5mips 3033 ... or close to the same on the 3083k (they eventually got around to doing the 3083k, basically a 3081k with one of the processors removed ... in large part because ACP didn't have multiprocessor support).
To get "peak" 10mip 370/195 thruput ... would have to go to clone processor or wait for 3090.
I've mentioned before that SJR had a 370/195 running MVT for quite awhile ... and there was a big batch queue (sometimes taking several weeks or more than a month for turnaround). The disk group was running "air bearing" simulation in support of floating heads (flying much closer to the surface, resulting in much higher datarate) ... but even with priority consideration, turnaround could still be a week or two. When bldg. 15 got a 3033 for disk testing ... things were set up so the "air bearing" simulation could run in the background. "air bearing" was optimized for the 195 ... an hr of 195 cpu could turn into nearer two hrs on the 3033 ... but elapsed time on the 3033 was about the same as the cpu time (instead of a couple weeks).
misc. past posts mentioning 370/195 multi-threaded effort
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2001n.html#86 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002h.html#19 PowerPC Mainframe?
https://www.garlic.com/~lynn/2002k.html#16 s/w was: How will current AI/robot stories play when AIs are
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
https://www.garlic.com/~lynn/2002p.html#59 AMP vs SMP
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
https://www.garlic.com/~lynn/2003m.html#4 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#31 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003m.html#60 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004.html#8 virtual-machine theory
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004k.html#39 August 23, 1957
https://www.garlic.com/~lynn/2004l.html#59 Lock-free algorithms
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#36 CAS and LL/SC
https://www.garlic.com/~lynn/2005f.html#41 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005j.html#28 NASA Discovers Space Spies From the 60's
https://www.garlic.com/~lynn/2005m.html#12 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005p.html#14 Multicores
https://www.garlic.com/~lynn/2005q.html#30 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2006n.html#16 On the 370/165 and the 360/85
https://www.garlic.com/~lynn/2006r.html#2 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006s.html#21 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006t.html#41 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006y.html#26 moving on
https://www.garlic.com/~lynn/2007.html#36 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007d.html#19 Pennsylvania Railroad ticket fax service
https://www.garlic.com/~lynn/2007d.html#21 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007e.html#32 I/O in Emulated Mainframes
https://www.garlic.com/~lynn/2007f.html#11 Is computer history taught now?
https://www.garlic.com/~lynn/2007g.html#54 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007i.html#45 IBM System/360 Model 85: The Bashful Computer
https://www.garlic.com/~lynn/2007i.html#49 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007k.html#39 VLIW pre-history
https://www.garlic.com/~lynn/2007l.html#34 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007p.html#1 what does xp do when system is copying
https://www.garlic.com/~lynn/2007r.html#20 Abend S0C0
https://www.garlic.com/~lynn/2008c.html#92 CPU time differences for the same job
https://www.garlic.com/~lynn/2008k.html#22 CLIs and GUIs
https://www.garlic.com/~lynn/2009p.html#82 What would be a truly relational operating system ?
https://www.garlic.com/~lynn/2010h.html#45 Processors stall on OLTP workloads about half the time--almost no matter what you do
https://www.garlic.com/~lynn/2010i.html#6 45 years of Mainframe
https://www.garlic.com/~lynn/2010k.html#11 TSO region size
https://www.garlic.com/~lynn/2010n.html#16 Sabre Talk Information?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Tue, 15 Feb 2011 09:59:42 -0500hancock4 writes:
at least for awhile ... some of the sub designations were the number of
processors ... but that has gotten more complex with dynamic capacity
and being able to have extra processors turned on & off for peak loads
(along with hardware capacity-based pricing). Then there are things
with dynamic capacity and software-based capacity pricing. older
discussion with some of the sub numbers (some of it discussing smp
scale-up as the number of processors is added ... & "LSPR" ratios)
https://www.garlic.com/~lynn/2006l.html#41 One or two CPUs - the pros & cons
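a hypothetical sketch of the smp scale-up arithmetic involved (the 0.9 per-added-processor factor is an illustrative assumption, not an actual LSPR value):

  def effective_capacity(n_procs, mp_factor=0.9):
      # each added processor contributes less than the last
      # (cache/coherency overhead modeled as a constant multiplier)
      return sum(mp_factor ** i for i in range(n_procs))

  for n in (1, 2, 4, 8):
      print(n, round(effective_capacity(n), 2))   # 1.0, 1.9, 3.44, 5.7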
latest models announce racks with both "Z" processors and specialized
processors ... where capacity-based software pricing is different
depending on whether it runs on a "normal" processor or a designated
specialized processor ... longer post
https://www.garlic.com/~lynn/2008m.html#57 "Engine" in Z/OS?
some of the specialized processors may not even be of the "Z"/360 kind
... akin to the "processor clusters" that I worked on in the mid-80s:
https://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor
and old email (also mentions having to get a substitute for the
HSDT/NSFNET presentation to the director of NSF because of having
to be at a "processor cluster" meeting)
https://www.garlic.com/~lynn/2011b.html#email850313
https://www.garlic.com/~lynn/2011b.html#email850314
https://www.garlic.com/~lynn/2007d.html#email850315
https://www.garlic.com/~lynn/2011b.html#email850325
https://www.garlic.com/~lynn/2011b.html#email870315
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Tue, 15 Feb 2011 10:14:05 -0500Huge <Huge@nowhere.much.invalid> writes:
... and then refusing to take a bullet for a branch manager that was
best buds with the CEO
https://www.garlic.com/~lynn/2011c.html#19 If IBM Hadn't Bet the Company
i get blamed for online computer conferencing on the internal network in the late 70s and early 80s. folklore is that when the executive committee was told about online computer conferencing (and the internal network), 5 of 6 wanted to fire me.
I then relatively trivially showed that the company was heading into
the red (as opposed to doubling revenue from $60b to $120b)
https://www.garlic.com/~lynn/2011c.html#20 If IBM Hadn't Bet the Company
misc. past posts mentioning getting blamed for online computer conferencing
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#6 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2001j.html#31 Title Inflation
https://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002n.html#31 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002q.html#16 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#38 ibm time machine in new york times?
https://www.garlic.com/~lynn/2004k.html#66 Question About VM List
https://www.garlic.com/~lynn/2005c.html#50 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005q.html#5 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2006h.html#9 It's official: "nuke" infected Windows PCs instead of fixing them
https://www.garlic.com/~lynn/2006l.html#24 Google Architecture
https://www.garlic.com/~lynn/2006l.html#51 the new math: old battle of the sexes was: PDP-1
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006w.html#35 Top versus bottom posting was Re: IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer
https://www.garlic.com/~lynn/2007i.html#34 Internal DASD Pathing
https://www.garlic.com/~lynn/2007p.html#30 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2008b.html#57 Govt demands password to personal computer
https://www.garlic.com/~lynn/2008h.html#23 IBM's Webbie World
https://www.garlic.com/~lynn/2008j.html#53 Virtual water cooler
https://www.garlic.com/~lynn/2008j.html#91 CLIs and GUIs
https://www.garlic.com/~lynn/2008o.html#10 Does anyone read the Greater IBM Connection Blog?
https://www.garlic.com/~lynn/2009g.html#47 You're Fired -- but Stay in Touch
https://www.garlic.com/~lynn/2009i.html#29 Online Computer Conferencing
https://www.garlic.com/~lynn/2009i.html#37 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009j.html#4 IBM's Revenge on Sun
https://www.garlic.com/~lynn/2009j.html#79 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009m.html#55 Tell me something about how you use signature files!
https://www.garlic.com/~lynn/2009n.html#53 Long parms...again
https://www.garlic.com/~lynn/2009p.html#81 IBM driving mainframe systems programmers into the ground
https://www.garlic.com/~lynn/2009q.html#3 Arpanet
https://www.garlic.com/~lynn/2009q.html#4 Arpanet
https://www.garlic.com/~lynn/2009r.html#28 curiousity q? for the historians
https://www.garlic.com/~lynn/2010h.html#48 Do you know of, or have you participated in, any good examples of successful collaboration?
https://www.garlic.com/~lynn/2010k.html#45 Taglines
https://www.garlic.com/~lynn/2010k.html#56 Unix systems and Serialization mechanism
https://www.garlic.com/~lynn/2010l.html#36 Great things happened in 1973
https://www.garlic.com/~lynn/2010m.html#77 towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
https://www.garlic.com/~lynn/2010m.html#88 Baby Boomer Execs: Are you afraid of LinkedIn & Social Media?
https://www.garlic.com/~lynn/2010o.html#6 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2010o.html#10 Boyd & Beyond 2010, review at Zenpundit
https://www.garlic.com/~lynn/2010o.html#51 The Credit Card Criminals Are Getting Crafty
https://www.garlic.com/~lynn/2010o.html#61 They always think we don't understand
https://www.garlic.com/~lynn/2010o.html#62 They always think we don't understand
https://www.garlic.com/~lynn/2010p.html#73 From OODA to AAADA
https://www.garlic.com/~lynn/2010q.html#32 IBM Future System
https://www.garlic.com/~lynn/2010q.html#45 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#49 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010q.html#50 I actually miss working at IBM
https://www.garlic.com/~lynn/2010q.html#62 Is email dead? What do you think?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: RISCversus CISC Newsgroups: comp.arch Date: Tue, 15 Feb 2011 10:17:58 -0500torbenm@diku.dk (Torben Ægidius Mogensen) writes:
the statement in the 70s about (801/)RISC was that it could be done in a single chip. later, in the 80s, (801/)RISC was instructions that could be executed in a single machine cycle. Over the decades, the definition of RISC has been somewhat fluid ... especially as the number of circuits in a chip has dramatically increased.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The first personal computer (PC) Newsgroups: alt.usage.english, alt.folklore.computers Date: Tue, 15 Feb 2011 11:01:19 -0500greenaum@yahoo.co.uk (greenaum) writes:
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Tue, 15 Feb 2011 11:55:52 -0500despen writes:
I had several semi-automated procedures ... so the majority of all comments came to me ... I would add my own comments and then redistribute.
the internal network was larger than arpanet/internet from just
about the beginning until possibly late 85 or early 86 ... some
past posts referencing internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
the phenomenon was also referred to as Tandem Memos ... since some of the activity was kicked off by some comments I had distributed after a Friday afternoon visit to Jim Gray at Tandem (before Jim left research, he would frequently attend the friday afterwork events that I would have at various places in the san jose plant site area).
Some of it leaked outside the company and there was an article on the phenomenon in Nov81 Datamation (by then, I was under strict orders not to talk to the press).
Corporate task forces were launched to investigate the phenomenon
... including bringing in outside consultants. two of the consultants
were the authors of Network Nation
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=8903
One of the outcomes of the task forces was a decision to provide official corporate support (and control) for online computer conferences. A more structured automated facility (TOOLSRUN) was created and used for the operations ... and there were officially sponsored discussion groups created (with moderators).
Later, a similar program was created/adopted for BITNET with a subset of
the TOOLSRUN function, called LISTSERV (since then the LISTSERV function
has been ported to other platforms) ... misc. past posts mentioning
BITNET &/or LISTSERV
https://www.garlic.com/~lynn/subnetwork.html#bitnet
listserv (TOOLSRUN subset) for bitnet:
http://www.lsoft.com/corporate/history_listserv.asp
http://www.lsoft.com/products/listserv-history.asp
also, somewhat as a result of all the events ... a researcher was paid to
sit in the back of my office for nine months taking notes on how I
communicated. they also went with me to meetings, and got copies of all
my incoming & outgoing email and logs of all my instant messages. The
result was a research report, a Stanford PHD thesis (joint between
language and computer AI) and material for several papers and books.
misc. past posts mentioning computer mediated conversation
https://www.garlic.com/~lynn/subnetwork.html#cmc
corporate hdqtrs eventually had a process that tracked the amount of traffic on all the internal links across the world ... and there was a claim that, for some months, I was responsible for 1/3rd of all internal network traffic (on all links).
with the "official" online computer conferences ... there was periodic jokes about a discussion being "wheeler'ized" ... there would be hundreds of people contributing comments ... but half of the volume of all comments were mine (i've since significantly mellowed).
misc. past posts mentioning TOOLSRUN:
https://www.garlic.com/~lynn/2001c.html#5 what makes a cpu fast
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#16 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006w.html#35 Top versus bottom posting was Re: IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006y.html#10 Why so little parallelism?
https://www.garlic.com/~lynn/2007.html#23 How to write a full-screen Rexx debugger?
https://www.garlic.com/~lynn/2007b.html#7 information utility
https://www.garlic.com/~lynn/2007b.html#31 IBMLink 2000 Finding ESO levels
https://www.garlic.com/~lynn/2007b.html#32 IBMLink 2000 Finding ESO levels
https://www.garlic.com/~lynn/2007b.html#55 IBMLink 2000 Finding ESO levels
https://www.garlic.com/~lynn/2007j.html#70 Using rexx to send an email
https://www.garlic.com/~lynn/2007p.html#30 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2008i.html#48 Anyone know of some good internet Listserv's?
https://www.garlic.com/~lynn/2008o.html#46 Anyone still have access to VMTOOLS and TEXTTOOLS?
https://www.garlic.com/~lynn/2008o.html#49 Discussions areas, private message silos, and how far we've come since 199x
https://www.garlic.com/~lynn/2008o.html#61 Discussions areas, private message silos, and how far we've come since 199x
https://www.garlic.com/~lynn/2008p.html#12 Discussions areas, private message silos, and how far we've come since 199x
https://www.garlic.com/~lynn/2008p.html#13 "Telecommunications" from '85
https://www.garlic.com/~lynn/2008q.html#37 BITNET & LISTSERV
https://www.garlic.com/~lynn/2008q.html#45 Usenet - Dead? Why?
https://www.garlic.com/~lynn/2009g.html#14 Top 10 Cybersecurity Threats for 2009, will they cause creation of highly-secure Corporate-wide Intranets?
https://www.garlic.com/~lynn/2009j.html#79 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009k.html#6 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009m.html#55 Tell me something about how you use signature files!
https://www.garlic.com/~lynn/2009o.html#38 U.S. house decommissions its last mainframe, saves $730,000
https://www.garlic.com/~lynn/2009p.html#52 Mainframe Hacking
https://www.garlic.com/~lynn/2009p.html#84 Anyone going to Supercomputers '09 in Portland?
https://www.garlic.com/~lynn/2009q.html#4 Arpanet
https://www.garlic.com/~lynn/2009q.html#17 toolsrun
https://www.garlic.com/~lynn/2009s.html#12 user group meetings
https://www.garlic.com/~lynn/2010.html#7 CAPS Fantasia
https://www.garlic.com/~lynn/2010b.html#36 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#75 Posts missing from ibm-main on google groups
https://www.garlic.com/~lynn/2010m.html#69 z/VM LISTSERV Query
https://www.garlic.com/~lynn/2011b.html#18 Melinda Varian's history page move
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Tue, 15 Feb 2011 16:02:09 -0500re:
although I had been doing online computer conferencing like things before, nothing seemed to catch the attention & interest of so many in the corporation as did the comments about the visit to Jim and Tandem (while only a few hundred actively participated, estimates were that several tens of thousands were reading & following the discussions).
now ... the corporation was already somewhat sensitive over Jim's
MIPENVY comments when he left. past post mentioning MIPENVY:
https://www.garlic.com/~lynn/2009p.html#8
with reference to version of mipenvy here:
https://web.archive.org/web/20081115000000*/http://research.microsoft.com/en-us/um/people/gray/papers/MipEnvy.pdf
past post with old "definition" for "mip envy"
https://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
copy of MIP Envy (slightly earlier version than what appears at the
microsoft research gray webpages)
https://www.garlic.com/~lynn/2007d.html#email800920
in this post about Jim having gone missing
https://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
When Jim left, he was palming off a bunch of stuff on me related to
System/R (original sql/relational database) ... some past posts
https://www.garlic.com/~lynn/submain.html#systemr
the stuff included consulting with outsiders on RDBMS (like bank of america), consulting with the IMS database group in STL, etc.
misc. old email mentioning Jim's departure
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016
for other Jim trivia/topic drift ... recent post about being asked by
Jim to help out one of his co-workers at Tandem who was being
blocked getting his PHD at Stanford on something similar to
what I had done nearly 15yrs earlier as an undergraduate:
https://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)
and a past reference to tribute to Jim held at UCB
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#36 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#40 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008p.html#27 Father Of Financial Dataprocessing
misc. other past posts mentioning Tandem memos &/or mip envy:
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#6 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2001j.html#31 Title Inflation
https://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002o.html#75 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002q.html#16 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#38 ibm time machine in new york times?
https://www.garlic.com/~lynn/2004c.html#15 If there had been no MS-DOS
https://www.garlic.com/~lynn/2004k.html#66 Question About VM List
https://www.garlic.com/~lynn/2004l.html#28 Shipwrecks
https://www.garlic.com/~lynn/2004l.html#31 Shipwrecks
https://www.garlic.com/~lynn/2005c.html#50 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005q.html#5 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005u.html#41 Mainframe Applications and Records Keeping?
https://www.garlic.com/~lynn/2006h.html#9 It's official: "nuke" infected Windows PCs instead of fixing them
https://www.garlic.com/~lynn/2006l.html#24 Google Architecture
https://www.garlic.com/~lynn/2006l.html#51 the new math: old battle of the sexes was: PDP-1
https://www.garlic.com/~lynn/2006n.html#26 sorting was: The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006o.html#50 When Does Folklore Begin???
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006w.html#35 Top versus bottom posting was Re: IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2007.html#1 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2007.html#13 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#45 Is computer history taugh now?
https://www.garlic.com/~lynn/2007d.html#63 Cycles per ASM instruction
https://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer
https://www.garlic.com/~lynn/2007e.html#50 Is computer history taught now?
https://www.garlic.com/~lynn/2007f.html#13 Why is switch to DSL so traumatic?
https://www.garlic.com/~lynn/2007f.html#70 Is computer history taught now?
https://www.garlic.com/~lynn/2007i.html#34 Internal DASD Pathing
https://www.garlic.com/~lynn/2007p.html#30 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2008b.html#57 Govt demands password to personal computer
https://www.garlic.com/~lynn/2009l.html#41 another item related to ASCII vs. EBCDIC
https://www.garlic.com/~lynn/2009p.html#8 WSJ.com - IBM Puts Executive on Leave
https://www.garlic.com/~lynn/2009q.html#49 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2010.html#13 360 programs on a z/10
https://www.garlic.com/~lynn/2010d.html#44 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#68 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#80 Senior Java Developer vs. MVS Systems Programmer
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010e.html#12 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010k.html#45 Taglines
https://www.garlic.com/~lynn/2010q.html#32 IBM Future System
https://www.garlic.com/~lynn/2011b.html#25 Melinda Varian's history page move
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Tue, 15 Feb 2011 16:25:36 -0500re:
oh ... and in this Ferguson and Morris (1993 book) reference
https://www.garlic.com/~lynn/2001f.html#33
about in the wake of the FS failure
https://www.garlic.com/~lynn/submain.html#futuresys
the old culture under the Watsons was replaced with sycophancy and make-no-waves under Opel and Akers.
many of the things discussed in Tandem Memos would have been (were) anathema to senior executives.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: NASA proves once again that, for it, the impossible is not even difficult. Newsgroups: comp.arch Date: Tue, 15 Feb 2011 17:21:36 -0500Terje Mathisen <"terje.mathisen at tmsw.no"> writes:
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Tue, 15 Feb 2011 17:52:28 -0500despen writes:
part of the problem was that it was one of the applications that was starting to push the overnight batch window
at the science center in the early 70s ... we used a whole variety of performance methodologies ... including some that eventually evolved into things like "capacity planning" ... and this performance organization had somewhat fallen into a rut, only looking at performance from a single point-of-view.
misc. past posts about 450+k statement cobol application:
https://www.garlic.com/~lynn/2006u.html#50 Where can you get a Minor in Mainframe?
https://www.garlic.com/~lynn/2007l.html#20 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2008d.html#73 Price of CPU seconds
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2009e.html#76 Architectural Diversity
https://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2009g.html#20 IBM forecasts 'new world order' for financial services
https://www.garlic.com/~lynn/2009s.html#9 Union Pacific Railroad ditches its mainframe for SOA
https://www.garlic.com/~lynn/2010.html#77 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2010i.html#41 Idiotic programming style edicts
misc. past posts mentioning the overnight batch window and organizations
attempting to replace the implementations with straight-through
processing ... all the operations with their failed straight-through
processing attempts (and still running overnight batch windows)
are possibly the largest remaining mainframe market
https://www.garlic.com/~lynn/2006s.html#40 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2007l.html#15 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007m.html#36 Future of System/360 architecture?
https://www.garlic.com/~lynn/2007u.html#44 Distributed Computing
https://www.garlic.com/~lynn/2007u.html#61 folklore indeed
https://www.garlic.com/~lynn/2007v.html#19 Education ranking
https://www.garlic.com/~lynn/2007v.html#81 Tap and faucet and spellcheckers
https://www.garlic.com/~lynn/2008b.html#74 Too much change opens up financial fault lines
https://www.garlic.com/~lynn/2008d.html#30 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#87 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008d.html#89 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008g.html#55 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2008h.html#50 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#56 Long running Batch programs keep IMS databases offline
https://www.garlic.com/~lynn/2008p.html#26 What is the biggest IT myth of all time?
https://www.garlic.com/~lynn/2008p.html#30 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technolgies?
https://www.garlic.com/~lynn/2008r.html#7 If you had a massively parallel computing architecture, what unsolved problem would you set out to solve?
https://www.garlic.com/~lynn/2009.html#87 Cleaning Up Spaghetti Code vs. Getting Rid of It
https://www.garlic.com/~lynn/2009c.html#43 Business process re-engineering
https://www.garlic.com/~lynn/2009d.html#14 Legacy clearing threat to OTC derivatives warns State Street
https://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2009h.html#1 z/Journal Does it Again
https://www.garlic.com/~lynn/2009h.html#2 z/Journal Does it Again
https://www.garlic.com/~lynn/2009i.html#21 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009l.html#57 IBM halves mainframe Linux engine prices
https://www.garlic.com/~lynn/2009m.html#81 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2009o.html#81 big iron mainframe vs. x86 servers
https://www.garlic.com/~lynn/2009q.html#67 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2010.html#77 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2010b.html#16 How long for IBM System/360 architecture and its descendants?
https://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010h.html#47 COBOL - no longer being taught - is a problem
https://www.garlic.com/~lynn/2010k.html#3 Assembler programs was Re: Delete all members of a PDS that is allocated
https://www.garlic.com/~lynn/2010l.html#14 Age
https://www.garlic.com/~lynn/2010m.html#13 Is the ATM still the banking industry's single greatest innovation?
https://www.garlic.com/~lynn/2010m.html#37 A Bright Future for Big Iron?
https://www.garlic.com/~lynn/2011.html#42 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: NASA proves once again that, for it, the impossible is not even difficult. Newsgroups: comp.arch Date: Tue, 15 Feb 2011 18:16:57 -0500Robert Myers <rbmyersusa@gmail.com> writes:
definitely his own advice ... he used to tell a number of stories
one story was from when he was head of lightweight fighter plane design at the pentagon ... his 1-star general came into the area to find a heated technical argument going on between him and a bunch of lieutenants. the general fired him for not maintaining a correct military atmosphere.
another was about the forces behind the F15 attempting to get him thrown in Leavenworth (even tho he had redone the F15 design, cutting the weight nearly in half).
They had gone to the secretary of the air force with the claim that they knew he was designing what was to become the F16 ... which was unauthorized ... and that he had to be using enormous amounts of supercomputer time ... worth at least tens of millions; since it was unauthorized, it amounted to theft of gov. property.
There was a concerted effort to uncover evidence of this "theft" ... but after several months of auditing all gov. supercomputer records ... they couldn't find any evidence of his use.
The air force had pretty much disowned him ... but the marines adopted him; it was the marines that were at arlington, his effects are at the marine library at quantico ... and they have a shrine to him in the library lobby. In light of all that, it seems strange that the air force would dedicate a hall to him.
when the lengthy spinney/time article appeared about gross pentagon misspending, Boyd claimed that SECDEF knew that Boyd was behind the article and had a directive that Boyd was banned from the pentagon. supposedly there was also a new document classification ... "NO-SPIN" ... unclassified but not to be given to spinney.
when I sponsored Boyd's briefings at IBM, he only charged me for his out-of-pocket expenses.
past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Tue, 15 Feb 2011 21:19:29 -0500"Joe Morris" <j.c.morris@verizon.net> writes:
also from a couple hrs ago in this comp.arch thread:
https://www.garlic.com/~lynn/2011c.html#34 NASA proves once again that, for it, the impossible is not even difficult
https://www.garlic.com/~lynn/2011c.html#36 NASA proves once again that, for it, the impossible is not even difficult
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM "Watson" computer and Jeopardy Newsgroups: alt.folklore.computers Date: Wed, 16 Feb 2011 09:57:54 -0500cb@mer.df.lth.se (Christian Brunschen) writes:
along with other old pictures
https://www.garlic.com/~lynn/lhwemail.html#oldpicts
wiki
https://en.wikipedia.org/wiki/RS/6000
https://en.wikipedia.org/wiki/IBM_POWER
the executive we directly reported to when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
had previously worked at motorola. when somerset was originally started (to do single-chip 801/risc ... starting with the 601) ... he went over to head up somerset. later he left somerset to be president of MIPS.
wiki
https://en.wikipedia.org/wiki/PowerPC_600
https://en.wikipedia.org/wiki/PowerPC
as mentioned
https://www.garlic.com/~lynn/2011c.html#7 RISCversus CISC
https://www.garlic.com/~lynn/2011c.html#29 RISCversus CISC
John appeared to have done 801/risc to be the exact opposite, in
hardware complexity, of Future System
https://www.garlic.com/~lynn/submain.html#futuresys
among other "simplifications" ... base 801/risc had no cache consistency ... which effectively made multiprocessors a difficult operation (John would make comments about the heavy performance penalty paid by 370 for multiprocessor cache consistency).
I recently mentioned there is a 25th reunion for aix (pc/rt) coming up
at the end of the month (and while lots of VRM people are showing up
... it didn't look like any from interactive, which did AIX, were
showing up). ROMP
mention (precursor to RIOS):
https://en.wikipedia.org/wiki/ROMP
ROMP (research/office products) was originally going to be used in a follow-on to the displaywriter (using CP.r written in PL.8). When that was canceled ... there was a decision to do a unix workstation. The PL.8 people did the VRM (in PL.8) ... and the company that did PC/IX (interactive) was hired to do AIX ... implemented to the abstract virtual machine interface (provided by the VRM). A major justification for the VRM/AIX split was that it could be done faster than having the interactive people learn the low-level 801/ROMP characteristics. This was somewhat disproved when the palo alto people were redirected from doing a BSD port to 370 ... to doing a BSD port to the PC/RT (native, w/o the VRM), called AOS. A more jaundiced view of the VRM was that it gave the PL.8 programmers something to do.
One objective of Somerset (also AIM ... apple, ibm, motorola) was to add cache consistency and the ability to do SMP multiprocessors (i.e. leverage some 88k technology ... since the 88k did have SMP multiprocessor & cache consistency support).
In ha/cmp, one of the reasons for doing scale-up with clusters ... was
that rios had no cache consistency for doing multiprocessor scale-up
... I had worked on both multiprocessor implementations
https://www.garlic.com/~lynn/subtopic.html#smp
as well as cluster implementations previously ... recent references in
this thread about doing processor clusters the same time as working
with NSF on what was to be NSFNET backbone:
https://www.garlic.com/~lynn/2011c.html#6 Other early NSFNET backbone
some also mentioned in this recent post with pcworld article about
Watson's Ancestors
https://www.garlic.com/~lynn/2011c.html#24 If IBM Hadn't Bet the Company
later there was a merge of Power & PowerPC ... and we started seeing clusters of multiprocessors.
past posts mentioning 801, risc, iliad, romp, rios, power, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The first personal computer (PC) Newsgroups: alt.usage.english, alt.folklore.computers Date: Wed, 16 Feb 2011 10:25:45 -0500greenaum@yahoo.co.uk (greenaum) writes:
We downsized a few years ago ... and had to unload a thousand or so books ... because there wasn't room. I've got a few things in storage.
I find the kindle fits in some cargo pants pockets (other cargo pants are a little short).
a few past posts mentioning getting kindle
https://www.garlic.com/~lynn/2011.html#39 The FreeWill instruction
https://www.garlic.com/~lynn/2011.html#53 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#55 America's Defense Meltdown
https://www.garlic.com/~lynn/2011.html#78 subscripti ng
https://www.garlic.com/~lynn/2011b.html#13 Rare Apple I computer sells for $216,000 in London
https://www.garlic.com/~lynn/2011b.html#39 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
https://www.garlic.com/~lynn/2011b.html#59 Productivity And Bubbles
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Other early NSFNET backbone Newsgroups: alt.folklore.computers Date: Wed, 16 Feb 2011 11:24:23 -0500re:
more from long ago and far away (fat pipes, hsdt & other stuff) ...
Date: Tue, 1 Aug 89 10:42:25 EST
From: wheeler
As to the CPD forecast, conjecture is
1) don't bother to ask the customer about something you currently
don't support or
2) CPD support at the time was only for 56kbits, people may have
looked for large numbers of 56kbit links (with CPD supported hardware)
on smooth curve approaching aggregate T1 speeds. Lack of knowledge
about the business failed to alert them that a T1 was priced at about
six 56'ers (which is now down around 4 or less) ... making wholly
unlikely that any customer would go over three.
Also, if you can't fix-it, "feature it" ... CPD has made much of
fat-link capability (i.e. small aggregations of parallel 56kbits).
... snip ... top of post, old email index, HSDT email
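the fat-pipe arithmetic in that email can be restated as a sketch (prices normalized to the cost of one 56kbit link; the "about six 56'ers" figure is from the email):

  T1_PRICE_IN_56K_LINKS = 6   # "about six 56'ers" ... "now down around 4 or less"

  def cheaper_choice(n_links_needed):
      # a customer needing several parallel 56kbit links just gets a T1
      # (~24x the bandwidth) well before the link count reaches break-even
      return "T1" if n_links_needed >= T1_PRICE_IN_56K_LINKS else "56kbit links"

  for n in (1, 3, 6, 12):
      print(n, cheaper_choice(n))

i.e. counting installed 56kbit links on a smooth curve toward aggregate T1 speeds misses everybody who had already jumped to a T1.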
Date: Mon, 31 Jul 89 14:20:18 EST
From: wheeler
Somebody recently inquired regarding the "network" forum that I
mentioned from Raleigh. Announcement file is attached. As part of the
distribution of the announcement information in the spring of '85 was
the inclusion of definitions (the distribution went out the friday
before the '85 VMITE, I remember the date because I had to be in Japan
the following week on business for the HSDT project ... and missed
VMITE).
The NETWORK distribution material included the definition:
low speed: <9.6kbits
medium-speed: 19.2kbits
high speed: 56kbits
very high speed: T1

The following monday, we were in an executive conference room at NEC outside of Tokyo. On the walls were:

low speed: <20mbits
medium speed: 100mbits
high-speed: 200-300mbits
very high speed: >600mbits

The HSDT project intersected the CPD "definition" the following year when the '86 CPD fall-plan forecast an aggregate total of 2-3 T1s installed (at customers) by early 90s. In the fall of '86, HSDT had more T1s installed than were forecast for the whole customer community (also at the time a superficial customer survey of IBM mainframe customers showed an installed base on the order of 200 T1s, a large percentage connected to IBM mainframes via NSC HYPERchannel).
>>>> <<<< >>>> Announcement of the NETWORK conferencing disk <<<< >>>> <<<<

A new IBM Internal Use Only conference service is available for anyone interested in NETWORK problems, design, etc. It works like the IBMPC and IBMVM conferences in that the discussions on any topic are each contained in an ordinary CMS file which you may get, create, or append to, as desired. Anyone interested in NETWORK problems, design, performance measurements, engineering, etc are encouraged to use the conference to communicate with others working in the same field.
also more from long ago and far away:
Date: 08/01/89 08:34:29
To: wheeler
Lynn,
I only have a few minutes this morning to read your note as I am preparing
to go to Boston for two days. But what I read is a reasonable argument for
pacing based on past experience, logical deduction and a little intuition.
You might be interested in a slightly different arguament too. We in Manassas
have been working with signal processing in a distributed network for years,
since 1977 or so. In 1982 we developed a local area network for a submarine
that is being used today on board U S submarines. We started with the
observation that most of our messages are presented to the network at regular
intervals, periodic traffic. In cooperation with Carnegie Mellon Univ.,
we built on mathematical proofs which provide a guarantee of message
response time for any given set of messages as long as we pace the packets.
I know we approached the problem of network resource sharing from a different
starting point, but I think mathematically guaranteed response times along
with the ability to compute the adaptive parameters for pacing might be
a welcome addition to your paper.
If you think so too, we can provide as much of our experience as you
might like. By the way, this type of mathematical approach to determining
what "proper" scheduling means is being used by the DoD sponsored Software
Engineering Institute for Real Time Ada designs and has been incorporated
into the new Future Bus standard backplane bus. We can put you in touch
with these folks and/or the CMU people too, if you are interested.
... snip ... top of post, old email index, HSDT email
We had rate-based pacing in HSDT early on ... and then when I was
on the XTP technical advisory board ... misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
... I was doing a paper for XTP on the subject ... posted here
https://www.garlic.com/~lynn/xtprate.html
NOSC (surface warfare) was an active participant in XTP.
I got a lot of heat from the communication group for participating in
XTP ... an old email
https://www.garlic.com/~lynn/2006i.html#email890901
random other past email mentioning XTP
https://www.garlic.com/~lynn/2009q.html#email881113
https://www.garlic.com/~lynn/2002g.html#email890424
https://www.garlic.com/~lynn/2007.html#email911004
same month that "slow-start" was presented at IETF meeting, ACM SIGCOMM had paper on why "slow-start" was non-stable in high letency, heavily loaded network. I've conjectured in the past that "slow-start" for congestion avoidence was done for class of machines that had very poor time services. A simple rate-based pacing implementation adjusts time interval/delay between packet transmission (requiring system time services).
In addition to offending the communication group with HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt
and working with NSF on NSFNET backbone (and not sna/vtam)
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
part of HSDT came up with a 3-tier network architecture ... included
it in response to a large federal campus RFI and was out pitching it
to corporate executives. This was at the time when the communication
group was attempting to stuff the client/server genie back into
the bottle ... defending its terminal emulation install base
https://www.garlic.com/~lynn/subnetwork.html#emulation
past posts mentioning 3-tier
https://www.garlic.com/~lynn/subnetwork.html#3tier
misc. past posts mentioning rate-based pacing:
https://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
https://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#56 Moore law
https://www.garlic.com/~lynn/2002p.html#28 Western Union data communications?
https://www.garlic.com/~lynn/2002p.html#31 Western Union data communications?
https://www.garlic.com/~lynn/2003.html#55 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003g.html#54 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003g.html#64 UT200 (CDC RJE) Software for TOPS-10?
https://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
https://www.garlic.com/~lynn/2003j.html#19 tcp time out for idle sessions
https://www.garlic.com/~lynn/2003j.html#46 Fast TCP
https://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast
https://www.garlic.com/~lynn/2004k.html#8 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#12 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#13 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#16 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#29 CDC STAR-100
https://www.garlic.com/~lynn/2004n.html#35 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#62 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004q.html#3 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#57 high speed network, cross-over from sci.crypt
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005g.html#4 Successful remote AES key extraction
https://www.garlic.com/~lynn/2005q.html#37 Callable Wait State
https://www.garlic.com/~lynn/2006d.html#21 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006g.html#18 TOD Clock the same as the BIOS clock in PCs?
https://www.garlic.com/~lynn/2006m.html#20 Why I use a Mac, anno 2006
https://www.garlic.com/~lynn/2006o.html#64 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006u.html#44 waiting for acknowledgments
https://www.garlic.com/~lynn/2007q.html#19 Fixing our fraying Internet infrastructure
https://www.garlic.com/~lynn/2008e.html#19 MAINFRAME Training with IBM Certification and JOB GUARANTEE
https://www.garlic.com/~lynn/2008e.html#28 MAINFRAME Training with IBM Certification and JOB GUARANTEE
https://www.garlic.com/~lynn/2008l.html#64 Blinkylights
https://www.garlic.com/~lynn/2009.html#86 F111 related discussion x-over from Facebook
https://www.garlic.com/~lynn/2009m.html#80 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2009m.html#83 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2010b.html#68 Happy DEC-10 Day
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Wed, 16 Feb 2011 14:51:59 -0500"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
the 40+ systems at $30+M each (>$1.2B) were pretty much sized for the all-night/overnight run of the 450+k statement cobol program ... they also pointed out that they were constantly bringing in the latest systems (no system was older than 18 months). The 14% improvement (a couple weeks work) was around $200m savings (that, or they could process nearly 100m more accounts w/o needing additional hardware). At the time they were looking at moving/converting a portfolio with something like 65m accounts from somebody else.
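the savings arithmetic implied above, as a rough sketch (assumes savings scale linearly with the thruput improvement across the installed base):

  systems, cost_each = 40, 30e6          # "40+" systems at "$30+M" each
  installed_base = systems * cost_each   # >$1.2B, sized for the overnight run
  print(installed_base * 0.14)           # ~$170M ... "around $200m" with the
                                         # "+" on both of the input numbers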
I've mentioned before that I had started out offering to do it for 10% of the first yr's savings (but when I was done ... they had no recollection of that offer).
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Wed, 16 Feb 2011 15:22:26 -0500re:
totally unrelated performance work in the early/mid-90s ... was "routes" for a major airline res system (ACP/TPF) ... which accounted for about 25% of the processing. They had a list of ten "impossible" things they wanted done ... including significant scale-up ... theoretically being able to handle every reservation for every flt in the world. the implementation paradigm being used was effectively unchanged from the 60s.
I looked at it and decided technology had changed over the previous 30 or 40 yrs ... and that I could take a completely different approach. I came back within two months with a demo of the new implementation. The basic process ran 100 times faster ... but since I had added some new features ... one being that typical operations had required three different manual searches/queries ... which I collapsed into one ... I could only do about ten times as many of the new "routes" operations (but each one did much more work, and eliminated two manual operations by reservation agents).
The initial pass got only about 20 times the performance ... I then went back and carefully optimized for the cache characteristics of the machine it was being demo'ed on ... and got another five times (for 100 times total).
One of the "impossible" things was production system tended only be able to do two or three connections ... more than that required manual operation by agent. I claimed to be able to find route/flts from anyplace to anyplace else (they had provided me with copy of the complete OAG ... all commercial scheduled flts and all airports with commercial scheduled flts). As part of the demo, they would ask things like route/flts from some obtuse airport in Kansas to some equally obtuse airport in Georgia (and probably not the Georgia you are thinking of).
They ran into an organizational roadblock ... one of the reasons many of the processes they wanted were "impossible" was that there were nearly 1000 people doing manual support operations. The new paradigm eliminated all of those manual support operations (in theory the whole thing could now be done with fewer than 40 people). It turns out there were a lot of high-level executives who would be affected by this ... things dragged on for nearly a year ... and they eventually said that they hadn't actually wanted me to do anything ... they just wanted to tell the airline board that I was consulting on the problem.
discussion from a year ago ... I mentioned sizing the routes processing to be able to handle all reservations in the world ... and that, in theory, it could then be handled by a cellphone processor:
https://www.garlic.com/~lynn/2010b.html#78 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#79 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#80 Happy DEC-10 Day
misc. other past posts mentioning "routes" effort:
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002j.html#83 Summary: Robots of Doom
https://www.garlic.com/~lynn/2003o.html#17 Rationale for Supercomputers
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
https://www.garlic.com/~lynn/2004q.html#85 The TransRelational Model: Performance Concerns
https://www.garlic.com/~lynn/2005o.html#24 is a computer like an airport?
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2006o.html#18 RAMAC 305(?)
https://www.garlic.com/~lynn/2006q.html#22 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2007g.html#22 Bidirectional Binary Self-Joins
https://www.garlic.com/~lynn/2007g.html#41 US Airways badmouths legacy system
https://www.garlic.com/~lynn/2007h.html#41 Fast and Safe C Strings: User friendly C macros to Declare and use C Strings
https://www.garlic.com/~lynn/2007j.html#28 Even worse than UNIX
https://www.garlic.com/~lynn/2007p.html#45 64 gig memory
https://www.garlic.com/~lynn/2008h.html#61 Up, Up, ... and Gone?
https://www.garlic.com/~lynn/2008i.html#19 American Airlines
https://www.garlic.com/~lynn/2008j.html#32 CLIs and GUIs
https://www.garlic.com/~lynn/2008p.html#39 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technologies?
https://www.garlic.com/~lynn/2008p.html#41 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technologies?
https://www.garlic.com/~lynn/2009l.html#54 another item related to ASCII vs. EBCDIC
https://www.garlic.com/~lynn/2009o.html#42 Outsourcing your Computer Center to IBM ?
https://www.garlic.com/~lynn/2009q.html#10 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2010b.html#13 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2010b.html#73 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#74 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#19 Processes' memory
https://www.garlic.com/~lynn/2010j.html#53 Article says mainframe most cost-efficient platform
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The first personal computer (PC) Newsgroups: alt.usage.english, alt.folklore.computers Date: Wed, 16 Feb 2011 17:04:15 -0500Roland Hutchinson <my.spamtrap@verizon.net> writes:
An example of transmission lines can be seen in the picture here
(the picture doesn't quite go with the article since it is of
a dam on the columbia in the pacific northwest)
http://www.wired.com/wiredscience/2010/12/southwestern-water-future/
the directive/mandate came down from "lady bird" that the transmission lines had to be buried underground (going up the hill behind where the camera taking the picture is located). all the engineers claimed there wasn't technology available to put such transmission lines underground going up that slope. they were told it had to be done anyway. a couple of years later, when there was a large electrical short and a big fire ... the engineers who had said the technology wouldn't work were the ones blamed (not lady bird).
past post mentioning the above:
https://www.garlic.com/~lynn/2010j.html#13 A "portable" hard disk
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Wed, 16 Feb 2011 18:08:55 -0500hancock4 writes:
450+k statements ... ran every 3rd shift on 40+ mainframes at $30+M each ... somewhere around $1.5B total
there are a number of operations like this around ... which helps maintain the mainframe business.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Wed, 16 Feb 2011 19:56:17 -0500Patrick Scheible <kkt@zipcon.net> writes:
"America's Defense Meltdown: Pentagon Reform for President Obama and the
New Congress"
https://www.amazon.com/Americas-Defense-Meltdown-President-ebook/dp/B001TKD4SA
there are quite a few references to Boyd in the above. recent reference
to Boyd:
https://www.garlic.com/~lynn/2011c.html#37 If IBM Hadn't Bet the Company
note however, one of the comments about the above is that the venality in the pentagon has been dwarfed by wallstreet, i.e.
"13 Bankers: The Wallstreet Takeover and the Next Financial Meltdown"
https://www.amazon.com/13-Bankers-Takeover-Financial-ebook/dp/B0036S4EIW
there is another dimension to various federal gov. projects:
Success of Failure:
http://www.govexec.com/management/management-matters/2007/04/the-success-of-failure/24107/
recent posts mentioning above:
https://www.garlic.com/~lynn/2011.html#53 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#55 America's Defense Meltdown
https://www.garlic.com/~lynn/2011.html#75 America's Defense Meltdown
https://www.garlic.com/~lynn/2011.html#93 America's Defense Meltdown
https://www.garlic.com/~lynn/2011b.html#0 America's Defense Meltdown
https://www.garlic.com/~lynn/2011b.html#59 Productivity And Bubbles
way up at the top is the large number of legacy computer "re-engineering" efforts that have gone on in every federal organization ... akin to the billions that were spent in the financial industry in the 90s on failed straight-through processing re-engineering ... referenced in this recent post
https://www.garlic.com/~lynn/2011c.html#35 If IBM Hadn't Bet the Company
other past posts mentioning Success Of Failure
https://www.garlic.com/~lynn/2009o.html#25 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#41 U.S. house decommissions its last mainframe, saves $730,000
https://www.garlic.com/~lynn/2010b.html#19 STEM crisis
https://www.garlic.com/~lynn/2010b.html#26 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010f.html#38 F.B.I. Faces New Setback in Computer Overhaul
https://www.garlic.com/~lynn/2010k.html#18 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010p.html#78 TCM's Moguls documentary series
https://www.garlic.com/~lynn/2010q.html#5 Off-topic? When governments ask computers for an answer
https://www.garlic.com/~lynn/2010q.html#69 No command, and control
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Thu, 17 Feb 2011 08:34:32 -0500Charles Richmond <frizzle@tx.rr.com> writes:
somewhere along the way somebody decided to save money and eliminate some number of the off-grade crossings, which blew the elapsed-time commute numbers, which in turn blew the ridership volume that had justified having the light rail in the first place.
then there was the "new" 101 from cottle rd (south san jose) to gilroy. coyote valley association campaigned that the new 101 should drop from six lanes to four lanes through coyote valley (approx. bernal rd in san jose to cochran in morgan hill). That resulted in adding 30 minutes to commute in the morning for tens of thousands of commuters going north at the cochran choke point and another 30 minutes to commute in the evening at the bernal choke point going south ... possibly cost 10,000 people hrs/day (in addition to extra auto pollution); 50k people hrs/week, 2.5m people hrs/year. then much later ... the incremental construction cost to add the extra two lanes (compared to having just done the full six lanes as part of the original construction in the first place).
who should get the bill for that 2.5m people-hrs/year (even at $10/hr ... that is still $25m/yr)?
the reverse somewhat happened during the financial crisis. unregulated loan originators (who nominally were a very small part of the business because they had very limited sources of funds for lending, which was possibly why nobody got around to regulating them) found that they could securitize the loans and pay the rating agencies for triple-A ratings. The triple-A ratings gave them access to a nearly unlimited source of funds (the estimate is that during the mess, something like $27T in triple-A rated toxic CDO transactions were done).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt
Now the unregulated loan originators could unload every loan they wrote, regardless of loan quality and/or borrower qualifications. Speculators found the no-documentation, no-down, 1% interest-only-payment ARMs a gold mine ... possibly 2000% ROI in areas with 20-30% inflation. The speculation created the impression of enormously more demand than actually existed, the speculation demand motivated enormous overbuilding, and the overbuilding required municipalities to build out a whole lot of new services for all the new housing projects (as well as commercial developers doing a lot of new projects like strip malls for the new housing developments).
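to make the 2000% figure concrete ... purely illustrative numbers (not from any actual deal):

#include <stdio.h>

/* illustrative arithmetic only -- all numbers hypothetical: why
   no-down, 1% interest-only financing plus 20% area appreciation
   looks like ~2000% ROI to a speculator */
int main(void)
{
    double price = 500000.0;            /* bought with no down payment */
    double rate  = 0.01;                /* 1% interest-only teaser     */
    double appr  = 0.20;                /* 20%/yr area appreciation    */

    double cash_in = price * rate;      /* year of interest:   $5,000  */
    double gain    = price * appr;      /* appreciation:     $100,000  */
    printf("ROI: %.0f%%\n", 100.0 * gain / cash_in);   /* prints 2000% */
    return 0;
}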
For all the new services, the municipalities issued lots of new bonds ... anticipating that they would be covered by revenue when all the new houses were sold. Then things crashed ... and they aren't getting the revenue to cover all those new bonds. The crash spreads out through the economy ... strip malls go unsold, commercial developers default, and those defaults start to take down local banks.
An early secondary side-effect was that the bond market froze ... when investors found out that the rating agencies had been selling triple-A ratings and started wondering whether they could trust any ratings from the rating agencies. The muni-bond market was restarted when Buffett stepped in and started offering muni-bond insurance (this was before the deflating bubble had percolated through to hitting municipal revenue and the ability to pay on all those bonds) ... past post
https://www.garlic.com/~lynn/2008j.html#20 dollar coins
Two nights ago, 60mins-on-CNBC had a program on ponzi schemes and an update on the financial crisis. At the end of 2008, there was an estimate that the four largest too-big-to-fail financial institutions had $5.2T in triple-A rated toxic CDOs being carried off-balance-sheet (courtesy of their unregulated investment banking arms and the repeal of Glass-Steagall). The 60mins report was that these too-big-to-fail institutions helped keep the bubble going by continuing to buy/trade each other's triple-A rated toxic CDOs (while holding the triple-A rated toxic CDOs would severely damage the institutions, the investment bankers were getting big bonuses, fees, and commissions as long as they could keep the trades going; they pretty much all knew that the triple-A rated toxic CDOs weren't worth having ... but as long as the music played ... they could continue to rake in the money off the trades, churning each other's portfolios).
Later there was some "regular" selling involving several tens of billions of these toxic CDOs, and they were going at 22 cents on the dollar (if the four largest too-big-to-fail institutions had been required to bring the $5.2T back onto the books, they would have been declared insolvent and had to be liquidated). Recently, after the Federal Reserve was forced to divulge some of the stuff it has been doing, buried in the numbers was a reference to the FED buying up these toxic CDOs at 98 cents on the dollar (as part of propping up the too-big-to-fail institutions).
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The first personal computer (PC) Newsgroups: alt.usage.english, alt.folklore.computers Date: Thu, 17 Feb 2011 08:58:03 -0500Peter Brooks <peter.h.m.brooks@gmail.com> writes:
Many Consumers Believe 36 Months Is Longer Than 3 Years
http://www.sciencedaily.com/releases/2011/02/110214163114.htm
after watching a large number of people during airline boarding by rows ... not being able to figure out if one number is larger than another ... i considered that the move to boarding by sections was an attempt to eliminate discrimination against the mathematically challenged. however, in discussing this in other fora ... there are claims that a significant number of people still can't get it right when boarding by sections (even when the section is printed on their boarding pass).
... possibly a lot of the people really don't have much connection
between various sections of their brain ... they see some number of
other people moving and they move too ... more akin to sheep herds
https://en.wikipedia.org/wiki/Sheep
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Thu, 17 Feb 2011 11:22:10 -0500Anne & Lynn Wheeler <lynn@garlic.com> writes:
then there is this:
Running on a Faster Track: Researchers Develop Scheduling Tool to Save
Time on Public Transport
http://www.sciencedaily.com/releases/2011/02/110216110853.htm
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Abhor, Retch, Ignite? Newsgroups: alt.folklore.computers Date: Thu, 17 Feb 2011 12:06:34 -0500"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
up until then, people would get the latest release (every year or more often) because it would have new function that they needed. the turning point, approx '96, was that 95% of the people had 95% of what they used. it was time to switch to a new marketing campaign similar to new cars in the 60s ... somehow convince people to buy a new one whether they needed it or not.
even tho the software was "purchased" ... the business had been similar to IBM's hardware lease business (prior to the early 70s, when most machines were converted to purchase) ... aka a dependable regular revenue stream. When people start keeping their cars for 5-10 yrs ... it has a big downside for the annual revenue stream (compared to everybody getting a new one every year).
the industry has been especially accused of maintaining a broken PC security paradigm in order to keep up that part of the regular annual revenue stream.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Thu, 17 Feb 2011 18:14:13 -0500Ahem A Rivet's Shot <steveo@eircom.net> writes:
'96 MDC in moscone ... had all these banners that proclaimed internet support ... but the repeated theme/phrase almost everywhere was "protect your investment" ... this was for all the developers doing various forms of Basic programming ... including stuff that could be added to all sorts of email and office files ... that would automatically execute for various kinds of special effects ... but resulted in enormous vulnerabilities when moved to the internet. other recent mention of '96 MDC in moscone
https://www.garlic.com/~lynn/2011c.html#49 Abhor, Retch, Ignite?
up until '99, buffer length exploits in applications implemented in the C language were the dominant exploit ... misc past posts mentioning buffer length exploits:
https://www.garlic.com/~lynn/subintegrity.html#buffer
as an aside ... platforms that had similar applications and the tcp/ip protocol stack implemented in languages other than C had little or none of the buffer length exploits.
Study says buffer overflow is most common security bug:
http://news.cnet.com/2100-1001-233483.html
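the classic pattern (purely illustrative ... not from any particular application or CVE): the language carries no length with the buffer, so the check is left entirely to programmer discipline.

#include <string.h>

void handle_request_bad(const char *net_input)
{
    char buf[64];
    strcpy(buf, net_input);     /* overflows buf (and the stack) whenever
                                   the sender supplies > 63 bytes */
}

void handle_request_safer(const char *net_input)
{
    char buf[64];
    strncpy(buf, net_input, sizeof buf - 1);    /* length checked by hand */
    buf[sizeof buf - 1] = '\0';                 /* ... the discipline C
                                                   leaves to the programmer */
}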
in '99 ... things started to shift to 1/3rd social engineering, 1/3rd buffer exploits, and 1/3rd automatic execution.
this is an old post where I was attempting to categorize exploits in the CVE database (when it was still at mitre, before it moved to NIST)
https://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE
I talked to the mitre people at the time about seeing if they could get the people doing the submissions to add slightly more structured descriptive information (that could be used in categorization). the reply was that they were lucky enough to get people to write anything ... and trying to apply rules would probably backfire. I was working on additions to my merged security glossary and taxonomy ... reference here
https://www.garlic.com/~lynn/index.html#glosnote
update on buffer exploits (down to 20%):
https://www.garlic.com/~lynn/2005d.html#0 Buffer overruns
current CVE reference/pointer
http://nvd.nist.gov/
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Fri, 18 Feb 2011 09:42:12 -0500Michael Wojcik <mwojcik@newsguy.com> writes:
a lot of one-level store was copied from the (failed) tss/360 effort ... as well as other activities going on around the industry ... for instance Multics ... on the floor above the cambridge science center at 545 tech sq. I've mentioned first doing page-mapped support for cp67/cms (during the FS period) ... trying to avoid the shortcomings that I had observed in tss/360
https://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
some amount of the s/38 was the high-level abstraction and application simplification ... moving into business areas with much lower skills and resources. One of the early s/38 pitches was that the whole s/38 activity at a company could be handled by a single person. for the low-end, the issues regarding skills/availability were more of a market inhibitor than the cost of the hardware (even with a possible factor-of-30-times thruput hit)
however, in a large established operation with critical dependencies on dataprocessing ... giving up 30 times the performance ... and max'ing out at 370/145 thruput (using 370/195-speed hardware), would have had an enormous impact ... reference to the FS performance study by the Houston Science Center
https://www.garlic.com/~lynn/2011c.html#17 If IBM Hadn't Bet the Company
FS architecture was divided into approx. a dozen sections ... and at the time, my wife reported to the person that owned one of the sections. She thought FS was fantastic because she got to consider every academic blue-sky idea that had ever been thought of. However, she also observed that there were whole sections of the FS architecture that were purely blue sky ... with no actual content (vaporware). The enormous amount of smoke-blowing, content-free material and vaporware turned into the polite description of being "too ambitious". "Too ambitious" is possibly also a polite way of describing high-level, complex hardware that results in taking a factor-of-30-times performance hit.
I've tripped across comments that the FS compartmentalization was possibly done for security reasons ... so that industrial espionage against any specific FS component still wouldn't allow the competition to build a product. A more jaundiced view was that the compartmentalization prevented people from realizing how bad things actually were. Some humorous references were that if a vendor somehow actually got the specifications for all the different parts (and didn't die laughing), any attempt to build a competitive implementation would have destroyed them.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: System 360 From Computers to Computer Systems Newsgroups: alt.folklore.computers Date: Fri, 18 Feb 2011 11:01:29 -0500
x-over from somebody's ibm-main post:
Today's IBM 100:
System 360 From Computers to Computer Systems
http://www.ibm.com/ibm100/us/en/icons/system360/
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Fri, 18 Feb 2011 11:23:03 -0500Michael Wojcik <mwojcik@newsguy.com> writes:
It was also in this period that the Philadelphia science center (where lots of the APL stuff had been done) and the Houston science center were shut down (Cambridge and Palo Alto continued to survive for a period).
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Sat, 19 Feb 2011 10:09:26 -0500Anne & Lynn Wheeler <lynn@garlic.com> writes:
the big difference between the "Sequent" executive retirement and the Oct91 executive retirement ... was that the Sequent scenario was possibly somewhat NIH for the rest of the company ... while the Oct91 executive retirement and the followup reviews possibly turned into something of an "Emperor's New Clothes" moment
recent past posts referring to the Oct91 executive retirement kicking off a sequence of events, including scouring the company looking for technology that could be used for a supercomputer:
https://www.garlic.com/~lynn/2010b.html#71 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010q.html#52 I actually miss working at IBM
slightly related:
https://www.garlic.com/~lynn/2011c.html#38 IBM "Watson" computer and Jeopardy
old email references
https://www.garlic.com/~lynn/lhwemail.html#medusa
the last email in the above was possibly only hrs before cluster scale-up was transferred and we were told that we couldn't work on anything with more than four processors.
https://www.garlic.com/~lynn/2006x.html#email920129
and then press item from 17Feb92 ("scientific and technical only")
https://www.garlic.com/~lynn/2001n.html#6000clusters1
and later related press item 11May92 ("caught by surprise")
https://www.garlic.com/~lynn/2001n.html#6000clusters2
and recent refs to earlier processor cluster activity
https://www.garlic.com/~lynn/2011b.html#48 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011b.html#50 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011b.html#55 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011c.html#6 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#20 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#27 If IBM Hadn't Bet the Company
older reference:
https://www.garlic.com/~lynn/2000c.html#21 Cache coherence
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Sat, 19 Feb 2011 10:20:26 -0500Anne & Lynn Wheeler <lynn@garlic.com> writes:
past posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech
past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys
misc past posts mentioning cp40
https://www.garlic.com/~lynn/99.html#177 S/360 history
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#59 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002c.html#44 cp/67 (coss-post warning)
https://www.garlic.com/~lynn/2004c.html#11 40yrs, science center, feb. 1964
https://www.garlic.com/~lynn/2005o.html#4 Robert Creasy, RIP
https://www.garlic.com/~lynn/2005s.html#21 MVCIN instruction
https://www.garlic.com/~lynn/2005u.html#47 The rise of the virtual machines
https://www.garlic.com/~lynn/2006i.html#30 virtual memory
https://www.garlic.com/~lynn/2006i.html#32 virtual memory
https://www.garlic.com/~lynn/2007i.html#14 when was MMU virtualization first considered practical?
https://www.garlic.com/~lynn/2007r.html#51 Translation of IBM Basic Assembler to C?
https://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
https://www.garlic.com/~lynn/2009f.html#13 System/360 Announcement (7apr64)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler) Date: 20 Feb, 2011 Subject: The real cost of outsourcing Blog: MainframeZone
787 Dreamliner teaches Boeing costly lesson on outsourcing
For the 757/767/777 there were stories about Boeing "outsourcing" a large amount of the work to its suppliers ... cutting down some of the big employee boom/bust cycles in Seattle, i.e. a large amount of work that had been done in-house would instead be performed by supplier employees. Since a large proportion of those were in the US ... it didn't get the same publicity (i.e. the publicity isn't really about outsourcing ... but whether the jobs are in the US or not ... pretty much independent of whether the jobs have been outsourced or are done by non-US employees).
Somewhat more mainframe-related, and foreign competition: a major (CAD/CAM) design tool used internally was re-logo'ed by IBM from a foreign company. During the OCO wars, one of the arguments was that customers needed source for business-critical components, where the customer was willing to devote the resources for more timely support than might be available through the normal channels.
In the CAD/CAM relogo ... the IBM support group just acted as a problem clearinghouse for the original vendor (at the time also not having access to the source). There was also speculation about international issues, since the CAD/CAM vendor was viewed as having close ties with Boeing's major competitor.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler) Date: 20 Feb, 2011 Subject: IBM Future System Blog: IBM Historic Computing
There were skirmish arguments between the IMS and System/R groups ... IMS saying that relational required twice the disk space (for the implicit index) and a large number of additional disk I/Os (in processing the index). The System/R response was that IMS's exposure of record pointers as part of the data schema required significant application programmer and administrative effort. misc. past system/r posts:
Going into the 80s, disk price/bit came down (significantly mitigating the disk space issue); there was a significant increase in system real storage (mitigating the index disk i/o by allowing a large amount of the index to be cached); and the supply of skilled people didn't keep pace with the demand, so their costs went up. All of these factors started to tip the balance toward relational. However, there are still significantly large pockets of IMS use ... especially in the financial industry; some of it is purely legacy ... but there are also still some things (like large ATM networks) that haven't tipped from IMS to relational.
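a sketch of the pointer-vs-index tradeoff in that skirmish (in C; structures are hypothetical ... not IMS or System/R internals):

typedef unsigned long rba_t;    /* relative disk address of a record */

/* IMS-style: the parent record physically contains the child's disk
   address -- one I/O to follow, but the pointer is part of the data,
   so reorganizing the database means fixing up application-visible
   pointers (the programmer/administrative effort System/R cited) */
struct parent_rec {
    char  key[8];
    rba_t child_rba;            /* exposed record pointer */
};

/* relational-style: key-to-record only via an index probe.  the index
   costs the extra disk space and extra I/Os IMS cited -- each tree
   level not cached in real storage is another disk read -- but records
   move freely; only the index changes */
struct index_node {
    int   nkeys;
    char  keys[64][8];
    rba_t down[65];             /* child nodes, or record RBAs at leaf */
};

/* disk I/Os per lookup: direct pointer = 1; an index of height h with
   the top levels cached = 1 + (h - cached).  growing 80s real storage
   let most of h be cached, tipping the balance toward relational */
static int ios_per_lookup(int height, int cached)
{
    return 1 + (height > cached ? height - cached : 0);
}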
S/38 inherited some amount of FS ... and there were early claims that S/38 installations could get by with a single person, a dataprocessing manager ... and not need application programmers or other staff.
Note that the motivation for FS was clone controllers ... and other objectives may have been established once FS was going ... by an executive that was part of FS
The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
from above:
IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead
that the competition would never be able to keep up, and to have such
a high level of integration that it would be impossible for
competitors to follow a compatible niche strategy. However, the
project failed because the objectives were too ambitious for the
available technology. Many of the ideas that were developed were
nevertheless adapted for later generations. Once IBM had acknowledged
this failure, it launched its 'box strategy', which called for
competitiveness with all the different types of compatible
sub-systems. But this proved to be difficult because of IBM's cost
structure and its R&D spending, and the strategy only resulted in a
partial narrowing of the price gap between IBM and its rivals.
... snip ...
One might claim that the extremely baroque nature of the PU4/PU5 (ncp/vtam) interface (under the guise of SNA) was an attempt to meet the original FS motivation/objective (and not the reduced-people-effort objective ... since it significantly drove up effort).
Trivia drift ... I worked on a clone controller project as an undergraduate in the 60s ... later, four of us were written up as being responsible for (some part of) the clone controller business.
past posts in this thread:
https://www.garlic.com/~lynn/2011.html#14 IBM Future System
https://www.garlic.com/~lynn/2011.html#18 IBM Future System
https://www.garlic.com/~lynn/2011.html#20 IBM Future System
https://www.garlic.com/~lynn/2011b.html#72 IBM Future System
https://www.garlic.com/~lynn/2011c.html#1 IBM Future System
other Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: RISC versus CISC Newsgroups: comp.arch Date: Sun, 20 Feb 2011 12:03:58 -0500nmm1 writes:
somewhat as a result, we were asked to get involved in some number of other (payment-related) efforts. One was by the electronic payment associations in combination with several technology vendors. They published a specification that called for a large number of big-number operations for the whole transaction (not just for key exchange). The standard library for performing such operations was called "BSAFE" ... and the standard implementation did 16-bit operations. I immediately did a profile of the number of operations called for ... and got a friend who had modified BSAFE to use 32-bit operations (which ran four times faster) to benchmark on a number of different platforms.
I then presented the numbers to the committee (payment & technology reps) ... the members claimed the numbers were 100 times too slow (if they had ever done any actual operations, they should have claimed they were four times too fast). Six months later, when they had a running prototype ... it turned out the profile numbers were within a couple percent of actual (by then the standard BSAFE library distribution included the 32-bit support).
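the 4x from going 16-bit to 32-bit falls straight out of the arithmetic ... schoolbook big-number multiply is quadratic in the number of limbs, and doubling the limb width halves the limb count. a minimal sketch (illustrative only ... not BSAFE's code):

#include <stdint.h>

/* n-limb schoolbook multiply with 16-bit limbs: for a 1024-bit operand
   n = 64, so the inner loop runs 64*64 = 4096 times */
static void mul16(const uint16_t *a, const uint16_t *b, uint16_t *r, int n)
{
    for (int i = 0; i < 2 * n; i++) r[i] = 0;
    for (int i = 0; i < n; i++) {
        uint32_t carry = 0;
        for (int j = 0; j < n; j++) {
            uint32_t t = (uint32_t)a[i] * b[j] + r[i + j] + carry;
            r[i + j] = (uint16_t)t;
            carry = t >> 16;
        }
        r[i + n] = (uint16_t)carry;
    }
}

/* same algorithm with 32-bit limbs: the 1024-bit operand is now 32
   limbs, so the inner loop runs 32*32 = 1024 times -- 4x fewer steps */
static void mul32(const uint32_t *a, const uint32_t *b, uint32_t *r, int n)
{
    for (int i = 0; i < 2 * n; i++) r[i] = 0;
    for (int i = 0; i < n; i++) {
        uint64_t carry = 0;
        for (int j = 0; j < n; j++) {
            uint64_t t = (uint64_t)a[i] * b[j] + r[i + j] + carry;
            r[i + j] = (uint32_t)t;
            carry = t >> 32;
        }
        r[i + n] = (uint32_t)carry;
    }
}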
Note that possibly one of the reasons for the 100-times claim ... was that it was, in fact, an increase of 100 times over the computational load for doing an existing payment transaction (and for no actual effective improvement over what was being provided by SSL). misc. past posts mentioning their enormous 100-times computational (as well as 100-times payload-size) bloat
https://www.garlic.com/~lynn/subpubkey.html#bloat
note that some payment/security chipcards had 1024-bit math circuits/hardware added ... to try to address the enormous elapsed-time issue at point-of-sale ... but that dramatically increased the number of circuits in such chips (chip size & reduced number of chips/wafer), as well as the power draw per unit time (aka the total power to do the operations was still about the same, just compressed into a shorter period ... which was still significant).
a little later, I was approached by some in the transit industry about whether I could do a design that had roughly equivalent characteristics but w/o the enormous power, elapsed-time, chip-size, etc. penalty (could be done in a small sub-second window at a transit turnstile ... using power from the contactless/RF operation ... and a small inexpensive chip).
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: RISC versus CISC Newsgroups: comp.arch Date: Sun, 20 Feb 2011 12:20:48 -0500re:
I got asked to do a talk on it at the spring2001 IDF, at a panel session in the trusted computing track ... old reference that went 404 ... but lives on at the wayback machine
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13
the guy running the TPM effort was in the front row ... so I quipped that it was nice to see TPM starting to look more & more like my chip ... and he quipped back that I didn't have a committee of 200 people helping me with the design.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler) Date: 20 Feb, 2011 Subject: IBM Future System Blog: IBM Historic Computing
re:
By the early 80s there were a growing number of stories about the tightly integrated, highly complex, baroque FS-philosophy SNA implementation (w/o "clean" interfaces). A simple example was a large environment installing a slightly different device at a remote location, which 1) required a new microcode load in the remote controller, 2) a new NCP version at the datacenter, 3) a new VTAM version in the host, and 4) a new MVS version. This required a simultaneous, coordinated upgrade of all components across the whole infrastructure (typically over a long weekend), and a glitch in any part of the process frequently required reverting/rolling back the whole infrastructure.
These horror stories increased during the 80s ... especially for large customers with multiple systems per datacenter and multiple large datacenters ... where there would have to be a simultaneous, coordinated upgrade of the whole infrastructure (and a glitch in any one piece might force reverting the whole environment).
If FS had ever proven to be "deliverable" ... it would have implied abandoning the large, growing customers (for whom coordinated, simultaneous upgrades across a large environment were becoming impossible) in favor of the s/38-class customers who were having trouble getting support staff.
https://www.garlic.com/~lynn/submain.html#futuresys
The internal network saw this in the 70s & 80s, with JES2 having effectively intermixed networking fields with other job-control fields (no cleanly separated operation). Slightly different versions of JES2 attempting to communicate could result in one or both systems failing. The internal network was larger than the arpanet/internet from just about the beginning until possibly late '85 or early '86, and was primarily VM RSCS/VNET. RSCS/VNET had addressed the clean separation, and as a result JES2 systems were pretty much restricted to boundary nodes ... and a whole library of special RSCS/VNET drivers was created for talking to JES2 systems ... which would convert all the necessary JES2 fields to the format required by the particular JES2 system at the other end of the link. misc. past posts mentioning hasp/jes2
https://www.garlic.com/~lynn/submain.html#hasp
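a sketch of the clean-separation point (in C; record formats purely hypothetical ... not the actual JES2/NJE layouts): a boundary driver only has to rewrite the release-specific job-control portion, because the routing fields stand alone.

#include <string.h>

struct net_hdr {                /* networking-only fields ... stable
                                   across releases */
    char origin[8];
    char dest[8];
    unsigned short len;
};

struct jes2_v1_jcf { char jobname[8]; char room[4]; };
struct jes2_v2_jcf { char jobname[8]; char room[4]; char jobclass; };

/* boundary-node driver: forward the stable networking header untouched,
   convert only the release-specific job-control trailer for whatever
   JES2 release is at the far end of the link */
static void convert_v1_to_v2(const struct net_hdr *h,
                             const struct jes2_v1_jcf *in,
                             struct net_hdr *oh, struct jes2_v2_jcf *out)
{
    *oh = *h;                           /* routing survives verbatim */
    memcpy(out->jobname, in->jobname, 8);
    memcpy(out->room, in->room, 4);
    out->jobclass = 'A';                /* field v1 lacks ... supply
                                           a default */
}

with JES2's intermixed fields there is no such stable header to forward ... every node in the path has to understand every release's whole record, which is how a slightly different version could crash the system at the other end of the link.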
There is somewhat notorious folklore about an ("upgraded") JES2 system at the San Jose plant site causing MVS system crashes in Hursley ... and what was worse, RSCS/VNET was blamed for not having been correctly configured to prevent the (Hursley) MVS crashes. misc. past posts mentioning the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: IBM 100: System 360 From Computers to Computer Systems Newsgroups: bit.listserv.ibm-main Date: 20 Feb 2011 16:18:49 -0800Allodoxaphobia <knock_yourself_out@example.net> writes:
misc. past posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: z/OS 1.13 preview Newsgroups: bit.listserv.ibm-main Date: 20 Feb 2011 23:19:36 -0800ps2os2@YAHOO.COM (Ed Gould) writes:
mid-70s was 138/148 ... follow-on to 135/145 ... still vs1 & dos/vs
with the failure of the future system project ... there was a mad rush to get stuff back into the 370 product pipeline (most 370 activity had been killed off during the future system period)
https://www.garlic.com/~lynn/submain.html#futuresys
the high-end did 303x ... 3031 & 3032 were repackaged 158 & 168 ... and the 3033 started out with the 168 wiring diagram mapped to chips that were 20% faster. The chips also had ten times as many circuits ... initially mostly unused ... but during product development some redesign to use the additional circuits got the 3033 up to 1.5 times the 168 (instead of 1.2 times). In parallel with 303x ... things started on "XA" ... for a while known as "811" ... which eventually resulted in the 3081. some discussion of both FS & 3081:
http://www.jfsowa.com/computer/memo125.htm
and with the failure of FS, the mid-range started work on the "E" architecture ... and in 79 came out with the 43xx machines (follow-ons to the 138 & 148) that supported both vanilla 370 and "E" (somewhat akin to the 3081 with 370 & "XA" modes). misc. past 43xx email ... starting in jan79 doing benchmarks on engineering 4341s:
https://www.garlic.com/~lynn/lhwemail.html#43xx
some of the benchmarks were being done on the engineering 4341 in the disk product test lab for the endicott performance test group ... since i seemed to have better access to an (endicott) 4341 than they did.
"E" architecture was somewhat akin to initial VS2 (SVS) with much of the single virtual address space moved into microcode/hardware.
4341 announced 30Jan1979:
https://web.archive.org/web/20190105032753/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP4341.html
the above mentions that the 4341 supported the 3370 ... which was the mid-range disk ... and was FBA. There was no mid-range CKD at the time ... which sort of left MVS out of the big explosion in the mid-range market ... customers could upgrade a 370 and continue to use existing legacy DASD ... but it was difficult to see MVS on all the 43xx machines that were starting to proliferate all over corporations in departmental conference rooms and supply rooms. Eventually the 3375 was produced, which was CKD emulated on 3370 FBA ... to address the lack of MVS support for FBA. misc. past posts mentioning FBA & CKD
https://www.garlic.com/~lynn/submain.html#dasd
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Comparing YOUR Computer with Supercomputers of the Past Newsgroups: alt.folklore.computers Date: Mon, 21 Feb 2011 10:00:39 -0500
somewhat similar posts mentioning ancestors of watson
recent posts mentioning univ. of cal. supercomputer center
https://www.garlic.com/~lynn/2011b.html#50 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011b.html#51 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011b.html#55 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011b.html#58 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011b.html#6 Other early NSFNET backbone
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Mon, 21 Feb 2011 10:56:41 -0500Ahem A Rivet's Shot <steveo@eircom.net> writes:
past post mentioning "idle" character transmission:
https://www.garlic.com/~lynn/2003k.html#39 Differnce between LF and NL
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Comparing YOUR Computer with Supercomputers of the Past Newsgroups: alt.folklore.computers Date: Mon, 21 Feb 2011 11:15:32 -0500re:
past posts mentioning doing the Lawrence RAIN/RAIN4 benchmark (from cdc6600) on an engineering 4341 in early 1979
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)
https://www.garlic.com/~lynn/2006x.html#31 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006y.html#21 moving on
https://www.garlic.com/~lynn/2007d.html#62 Cycles per ASM instruction
https://www.garlic.com/~lynn/2009d.html#54 mainframe performance
https://www.garlic.com/~lynn/2009l.html#67 ACP, One of the Oldest Open Source Apps
recent post mentioning doing (engineering, pre-customer ship) 4341
benchmarks (in ibm-main newsgroup):
https://www.garlic.com/~lynn/2011c.html#62
RAIN/RAIN4 predated LINPACK
https://en.wikipedia.org/wiki/Linpack
benchmarks
http://www1.cse.wustl.edu/~jain/cse567-06/ftp/processor_workloads/index.html#linpack
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: RISC versus CISC Newsgroups: comp.arch Date: Mon, 21 Feb 2011 11:39:14 -0500"nedbrek" <nedbrek@yahoo.com> writes:
in this old post
https://www.garlic.com/~lynn/2003e.html#65
other old email mentioning 801, iliad, romp, rios, etc
https://www.garlic.com/~lynn/lhwemail.html#801
one of the people who worked on pa-risc and then wide-word/itanium
https://www.garlic.com/~lynn/2006e.html#1
https://www.garlic.com/~lynn/2006o.html#67
https://www.garlic.com/~lynn/2009p.html#6
--
virtualization experience starting Jan1968, online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler) Date: 22 Feb, 2011 Subject: IBM Future System Blog: IBM Historic Computing
As per the above, my wife reported to the owner (who had previously been head of the cp40 group at the cambridge science center) of one of the FS "sections" ... and at some point started reviewing lots of the other specifications ... which was the source of her comment that whole sections were content-free (vaporware).
I was told the folklore that "811" came from the nov78 date on some of the documents.
i had a file drawer full of candy-striped 811 documents ... which required special security handling. I've suspected that the list of candy-striped document owners was the subject of industrial espionage. At one point I got a call from a recruiter about a position as assistant to the president of a silicon-valley subsidiary of a foreign company. During the interview there were veiled references to new-product documents. I took the opportunity to say that I had recently submitted a "speak-up" about some questionable business practices at a company unit, suggesting some specific examples that needed adding to the employee conduct manual.
later, during the gov. prosecution of the foreign company for illegal activity in the US, I had a 3hr debriefing by an FBI agent regarding who said what during that interview.
from ibm jargon:
candy-striped - adj. Registered IBM Confidential (q.v.). Refers to the
Red and White diagonal markings on the covers of such documents. Also
used as a verb: Those figures have been candy-striped.
Registered IBM Confidential - adj. The highest level of confidential
information. Printed copies are numbered, and a record is kept of
everyone who sees the document. This level of information may not
usually be held on computer systems, which makes preparation of such
documents a little tricky. It is said that RIC designates information
which is a) technically useless, but whose perceived value increases
with the level of management observing it; or b) is useful, but which
is now inaccessible because everyone is afraid to have custody of the
documents. See: candy-striped
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Tue, 22 Feb 2011 10:24:02 -0500jmfbahciv <See.above@aol.com> writes:
and some of the resulting difficulty described in this post
https://www.garlic.com/~lynn/2011c.html#60 IBM Future System
aka countermeasures to competitive clone controllers.
in the early 80s there was a big explosion in both VAX & 43xx mid-range (the cost of mid-range dataprocessing dropping below some threshold).
old post with decade of VAX sales
https://www.garlic.com/~lynn/2002f.html#0
43xx saw similar unit-sales volumes ... a difference was that there were some number of multi-hundred-unit 43xx orders by large corporations ... for the leading edge of "distributed computing". internally they were converting departmental conference rooms into 43xx rooms ... which resulted in a conference-room scheduling crisis.
old 43xx email
https://www.garlic.com/~lynn/lhwemail.html#43xx
there was some expectation that the 4331/4341 follow-ons (4361/4381) would see similar large continued growth ... but by the mid-80s ... the VAX sales numbers show that the mid-range market was moving (to workstations and large PCs).
disclaimer: i worked on a clone controller as an undergraduate in the 60s ... and there was some writeup blaming four of us for (some part of) the clone controller business. misc. past posts
https://www.garlic.com/~lynn/submain.html#360pcm
other past FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
random First Data trivia ... some detail in the following was garbled:
https://web.archive.org/web/20190524015712/http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The first personal computer (PC) Newsgroups: alt.usage.english, alt.folklore.computers Date: Tue, 22 Feb 2011 10:41:13 -0500re:
big issue in "processor cluster" ... packaging increasing number
of processors in rack has been heat removal (cooling). recent posts
mentioning processor clusters
https://www.garlic.com/~lynn/2011b.html#48 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011b.html#50 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011b.html#55 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011c.html#6 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#20 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#27 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#38 IBM "Watson" computer and Jeopardy
https://www.garlic.com/~lynn/2011c.html#54 If IBM Hadn't Bet the Company
recent item on the issue:
The Advantages of Row and Rack-oriented Cooling Architectures for Data
Centers
http://whitepapers.theregister.co.uk/paper/view/1904/vavr-6j5vyj-r1-en.pdf
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Tue, 22 Feb 2011 13:25:08 -0500"Rod Speed" <rod.speed.aaa@gmail.com> writes:
competition significantly brought down the price of VHS machines (especially compared to betamax machines) ... so part of the tape (rental & sales) market was responding to the largest segment with a specific kind of machine (part of betamax trying to control the video market)
in the late 90s, a website outsourcer claimed that it had ten e-commerce websites that all had higher hits per month than the #1 listed site in popular measurements ... and they were all adult sites (which weren't interested in being part of the popular #1 listings since they were doing just fine w/o any such publicity).
there was also a footnote that they had several games-&-software e-commerce websites where credit card fraud approached 50%, while the adult-content websites had near-zero fraud.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Tue, 22 Feb 2011 17:41:32 -0500hancock4 writes:
the 4331 would run vse and/or vm. the 4341 was big enuf to run mvs ... but mvs never came out with FBA support ... and the only mid-range disk was the 3370 (an FBA device). it would be possible to upgrade an existing 370 processor running MVS to a 4341, retaining any existing CKD disks. In an attempt to allow MVS some play in the big explosion in the mid-range, the 3375 was eventually made available: CKD simulated on the FBA 3370, in effect giving up on MVS ever coming out with FBA support.
with regard to the big profusion of distributed vm/43xx (4331 & 4341) ... after the 3375 became available, there theoretically was some MVS opportunity ... except MVS required a significantly large amount of skills & resources for its care & feeding.
recent post in an ibm-main thread: "XA" was the high-end architecture extension to 370 (3081) and "E" was the mid-range architecture extension to 370 (4331 & 4341 ... i.e. 4331 was code-named "E3" and 4341 was code-named "E4")
https://www.garlic.com/~lynn/2011c.html#62
misc. past posts mentioning CKD, FBA, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd
a 4331 might have used existing CKD disks ... or gotten new FBA disks, 3310s or the larger 3370s.
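the 3375 approach (CKD emulated on FBA) comes down to address arithmetic plus doing the record search off the media ... a minimal sketch with hypothetical geometry numbers (not the actual 3375 geometry):

#include <stdint.h>

#define HEADS_PER_CYL   12
#define BLKS_PER_TRACK  64      /* fixed blocks reserved per emulated track */

/* each emulated CKD cylinder/head "track" maps to a fixed run of
   blocks on the FBA device */
static uint32_t track_start_blk(uint32_t cyl, uint32_t head)
{
    return (cyl * HEADS_PER_CYL + head) * BLKS_PER_TRACK;
}

/* a CKD "search for record R" then becomes: read the track's blocks
   and scan the count fields in storage ... instead of the search the
   real CKD hardware performed against the rotating media */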
for other topic drift ... IBM field engineering support required being able to diagnose failed components at the customer site. The level of integration in the 3081 meant probes could no longer be applied as part of the diagnostic procedures. As a result, the 3081 came with a "service processor" with probes built in at manufacture time ... and a rudimentary user interface was created for the "service processor".
In part given the resources required for creating a "service processor"-specific operating system and applications ... it was decided to use a specially custom-modified vm/cms (release 6) running on a 4331 as the service processor for the 3090 (the service processor menu screens were implemented in IOS3270). By the time the 3090 shipped, the "service processor" had been upgraded to a pair of 4361s (with 3310 FBA disks).
past posts mentioning "3092", 3090 "service processor"
https://www.garlic.com/~lynn/2004p.html#37 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2008i.html#10 Different Implementations of VLIW
https://www.garlic.com/~lynn/2009b.html#22 Evil weather
https://www.garlic.com/~lynn/2009e.html#50 Mainframe Hall of Fame: 17 New Members Added
https://www.garlic.com/~lynn/2010e.html#32 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#34 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#38 Need tool to zap core
misc. other past posts mentioning "service processor"
https://www.garlic.com/~lynn/96.html#41 IBM 4361 CPU technology
https://www.garlic.com/~lynn/99.html#61 Living legends
https://www.garlic.com/~lynn/99.html#62 Living legends
https://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#26 Superduper computers--why RISC not 390?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001h.html#2 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001j.html#13 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
https://www.garlic.com/~lynn/2002.html#45 VM and/or Linux under OS/390?????
https://www.garlic.com/~lynn/2002b.html#32 First DESKTOP Unix Box?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002e.html#5 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#19 What goes into a 3090?
https://www.garlic.com/~lynn/2002i.html#79 Fw: HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#28 ibm history note from vmshare
https://www.garlic.com/~lynn/2002l.html#7 What is microcode?
https://www.garlic.com/~lynn/2002l.html#10 What is microcode?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#59 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002p.html#40 Linux paging
https://www.garlic.com/~lynn/2002q.html#53 MVS History
https://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics
https://www.garlic.com/~lynn/2003l.html#12 Why are there few viruses for UNIX/Linux systems?
https://www.garlic.com/~lynn/2003l.html#62 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003n.html#17 which CPU for educational purposes?
https://www.garlic.com/~lynn/2004.html#10 Dyadic
https://www.garlic.com/~lynn/2004.html#11 Dyadic
https://www.garlic.com/~lynn/2004j.html#45 A quote from Crypto-Gram
https://www.garlic.com/~lynn/2004k.html#37 Wars against bad things
https://www.garlic.com/~lynn/2004n.html#10 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004p.html#27 IBM 3705 and UC.5
https://www.garlic.com/~lynn/2004p.html#36 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004p.html#41 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2005b.html#51 History of performance counters
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor
https://www.garlic.com/~lynn/2005t.html#39 FULIST
https://www.garlic.com/~lynn/2006.html#0 EREP , sense ... manual
https://www.garlic.com/~lynn/2006b.html#2 Mount a tape
https://www.garlic.com/~lynn/2006n.html#6 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2006n.html#8 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2006r.html#27 A Day For Surprises (Astounding Itanium Tricks)
https://www.garlic.com/~lynn/2006x.html#24 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2007.html#18 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2007.html#24 How to write a full-screen Rexx debugger?
https://www.garlic.com/~lynn/2007.html#39 Just another example of mainframe costs
https://www.garlic.com/~lynn/2007b.html#1 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007b.html#15 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007b.html#30 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007c.html#16 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007d.html#1 Has anyone ever used self-modifying microcode? Would it even be useful?
https://www.garlic.com/~lynn/2007d.html#22 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007d.html#23 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007e.html#39 FBA rant
https://www.garlic.com/~lynn/2007e.html#43 FBA rant
https://www.garlic.com/~lynn/2007i.html#20 Does anyone know of a documented case of VM being penetrated by hackers?
https://www.garlic.com/~lynn/2007p.html#36 Writing 23FDs
https://www.garlic.com/~lynn/2007p.html#37 Writing 23FDs
https://www.garlic.com/~lynn/2007u.html#9 Open z architecture and Linux questions
https://www.garlic.com/~lynn/2007v.html#46 folklore indeed
https://www.garlic.com/~lynn/2008d.html#54 Throwaway cores
https://www.garlic.com/~lynn/2008e.html#60 z10 presentation on 26 Feb
https://www.garlic.com/~lynn/2008h.html#80 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2009b.html#77 Z11 - Water cooling?
https://www.garlic.com/~lynn/2009g.html#49 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#66 Mainframe articles
https://www.garlic.com/~lynn/2009k.html#44 Z/VM support for FBA devices was Re: z/OS support of HMC's 3270 emulation?
https://www.garlic.com/~lynn/2009k.html#47 Z/VM support for FBA devices was Re: z/OS support of HMC's 3270 emulation?
https://www.garlic.com/~lynn/2009r.html#18 "Portable" data centers
https://www.garlic.com/~lynn/2009r.html#24 How to reduce the overall monthly cost on a System z environment?
https://www.garlic.com/~lynn/2009r.html#49 "Portable" data centers
https://www.garlic.com/~lynn/2009r.html#51 "Portable" data centers
https://www.garlic.com/~lynn/2010d.html#43 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010e.html#44 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#76 LPARs: More or Less?
https://www.garlic.com/~lynn/2010g.html#32 Intel Nehalem-EX Aims for the Mainframe
https://www.garlic.com/~lynn/2010h.html#42 IBM 029 service manual
https://www.garlic.com/~lynn/2010m.html#43 IBM 3883 Manuals
https://www.garlic.com/~lynn/2010m.html#55 z millicode: where does it reside?
https://www.garlic.com/~lynn/2010n.html#71 Fujitsu starts shipping 800 rack 80,000 chip 'K' supercomputer
https://www.garlic.com/~lynn/2010q.html#33 IBM S/360 Green Card high quality scan
https://www.garlic.com/~lynn/2010q.html#47 IBM S/360 Green Card high quality scan here
https://www.garlic.com/~lynn/2011b.html#18 Melinda Varian's history page move
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: A History of VM Performance Newsgroups: alt.folklore.computers Date: Thu, 24 Feb 2011 09:49:16 -0500I'm giving a talk at the next Hillgang meeting 16Mar on a history of virtual machine performance. The original talk was given at the Oct86 SEAS (European SHARE) meeting held on Jersey.
I spent several weeks getting the talk cleared through management and legal since it mentioned some performance comparisons between "PAM" (page-mapped filesystem that I had originally done for cp67 but which wasn't released) and the existing HPO/3081 system. There had been some issues with a few releases of HPO performance enhancements during the early to mid-80s. One of the issues was some new people in the VM group having done comparison tests on HPO where the page replacement infrastructure was reverted to effectively the original global LRU (not *local*) changes I had made in the late 60s to CP67. This had been a hot topic (not just the page replacement changes) and there was possibly some concern that some of the details might spill out in the talk.
The original talk was scheduled for an hour ... but spilled over into a five-hour session that night at SCIDS (in an overflow room next to the SCIDS room ... people could easily wander back and forth).
https://www.garlic.com/~lynn/hill0316g.pdf
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Thu, 24 Feb 2011 10:36:00 -0500"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
cp67/cms script ... there was a statement that could specify where the tab stops had been placed (otherwise would assume default every ten spaces) for handling of "real" tabs.
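for illustration, a minimal sketch of what such tab handling amounts to (hypothetical C, not SCRIPT's actual implementation; the stop list and ten-column default just follow the description above):

  #include <stdio.h>

  /* Expand tabs in 'in' to 'out' using the 1-based column stops in
     'stops' (ascending, terminated by 0).  If no stop lies beyond the
     current column, fall back to a stop every ten columns. */
  static void expand_tabs(const char *in, char *out, const int *stops)
  {
      int col = 1;                          /* 1-based output column */
      for (; *in; in++) {
          if (*in == '\t') {
              int next = 0;
              for (const int *s = stops; *s; s++)
                  if (*s > col) { next = *s; break; }
              if (!next)                    /* default: every ten */
                  next = ((col - 1) / 10 + 1) * 10 + 1;
              while (col < next) { *out++ = ' '; col++; }
          } else {
              *out++ = *in;
              col++;
          }
      }
      *out = '\0';
  }

  int main(void)
  {
      int stops[] = { 8, 16, 40, 0 };
      char buf[256];
      expand_tabs("name\tvalue\tcomment", buf, stops);
      printf("%s\n", buf);
      return 0;
  }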
cp67/cms script was somewhat port of ctss runoff ... old reference
and heritage
https://www.garlic.com/~lynn/2003o.html#32
https://www.garlic.com/~lynn/2006p.html#27
https://www.garlic.com/~lynn/2008j.html#86
above references:
PDP-1 Expensive Typewriter (Peter Sampson) about 1962
CTSS RUNOFF (Jerry Saltzer) 1964-65
CMS SCRIPT (Stuart E. Madnick) 1967
CTSS BCPL runoff (Rudd Canaday, Dennis Ritchie) 1967-68
Multics BCPL runoff (Canaday, Ritchie, Ossanna) 1968
UNIX troff (J. F. Ossanna) dunno

also references old ctss runoff
GML was invented at the science center in 1969 and GML "tag" support was added
... in addition to the runoff "dot" commands. Nearly a decade later, GML morphed
into the ISO standard SGML. Slightly more than another decade after that, SGML
morphed into HTML. past posts mentioning GML, SGML, etc
https://www.garlic.com/~lynn/submain.html#sgml
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A History of VM Performance Newsgroups: alt.folklore.computers Date: Thu, 24 Feb 2011 10:59:20 -0500Anne & Lynn Wheeler <lynn@garlic.com> writes:
mistype ... that is GLOBAL LRU (not "LOCAL") ... recent post
discussing long running skirmish between local & global LRU paradigms
https://www.garlic.com/~lynn/2011c.html#8
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Thu, 24 Feb 2011 11:25:40 -0500hancock4 writes:
picture of my home office in the late 70s ... 300-baud portable CDI
miniterm, portable/compact microfiche viewer, and an old corporate
"tieline" (dial).
https://www.garlic.com/~lynn/lhwemail.html#oldpicts
misc. past posts referencing same picture
https://www.garlic.com/~lynn/2008m.html#38 Baudot code direct to computers?
https://www.garlic.com/~lynn/2008m.html#51 Baudot code direct to computers?
https://www.garlic.com/~lynn/2009e.html#30 Timeline: 40 years of OS milestones
https://www.garlic.com/~lynn/2009g.html#45 Netbooks: A terminal by any other name
I would have a fairly current, complete copy of vm370 source listings as well as a bunch of other documents. Routing output to the microfiche printer, you could specify the header printed on the fiche (which was large enough to be human readable by fanning through the cards).
I no longer have the reader ... but I believe I have a couple hundred of the microfiche in a box someplace.
wiki page:
https://en.wikipedia.org/wiki/Microform
has this image:
https://en.wikipedia.org/wiki/File:Microfiche_card.JPG
the above example doesn't show the ability to have something embossed across the top that was large enough to be human readable. next time I'm going through boxes ... I'll see if I can take a picture of some of the microfiche.
home office late 70s
https://www.garlic.com/~lynn/miniterm.jpg
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Other early NSFNET backbone Newsgroups: alt.folklore.computers Date: Thu, 24 Feb 2011 12:08:01 -0500re:
Space Shuttle Discovery to Launch on Final Mission Today
http://www.space.com/10937-space-shuttle-discovery-final-launch-preview.html
was in the VIP stands for the first launch (one of the people who
walked on the moon was also in the stands)
http://science.ksc.nasa.gov/shuttle/missions/41-d/mission-41-d.html
primary reason was that part of HSDT was going to use one of the
transponders on SBS4 (SBS-D) ... which was part of the payload.
https://www.garlic.com/~lynn/subnetwork.html#hsdt
the following went to somebody who was known to fly a private plane outside
the restricted airspace to take pictures of shuttle launches
Date: 09/04/84 17:22:44
From: wheeler
re: shuttle; i was in the viewing stands when the last shuttle went up
thursday ... that wasn't you in a private airplane in the launch space
that ignored radio messages & had to have a chase plane sent after
it?????
... snip ... top of post, old email index, HSDT email
Date: 4 September 1984, 20:30:37 EDT
To: wheeler
Hi Lynn -
No, I was in Denver!!!!!
How did the launch look from the viewing stands? It must have been
fantastic!
Did you get pictures for the next ITE?
... snip ... top of post, old email index, HSDT email
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Thu, 24 Feb 2011 15:12:52 -0500greymausg writes:
in ha/cmp there was a heartbeat (keep-alive) protocol that was used to
try to identify when an active cluster member had died.
https://www.garlic.com/~lynn/subtopic.html#hacmp
however, there was a failure mode where the member might just be playing dead ... before recovery could be completed, the dead member had to be cut from the configuration. The scenario is a member in suspended animation just in front of a critical operation; once recovery started, it was necessary to preclude the possibility of a reviving (suspended) member proceeding to perform that critical operation.
in two-way operation this included a "reserve" on all disks to lock out any disk i/o that a member possibly in suspended animation was about to do. in n-way operation, things get more complex ... since a "reserve" is designed to lock out everyone except the one issuing the reserve. what is desired is somewhat the reverse of a "reserve" ... allow everybody except the identified member(s).
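in outline, the recovery rule looks something like the sketch below (hypothetical C, not the ha/cmp implementation; fence() stands in for whatever mechanism the configuration provides — disk reserve, switch lock-out): a member that misses enough heartbeats must be fenced from the shared devices before takeover begins, so that a node in suspended animation can't wake up and complete a stale critical write.

  #include <stdbool.h>
  #include <stdio.h>
  #include <time.h>

  #define HEARTBEAT_TIMEOUT 5   /* seconds of silence => suspect dead */

  struct member {
      int    id;
      time_t last_heartbeat;
      bool   fenced;
  };

  /* Hypothetical fencing hook: in a two-way cluster this might be a
     disk reserve; with a HiPPI/FCS switch it would lock the suspect's
     ports away from the shared devices. */
  static void fence(struct member *m)
  {
      m->fenced = true;
      printf("member %d fenced from shared disks\n", m->id);
  }

  static void recover(struct member *m)
  {
      /* Fencing MUST complete before takeover: if the member was only
         paused rather than dead, it can no longer finish the critical
         I/O it was about to issue when it revives. */
      fence(m);
      printf("taking over workload of member %d\n", m->id);
  }

  static void check_members(struct member *members, int n)
  {
      time_t now = time(NULL);
      for (int i = 0; i < n; i++)
          if (!members[i].fenced &&
              now - members[i].last_heartbeat > HEARTBEAT_TIMEOUT)
              recover(&members[i]);
  }

  int main(void)
  {
      struct member m[2] = { { 1, time(NULL),      false },
                             { 2, time(NULL) - 60, false } };
      check_members(m, 2);   /* member 2 missed its heartbeats */
      return 0;
  }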
there was some fiddling in a HYPERchannel configuration to achieve
this ... where HYPERchannel was being used in both inter-processor
communication and disk i/o. recent mention of NCAR (HYPERchannel A51n,
remote device adapter, simulated mainframe channel to disk controller)
https://www.garlic.com/~lynn/2011b.html#58 Other early NSFNET backbone
There was some work on a HiPPI switch (for use with IPI disks) to provide an equivalent "fencing" function in the switch (i.e. lock out a suspected member that had died; basically, if suspected of having died ... make sure they are truly finished off), which, in combination with HiPPI-switch support for 3rd party I/O transfers ... would allow for roughly equivalent operation to that provided in the HYPERchannel environment.
then there was follow-on effort to see about equivalent functionality in
FCS (fiber channel standard) switch. misc. (other) past posts
mentioning hippi &/or fcs switch:
https://www.garlic.com/~lynn/94.html#16 Dual-ported disks?
https://www.garlic.com/~lynn/94.html#17 Dual-ported disks?
https://www.garlic.com/~lynn/98.html#58 Reliability and SMPs
https://www.garlic.com/~lynn/2001.html#21 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2002e.html#46 What goes into a 3090?
https://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002j.html#15 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2003p.html#46 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2004d.html#75 DASD Architecture of the future
https://www.garlic.com/~lynn/2005n.html#1 Cluster computing drawbacks
https://www.garlic.com/~lynn/2005r.html#10 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006w.html#14 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?
https://www.garlic.com/~lynn/2008p.html#43 Barbless
https://www.garlic.com/~lynn/2008p.html#51 Barbless
https://www.garlic.com/~lynn/2008p.html#62 Barbless
https://www.garlic.com/~lynn/2008q.html#36 Startio Question
https://www.garlic.com/~lynn/2009c.html#47 Using a PC as DASD
https://www.garlic.com/~lynn/2010d.html#69 LPARs: More or Less?
https://www.garlic.com/~lynn/2010h.html#63 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2010m.html#85 3270 Emulator Software
https://www.garlic.com/~lynn/2011b.html#12 Testing hardware RESERVE
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: RISCversus CISC Newsgroups: comp.arch Date: Thu, 24 Feb 2011 15:22:29 -0500Anne & Lynn Wheeler <lynn@garlic.com> writes:
news item related to TPM chip:
NSA Winds Down Secure Virtualization Platform Development; The National
Security Agency's High Assurance Platform integrates security and
virtualization technology into a framework that's been commercialized
and adopted elsewhere in government
http://www.informationweek.com/news/government/security/showArticle.jhtml?articleID=229219339
and then from long ago and far away
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A History of VM Performance Newsgroups: alt.folklore.computers Date: Thu, 24 Feb 2011 15:42:42 -0500re:
item from today on current day flavor
http://www.informationweek.com/news/government/security/showArticle.jhtml?articleID=229219339
and something from long ago and far away
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Thu, 24 Feb 2011 16:34:16 -0500Patrick Scheible <kkt@zipcon.net> writes:
references various kinds of memory protection
http://www.informationweek.com/news/government/security/showArticle.jhtml?articleID=229219339
including countermeasures to buffer overflow ... lots of past
posts
https://www.garlic.com/~lynn/subintegrity.html#buffer
as well as this thread in comp.arch
https://www.garlic.com/~lynn/2011c.html#58 RISCversus CISC
https://www.garlic.com/~lynn/2011c.html#59 RISCversus CISC
https://www.garlic.com/~lynn/2011c.html#78 RISCversus CISC
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A History of VM Performance Newsgroups: alt.folklore.computers Date: Thu, 24 Feb 2011 17:05:53 -0500hancock4 writes:
recent post
https://www.garlic.com/~lynn/2011b.html#10 Rare Apple I computer sells for $216,000 in London
references old email (spring '87)
https://www.garlic.com/~lynn/2006x.html#email870302
in this post
https://www.garlic.com/~lynn/2006x.html#7
where one of the things that the communication group was telling the executive committee was that PROFS was a VTAM application ... as part of the campaign to convert the internal network to SNA.
misc. past posts mentioning internal network (larger than
arpanet/internet from just about the beginning until possibly late 85 or
early 86)
https://www.garlic.com/~lynn/subnetwork.html#internalnet
above part of thread that mentions xmas exec
https://www.garlic.com/~lynn/2011b.html#9 Rare Apple I computer sells for $216,000 in London
on bitnet
https://www.garlic.com/~lynn/subnetwork.html#bitnet
... which was social engineering ... convincing people to "execute" the exec ... which would display a xmas greeting ... while doing other things in the background.
old post that attempts to recreate an early/internal "xmas" greeting
exec from 1981 that would blink colored "lights" on 3279 screen
https://www.garlic.com/~lynn/2007v.html#54 An old fashioned Christmas
"profs memo" on vmshare mentions profs 2.2.3 up thru 1990 ... and then
starts mentioning OV/VM (aka provs v3) later in the 90s
http://vm.marist.edu/~vmshare/browse.cgi?fn=PROFS&ft=MEMO
Office Vision VM -- PROFS Version 3.0 (announced in 1989)
http://vm.marist.edu/~vmshare/browse.cgi?fn=PROFSV30&ft=MEMO
possibly related to the company having bought lotus ... there was a transition to lotus notes.
past posts mentioning that PROFS started out with a very early internal
version of VMSG as the email client ... and when the original author
offered to upgrade their version ... the PROFS group attempted to get
him fired. things somewhat settled down when the original author pointed
out that every PROFS email in the world carried his initials in a
non-displayed control field
https://www.garlic.com/~lynn/2000c.html#46 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#39 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#40 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2002f.html#14 Mail system scalability (Was: Re: Itanium troubles)
https://www.garlic.com/~lynn/2002h.html#58 history of CMS
https://www.garlic.com/~lynn/2002h.html#64 history of CMS
https://www.garlic.com/~lynn/2002p.html#34 VSE (Was: Re: Refusal to change was Re: LE and COBOL)
https://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
https://www.garlic.com/~lynn/2003j.html#56 Goodbye PROFS
https://www.garlic.com/~lynn/2004p.html#13 Mainframe Virus ????
https://www.garlic.com/~lynn/2005t.html#43 FULIST
https://www.garlic.com/~lynn/2005t.html#44 FULIST
https://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2007f.html#13 Why is switch to DSL so traumatic?
https://www.garlic.com/~lynn/2007p.html#29 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007v.html#54 An old fashioned Christmas
https://www.garlic.com/~lynn/2007v.html#55 An old fashioned Christmas
https://www.garlic.com/~lynn/2008k.html#59 Happy 20th Birthday, AS/400
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2010.html#1 DEC-10 SOS Editor Intra-Line Editing
https://www.garlic.com/~lynn/2010b.html#44 sysout using machine control instead of ANSI control
https://www.garlic.com/~lynn/2010d.html#61 LPARs: More or Less?
https://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A History of VM Performance Newsgroups: alt.folklore.computers Date: Thu, 24 Feb 2011 17:26:37 -0500re:
in the 80s, I did a cms to unix mail/822 translation exec that I
distributed on the internal network. from long ago and far away:
REMAIL can be used to forward all CMS mail to your RT via the SMTP
mail gateway.
REMAIL
<DEBUG> <SMTPUSER=smtp> <SMTPNODE=vnet-node>
<SMTPNAME=domain-name> <TCPUSER=wheeler>
<tcpnode=tcp-node>
The keyword variables are obtained from GLOBALV group "SMTP"
if not specified on the command line. If no specification
is available, the user will be asked for one (& the GLOBALV SETP
function invoked). Command line specification is keyword without
embedded blanks (and overrides any GLOBALV settings, but doesn't
reset them).
SMTPUSER=smtp-virtual-machine
SMTPNODE=vnet-node-with-smtp
SMTPNAME=smtpnode-domain-name
TCPUSER=wheeler
TCPNODE=tcp-node
REMAIL will examine every spooled file in your reader. When mail files
are found (NOTE, PROFS, VMSG, MAIL, etc), they are read, reformatted
and forwarded to the SMTP mail gateway.
RMFORW
RMFORW is an EXEC that when invoked will run continuously. It will
invoke REMAIL EXEC anytime a spool file arrives in the reader. This EXEC
also uses SMSG to intercept immediate messages for forwarding. While
executing, RMFORW will accept the following terminal input as commands:
DISC - Disconnect from terminal
EXIT - End the mail and messaging forwarding shell
HELP - Display this message
QUIT - Same as EXIT
RMEXIT is a "user exit" EXEC that can be modified to selective bypass
processing of files/messages from specific userids.
REMAIL invokes special processing if an 822 mail file arrives from the
specified SMTP mail gateway that originated from the same address that
CMS mail is being forwarded to. Rather than "looping" the mail, the file
is assumed to be a request to execute a CMS command. After validating
the 822 mail format, an "X-cms-command:" line is executed as a CMS
command after deleting the file (note: Subject: is no longer handled
as a CMS command).
....
... snip ...
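the loop-prevention trick in that last paragraph can be sketched as follows (toy C; REMAIL itself was a CMS EXEC, and handle_mail and its arguments here are hypothetical names): mail that comes back from the gateway bearing our own forwarding address is consumed as a command request instead of being re-forwarded, and only an explicit "X-cms-command:" header gets executed.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Returns 1 if the message was consumed (command or self-mail),
     0 if it should be forwarded to the gateway as usual. */
  static int handle_mail(const char *from, const char *self,
                         const char **headers, int nheaders)
  {
      if (strcmp(from, self) != 0)
          return 0;                 /* normal mail: forward it */

      /* Mail came back from our own forwarding address: rather than
         loop it, look for an explicit command header and run it. */
      for (int i = 0; i < nheaders; i++)
          if (strncmp(headers[i], "X-cms-command:", 14) == 0) {
              printf("executing:%s\n", headers[i] + 14);
              system(headers[i] + 14);   /* stand-in for a CMS command */
              return 1;
          }
      return 1;                     /* self-mail, no command: drop it */
  }

  int main(void)
  {
      const char *hdrs[] = { "Subject: test",
                             "X-cms-command: echo hello" };
      handle_mail("wheeler@host", "wheeler@host", hdrs, 2);
      return 0;
  }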
--
virtualization experience starting Jan1968, online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler) Date: 25 Feb, 2011 Subject: IBM Future System Blog: IBM Historic ComputingThere was a specially modified VM370 system that had softcopy of some number of FS documents ... so that the documents could only be read on local 3270 terminals, with no way to print and/or copy the material.
I had gone by Friday afternoon to get set up for some off-shift weekend test time in that machine room and they taunted me that even if I was left alone in the machine room, "even" I wouldn't be able to access the documents. I mentioned something about 5 mins ... most of the time spent disabling all external access to the machine; I then modified a byte in storage from the front panel so that everything typed would be treated as a valid password. I pointed out that the front panel would need some sort of authorization infrastructure and/or the documents would need to be encrypted.
from ibm jargon:
FS - n. Future System. A synonym for dreams that didn't come
true. That project will be another FS. Note that FS is also the
abbreviation for functionally stabilized, and, in Hebrew, means zero,
or nothing. Also known as False Start, etc.
... snip ...
misc. past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys
from ibm jargon:
TIME/LIFE - n. The legendary (defunct since 1975) New York Programming
Center, formerly in the TIME & LIFE Building on 6th Avenue, near the
Rockefeller Center, in New York City. For many years it was the home
of System/360 and System/370 Languages, Sorts and Utilities. Its
programmers are now primarily in IBM Kingston, Palo Alto, and Santa Teresa
(or retired).
... snip ...
there was a joke that the Burlington Mall vm370 development group (see up thread) was put under the same executive responsible for the "TIME/LIFE" shutdown ... and that it should therefore have been obvious to the Burlington Mall group what was coming.
Apparently the plan was to not tell the group until the very last moment, maximizing the number that would move to POK (to support mvs/xa development) and minimizing the possibility of their finding alternatives in the Boston area. However, the information was leaked to the group ... and there was then a witch hunt to identify the source of the leak (creating an extremely paranoid atmosphere in the bldg).
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Fri, 25 Feb 2011 16:55:34 -0500re:
The IBM Punched Card
http://www.ibm.com/ibm100/us/en/icons/punchcard/
from above:
In 1928, IBM introduced a new version of the punched card with
rectangular holes and 80 columns. This newly designed "IBM Computer
Card" was the end result of a competition between the company's top two
research teams, working in secrecy from one another.
... snip ...
and related to FS thread drift ... from ibm jargon:
FS - n. Future System. A synonym for dreams that didn't come
true. That project will be another FS. Note that FS is also the
abbreviation for functionally stabilized, and, in Hebrew, means zero,
or nothing. Also known as False Start, etc.
... snip ...
misc. past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Fri, 25 Feb 2011 20:23:44 -0500Peter Flass <Peter_Flass@Yahoo.com> writes:
part of the FS thread with comments about page-mapped filesystems (and other things)
and s/38 ... (as/400 was the follow-on that merged s/36 & s/38)
https://www.garlic.com/~lynn/2011c.html#12 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#14 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#51 If IBM Hadn't Bet the Company
recent comments about the FS objective/requirement of being highly integrated as
a countermeasure to clone controllers ... fine for single integrated
processor operation ... but unreasonable when managing a large
collection ... where all changes had to be made at the same time across
the whole infrastructure (aka the vtam/ncp attempt to embody many of the FS
objectives)
https://www.garlic.com/~lynn/2011c.html#57 IBM Future System
https://www.garlic.com/~lynn/2011c.html#60 IBM Future System
https://www.garlic.com/~lynn/2011c.html#67 IBM Future System
past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys
other recent posts mentioning sna/vtam/ncp:
https://www.garlic.com/~lynn/2011.html#0 I actually miss working at IBM
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011.html#17 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
https://www.garlic.com/~lynn/2011.html#21 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011b.html#10 Rare Apple I computer sells for $216,000 in London
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#33 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#34 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#65 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#16 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#21 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#40 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#68 IBM and the Computer Revolution
https://www.garlic.com/~lynn/2011c.html#81 A History of VM Performance
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Fri, 25 Feb 2011 21:31:58 -0500"Joe Morris" <j.c.morris@verizon.net> writes:
recent mention of the HILLGANG talk:
https://www.garlic.com/~lynn/2011c.html#72 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#74 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#79 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#81 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#82 A History of VM Performance
(also) from ibm jargon:
BYTE8406 - bite-eighty-four-oh-six v. To start a discussion about old
IBM machines. forum
BYTE8406 syndrome - n. 1. The tendency for any social discussion among
computer people to drift towards exaggeration. Well, when I started
using computers they didn't even use electricity yet, much less
transistors. forum 2. The tendency for oppression to waste resources.
Derives from the observation that erasing a banned public file does
not destroy the information, but merely creates an uncountable number
of private copies. It was first diagnosed in September 1984, when the
BYTE8406 forum was removed from the IBMPC Conference.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A History of VM Performance Newsgroups: alt.folklore.computers Date: Fri, 25 Feb 2011 21:58:30 -0500re:
from long ago and far away ...
Date: 01/19/86 12:29:38
From: wheeler
re: hpo; spent some time last tues. with people putting global LRU
page replacement algorithm back into VM/HPO (it was taken out by the
hpo3.4 group). they have very good performance comparison with
hpo3.4. They are almost done with implementation of correct global LRU
replacement algorithm for >16meg real memory support (the hpo2.5
support >16meg real memory screwed up the global LRU replacement
algorithm ... although they thought the code was similar ... small
changes in the way some bits were tested ... resulted in algorithm
other than global LRU being implemented).
It looks like will have page replacement algorithm put back to the way
it was prior to HPO2.5 (i.e. >16meg support & swapper support) and a
>16meg support implemented with true global LRU page replacement ...
performing much better than the swapper hpo3.4 stuff & hpo 2.5 >16meg
support.
I also found out from the people working on putting my global LRU page
replacement algorithm back in, that IBM handed out 6 OIAs for removing
it (I had previously believed that there was one or two, but wasn't
sure). It is funny since prior to releasing HPO3.4 ... they were
claiming it was over 80% SYSPAG code written by "Lynn Wheeler" to clean
up large portions of various pieces of CP.
... snip ... top of post, old email index
as I've referred to before ... I had been repeatedly told that I
had no career and/or promotions:
https://www.garlic.com/~lynn/2011c.html#9 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#19 If IBM Hadn't Bet the Company
other recent posts mentioning HPO &/or global LRU
https://www.garlic.com/~lynn/2011b.html#89 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#0 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)
https://www.garlic.com/~lynn/2011c.html#72 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#74 A History of VM Performance
misc. past posts mentioning replacement algorithms
https://www.garlic.com/~lynn/subtopic.html#clock
as an aside ... there was a post in the past couple of days in comp.arch mentioning a flavor of global LRU for hardware processor cache management.
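for readers who haven't seen it, a compact sketch of the clock-style global LRU approximation at issue (a generic textbook rendering, not the cp67/HPO code): a single hand cycles over all real page frames, resetting reference bits as it passes and stealing the first frame found unreferenced since the last pass, regardless of which virtual machine owns it.

  #include <stdbool.h>
  #include <stdio.h>

  #define NFRAMES 8   /* tiny pool for illustration */

  struct frame {
      bool in_use;
      bool referenced;   /* stands in for the hardware reference bit */
  };

  static struct frame frames[NFRAMES];
  static int hand;       /* one clock hand, global across all users */

  /* Select a victim frame for replacement. */
  static int select_victim(void)
  {
      for (;;) {
          struct frame *f = &frames[hand];
          int victim = hand;
          hand = (hand + 1) % NFRAMES;
          if (!f->in_use)
              return victim;          /* free frame: take it */
          if (f->referenced)
              f->referenced = false;  /* spare it for one more revolution */
          else
              return victim;          /* untouched since the last pass */
      }
  }

  int main(void)
  {
      for (int i = 0; i < NFRAMES; i++)
          frames[i] = (struct frame){ true, i != 5 };
      printf("victim: frame %d\n", select_victim());   /* prints 5 */
      return 0;
  }

the "global" part is that one hand sweeps a single system-wide frame pool; a local LRU scheme would instead run a separate hand over a separate per-address-space partition of frames.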
and then ...
Date: 21 January 1986, 07:11:46 CST
To: wheeler
Subject: Dispatcher change for VM/XA
I am planning on changing my XA dispatcher to execute the SIE
instruction with I/O interrupts disabled (external interrupts will
still be enabled). The SIE instruction is a very expensive
instruction and I want to give the virtual machine a chance to do some
productive work before taking an interrupt. With I/O interrupts
disabled, the virtual machine will get to run until it relinquishes
control to CP or hits the end of the dispatcher timeslice. The I/O
supervisor already uses the TPI instruction to process all pending
interrupts before returning control to the dispatcher.
Do you have any thoughts on the matter? I have read CPDESIGN FORUM on
the IBMVM disk, so I know what has been discussed there. Do you have
a ballpark figure for the maximum allowable dispatcher timeslice which
would allow satisfactory I/O throughput? I am thinking about
disabling I/O interrupts for a maximum of 10 milliseconds. Another
alternative would be to have DMKSTP set/reset the interrupt mask based
upon the observed I/O interrupt rate.
... snip ... top of post, old email index
Date: 01/22/86 12:29:11
From: wheeler
guy in yyyyyy has vm/sp running on xa ... he is now working on version
2.
... snip ... top of post, old email index
Date: 01/23/86 10:13:39
From: wheeler
Have also spent time with xxxxxx and his redesign on his VM/HPO/XA
release two (i.e. he has already a "release one" of VM/HPO rewritten
to run in XA mode).
... snip ... top of post, old email index
vm(370)/HPO modified to run on 370/XA ... as opposed to the internal
vmtool reworked for customer release. a few recent posts mentioning
vmtool, "migration aid", and/or vm/xa ...
https://www.garlic.com/~lynn/2011b.html#18 Melinda Varian's history page move
https://www.garlic.com/~lynn/2011b.html#25 Melinda Varian's history page move
https://www.garlic.com/~lynn/2011b.html#29 A brief history of CMS/XA, part 1
above includes old email referencing significant resources poured into vmtool to make it available to customers (as opposed to the one person who enhanced vm/sp/hpo to support 370/xa).
I had done something similar, dispatching disabled for i/o interrupts, in my original resource manager ... on heavily loaded systems, to minimize the effects that asynchronous interrupts had on cache hit ratios.
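schematically, the dispatch pattern under discussion looks something like this (hypothetical C with stub routines; the real code was 370/XA assembler around the SIE and TPI instructions): run the guest with I/O interrupts masked for a bounded timeslice, then drain all pending interrupts by polling before dispatching again.

  #include <stdbool.h>
  #include <stdio.h>

  #define TIMESLICE_MS 10   /* bound on time run with I/O masked off */

  /* Stubs standing in for hardware/hypervisor operations. */
  static void mask_io_interrupts(bool masked)
  { printf("I/O interrupts %s\n", masked ? "masked" : "enabled"); }

  static void enter_guest(int vm, int max_ms)      /* stand-in for SIE */
  { printf("vm %d runs for up to %d ms\n", vm, max_ms); }

  static int pending = 2;
  static bool poll_pending_io(void)                /* stand-in for TPI */
  { return pending-- > 0; }

  static void handle_io_interrupt(void)
  { printf("drained one pending I/O interrupt\n"); }

  /* Entering/leaving the guest is expensive, so don't let every I/O
     interrupt bounce the guest out: run it with I/O masked and rely
     on the timeslice to bound interrupt latency. */
  static void dispatch(int vm)
  {
      mask_io_interrupts(true);
      enter_guest(vm, TIMESLICE_MS);
      mask_io_interrupts(false);

      /* process everything that arrived while we were masked */
      while (poll_pending_io())
          handle_io_interrupt();
  }

  int main(void) { dispatch(1); return 0; }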
recent posts mentioning SIE instruction:
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2011.html#62 SIE - CompArch
https://www.garlic.com/~lynn/2011b.html#18 Melinda Varian's history page move
https://www.garlic.com/~lynn/2011b.html#70 vm/370 3081
one of the above mentions IBMVM ... i've previously mentioned that I
was blamed for computer conferencing on the internal network in the
late 70s & early 80s. then the corporation put together officially
sanctioned computer conferencing support
https://www.garlic.com/~lynn/2011b.html#9 Rare Apple I computer sells for $216,000 in London
https://www.garlic.com/~lynn/2011b.html#25 Melinda Varian's history page move
https://www.garlic.com/~lynn/2011c.html#6 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)
https://www.garlic.com/~lynn/2011c.html#28 If IBM Hadn't Bet the Company
and from ibm jargon
conferencing facility - n. A service machine that allows data files to
be shared among many people and places. These files are typically
forums on particular subjects, which can be added to by those people
authorised to take part in the conference. This allows anyone to ask
questions of the user community and receive public answers from it.
The growth rate of a given conferencing facility is a good indication
of IBMers' interest in its topic. The three largest conferences are
the IBMPC, IBMVM, and IBMTEXT conferences, which hold thousands of
forums on matters relating to the PC, VM, and text processing,
respectively. These are all open to any VNET user. append, forum,
service machine
... snip ...
--
virtualization experience starting Jan1968, online at home since Mar1970
Date: Fri, 25 Feb 2011 10:05:11 -0500 From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Hillgang -- VM Performance MailingList: HILLGANGon 02/23/2011:
I'm not sure about the last 20yrs ... I may have to do some research between now and the talk. For the previous 20yrs, I reverted things at least twice.
A lot of stuff that I had done for cp67 as an undergraduate in the 60s, and which was released in cp67, was dropped in the simplification morph from cp67->vm370.
I continued to do a lot of 370 stuff all during the future system period (when 370 stuff was being killed off). With the demise of future system, there was a mad rush to get stuff back into the 370 product pipelines ... some amount of the stuff I had been doing was picked up and released in vm370 R3. Then some more was selected to be released as my "Resource Manager".
Then in the mid-80s, it was taking several weeks to get management and
legal approval for the SEAS talk. There was some dustup about some
amount of the changes made in HPO2.5, HPO3.4, etc ... and reverting
again to the way I had done things in CP67 ... and they were apparently
worried that some of that would leak out in the talk. One case was the
global LRU page replacement ... where somebody in the development
group was reverting to the cp67 approach. A couple yrs earlier, somebody was trying
to get a Stanford PhD in the area of global LRU and being opposed by
"local LRU" forces in academia. It took nearly a year to get
management approval to send a reply regarding work I had done in the
60s on global LRU. part of that reply
https://www.garlic.com/~lynn/2006w.html#email821019
in this old post
https://www.garlic.com/~lynn/2006w.html#46
As an aside ... there was a post in comp.arch within the past two days about processor hardware caches using a very similar strategy for replacing cache lines.
past posts mentioning replacement algorithms
https://www.garlic.com/~lynn/subtopic.html#clock
for other drift ... after the z/VM cluster software presentation a
couple Hillgang meetings ago ... I posted "From The
Annals of Release No Software Before Its Time" about cluster support
having been done for internal HONE in the late 70s (more than
30yrs earlier):
https://www.garlic.com/~lynn/2009p.html#43
... and x-over from recent a.f.c. posts
https://www.garlic.com/~lynn/2011c.html#72 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#74 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#79 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#81 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#82 A History of VM Performance
https://www.garlic.com/~lynn/2011c.html#87 A History of VM Performance
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and the Computer Revolution Newsgroups: alt.folklore.computers Date: Sat, 26 Feb 2011 08:04:10 -0500Peter Flass <Peter_Flass@Yahoo.com> writes:
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A History of VM Performance Newsgroups: alt.folklore.computers Date: Sat, 26 Feb 2011 09:29:06 -0500re:
and with respect to "dustup"
https://www.garlic.com/~lynn/2011c.html#88 Hillgang -- VM Performance
from long ago and far away:
Date: 05/01/86 11:35:13
To: wheeler
Subject: qdrop delay et al
Wow... What a torrent of ideas you've shipped out here while I was off
skiing Tuckerman! On another subject, I've been meaning to reply
concerning the distinction between changed and unchanged unreferenced
pages as regards moving them from below to above the notorious 16m
line. I agree with you 100%... the name of the game is global LRU,
and all that matters is reference.
The version of the glru prototype that ran here a while ago fixed much
of that area, while leaving the code which distinguished between
changed and unchanged pages alone. The "trick" was that all pages
that were going to be swapped were considered changed for purposes of
the page move, and that no private page would then ever appear
unchanged or unswappable (all pages read in singly must eventually be
swapped, while all pages read from swap sets are explicitly marked
changed by the destructive read). This still left the set of
non-private pages, (and pseudo pages), since they can not be swapped.
Some of these (system address space and virtual page zeroes) can not
be moved up for other implementation reasons, but shared pages were
still being denied their fair shot at a potential move up as opposed
to being immediately freelisted (and later reread). The latest
version of the prototype covers this.
I expect a benefit to trickle through to the extend code as well,
since now many of the pages it will find will be moved instead of
freelisted. Even if extend fails to find an unchanged page, now there
is still an excellent chance for non-loss-of-control extending.
... snip ... top of post, old email index
with regard to the above ... "EXTEND" is the process invoked when the cp kernel runs out of available kernel storage and scavenges a "pageable page" for that purpose. The original code in cp67 would just invoke the standard page replacement algorithm ... which might select either a changed or non-changed page, purely on the reference bit. The downside is that selecting a changed page introduced delay (and other activity) to first write the changed page to disk (a delay which might result in kernel failure from attempting a "new" extend while already extending). While I was at Boeing ... I modified the CP67 kernel to use "BALR" linkages between high-use kernel routines (including FREE/FRET, kernel storage management). I also modified "EXTEND" processing to first search all pageable pages for a non-changed page (to minimize kernel failures).
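in outline, the EXTEND fix amounts to something like the following (generic sketch with hypothetical structures, not the cp67 code): prefer an unchanged frame, since a changed frame needs a page-out before reuse, and that I/O delay is exactly the window in which a nested extend can fail.

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  struct page {
      bool pageable;
      bool changed;     /* would need a page-out before reuse */
  };

  /* Steal a pageable frame for kernel storage.  Prefer an unchanged
     frame: it can be reused immediately, with no page-out I/O during
     which a nested extend could occur and fail. */
  static struct page *extend_scavenge(struct page *pages, size_t n)
  {
      for (size_t i = 0; i < n; i++)
          if (pages[i].pageable && !pages[i].changed)
              return &pages[i];        /* reusable immediately */
      for (size_t i = 0; i < n; i++)
          if (pages[i].pageable)
              return &pages[i];        /* last resort: write it out first */
      return NULL;                     /* nothing left to scavenge */
  }

  int main(void)
  {
      struct page pool[3] = { {true, true}, {true, false}, {false, false} };
      printf("selected frame %d\n",
             (int)(extend_scavenge(pool, 3) - pool));   /* prints 1 */
      return 0;
  }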
Recent posts referencing Boeing:
https://www.garlic.com/~lynn/2011b.html#66 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
more from long ago and far away
Date: 06/09/86 17:50:42
To: wheeler
Subject: Global LRU
I've just submitted some details on the GLRU line item to our patent
office. No telling whether I'll get lucky, but I took the liberty of
including you in the invention of the new management of the <16m area
(when that area is more constrained than storage in general).
So... don't be surprised if someone from IBM Kingston looks you up. And
don't be surprised if they don't, for that matter. We can only hope.
... snip ... top of post, old email index
Since most of the stuff I had done was as an undergraduate in the 60s & was well before process & software patents were allowed ... it never occurred to me to submit anything.
earlier global LRU email
https://www.garlic.com/~lynn/2007b.html#email860124
in this post
https://www.garlic.com/~lynn/2007b.html#34 Just another example of mainframe costs
past posts mentioning replacement algorithms
https://www.garlic.com/~lynn/subtopic.html#clock
old email regarding providing code for moving above/below
16mbyte line w/o requiring i/o:
https://www.garlic.com/~lynn/2006t.html#email800121
in this post
https://www.garlic.com/~lynn/2006t.html#15 more than 16mbyte support for 370
and related to patents for completely different portfolio
https://www.garlic.com/~lynn/2009q.html#40 Crypto dongles to secure online transactions
and
https://www.garlic.com/~lynn/aadssummary.htm
another reference to vm/sp (i.e. 370) modified to support xa-mode (as
opposed to internal "vmtool" being modified to ship to customers).
Date: 10/14/86 19:24:08
From: wheeler
re: yyyyyyy; xxxxxx in yyyyyyy modified vm/sp over a year ago to run
xa-mode ... and then started work on upgrading to 3.4/4.2 hpo.
yyyyyyy has been running the code in production for some of their
stuff. i've had a number of exchanges with xxxxxx. strong rumor was
that when kingston 1st heard of it, kingston management contacted
yyyyyyy and attempted to have it killed and all references to its
existence obliterated.
latest i've heard is that endicott has made some sort of offer to
xxxxxx and they were expecting answer this week.
... snip ... top of post, old email index
recent email about getting "vmtool" ready for customers (aka vm/811):
https://www.garlic.com/~lynn/2011b.html#email810210
in this post
https://www.garlic.com/~lynn/2011b.html#70 vm/370 3081
other references to SEAS presentation
Date: 10/16/86 10:59:53
From: wheeler
To: Melinda
re: vmp003; fyi, SEAS Oct presentation at Jersey.
... snip ... top of post, old email index
Date: Thu, 16 Oct 1986 14:04:27 EDT
From: Melinda
To: wheeler
Thank you very much!
We had hoped to get to SEAS, but finally couldn't make it.
I made your VMSHARE tape this morning. Sorry for the delay.
... snip ... top of post, old email index
other recent posts with old email from 1986
https://www.garlic.com/~lynn/2011b.html#61 VM13025 ... zombie/hung users
https://www.garlic.com/~lynn/2011b.html#89 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#0 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#2 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#15 If IBM Hadn't Bet the Company
from ibm jargon:
dual-path - v. 1. To provide alternative paths through program code in
order to accommodate different environments. Since the CP response is
different in VM/SP and VM/XA, we'll have to dual-path that Exec.
special-case. 2. To make a peripheral device available through more
than one channel. This can improve performance, and, on
multi-processor systems, allows the device to be available even if one
processor is off-line.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: If IBM Hadn't Bet the Company Newsgroups: alt.folklore.computers Date: Sat, 26 Feb 2011 10:22:05 -0500Quadibloc <jsavard@ecn.ab.ca> writes:
FS was doing single-level store ... ala the earlier tss/360, multics, etc ... but got a lot of the details wrong. s/38 incorporated some of the ideas ... implementing single-level store with a single (48bit) virtual address space (everything in the system existing in a single address space).
everything in the whole infrastructure being mapped into that single (48bit) virtual address space somewhat contributed to s/38 scatter allocation across all available devices; an s/38 backup required all available disks as a single operation ... and restore required all available disks as a single operation (major scale-up issues; say a 300-drive disk farm ... where a single disk failure would require restoring all 300 disks). the single-disk-failure mode issue with scatter allocation was a big motivator for s/38 being an early raid adopter in the 80s.
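to make the single-level-store idea concrete, a toy model (illustrative only; the 24/24 segment/offset split of the 48 bits is an arbitrary assumption, not the s/38 layout): an object handle is just a segment number in one flat address space, and addressing a byte of it is pure arithmetic rather than file I/O, with the paging system supplying persistence.

  #include <stdint.h>
  #include <stdio.h>

  #define SEGMENT_BITS 24   /* arbitrary 24/24 split of a 48-bit space */

  /* In a single-level store there is one system-wide address space:
     an object handle is just a segment number, and a byte of the
     object is reached by arithmetic -- there is no separate file I/O
     namespace; the paging system supplies persistence. */
  static uint64_t object_addr(uint32_t segment, uint32_t offset)
  {
      return ((uint64_t)segment << SEGMENT_BITS) | (offset & 0xffffffu);
  }

  int main(void)
  {
      uint64_t a = object_addr(42, 0x100);
      printf("object 42, byte 0x100 -> address %012llx\n",
             (unsigned long long)a);
      return 0;
  }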
there have been various claims that intel i432 adopted similar design
to s/38 ... old post with quote from i432 intro (mentioning both B5000
and S/38):
https://www.garlic.com/~lynn/2000f.html#48 Famous Machines and Software that didn't
a big difference between i432 and s/38 (and as/400) ... was that i432 had a huge amount of complex stuff in silicon ... and any fixes required a new chip (while with s/38 & as/400 all that complexity was effectively software).
misc. other past posts mentioning both i432 and s/38:
https://www.garlic.com/~lynn/2001g.html#36 What was object oriented in iAPX432?
https://www.garlic.com/~lynn/2002d.html#27 iAPX432 today?
https://www.garlic.com/~lynn/2002f.html#42 Blade architectures
https://www.garlic.com/~lynn/2003e.html#54 Reviving Multics
https://www.garlic.com/~lynn/2003e.html#55 Reviving Multics
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003m.html#23 Intel iAPX 432
https://www.garlic.com/~lynn/2004q.html#60 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2006n.html#42 Why is zSeries so CPU poor?
https://www.garlic.com/~lynn/2008s.html#39 The Internet's 100 Oldest Dot-Com Domains
https://www.garlic.com/~lynn/2009o.html#46 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2010h.html#40 Faster image rotation
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A History of VM Performance Newsgroups: alt.folklore.computers Date: Sat, 26 Feb 2011 11:10:28 -0500Anne & Lynn Wheeler <lynn@garlic.com> writes:
for no particular reason ... another email from 1may86
https://www.garlic.com/~lynn/2005d.html#email860501
in this post
https://www.garlic.com/~lynn/2005d.html#13 Cerf and Kahn receive Turing award
mentions pulling together all the potential participants in the NSFNET backbone for a 2-day meeting at IBM ... however internal politics was kicking in ... and they were all called up and told the meeting was canceled.
other old email related to internal politics (this time pushing
SNA for the NSFNET backbone ... the full thing contained a huge
amount of misinformation):
https://www.garlic.com/~lynn/2006w.html#email870109
in this post
https://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET
old email related to replacement algorithm
https://www.garlic.com/~lynn/lhwemail.html#globallru
and old email related to nsfnet backbone
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
other recent posts:
https://www.garlic.com/~lynn/2011c.html#2 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#6 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#16 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#40 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011c.html#76 Other early NSFNET backbone
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Irrational desire to author fundamental interfaces Newsgroups: comp.arch Date: Sat, 26 Feb 2011 13:01:43 -0500EricP <ThatWouldBeTelling@thevillage.com> writes:
late 70s ... there was an effort to converge all the internal
microprocessors (not just 370 micro-engines ... but all the controllers
and other processors) to 801/risc (the "iliad" chips had
features/extensions specifically for supporting emulation). misc.
past posts mentioning 801, risc, romp, rios, iliad, etc
https://www.garlic.com/~lynn/subtopic.html#801
there have been a few commercial vendors of 370 emulation ... running on
i86 and other platforms ... like
https://web.archive.org/web/20240130182226/https://www.funsoft.com/
and then there is hercules
http://www.hercules-390.org/
and
https://en.wikipedia.org/wiki/Hercules_%28emulator%29
at least some of the 370 emulators ... included support for JIT "370 compile" ... sequences of 370 code (presumably frequently executed) translated to native for direct execution (with lots of tracking for "self-modifying" events).
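a skeletal version of that technique (generic illustration, not any particular emulator's code): cache translations keyed by guest address and invalidate any translation whose guest page is stored into, so self-modifying 370 code falls back to interpretation and gets retranslated.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT 12
  #define NPAGES     1024          /* toy guest: 4MB of 370 storage */
  #define CACHE_SIZE 4096

  struct translation {
      uint32_t guest_pc;           /* start of the translated sequence */
      bool     valid;
  };

  static struct translation cache[CACHE_SIZE];
  static bool page_has_translations[NPAGES];

  static uint32_t page_of(uint32_t addr)
  { return (addr >> PAGE_SHIFT) % NPAGES; }

  /* Guest store: if the target page holds translated code, throw away
     every translation on that page (self-modifying code detection). */
  static void on_guest_store(uint32_t addr)
  {
      uint32_t page = page_of(addr);
      if (!page_has_translations[page])
          return;                  /* fast path: common case */
      for (int i = 0; i < CACHE_SIZE; i++)
          if (cache[i].valid && page_of(cache[i].guest_pc) == page)
              cache[i].valid = false;
      page_has_translations[page] = false;
  }

  /* Execute at guest_pc: run cached native code if present, otherwise
     interpret (a real emulator would also translate hot sequences
     here, producing host code to jump to). */
  static void execute(uint32_t guest_pc)
  {
      struct translation *t = &cache[guest_pc % CACHE_SIZE];
      if (t->valid && t->guest_pc == guest_pc)
          printf("native translation for %08x\n", guest_pc);
      else {
          printf("interpreting %08x\n", guest_pc);
          t->guest_pc = guest_pc;  /* pretend we translated it */
          t->valid = true;
          page_has_translations[page_of(guest_pc)] = true;
      }
  }

  int main(void)
  {
      execute(0x20000);            /* first time: interpret + translate */
      execute(0x20000);            /* now runs the translation */
      on_guest_store(0x20010);     /* store into the translated page */
      execute(0x20000);            /* invalidated: interpret again */
      return 0;
  }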
more recently there have been mainframe discussions of some sort of x86 emulator (on mainframe) capable of executing windows.
--
virtualization experience starting Jan1968, online at home since Mar1970