List of Archived Posts

2025 Newsgroup Postings (03/01 - )

Financial Engineering
Large Datacenters
Why VAX Was the Ultimate CISC and Not RISC
Clone 370 System Makers
Why VAX Was the Ultimate CISC and Not RISC
RDBMS, SQL/DS, DB2, HA/CMP
2301 Fixed-Head Drum
Why VAX Was the Ultimate CISC and Not RISC
The joy of FORTRAN
HSDT
IBM Token-Ring
IBM Token-Ring
IBM 3880, 3380, Data-streaming
Learson Tries To Save Watson IBM
IBM Token-Ring
IBM Token-Ring
IBM VM/CMS Mainframe
IBM VM/CMS Mainframe
IBM VM/CMS Mainframe
IBM VM/CMS Mainframe
IBM San Jose and Santa Teresa Lab
IBM San Jose and Santa Teresa Lab
IBM San Jose and Santa Teresa Lab
Forget About Cloud Computing. On-Premises Is All the Rage Again
Forget About Cloud Computing. On-Premises Is All the Rage Again
IBM 3880, 3380, Data-streaming
IBM 3880, 3380, Data-streaming
IBM 3880, 3380, Data-streaming
IBM WatchPad
Learson Tries To Save Watson IBM
Some Career Highlights
Some Career Highlights
Forget About Cloud Computing. On-Premises Is All the Rage Again
3081, 370/XA, MVS/XA
IBM 370/125
3081, 370/XA, MVS/XA
FAA ATC, The Brawl in IBM 1964
FAA ATC, The Brawl in IBM 1964
IBM Computers in the 60s
FAA ATC, The Brawl in IBM 1964
IBM APPN
AIM, Apple, IBM, Motorola
IBM 70s & 80s
IBM 70s & 80s
IBM 70s & 80s
Business Planning
POK High-End and Endicott Mid-range
IBM Datacenters
IBM Datacenters
POK High-End and Endicott Mid-range
IBM 3880, 3380, Data-streaming
POK High-End and Endicott Mid-range
Mainframe Modernization
IBM Datacenters
Planet Mainframe
POK High-End and Endicott Mid-range
POK High-End and Endicott Mid-range
IBM Downturn, Downfall, Breakup
IBM Downturn, Downfall, Breakup
IBM Retain and other online
IBM Retain and other online
Capitalism: A Six-Part Series
Capitalism: A Six-Part Series
IBM Retain and other online
IBM Downturn, Downfall, Breakup
Supercomputer Datacenters
IBM 3101 Glass Teletype and "Block Mode"
IBM 23Jun1969 Unbundling and HONE
IBM 23Jun1969 Unbundling and HONE
Amdahl Trivia
Kernel Histories
IBM 23Jun1969 Unbundling and HONE

Financial Engineering

From: Lynn Wheeler <lynn@garlic.com>
Subject: Financial Engineering
Date: 01 Mar, 2025
Blog: Facebook

The last product we did was HA/6000 approved by Nick Donofrio in 1988
(before RS/6000 was announced) for the NYTimes to move their newspaper
system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had vaxcluster
support in same source base with unix). The S/88 product administrator
then starts taking us around to their customers and also has me do a
section for the corporate continuous availability strategy document
... it gets pulled when both Rochester/AS400 and POK/(high-end
mainframe) complain they couldn't meet the requirements.

Early Jan1992 have a meeting with Oracle CEO and IBM/AWD Hester tells
Ellison we would have 16-system clusters by mid92 and 128-system
clusters by ye92. Then late Jan92, cluster scale-up is transferred for
announce as IBM Supercomputer (for technical/scientific *ONLY*) and we
are told we can't work on anything with more than four processors (we
leave IBM a few months later).

1992, IBM has one of the largest losses in the history of US companies
and was in the process of being re-orged into the 13 "baby blues" in
preparation for breaking up the company (take off on the "baby bell"
breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup

Not long after leaving IBM, I was brought in as consultant into a small
client/server startup. Two of the former Oracle people (that we had
worked with on HA/CMP cluster scaleup) were there, responsible for
something they called "commerce server", and they wanted to do payment
transactions. The startup had also invented this technology they
called "SSL" that they wanted to use; the result is now frequently
called "electronic commerce" (or ecommerce).

I had complete responsibility for everything between "web servers" and
gateways to the financial industry payment networks. Payment network
trouble desks had 5min initial problem diagnoses ... all circuit
based. I had to do a lot of procedures, documentation and software to
bring packet-based internet up to that level. I then did a talk (based
on ecommerce work) "Why Internet Wasn't Business Critical
Dataprocessing" ... which Postel (Internet standards editor) sponsored
at ISI/USC.

Stockman and IBM financial engineering company:
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:

IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback
contraption on steroids. During the five years ending in fiscal 2011,
the company spent a staggering $67 billion repurchasing its own
shares, a figure that was equal to 100 percent of its net income.

pg465/loc10014-17:

Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.

... snip ...

(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate
Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind
paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st

The company has represented that its dividends and share repurchases
have come to a total of over $159 billion since 2000.

(2016) After Forking Out $110 Billion on Stock Buybacks, IBM
Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a
little?) shift focus needing it for redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud
Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
ecommerce gateways
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback

--
virtualization experience starting Jan1968, online at home since Mar1970

Large Datacenters

From: Lynn Wheeler <lynn@garlic.com>
Subject: Large Datacenters
Date: 01 Mar, 2025
Blog: Facebook

I had taken a 2credit-hr intro to fortran/computers and at the end of
the semester was hired to rewrite 1401 MPIO in 360 assembler for
360/30. Univ was getting 360/67 for tss/360 to replace 709/1401 and
got a 360/30 temporarily (replacing 1401) pending availability of
360/67. Univ shutdown datacenter on weekends and I got the whole place
dedicated (although 48hrs w/o sleep made Mondays hard). I was given a
bunch of hardware & software manuals and got to design and implement
my own monitor, device drivers, interrupt handlers, error recovery,
storage management, etc ... and within a few weeks had a 2000 card
assembler program. The 360/67 arrives within a year of taking the
intro class and I was hired fulltime responsible for os/360 (tss/360
never came to production).

Then before I graduate, I'm hired fulltime into a small group in
Boeing CFO office to help with the formation of Boeing Computer
Services (consolidate all dataprocessing into independent business
unit). I think the Renton datacenter was the largest in the world, with 360/65s
arriving faster than they could be installed, boxes constantly staged
in the hallways around the machine room. Lots of politics between
Renton director and CFO who only had a 360/30 up at Boeing field
(although they enlarge the machine room to install 360/67 for me to
play with when I'm not doing other stuff). Then when I graduate,
instead of staying with the CFO, I join IBM science center.

I was introduced to John Boyd in the early 80s and would sponsor his
briefings at IBM. He had lots of stories, including being very vocal
that the electronics across the trail wouldn't work. Possibly as
punishment he was put in command of "spook base" (Boyd would say it
had the largest air conditioned bldg in that part of the world) about
the same time I'm at Boeing
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

Access to the environmentally controlled building was afforded via the
main security lobby that also doubled as an airlock entrance and
changing-room, where twelve inch-square pidgeon-hole bins stored
individually name-labeled white KEDS sneakers for all TFA
personnel. As with any comparable data processing facility of that
era, positive pressurization was necessary to prevent contamination
and corrosion of sensitive electro-mechanical data processing
equipment. Reel-to-reel tape drives, removable hard-disk drives,
storage vaults, punch-card readers, and inumerable relays in
1960's-era computers made for high-maintainence systems. Paper dust
and chaff from fan-fold printers and the teletypes in the
communications vault produced a lot of contamination. The super-fine
red clay dust and humidity of northeast Thailand made it even more
important to maintain a well-controlled and clean working environment.

Maintenance of air-conditioning filters and chiller pumps was always a
high-priority for the facility Central Plant, but because of the
24-hour nature of operations, some important systems were run to
failure rather than taken off-line to meet scheduled preventative
maintenance requirements. For security reasons, only off-duty TFA
personnel of rank E-5 and above were allowed to perform the
housekeeping in the facility, where they constantly mopped floors and
cleaned the consoles and work areas. Contract civilian IBM computer
maintenance staff were constantly accessing the computer sub-floor
area for equipment maintenance or cable routing, with the numerous
systems upgrades, and the underfloor plenum areas remained much
cleaner than the average data processing facility. Poisonous snakes
still found a way in, causing some excitement, and staff were
occasionally reprimanded for shooting rubber bands at the flies during
the moments of boredom that is every soldier's fate. Consuming
beverages, food or smoking was not allowed on the computer floors, but
only in the break area outside. Staff seldom left the compound for
lunch. Most either ate C-rations, boxed lunches assembled and
delivered from the base chow hall, or sandwiches and sodas purchased
from a small snack bar installed in later years.

... snip ...

Boyd biography says "spook base" was a $2.5B "windfall" for IBM (ten
times Renton).

In 89/90 the Commandant of the Marine Corps leverages Boyd for a
make-over of the corps (at a time when IBM was desperately in need of
make-over).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
1992 IBM has one of the largest losses in history of US companies and
was being reorganized into the 13 "baby blues" in preparation for
breaking up the company (take off on "baby bells" breakup a decade
earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.

Boyd passes in 1997; the USAF had pretty much disowned him, it was
the Marines at Arlington, and his effects go to Quantico. The 89/90
commandant continued to sponsor regular Boyd themed conferences at
Marine Corps Univ. In one, the (former) commandant wanders in after
lunch and speaks for two hrs (totally throwing schedule off, but
nobody complains). I'm in the back corner of the room and when he is
done, he makes a beeline straight for me (and all I could think of was
I had been setup by Marines I've offended in the past, including
former head of DaNang datacenter and later Quantico).

IBM Cambridge Science Center
https://www.garlic.com/~lynn/subtopic.html#545tech
Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent 709/1401, MPIO, 360/67, univ, Boeing CFO, Renton posts
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360

--
virtualization experience starting Jan1968, online at home since Mar1970

Why VAX Was the Ultimate CISC and Not RISC

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why VAX Was the Ultimate CISC and Not RISC
Newsgroups: comp.arch
Date: Sat, 01 Mar 2025 18:29:50 -1000

anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:

IBM tried to commercialize it in the ROMP in the IBM RT PC; Wikipedia
says: "The architectural work on the ROMP began in late spring of
1977, as a spin-off of IBM Research's 801 RISC processor ... The first
examples became available in 1981, and it was first used commercially
in the IBM RT PC announced in January 1986. ... The delay between the
completion of the ROMP design, and introduction of the RT PC was
caused by overly ambitious software plans for the RT PC and its
operating system (OS)."  And IBM then designed a new RISC, the
RS/6000, which was released in 1990.

ROMP was originally for the DISPLAYWRITER follow-on ... running the
CP.r operating system and PL.8 programming language. ROMP was a
minimal 801, didn't have supervisor/problem mode ... at the time their
claim was PL.8 would only generate correct code and CP.r would only
load/execute correct programs. They claimed 40bit addressing ... 32bit
addresses ... but the top four bits selected one of 16 "segment
registers" that contained 12bit segment-identifiers ... aka 28bit
segment displacement plus 12bit segment-id (40bits) ... and any inline
code could change a segment register value as easily as it could load
any general register.
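
A minimal sketch (in C) of the addressing described above: a 32-bit
effective address whose top four bits pick one of 16 segment
registers, each holding a 12-bit segment-id that is concatenated with
the remaining 28-bit displacement to give a 40-bit virtual address.
The function name and sample values are illustrative, not ROMP detail.

#include <stdint.h>
#include <stdio.h>

/* Illustrative model of the ROMP-style addressing described above:
 * top 4 bits of a 32-bit effective address select one of 16 segment
 * registers; each holds a 12-bit segment-id, concatenated with the
 * 28-bit displacement (12 + 28 = 40 bits of virtual address). */

static uint16_t seg_reg[16];            /* low 12 bits hold segment-ids */

uint64_t virtual_address(uint32_t ea)
{
    uint32_t segno = ea >> 28;               /* top 4 bits: which register */
    uint64_t segid = seg_reg[segno] & 0xFFF; /* 12-bit segment-id */
    uint64_t disp  = ea & 0x0FFFFFFF;        /* 28-bit segment displacement */
    return (segid << 28) | disp;             /* 40-bit virtual address */
}

int main(void)
{
    seg_reg[3] = 0xABC;                      /* hypothetical segment-id */
    printf("%010llx\n",
           (unsigned long long)virtual_address(0x31234567));
    return 0;
}

Widening the segment-id field from 12 to 24 bits in the same sketch
gives the 52-bit (24 + 28) figure mentioned below for RIOS.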

When follow-on to DISPLAYWRITER was canceled, they pivoted to UNIX
workstation market and got the company that had done AT&T unix port to
IBM/PC for PC/IX ... to do AIX. Now ROMP needed supervisor/problem
mode and inline code could no longer change segment register values
... needed to have supervisor call.

Folklore is they also had 200 PL.8 programmers and needed something
for them to do, so they gen'ed an abstract virtual machine system
("VRM") (implemented in PL.8) and had AIX port be done to the abstract
virtual machine definition (instead of real hardware) .... claiming
that the combined effort would be less (total effort) than having the
outside company do the AIX port to the real hardware (also putting in
a lot of IBM SNA communication support).

The IBM Palo Alto group had been working on UCB BSD port to 370, but
was redirected to do it instead to bare ROMP hardware ... doing it
with enormously fewer resources than the VRM+AIX+SNA effort.

Move to RS/6000 & RIOS (large multi-chip) doubled the 12bit segment-id
to 24bit segment-id (and some left-over description talked about it
being 52bit addressing) and eliminated the VRM ... and adding in some
amount of BSDisms.

AWD had done their own cards for the PC/RT (16bit AT) bus, including a
4mbit token-ring card. Then for RS/6000 microchannel, AWD was told
they couldn't do their own cards, but had to use PS2 microchannel
cards. The communication group was fiercely fighting off client/server
and distributed computing and had seriously performance knee-capped
the PS2 cards, including the ($800) 16mbit token-ring card (which had
lower card throughput than the PC/RT 4mbit TR card). There was a joke
that a PC/RT 4mbit TR server would have higher throughput than an
RS/6000 16mbit TR server. There was also a joke that the RS6000/730
with VMEbus was a way to work around corporate politics and be able to
install high-performance workstation cards.

We got the HA/6000 project in 1988 (approved by Nick Donofrio),
originally for NYTimes to move their newspaper system off VAXCluster to
RS/6000. I rename it HA/CMP.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had vaxcluster
support in same source base with unix). The S/88 product administrator
then starts taking us around to their customers and also has me do a
section for the corporate continuous availability strategy document
... it gets pulled when both Rochester/AS400 and POK/(high-end
mainframe) complain they couldn't meet the requirements.

Early Jan1992 have a meeting with Oracle CEO and IBM/AWD Hester tells
Ellison we would have 16-system clusters by mid92 and 128-system
clusters by ye92. Then late Jan92, cluster scale-up is transferred for
announce as IBM Supercomputer (for technical/scientific *ONLY*) and we
are told we can't work on anything with more than four processors (we
leave IBM a few months later). Contributing was the mainframe DB2 DBMS
group complaining that if we were allowed to continue, it would be at
least five years ahead of them.

Neither ROMP nor RIOS supported bus/cache consistency for
multiprocessor operation. The executive we reported to, went over to
head up ("AIM" - Apple, IBM, Motorola) Somerset for single chip
801/risc ... but also adopts Motorola 88k bus enabling multiprocessor
configurations. He later leaves Somerset for president of (SGI owned)
MIPS.

trivia: I also had the HSDT project (started in early 80s), T1 and
faster computer links, both terrestrial and satellite ... which
included a custom designed TDMA satellite system done on the other
side of the pacific ... and put in a 3-node system: two 4.5M dishes,
one in San Jose and one at Yorktown Research (hdqtrs, east coast), and
a 7M dish in Austin (where much of the RIOS design was going on). San
Jose also got an EVE, a superfast hardware VLSI logic simulator
(scores of times faster than existing simulation) ... and it was
claimed that Austin being able to use the EVE in San Jose helped bring
RIOS in a year early.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

--
virtualization experience starting Jan1968, online at home since Mar1970

Clone 370 System Makers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Clone 370 System Makers
Date: 02 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025.html#121 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#122 Clone 370 System Makers

Note: several years after the Amdahl incident (and being told goodbye
to career, promotions, raises), I wrote an IBM "speakup" about being
underpaid, with some supporting documents. Got a written reply from
the head of HR that after a detailed review of my whole career, I was
being paid exactly what I was supposed to be paid. I then made a copy
of the original "speakup" and the head of HR's reply and wrote a cover
stating that I had recently been asked to help interview some number
of students that would shortly be graduating, for positions in a new
group that I would be technically directing ... and found out that
they were being offered starting salaries that were 1/3rd more than I
was currently making. I never got a written reply, but a few weeks
later I got a 33% raise (putting me on a level playing field with the
new graduate hires). Several people then reminded me that "Business
Ethics" was an oxymoron.

some past posts mentioning the speakup
https://www.garlic.com/~lynn/2023c.html#89 More Dataprocessing Career
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2022h.html#24 Inventing the Internet
https://www.garlic.com/~lynn/2022f.html#42 IBM Bureaucrats
https://www.garlic.com/~lynn/2022e.html#59 IBM CEO: Only 60% of office workers will ever return full-time
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#125 IBM Clone Controllers
https://www.garlic.com/~lynn/2021j.html#39 IBM Registered Confidential
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021i.html#82 IBM Downturn
https://www.garlic.com/~lynn/2021h.html#61 IBM Starting Salary
https://www.garlic.com/~lynn/2021e.html#15 IBM Internal Network
https://www.garlic.com/~lynn/2021d.html#86 Bizarre Career Events
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021b.html#12 IBM "811", 370/xa architecture
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2017.html#78 IBM Disk Engineering
https://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014h.html#81 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014c.html#65 IBM layoffs strike first in India; workers describe cuts as 'slaughter' and 'massive'
https://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
https://www.garlic.com/~lynn/2012k.html#28 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2011g.html#12 Clone Processors
https://www.garlic.com/~lynn/2011g.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
https://www.garlic.com/~lynn/2010c.html#82 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2009h.html#74 My Vintage Dream PC
https://www.garlic.com/~lynn/2007j.html#94 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#83 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#75 IBM Unionization
https://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer

--
virtualization experience starting Jan1968, online at home since Mar1970

Why VAX Was the Ultimate CISC and Not RISC

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why VAX Was the Ultimate CISC and Not RISC
Newsgroups: comp.arch
Date: Sun, 02 Mar 2025 09:03:53 -1000

Robert Swindells <rjs@fdy2.co.uk> writes:

You could look at the MIT Lisp Machine, it used basically the same chips
as a VAX 11/780 but was a pipelined load/store architecture internally.

re:
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC

from long ago and far away:


Date: 79/07/11 11:00:03
To: wheeler

i heard a funny story: seems the MIT LISP machine people proposed that
IBM furnish them with an 801 to be the engine for their prototype.
B.O. Evans considered their request, and turned them down.. offered them
an 8100 instead!  (I hope they told him properly what they thought of
that)

... snip ... top of post, old email index

... trivia: Evans had asked my wife to review/audit the 8100 (it had a
really slow, anemic processor) and shortly after it was canceled
("decommitted").

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

misc past posts with same email
https://www.garlic.com/~lynn/2023e.html#84 memory speeds, Solving the Floating-Point Conundrum
https://www.garlic.com/~lynn/2006t.html#9 32 or even 64 registers for x86-64?
https://www.garlic.com/~lynn/2006o.html#45 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006c.html#3 Architectural support for programming languages
https://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics

--
virtualization experience starting Jan1968, online at home since Mar1970

RDBMS, SQL/DS, DB2, HA/CMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: RDBMS, SQL/DS, DB2, HA/CMP
Date: 02 Mar, 2025
Blog: Facebook

Vern Watts responsible for IMS
https://www.vcwatts.org/ibm_story.html

SQL/Relational started 1974 at San Jose Research (main plant site) as
System/R, implemented on VM370.

Some of the MIT CTSS/7094 people went to the 5th flr to do Multics,
others went to the IBM Cambridge Science Center ("CSC") on the 4th
flr, did virtual machines (initially CP40/CMS on a 360/40 with virtual
memory hardware mods, morphs into CP67/CMS when the 360/67 standard
with virtual memory becomes available), the internal network, invented
GML in 1969, lots of online apps. When the decision was made to add
virtual memory to all 370s, some of the CSC people split off and take
over the IBM Boston Programming Center on the 3rd flr for the VM370
development group (and CP67/CMS morphs into VM370/CMS).

Multics releases the 1st relational RDBMS (non-SQL) in June 1976
https://www.mcjones.org/System_R/mrds.html

STL (since renamed SVL) didn't appear until 1977; it was originally
going to be called Coyote, after the convention of naming for the
closest Post Office. However, that spring the San Francisco Coyote
Organization demonstrated on the steps of the capitol and it was
quickly decided to choose a different name (prior to the opening),
eventually the closest cross street. Vern and IMS move up from the LA
area to STL. It was the same year that I transferred from CSC to San
Jose Research and would work on some of System/R with Jim Gray and
Vera Watson. There was some amount of criticism from the IMS group
about System/R, including the index requiring lots more I/O and double
the disk space.

First SQL/RDBMS ships, Oracle
https://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Oracle.html

STL was in the process of doing the next great DBMS, "EAGLE" and we
were able to do technology transfer to Endicott (under the "radar",
while company pre-occupied with "EAGLE") for SQL/DS
https://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-SQL_DS.html

Later "EAGLE" implodes and there is a request for how fast could
System/R be ported to MVS ... eventually released as DB2, originally
for decision-support.
https://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-DB2.html

Trivia: Jim Gray departs for Tandem fall 1980, palming off some things
on me. The last product we did at IBM was HA/6000, starting 1988, originally
for NYTimes to move their newspaper system (ATEX) off VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had vaxcluster
support in same source base with unix). The S/88 product administrator
then starts taking us around to their customers and also has me do a
section for the corporate continuous availability strategy document
... it gets pulled when both Rochester/AS400 and POK/(high-end
mainframe) complain they couldn't meet the requirements.

Early Jan1992 have a meeting with Oracle CEO and IBM/AWD Hester tells
Ellison we would have 16-system clusters by mid92 and 128-system
clusters by ye92. Then late Jan92, cluster scale-up is transferred for
announce as IBM Supercomputer (for technical/scientific *ONLY*) and we
are told we can't work on anything with more than four processors (we
leave IBM a few months later).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

posts mentioning HA/CMP, S/88, Continuous Availability Strategy document:
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#119 Consumer and Commercial Computers
https://www.garlic.com/~lynn/2025.html#106 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#104 Mainframe dumps and debugging
https://www.garlic.com/~lynn/2025.html#89 Wang Terminals (Re: old pharts, Multics vs Unix)
https://www.garlic.com/~lynn/2025.html#57 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#24 IBM Mainframe Comparison
https://www.garlic.com/~lynn/2024g.html#82 IBM S/38
https://www.garlic.com/~lynn/2024g.html#5 IBM Transformational Change
https://www.garlic.com/~lynn/2024f.html#67 IBM "THINK"
https://www.garlic.com/~lynn/2024f.html#36 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#25 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024f.html#3 Emulating vintage computers
https://www.garlic.com/~lynn/2024e.html#138 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#55 Article on new mainframe use
https://www.garlic.com/~lynn/2024d.html#12 ADA, FAA ATC, FSD
https://www.garlic.com/~lynn/2024d.html#4 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#105 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#79 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#60 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024c.html#54 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#7 Testing
https://www.garlic.com/~lynn/2024c.html#3 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#84 IBM DBMS/RDBMS
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024b.html#29 DB2
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#93 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#115 IBM RAS
https://www.garlic.com/~lynn/2023f.html#72 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#38 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2022b.html#55 IBM History
https://www.garlic.com/~lynn/2021d.html#53 IMS Stories
https://www.garlic.com/~lynn/2021.html#3 How an obscure British PC maker invented ARM and changed the world

--
virtualization experience starting Jan1968, online at home since Mar1970

2301 Fixed-Head Drum

From: Lynn Wheeler <lynn@garlic.com>
Subject: 2301 Fixed-Head Drum
Date: 05 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025.html#112 2301 Fixed-Head Drum
https://www.garlic.com/~lynn/2025.html#113 2301 Fixed-Head Drum
https://www.garlic.com/~lynn/2025.html#115 2301 Fixed-Head Drum

late 70s, I tried to get 2305-like "multiple exposure" (aka multiple
subchannel addresses, where controller could do real-time scheduling
of requests queued at the different subchannel addresses) for 3350
fixed-head feature, so I could do (paging) data transfer overlapped
with 3350 arm seek. There was group in POK doing "VULCAN", an
electronic disk ... and they got 3350 "multiple exposure" work
vetoed. Then VULCAN was told that IBM was selling every memory chip it
made as (higher markup) processor memory ... and canceled VULCAN,
however by then it was too late to resurrect 3350 multiple exposure
(and went ahead with non-IBM 1655).

trivia: after the decision to add virtual memory to all 370s, some of
the science center (4th flr) splits off and takes over the IBM Boston
Programming Center (3rd flr) for the VM370 Development group (morph
CP67->VM370). At the same time there was a joint effort between
Endicott and the Science Center to add 370 virtual machines to CP67
("CP67H", the new 370 instructions and the different format for 370
virtual memory). When that was done there were then further CP67 mods
for CP67I, which ran on the 370 architecture (running in CP67H 370
virtual machines for a year before the first engineering 370 with
virtual memory was ready to test ... by trying to IPL CP67I). As more
and more 370s w/virtual memory became available, three engineers from
San Jose came out to add 3330 and 2305 device support to CP67I for
CP67SJ. CP67SJ was in regular use inside IBM, even after VM370 became
available.

CSC posts:
https://www.garlic.com/~lynn/subtopic.html#545tech
getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

recent posts mentioning cp/67h, cp/67i cp/67sj
https://www.garlic.com/~lynn/2025.html#122 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#121 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#10 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#88 Virtual Machines
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM

--
virtualization experience starting Jan1968, online at home since Mar1970

Why VAX Was the Ultimate CISC and Not RISC

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why VAX Was the Ultimate CISC and Not RISC
Newsgroups: comp.arch
Date: Thu, 06 Mar 2025 16:11:15 -1000

John Levine <johnl@taugh.com> writes:

I'm not so sure. The IBM Fortran H compiler used a lot of the 360's instruction
set and it is my recollection that even the dmr C compiler would generate memory
to memory instructions when appropriate. The PL.8 compiler generated code for 5
architectures including S/360 and 68K, and I think I read somewhere that its
S/360 code was considrably better than the native PL/I compilers.

I get the impression that they found that once you have a reasonable number of
registers, like 16 or more, the benefit of complex instructions drops because
you can make good use of the values in the registers.

re:
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025b.html#4 Why VAX Was the Ultimate CISC and Not RISC

long ago and far away ... comparing native pascal compilers to a
pascal front-end with pl.8 back-end (3033 is a 370 of about 4.5MIPS)


Date: 8 August 1981, 16:47:28 EDT
To: wheeler

the 801 group here has run a program under several different PASCAL
"systems".  The program was about 350 statements and basically
"solved" SOMA (block puzzle..).  Although this is only one test, and
all of the usual caveats apply, I thought the numbers were
interesting...  The numbers given in each case are EXECUTION TIME ONLY
(Virtual on 3033).

6m 30 secs               PERQ (with PERQ's Pascal compiler, of course)
4m 55 secs               68000 with PASCAL/PL.8 compiler at OPT 2
0m 21.5 secs             3033 PASCAL/VS with Optimization
0m 10.5 secs             3033 with PASCAL/PL.8 at OPT 0
0m 5.9 secs              3033 with PASCAL/PL.8 at OPT 3

... snip ... top of post, old email index

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

The joy of FORTRAN

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: The joy of FORTRAN
Newsgroups: alt.folklore.computers, comp.os.linux.misc
Date: Fri, 07 Mar 2025 06:46:48 -1000

cross@spitfire.i.gajendra.net (Dan Cross) writes:

VAX was really meant to unify the product line, offering PDP-10
class performance in something that was architecturally
descended from the PDP-11, which remained attractive at the low
end or embedded/industrial applications.

DEC in the 80s and 90s had a very forward-looking vision of
distributed computing; sadly they botched it on the business
side.

re:
https://www.garlic.com/~lynn/2024e.html#142 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#144 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#145 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#2 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#7 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#16 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#17 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#22 stacks are not hard, The joy of FORTRAN-like languages
https://www.garlic.com/~lynn/2025.html#124 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#125 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#131 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#132 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#133 The joy of FORTRAN

IBM 4300s competed with VAX in the mid-range market and sold in approx
same numbers in small unit orders ... big difference was large
corporations with orders for hundreds of vm/4300s (in at least one
case almost 1000) at a time for placing out in departmental areas
(sort of the leading edge of distributed computing tsunami). old afc
post with decade of VAX sales, sliced&diced by year, model, US/non-US.
https://www.garlic.com/~lynn/2002f.html#0

Inside IBM, conference rooms were becoming scarce since so many were
being converted to vm4341 rooms. IBM was expecting to see the same
explosion in 4361/4381 orders (as 4331/4341), but by the 2nd half of
the 80s, the market was moving to workstations and large PCs. 30 yrs
of pc market share (original articles were separate URLs, now
condensed to a single web page, with original URLs remapped to
displacements):
https://arstechnica.com/features/2005/12/total-share/

I got availability of an early engineering 4341 in 1978 and an IBM
branch heard about it and in jan1979 conned me into doing a national
lab benchmark (60s cdc6600 "rain/rain4" fortran); the lab was looking
at getting 70 for a compute farm (sort of leading edge of the coming
cluster supercomputing tsunami). Then BofA was getting 60 VM/4341s for
a distributed System/R (original SQL/relational) pilot.

upthread mentioned doing HA/CMP (targeted for both
technical/scientific and commercial) cluster scale-up (and then it is
transferred for announce as IBM Supercomputer for technical/scientific
*ONLY*) and we were told we couldn't work on anything with more than
four processors.

801/risc (PC/RT, RS/6000) didn't have coherent cache so didn't have
SMP scale-up ... only scale-up method was cluster ...

1993 large mainframe compared to RS/6000

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990  : 126MIPS; 16-system: 2BIPS; 128-system: 16BIPS

executive we reported to went over to head up AIM/Somerset to do
single-chip power/pc ...  and picked up Motorola 88k bus ... so could
then do SMP configs (and/or clusters of SMP)

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

HSDT

From: Lynn Wheeler <lynn@garlic.com>
Subject: HSDT
Date: 07 Mar, 2025
Blog: Facebook

long winded:

In early 80s, got IBM HSDT project, T1 and faster computer links
(terrestrial and satellite) and some amount of conflicts with the
communication group (note in 60s, IBM had 2701 controller that
supported T1 computer links, but going into 70s and uptake of
SNA/VTAM, issues appeared to cap controller links at 56kbits/sec). I
was working with NSF director and was to get $20M to interconnect NSF
Supercomputer Centers. Then congress cuts the budget, some other
things happen and eventually an RFP is released (in part based on what
we already have running).

NSF 28Mar1986 Preliminary Announcement
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for
online computer conferencing inside IBM likely contributed, folklore
was when corporate executive committee was told, 5of6 wanted to fire
me). The NSF director tried to help by writing the company a letter
(3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and
director of Research, copying IBM CEO) with support from other
gov. agencies ... but that just made the internal politics worse (as
did claims that what we already had operational was at least 5yrs
ahead of the winning bid). As regional networks connect in, it becomes
the NSFNET backbone, precursor to the modern internet.

in between, NSF was asking me to do presentations at some current
and/or possible future NSF Supercomputer locations (old archived
email)
https://www.garlic.com/~lynn/2011b.html#email850325
https://www.garlic.com/~lynn/2011b.html#email850325b
https://www.garlic.com/~lynn/2011b.html#email850326
https://www.garlic.com/~lynn/2011b.html#email850402
https://www.garlic.com/~lynn/2015c.html#email850408
https://www.garlic.com/~lynn/2011c.html#email850425
https://www.garlic.com/~lynn/2011c.html#email850425b
https://www.garlic.com/~lynn/2006w.html#email850607
https://www.garlic.com/~lynn/2006t.html#email850930
https://www.garlic.com/~lynn/2011c.html#email851001
https://www.garlic.com/~lynn/2011b.html#email851106
https://www.garlic.com/~lynn/2011b.html#email851114
https://www.garlic.com/~lynn/2006t.html#email860407
https://www.garlic.com/~lynn/2007.html#email860428
https://www.garlic.com/~lynn/2007.html#email860428b
https://www.garlic.com/~lynn/2007.html#email860430

had some exchanges with Melinda (at princeton)
https://www.leeandmelindavarian.com/Melinda#VMHist

from or to Melinda/Princeton (pucc)
https://www.garlic.com/~lynn/2007b.html#email860111
https://www.garlic.com/~lynn/2007b.html#email860113
https://www.garlic.com/~lynn/2007b.html#email860114
https://www.garlic.com/~lynn/2011b.html#email860217
https://www.garlic.com/~lynn/2011b.html#email860217b
https://www.garlic.com/~lynn/2011c.html#email860407

related
https://www.garlic.com/~lynn/2011c.html#email850426
https://www.garlic.com/~lynn/2006t.html#email850506
https://www.garlic.com/~lynn/2007b.html#email860124

earlier IBM branch brings me into Berkeley "10M" looking at doing
remote viewing
https://www.garlic.com/~lynn/2004h.html#email830804
https://www.garlic.com/~lynn/2004h.html#email830822
https://www.garlic.com/~lynn/2004h.html#email830830
https://www.garlic.com/~lynn/2004h.html#email841121
https://www.garlic.com/~lynn/2011b.html#email850409
https://www.garlic.com/~lynn/2004h.html#email860519

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 08 Mar, 2025
Blog: Facebook

The communication group's dumb 3270s (and PC 3270 emulators) had
point-to-point coax from the machine room to each terminal. Several
large corporations were starting to exceed building load limits from
the weight of all that coax, so needed a much lighter and
easier-to-manage solution ... hence CAT (shielded twisted pair) wiring
and token-ring LAN technology (trivia: my wife was co-inventor on an
early token passing patent used for the IBM Series/1 "chat ring").

IBM workstation division did their own cards for the PC/RT (16bit
PC/AT bus), including 4mbit token-ring card. Then for RS/6000
w/microchannel, they were told they couldn't do their own cards and
had to use PS2 microchannel cards. The communication group was
fiercely fighting off client/server and distributed computing (trying
to preserve their dumb terminal paradigm and install base) and had
severely performance kneecapped microchannel cards. The PS2
microchannel 16mbit token-ring card had lower card throughput than the
PC/RT 4mbit token-ring card (joke was PC/RT 4mbit T/R server would
have higher throughput than RS/6000 16mbit T/R server) ... PS2
microchannel 16mbit T/R card design point was something like 300 dumb
terminal stations sharing single LAN.

The new IBM Almaden research bldg had been heavily provisioned with
IBM wiring, but they found a $69 10mbit ethernet card had higher
throughput than the $800 16mbit T/R card (same IBM wiring) ... and
10mbit ethernet LAN also had higher aggregate throughput and lower
latency. For the card price difference across 300 stations, you could
get several high-performance TCP/IP routers with channel interfaces, a
dozen or more ethernet interfaces, along with FDDI and telco T1 & T3
options.

A 1988 ACM SIGCOMM article analyzed a 30-station ethernet getting
aggregate 8.5mbit throughput, dropping to an effective 8mbit
throughput when all device drivers were put in a low-level loop
constantly transmitting minimum size packets.

posts about communication group dumb terminal strategies
https://www.garlic.com/~lynn/subnetwork.html#terminal
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

recent posts mentioning token-ring:
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025.html#106 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#97 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#96 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#23 IBM NY Buildings
https://www.garlic.com/~lynn/2024g.html#101 IBM Token-Ring versus Ethernet
https://www.garlic.com/~lynn/2024g.html#53 IBM RS/6000
https://www.garlic.com/~lynn/2024g.html#18 PS2 Microchannel
https://www.garlic.com/~lynn/2024f.html#42 IBM/PC
https://www.garlic.com/~lynn/2024f.html#39 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024f.html#6 IBM (Empty) Suits
https://www.garlic.com/~lynn/2024e.html#138 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#102 Rise and Fall IBM/PC
https://www.garlic.com/~lynn/2024e.html#81 IBM/PC
https://www.garlic.com/~lynn/2024e.html#71 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#64 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#56 IBM SAA and Somers
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024d.html#30 Future System and S/38
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#57 IBM Mainframe, TCP/IP, Token-ring, Ethernet
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2024c.html#47 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024c.html#33 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#47 OS2
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024b.html#0 Assembler language and code optimization
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 08 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring

other trivia: In early 80s, I got HSDT, T1 and faster computer links
(both terrestrial and satellite) with lots of conflict with SNA/VTAM
org (note in the 60s, IBM had 2701 controller that supported T1 links,
but the transition to SNA/VTAM in the 70s and associated issues
appeared to cap all controllers at 56kbit/sec links).

2nd half of 80s, I was on Greg Chesson's XTP TAB and there were some
gov. operations involved ... so we took XTP "HSP" to ISO chartered
ANSI X3S3.3 for standardization ... eventually being told that ISO
only did network standards work on things that corresponded to OSI
... and "HSP" didn't because 1) was internetworking ... not in OSI
sitting between layer 3&4 (network & transport), 2) bypassed
layer 3/4 interface and 3) went directly to MAC LAN interface also not
in OSI, sitting in the middle of layer 3. There was a joke that while
IETF required two interoperable implementations for standards
progression, ISO didn't even require a standard to be implementable.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880, 3380, Data-streaming

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880, 3380, Data-streaming
Date: 08 Mar, 2025
Blog: Facebook

when I transfer to San Jose Research in 2nd half of 70s, was allowed
to wander around IBM (& non-IBM) datacenters in silicon valley,
including disk bldg14/engineering & bldg15/product test across the
street. They were running 7x24, prescheduled, stand-alone testing and
mentioned they had recently tried MVS for concurrent testing, but it
had 15min MTBF (requiring manual re-ipl) in that environment. I
offered to rewrite I/O supervisor to make it bullet proof and never
fail so they can do any amount of on-demand, concurrent testing,
greatly improving productivity. Downside was they started calling me
anytime they had a problem and I had to increasingly spend time
playing disk engineer.

Bldg15 got early engineering systems for I/O product testing,
including 1st engineering 3033 outside POK processor development
flr. Testing was only taking a couple percent of 3033 CPU, so we
scrounge up a 3830 and a 3330 string and set up our own private online
service. One morning I get a call asking what I had done over the
weekend to completely destroy online response and throughput. I said
nothing, and asked what had they done. They say nothing, but
eventually find out somebody had replaced the 3830 with early
3880. Problem was the 3880 had replaced the really fast 3830
horizontal microcode processor with a really slow vertical microcode
processor (the only way it could handle 3mbyte/sec transfer was when
switched to data-streaming protocol channels: instead of an end-to-end
handshake for every byte transferred, it transferred multiple bytes
per end-to-end handshake). There was then something like six months of
microcode hacks to try to do a better job of masking how slow the 3880
actually was.
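
A rough back-of-the-envelope model of why the per-byte handshake
matters: with a fixed end-to-end handshake cost per exchange,
transferring more bytes per handshake moves the effective rate toward
the raw channel rate. The overhead and timing numbers below are purely
illustrative assumptions, not measured 3830/3880 or channel figures.

#include <stdio.h>

/* Toy model: each end-to-end handshake costs a fixed amount of time
 * (cable propagation + controller latency), so transferring B bytes
 * per handshake instead of 1 byte per handshake raises the effective
 * rate.  All numbers are illustrative assumptions, not measurements. */

int main(void)
{
    double handshake_us = 0.5;       /* assumed per-handshake overhead, usec */
    double byte_time_us = 0.2;       /* assumed raw per-byte transfer time */
    int    sizes[] = {1, 4, 16, 64}; /* bytes transferred per handshake */

    for (int i = 0; i < 4; i++) {
        int    b    = sizes[i];
        double usec = handshake_us + b * byte_time_us;  /* time per exchange */
        double mbs  = b / usec;                         /* bytes/usec == MB/sec */
        printf("%3d bytes/handshake -> %.2f MB/sec\n", b, mbs);
    }
    return 0;
}

With these assumed numbers, one byte per handshake caps the rate well
below the raw 5 MB/sec, while larger transfers per handshake approach
it ... which is the effect the switch to data-streaming channels was
exploiting.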

Then 3090 was going to have all data-streaming channels and initially
figured that the 3880 was just like the 3830 but with data-streaming
3mbyte/sec transfer, and configured the number of channels based on
that assumption to meet target system throughput. When they found out
how bad the channel busy increase was (unable to totally mask), they
realized they would have to significantly increase the number of
channels (which required an extra TCM; they semi-facetiously claimed
they would bill the 3880 group for the extra 3090 manufacturing cost).

Bldg15 also got an early engineering 4341 in 1978 ... and with some
tweaking of the 4341 integrated channels, it was fast enough to handle
3380 3mbyte/sec data streaming testing (the 303x channel directors
were slow 158 engines with just the 158 integrated channel microcode
and no 370 microcode). To otherwise allow 3380 3mbyte/sec to be
attached to 370 block-mux 1.5mbyte/sec channels, the 3880 "Calypso"
speed matching & ECKD channel programs were created.

Other trivia: people doing air bearing thin-film head simulation were
only getting a couple turn arounds/month on the SJR 370/195. We set
things up on the bldg15 3033 where they could get multiple turn
arounds/day (even though 3033 was not quite half the MIPs of the 195).
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_3370

trivia: there haven't been any CKD DASD made for decades, all being
simulated on industry standard fixed-block devices (dating back to the
3375 on 3370, and it can be seen in the 3380 records/track formulas
where record size is rounded up to the 3380 fixed cell size).
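
What "rounded up to fixed cell size" means in practice: a record's
space on the (simulated) track is allocated in whole cells, so
records/track falls out of integer arithmetic on cell counts. A
minimal sketch, with placeholder cell size, per-record overhead and
track capacity (not the real 3380 constants; see the 3380 reference
material for those):

  from math import ceil

  # placeholder numbers purely for illustration
  CELL_BYTES  = 32
  TRACK_CELLS = 1500
  OVERHEAD_CELLS_PER_RECORD = 15

  def records_per_track(data_len, key_len=0):
      # key and data areas are each rounded up to a whole number of cells
      cells = (OVERHEAD_CELLS_PER_RECORD
               + ceil(key_len / CELL_BYTES)
               + ceil(data_len / CELL_BYTES))
      return TRACK_CELLS // cells

  print(records_per_track(4096))   # e.g. 4Kbyte records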

posts getting to play disk engineering in 14&15:
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

posts mentioning Calypso and ECKD
https://www.garlic.com/~lynn/2024g.html#3 IBM CKD DASD
https://www.garlic.com/~lynn/2024c.html#74 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2023d.html#117 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2018.html#81 CKD details
https://www.garlic.com/~lynn/2015g.html#15 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2015f.html#89 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2015f.html#86 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2012o.html#64 Random thoughts: Low power, High performance
https://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2010h.html#30 45 years of Mainframe
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2009p.html#11 Secret Service plans IT reboot
https://www.garlic.com/~lynn/2007e.html#40 FBA rant

--
virtualization experience starting Jan1968, online at home since Mar1970

Learson Tries To Save Watson IBM

From: Lynn Wheeler <lynn@garlic.com>
Subject: Learson Tries To Save Watson IBM
Date: 08 Mar, 2025
Blog: Facebook

Learson tried (& failed) to block the bureaucrats, careerists, and
MBAs from destroying Watson culture&legacy, pg160-163, 30yrs of
management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

20 yrs later, IBM has one of the largest losses in the history of US
companies and was being reorged into the 13 "baby blues" in
preparation for breaking up the company (take-off on "baby bell"
breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 09 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring
https://www.garlic.com/~lynn/2025b.html#11 IBM Token-Ring

In the 60s, there were a couple of commercial online CP67-based
spin-offs of the science center; also the science center network
morphs into the corporate internal network (and the technology was
also used for the corporate sponsored univ. BITNET).
https://en.wikipedia.org/wiki/BITNET

Quote from one of the 1969 inventors of "GML" at the Science Center
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

Science-Center/corporate network was larger than ARPANET/Internet from
just about the beginning until sometime mid/late 80s (about the time
it was forced to move to SNA/VTAM). At the 1Jan1983 morph of ARPANET
to internetworking, there were approx 100 IMPs and 255 hosts ... at a
time when the internal network was rapidly approaching 1000. I've
periodically commented that ARPANET was somewhat limited by the
requirement for IMPs and associated approvals. Somewhat equivalent for
the corporate network was the requirement that all links be encrypted,
plus various gov. resistance, especially when links crossed national
boundaries. Old archive post with list of corporate locations that
added one or more nodes during 1983:
https://www.garlic.com/~lynn/2006k.html#8

After decision was made to add virtual memory to all IBM 370s, CP67
morphs into VM370 ... and TYMSHARE is providing commercial online
VM370 services
https://en.wikipedia.org/wiki/Tymshare
and in Aug1976 started offering its CMS-based online computer
conferencing for free to the (user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as "VMSHARE" ... archives here
http://vm.marist.edu/~vmshare
accessed via Tymnet:
https://en.wikipedia.org/wiki/Tymnet
After M/D buys TYMSHARE in the early 80s and discontinues some number
of things, the VMSHARE service is moved to a univ. computer.

co-worker at science center responsible for early CP67-based wide-area
network and early days of the corporate internal network through much
of the 70s
https://en.wikipedia.org/wiki/Edson_Hendricks

Trivia: a decade after "GML" was invented, it morphs into ISO standard
"SGML", and after another decade morphs into "HTML" at CERN; first
webserver in the US is at CERN-sister institution, Stanford SLAC on
their VM370 system:
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
commercial, online virtual machine based services
https://www.garlic.com/~lynn/submain.html#online
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

1000th node globe

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 10 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring
https://www.garlic.com/~lynn/2025b.html#11 IBM Token-Ring
https://www.garlic.com/~lynn/2025b.html#14 IBM Token-Ring

also on the OSI subject:

OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems
Interconnection standards to become the global protocol for computer
networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt

Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business interests.
Computer scientist John Day, who designed protocols for the ARPANET,
was a key member of the U.S. delegation. In his 2008 book Patterns in
Network Architecture(Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."

... snip ...

On the 60s 2701 T1 subject, IBM FSD (Federal Systems Division) had
some number of gov. customers with 2701s that were failing in the 80s
and came up with the (special bid) "T1 Zirpel" card for the IBM Series/1.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning "OSI: The Internet That Wasn't"
https://www.garlic.com/~lynn/2025.html#33 IBM ATM Protocol?
https://www.garlic.com/~lynn/2025.html#13 IBM APPN
https://www.garlic.com/~lynn/2024b.html#113 EBCDIC
https://www.garlic.com/~lynn/2024b.html#99 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2013j.html#65 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2013j.html#64 OSI: The Internet That Wasn't

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VM/CMS Mainframe

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VM/CMS Mainframe
Date: 10 Mar, 2025
Blog: Facebook

Predated VM370, originally 60s CP67 wide-area science center network
(RSCS/VNET) .... comment by one of the cambridge science center
inventors of GML in 1969 ...
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

It then morphs into the corporate internal network (larger than
arpanet/internet from just about the beginning until sometime mid/late
80s, about the time internal network was forced to convert to
SNA/VTAM) ... technology also used for the corporate sponsored univ
BITNET
https://en.wikipedia.org/wiki/BITNET
When the decision was made to add virtual memory to all 370s, CP67
morphs into VM370.

Co-worker at science center responsible for RSCS/VNET
https://en.wikipedia.org/wiki/Edson_Hendricks

RSCS/VNET used the CP internal synchronous "diagnose" interface to the
spool file system, transferring 4Kbyte blocks ... on a large loaded
system, spool file contention could limit it to 6-8 4k blocks/sec
... or 24k-32k bytes (240k-320k bits). I got HSDT in early 80s, with
T1 and faster computer links (and lots of battles with the
communication group; aka 60s IBM had the 2701 controller, but the 70s
transition to SNA/VTAM and its issues capped controllers at 56kbit/sec
links) .... supporting T1 links needed 3mbits (300kbytes) for each
RSCS/VNET full-duplex T1. I did a rewrite of the CP spool file system
in VS/Pascal, running in virtual memory, supporting an asynchronous
interface, contiguous allocation, write-behind, and read-ahead, able
to provide RSCS/VNET with multi-mbyte/sec throughput.
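
Working the numbers in that paragraph (keeping the ~10 bits/byte
convention used here for line traffic) shows why the synchronous spool
interface was the bottleneck; a small sketch:

  # figures from the paragraph above
  spool_bytes_sec = [b * 4096 for b in (6, 8)]        # 24,576-32,768 bytes/sec
  spool_bits_sec  = [b * 10 for b in spool_bytes_sec] # ~240k-320k bits/sec

  t1_full_duplex_bits  = 3_000_000                    # ~1.5mbit each direction
  t1_full_duplex_bytes = t1_full_duplex_bits // 10    # ~300kbytes/sec

  print(spool_bytes_sec, t1_full_duplex_bytes)
  # one full-duplex T1 needs roughly 10x what the synchronous spool
  # interface could deliver under contention -- hence the spool rewrite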

Also, releasing the internal mainframe TCP/IP (implemented in
VS/Pascal) was being blocked by the communication group. When that
eventually is overturned, they changed their strategy ... because the
communication group had corporate strategic ownership of everything
that crossed datacenter walls, it had to be released through them.
What shipped got aggregate 44kbytes/sec using nearly whole 3090
CPU. I do RFC1044 support and in some tuning tests at Cray Research
between a Cray and a 4341, got sustained 4341 channel throughput using
only a modest amount of 4341 processor (something like 500 times
improvement in bytes moved per instruction executed). Later in the
90s, the communication group hired a silicon valley contractor to
implement TCP/IP support directly in VTAM. What he demo'ed had TCP
running much faster than LU6.2. He was then told that everybody knows
that a "proper" TCP/IP implementation is much slower than LU6.2 and
they would only be paying for a "proper" implementation.

trivia: The Pisa Science Center had done "SPM" for CP67 (an
inter-virtual-machine protocol, a superset of the later VM370 VMCF,
IUCV and SMSG combination) which was ported to internal VM370 ... and
which was also supported by the product RSCS/VNET (even though "SPM"
never shipped to customers). Late 70s, there was a multi-user spacewar
client/server game done using "SPM" between CMS 3270 users and the
server ... and since RSCS/VNET supported the protocol, users didn't
have to be on the same system as the server. An early problem was
people started doing robot players beating human players (and the
server was modified to increase power use non-linearly as the interval
between user moves dropped below human threshold).

some VM (customer/product) history at Melinda's site
https://www.leeandmelindavarian.com/Melinda#VMHist

trivia: most of the JES2 network code came from HASP (that had "TUCC"
in cols 68-71 of the source); problem was it defined network nodes in
unused entries in the 255-entry pseudo spool device table ... typically
a limit of 160-180 definitions ... and somewhat intermixed network
fields with job control fields in the header. RSCS/VNET had a clean
layered implementation so was able to do a JES2 emulation driver w/o
much trouble. However the internal corporate network had quickly/early
passed 256 nodes and JES2 would trash traffic where origin or
destination wasn't in the local table ... so JES2 systems typically had
to be restricted to boundary nodes behind protective RSCS/VNET
nodes. Also because of the intermixing of fields, traffic between JES2
systems at different release levels could crash the destination MVS
system. As a result a large body of RSCS/VNET JES2 emulation driver
code grew up that understood different origin and destination JES2
formats and adjusted fields for the directly connected JES2 destination.
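
The idea behind those emulation drivers, as a toy sketch (field names
and layouts invented for illustration; real JES2/NJE headers are
nothing this simple): the protective gateway node knows what header
layout each directly connected JES2 release expects and rebuilds the
header for that destination instead of passing the origin's format
straight through.

  # hypothetical header layouts keyed by JES2 release; real NJE headers differ
  LAYOUTS = {
      "jes2_old": ["origin", "dest", "jobid", "class"],
      "jes2_new": ["origin", "dest", "class", "jobid", "priority"],
  }

  def translate(fields, from_rel, to_rel):
      # parse with the origin's layout, rebuild with the destination's
      # layout, defaulting any field the origin release didn't carry
      parsed = dict(zip(LAYOUTS[from_rel], fields))
      return [parsed.get(name, "") for name in LAYOUTS[to_rel]]

  print(translate(["VMNODE1", "POKMVS1", "JOB00042", "A"],
                  "jes2_old", "jes2_new"))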

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HASP, ASP, JES2, JES3, NJE, NJI posts
https://www.garlic.com/~lynn/submain.html#hasp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VM/CMS Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VM/CMS Mainframe
Date: 10 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#16 IBM VM/CMS Mainframe

Aka ... some of the MIT CTSS/7094 people went to the 5th flr for
MULTICS and others went to the IBM science center on 4th flr and did
virtual machines (initially CP40 on 360/40 with virtual memory
hardware mods, morphs into CP67 when 360/67 standard with virtual
memory became available), internal network, lots of online apps,
inventing "GML" in 1969, etc. When decision was made to add virtual
memory to all 370s, some of the people split off from CSC and
take-over the IBM Boston Programming Center on the 3rd flr for VM370
(and cambridge monitor system becomes conversational monitor system).

I had taken a 2 credit hr intro to fortran/computers and at the end of
the semester was hired to do some 360 assembler on a 360/30. The univ
was getting a 360/67 for tss/360 replacing 709/1401; got a 360/30
temporarily replacing the 1401 until the 360/67 arrived (univ shutdown
datacenter on weekends and I had it all dedicated, but 48hrs w/o sleep
made monday classes hard). Within a year of taking the intro class,
the 360/67 comes
in and I was hired fulltime responsible for os/360 (tss/360 didn't
make it to production) and I still had the whole datacenter dedicated
weekends. CSC comes out to install CP67 (3rd after CSC itself and MIT
Lincoln Labs) and I rewrite large amounts of CP67 code ... as well as
adding TTY/ASCII terminal support (all picked up and shipped by
CSC). Tale of CP67 across the tech sq quad at MIT Urban lab ... my
TTY/ASCII support had a max line length of 80 chars ... they do a
quick hack for 1200 chars (a new ASCII device down at Harvard) but
don't catch all the dependencies ... and CP67 crashes 27 times in a
single day.
https://www.multicians.org/thvv/360-67.html
other history by Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning Urban lab and 27 crashes
https://www.garlic.com/~lynn/2024g.html#92 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024c.html#16 CTSS, Multicis, CP67/CMS
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2023f.html#66 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#61 The Most Important Computer You've Never Heard Of
https://www.garlic.com/~lynn/2022c.html#42 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022.html#127 On why it's CR+LF and not LF+CR [ASR33]
https://www.garlic.com/~lynn/2016e.html#78 Honeywell 200
https://www.garlic.com/~lynn/2015c.html#57 The Stack Depth
https://www.garlic.com/~lynn/2013c.html#30 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2010c.html#40 PC history, was search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2006c.html#28 Mount DASD as read-only
https://www.garlic.com/~lynn/2004j.html#47 Vintage computers are better than modern crap !

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VM/CMS Mainframe

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VM/CMS Mainframe
Date: 11 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#16 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#17 IBM VM/CMS Mainframe

In the 60s, IBM had the 2701 that supported T1; then in the 70s move
to SNA/VTAM, issues capped the controllers at 56kbits. I got the HSDT
project in the early 80s, T1 and faster computer links (both
terrestrial and satellite) and lots of conflict with the communication
group. Mid-80s, the communication group prepared a report for the
corporate executive committee that customers wouldn't be needing T1
before sometime in the 90s. What they had done was a survey of 37x5
"fat pipes", multiple parallel 56kbit links treated as a single
logical link ... a declining number of customers from 2-5 parallel
links, dropping to zero by 6 or 7. What they didn't know (or didn't
want to tell the corporate executive committee) was typical telco
tariff for T1 was about the same as six or seven 56kbit links. HSDT
trivial survey found 200 customers with T1 links that had just moved
to non-communication-group hardware & software (mostly non-IBM, but
for gov. customers with failing 2701s, FSD had Zirpel T1 cards for
Series/1s).
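
The tariff arithmetic behind the survey's blind spot is simple; a
quick sketch using the standard line rates (the roughly-equal tariff
is as stated above):

  # capacity side of the "fat pipe" comparison
  links        = 6                      # the 6-or-7 link break-even point
  fat_pipe_bps = links * 56_000         # 336,000 bits/sec aggregate
  t1_bps       = 1_544_000              # T1 line rate

  print(fat_pipe_bps, t1_bps, round(t1_bps / fat_pipe_bps, 1))
  # ~4.6x the capacity for roughly the same telco tariff; customers with
  # real T1 requirements just moved to other hardware and never showed up
  # in a survey that only counted 37x5 "fat pipes"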

Later in the 80s, the communication group had the 3737 that ran a T1
link; a whole boatload of Motorola 68k processors and memory ran a
mini-VTAM emulation simulating a CTCA to the real local host VTAM. The
3737 would immediately reflect ACK to the local host (to keep
transmission flowing) before transmitting traffic to the remote 3737,
which reversed the process at the remote end to the remote host. The
trouble was host VTAM would hit max outstanding transmission long
before ACKs started coming back. Even with short-haul, terrestrial T1,
the latency for returning ACKs resulted in VTAM only being able to use
a trivial amount of the T1. HSDT had early gone to dynamic adaptive
rate-based pacing, easily adapting to much higher transmission rates
than T1, including much longer latency satellite links (and gbit
terrestrial cross-country links).
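
The underlying arithmetic is the bandwidth-delay product: to keep a T1
full you need a round-trip-time's worth of un-ACKed data in flight,
and if the host's window is smaller, the window (not the line) caps
throughput. A small sketch with representative round-trip times (the
RTT values are illustrative assumptions):

  # outstanding data needed to keep a T1 full = bandwidth x round-trip-time
  t1_bytes_sec = 190_000     # ~1.544 mbit/sec at 8 bits/byte

  for label, rtt in (("short-haul terrestrial", 0.010),
                     ("cross-country terrestrial", 0.060),
                     ("geosync satellite hop", 0.500)):
      need = t1_bytes_sec * rtt
      print(label, round(need / 1024, 1), "kbytes outstanding needed")

  # as round-trip time grows, a fixed host window falls further short --
  # which is what local ACK spoofing and rate-based pacing each work around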

Trivia: 1988, IBM branch office asks me if I could help LLNL
standardize some serial stuff they were working with, which quickly
becomes fibre-channel standard ("FCS", including some stuff I had done
in 1980, initially 1gbit/sec, full-duplex, 200mbyte/sec
aggregate). Eventually IBM releases their serial channel with ES/9000
as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK
engineers become involved with FCS and define a protocol that
radically limits throughput, eventually released as FICON. The most
recent public benchmark I've found was z196 "Peak I/O" getting 2M IOPS
using 104 FICONs (20K IOPS/FICON). At the same time there was (native)
FCS announced for E5-2600 server blades claiming over a million IOPS
(two such FCS having higher throughput than 104 FICONs).

Even if SNA was saturating T1, it would be about 150kbytes/sec (late
80s w/o 3737 spoofing host VTAM, lucky to be 10kbytes/sec) ... HSDT
saturating cross-country 80s native 1gbit FCS would be 100mbytes/sec
... IBM 3380 at 3mbyte/sec ... would need a 33-drive 3380 disk RAID at
both ends. Native FCS 3590 tape was 42mbyte/sec (with 3:1 compression).

2005 TS1120, IBM & non-IBM, "native" data transfer up to 104mbytes/sec
(up to 1.5tbytes at 3:1 compressed)
https://asset.fujifilm.com/www/us/files/2020-03/71d28509834324b81a79d77b21af8977/359X_Data_Tape_Seminar.pdf

other trivia: Internal mainframe tcp/ip implementation was done in
vs/pascal ... and mid-80s communication group was blocking
release. When that got overturned, they changed their tactic and said
that since they had corporate strategic responsibility for everything
that crossed datacenter walls, it had to be released through
them. What shipped got aggregate 44kbytes/sec using nearly whole 3090
CPU. I then did the changes to support RFC1044 and in some tuning
tests at Cray Research between a Cray and 4341, got sustained 4341
channel throughput using only modest amount of 4341 CPU (something
like 500 times improvement in bytes moved per instruction executed).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
FICON and FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

some old 3737 email:
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2018f.html#email880715
https://www.garlic.com/~lynn/2011g.html#email881005

some posts mentioning 3737:
https://www.garlic.com/~lynn/2025.html#35 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024f.html#116 NASA Shuttle & SBS
https://www.garlic.com/~lynn/2024e.html#95 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024e.html#91 When Did "Internet" Come Into Common Use
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023d.html#120 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023c.html#57 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2022c.html#80 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2021j.html#32 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#31 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#16 IBM SNA ARB
https://www.garlic.com/~lynn/2021h.html#49 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021d.html#14 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#97 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#83 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2019d.html#117 IBM HONE
https://www.garlic.com/~lynn/2019c.html#35 Transition to cloud computing
https://www.garlic.com/~lynn/2019b.html#16 Tandem Memo
https://www.garlic.com/~lynn/2018f.html#110 IBM Token-RIng
https://www.garlic.com/~lynn/2018b.html#9 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017.html#57 TV Show "Hill Street Blues"
https://www.garlic.com/~lynn/2016b.html#82 Qbasic - lies about Medicare
https://www.garlic.com/~lynn/2015g.html#42 20 Things Incoming College Freshmen Will Never Understand
https://www.garlic.com/~lynn/2015e.html#31 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015e.html#2 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015d.html#47 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014j.html#66 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014b.html#46 Resistance to Java
https://www.garlic.com/~lynn/2013n.html#16 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013j.html#66 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2012o.html#47 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012m.html#24 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design
https://www.garlic.com/~lynn/2012j.html#89 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2012j.html#87 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2012g.html#57 VM Workshop 2012
https://www.garlic.com/~lynn/2012f.html#92 How do you feel about the fact that India has more employees than US?
https://www.garlic.com/~lynn/2012d.html#20 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2011p.html#103 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011g.html#77 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011g.html#75 We list every company in the world that has a mainframe computer

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VM/CMS Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VM/CMS Mainframe
Date: 11 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#16 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#17 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#18 IBM VM/CMS Mainframe

trivia: source code card sequence numbers were used by the source
update system (CMS update program). As an undergraduate in the 60s, I
was changing so much code that I created the "$" convention ... a
preprocessor (to the update program) would generate the sequence
numbers for new statements before passing a work/temp file to the
update command. After joining the science center and the decision to
add virtual memory to all 370s, the joint project with Endicott was to
1) add virtual 370 machine support to CP67 (running on a real 360/67)
and 2) modify CP67 to run on virtual memory 370 ... which included
implementing multi-level source update (originally done in EXEC
recursively applying source updates) ... the 370 CP67 was running in a
CP67 370 virtual machine for a year before the 1st engineering 370
(w/virtual memory) was operational (ipl'ing the 370 CP67 was used to
help verify that machine).
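
A minimal sketch of the sequenced-update idea (the control syntax here
is simplified and hypothetical, not the actual CMS UPDATE language):
every source card carries a sequence number, an update file inserts or
deletes cards by sequence number, and "multi-level" just means
applying a stack of such update files, in order, against the base
source. The "$" preprocessor's job was generating sequence numbers for
the inserted cards.

  # toy sequenced-update applier: a source is a list of (seqno, card-text);
  # ("D", seqno, None) deletes that card,
  # ("I", seqno, [(newseq, text), ...]) inserts new cards after it
  def apply_update(source, update):
      deletes = {seq for op, seq, _ in update if op == "D"}
      inserts = {seq: cards for op, seq, cards in update if op == "I"}
      out = []
      for seq, text in source:
          if seq not in deletes:
              out.append((seq, text))
          out.extend(inserts.get(seq, []))
      return out

  def apply_levels(source, updates):   # "multi-level": apply levels in order
      for u in updates:
          source = apply_update(source, u)
      return source

  base = [(1000, "LR 1,2"), (2000, "SR 3,3"), (3000, "BR 14")]
  lvl1 = [("I", 2000, [(2100, "AR 3,4")])]
  lvl2 = [("D", 1000, None)]
  print(apply_levels(base, [lvl1, lvl2]))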

trivia: mid-80s, got a request from Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist
asking for a copy of the original multi-level source update done in
exec. I had triple-redundant tapes of archived files from the 60s&70s
... and was able to pull it off an archive tape. It was fortunate
because not long after, Almaden Research had an operational problem
mounting random tapes as scratch and I lost nearly a dozen tapes
... including all three replicated tapes with the 60s&70s archive.

Internet trivia: one of the people that worked on the multi-level
update implementation at CSC was an MIT student ... who went on later
to do DNS.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts mentioning CSC, MIT student, multi-level update
https://www.garlic.com/~lynn/2024b.html#74 Internet DNS Trivia
https://www.garlic.com/~lynn/2019c.html#90 DNS & other trivia
https://www.garlic.com/~lynn/2017i.html#76 git, z/OS and COBOL
https://www.garlic.com/~lynn/2014e.html#35 System/360 celebration set for ten cities; 1964 pricing for oneweek
https://www.garlic.com/~lynn/2013f.html#73 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013e.html#85 Sequence Numbrs (was 32760?
https://www.garlic.com/~lynn/2013b.html#61 Google Patents Staple of '70s Mainframe Computing
https://www.garlic.com/~lynn/2011p.html#49 z/OS's basis for TCP/IP
https://www.garlic.com/~lynn/2007k.html#33 Even worse than UNIX

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose and Santa Teresa Lab

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose and Santa Teresa Lab
Date: 13 Mar, 2025
Blog: Facebook

IBM convention was to name labs after the closest post office ... which
was "Coyote" ... it was quickly changed after a spring demonstration
on the capitol steps by the San Fran Women's "COYOTE" union. By 1980,
STL was bursting at the seams and 300 people (and terminals) from the
IMS organization were being moved to an offsite bldg (just south of
the main plant site) with dataprocessing back to the STL machine room.

I had transferred to SJR (bldg28 on plant site) and got to wander
around IBM (and non-IBM) datacenters in silicon valley, including disk
bldg14/engineering and bldg15/product test across the street. They
were running 7x24, prescheduled, stand alone testing and mentioned
that they had recently tried MVS but it had 15min MTBF (requiring
manual re-ipl). I offer to rewrite I/O supervisor to make it bullet
proof and never fail, allowing any amount of on-demand testing
... greatly improving productivity. Downside was I would get sucked
into any kind of problem they might have and had to increasingly play
disk engineer.

Then in 1980, STL cons me into doing channel-extender support for the
IMS people being moved offsite. They had tried "remote 3270" and found
human factors totally unacceptable ... channel-extender allowed
channel attached 3270 controllers to be placed at the offsite bldg,
resulting in no perceived difference in human factors between offsite
and inside STL.

Then they found that the systems with channel-extenders had 10-15%
greater throughput than systems w/o. STL had spread all the channel
attached 3270 controllers across all block-mux channels with 3830 disk
controllers. The channel-extender boxes had significantly less channel
busy (than native channel attached 3270 controllers) for same amount
of 3270 terminal traffic ... improving disk I/O throughput.

In SJR, I worked with Jim Gray and Vera Watson on original
SQL/relational, System/R ... and while STL (and rest of company) was
preoccupied with the next, new, greatest DBMS "EAGLE", managed to do
tech transfer (under the "radar") to Endicott for SQL/DS. Then when
Jim left IBM for Tandem in fall of 1980, he palms off (on me) DBMS
consulting with the STL IMS group (Vern Watts)
https://www.vcwatts.org/ibm_story.html

Then when "EAGLE" implodes, request was made for how fast could
System/R be ported to MVS ... which eventually ships as "DB2"
(originally for decision support only).

getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
channel extender support
https://www.garlic.com/~lynn/submisc.html#channel.extender
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

STL T3 microwave to bldg12

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose and Santa Teresa Lab

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose and Santa Teresa Lab
Date: 14 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#20 IBM San Jose and Santa Teresa Lab

BofA was also getting 60 vm/4341s for System/R

RIP
https://www.mercurynews.com/obituaries/vernice-lee-watts/
for some reason, one of a couple of connections still on linkedin

Note, also did similar channel-extender for Boulder ... then got HSDT
project in early 80s, T1 and faster computer links (both terrestrial
and satellite) and many conflicts with the communication group. Note
in 60s, IBM had 2701 controller supporting T1, then in 70s the move to
SNA/VTAM and the issues appeared to cap controllers at 56kbits/sec.

HSDT first long-haul T1 satellite was between Los Gatos lab and
Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston. Los Gatos had collins digital radio (C-band
microwave) to San Jose bldg12 (similar to STL microwave pictured
previously). Both Kingston and San Jose had T3 C-band 10M satellite dishes.
Later HSDT got its own Ku-band TDMA satellite system with 4.5M dishes
in Los Gatos and Yorktown and 7M dish in Austin (and I got part of Los
Gatos wing with offices and labs).

Before research moved up the hill to Almaden, bldg28 had earthquake
remediation ... adding new bldg around the old bldg. Then bldg14 got
earthquake remediation and engineering (temporarily) moved to bldg86
(offsite, near the moved IMS group). Bldg86 engineering also got an
EVE (endicott verification engine, custom hardware used to verify VLSI
chip design, something like 50,000 times faster than software on
3033). Did a T1 circuit from Los Gatos to bldg12 to bldg86 ... so
Austin could use the EVE to verify RIOS (chip set for RS/6000); claims
it helped bring RIOS in a year early. Since then bldgs 12, 15, 28, 29
and several others, have all been plowed under.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
channel extender support
https://www.garlic.com/~lynn/submisc.html#channel.extender
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose and Santa Teresa Lab

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose and Santa Teresa Lab
Date: 14 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#20 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025b.html#21 IBM San Jose and Santa Teresa Lab

After Future System imploded, there was a mad rush to get products
back into the 370 product pipelines, including quick&dirty 3033&3081
http://www.jfsowa.com/computer/memo125.htm

(and before transferring to SJR on the west coast), was con'ed into
helping with an SMP 16-CPU 370 and we got the 3033 processor engineers
to help in their spare time (a lot more interesting than remapping 168
logic to 20% faster chips), which everybody thought was really great
until somebody tells the head of POK that it could be decades before
POK's favorite son operating system (MVS) had (effective) 16-CPU
support (IBM MVS pubs claiming 2-CPU support only getting 1.2-1.5
times the throughput of a single processor); POK doesn't ship a 16-CPU
machine until the turn of the century. Then the head of POK invites
some of us to never visit POK again, and directs the 3033 processor
engineers heads down and no distractions.

After transferring to SJR, bldg15 (across the street) gets 1st
engineering 3033 outside POK processor engineering for I/O testing
(testing only takes percent or two of CPU, so we scrounge up 3830 and
string of 3330s for private online service). Then 1978, they also get
an engineering 4341 and in Jan1979, branch office cons me into doing
4341 benchmark for national labs looking at getting 70 for compute
farm (sort of the leading edge of the coming cluster supercomputing
tsunami).

disclaimer: last product we did at IBM was HA/6000, started out for
NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scaleup with national
labs (LLNL, LANL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (Oracle, Sybase, Ingres, Informix that had VAXCluster support
in same source base with Unix, I also do a distributed lock manager
supporting VAXCluster semantics to ease ATEX port). Early JAN92, have
meeting with Oracle CEO where IBM/AWD Hester tells Ellison that we
would have 16-system clusters by mid92 and 128-system clusters by
ye92. Then late Jan92, cluster scaleup is transferred for announce as
IBM Supercomputer (for technical/scientific *ONLY*) and we are told we
can't work with anything that has more than four processors, we leave
a few months later.

1993 mainframe/RS6000 (industry benchmark; no. program iterations
compared to reference platform)

• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990  : 126MIPS; 16CPU cluster: 2BIPS; 128CPU cluster: 16BIPS

The executive we had been reporting to moves over to head up
Somerset/AIM (Apple, IBM, Motorola) doing the single-chip 801/risc
power/pc ... also using the motorola 88k risc bus enabling SMP
multiprocessor configurations. However, the new i86/Pentium generation
has i86 instructions hardware pipelined and translated to RISC
micro-ops (on the fly) for actual execution (negating the RISC
throughput advantage compared to i86).

• 1999 single IBM PowerPC 440 hits 1,000MIPS
• 1999 single Pentium3 hits 2,054MIPS (twice PowerPC 440)
• Dec2000 z900, 16 processors, 2.5BIPS (156MIPS/proc)

• 2010 E5-2600, two XEON 8core chips, 500BIPS (30BIPS/proc)
• 2010 z196, 80 processors, 50BIPS (625MIPS/proc)

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HA/CMP processor
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

Forget About Cloud Computing. On-Premises Is All the Rage Again

From: Lynn Wheeler <lynn@garlic.com>
Subject: Forget About Cloud Computing. On-Premises Is All the Rage Again
Date: 15 Mar, 2025
Blog: Facebook

Forget About Cloud Computing. On-Premises Is All the Rage Again. From
startups to enterprise, companies are lowering costs and regaining
control over their operations
https://towardsdatascience.com/forget-about-cloud-computing-on-premises-is-all-the-rage-again/

90s, cluster computing started to become the "rage" ... similar
technologies for both cloud and supercomputing ... large scale
assembly of commodity parts for (at most) 1/3rd the price of brand
name computers. Then some brand name vendors started doing "white box"
assembly of commodity parts for customer on-site cluster computing (at
reduced price). A decade ago, industry news was claiming open system
server part vendors were shipping at least half their product directly
to large cloud computing operations (that would assemble their own
systems), and IBM sells off its brand name open system server
business. A large cloud operation can have multiple score
megadatacenters around the world, each megadatacenter with at least
half a million blade servers.

These operations had so radically reduced their server costs that
things like power consumption were increasingly becoming a major cost,
and they were putting heavy pressure on server part makers to optimize
power consumption ... threatening to move to chips optimized for
battery operation (reduced individual system peak compute power,
compensated for by a larger number of systems with equivalent
aggregate computation at lower aggregate power consumption). System
costs had been so radically reduced that any major improvement in part
power consumption could easily justify swapping out old systems for
new.

There were stories of cloud operations providing a service where a
credit card could be used to spin up an on-demand cluster
supercomputer that would rank among the largest in the world.

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

Forget About Cloud Computing. On-Premises Is All the Rage Again

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Forget About Cloud Computing. On-Premises Is All the Rage Again
Date: 16 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#23 Forget About Cloud Computing. On-Premises Is All the Rage Again

I had taken 2 credit hr intro to fortran/computers and at end of
semester was hired to rewrite 1401 MPIO in 360 assembler for
360/30. Univ. shutdown datacenter on the weekend and I had the whole
datacenter dedicated, although 48hrs w/o sleep made monday classes
hard. Univ was getting 360/67 for tss/360 replacing 709/1401 and got a
360/30 replacing 1401 temporarily pending arrival of the 360/67. 360/67
arrived within yr of taking intro class and I was hired fulltime
responsible for os/360 (tss/360 never came to production). Then CSC
comes out to install CP67 (3rd after CSC itself and MIT Lincoln Labs)
and I mostly play with it during my dedicated weekend time.

Then before I graduate, I'm hired fulltime into a small group in the
Boeing CFO office to help with consolidating all dataprocessing into
an independent business unit (including offering services to
non-Boeing entities). I think the Renton datacenter was the largest in the world,
360/65s arriving faster than they could be installed, boxes constantly
being staged in the hallways around the machine room (joke that Boeing
was getting 360/65s like other companies got keypunches, precursor to
cloud megadatacenters). Lots of politics between Renton director and
CFO who only had a 360/30 up at Boeing field for payroll (although
they enlarge the machine room to install 360/67 for me to play with
when I wasn't doing other stuff).

During the 60s, there were also two spin-offs of CSC that began
offering CP67/CMS commercial online services (specializing in
wallstreet financial industry). This was in the period when IBM
rented/leased 360 computers and charges were based on the "system
meter" ... which ran whenever the CPU(s) and/or any I/O channels were
busy (CPU(s) and all channels had to be idle for 400ms before system
meter stopped). To reduce the IBM billing and people costs, CSC and
the commercial spinoffs modified CP67 for offshift "dark room"
operation with no humans present and terminal channel programs that
allowed channels to stop (but were instantly "on" whenever any
characters arrived) part of 7x24 availability (sort of cloud
equivalent of systems that would go dormant drawing no power when
idle, but instantly operational "on-demand"). Trivia: long after IBM
had switched to selling computers, IBM's "MVS" operating system still
had a 400ms timer event that would have guaranteed that the system
meter never stopped.
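
A toy model of that "system meter" behavior (the busy intervals are
invented; the 400ms idle threshold is as described above): the meter
accrues whenever CPU or any channel is busy and only stops after 400ms
of everything idle, so any recurring wakeup on a shorter period keeps
it running continuously.

  # toy "system meter": keeps running through any idle gap shorter than 400ms
  def metered_time(busy_intervals, idle_threshold=0.4):
      total, prev_end = 0.0, None
      for start, end in sorted(busy_intervals):
          if prev_end is not None and start - prev_end < idle_threshold:
              start = prev_end          # gap under 400ms: meter never stopped
          total += end - start
          prev_end = end
      return total

  # a timer pop just under every 400ms (the effect of MVS's 400ms timer
  # event) keeps the meter at ~100% even though real work is ~1% busy
  pops = [(i * 0.39, i * 0.39 + 0.01) for i in range(100)]
  print(round(metered_time(pops), 2), "metered seconds for ~1s of actual work")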

IBM CSC
https://www.garlic.com/~lynn/subtopic.html#545tech
online computer services posts
https://www.garlic.com/~lynn/submain.html#online
cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880, 3380, Data-streaming

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880, 3380, Data-streaming
Date: 16 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming

When I transferred to SJR on the west coast, got to wander around IBM
(and non-IBM) datacenters in silicon valley, including disk
bldg14/engineering and bldg15/product test across the street. They
were running 7x24, pre-scheduled, stand-alone testing and mentioned
that they had recently tried MVS, but it had 15min MTBF (in that
environment) requiring manual re-ipl. I offer to rewrite I/O system to
make it bullet proof and never fail, allowing any amount of on-demand
testing, greatly improving productivity. Bldg15 then got 1st
engineering 3033 (outside POK cpu engineering) and since testing only
used a percent or two of CPU, we scrounge up a 3830 and a 3330 string
for our own, private online service. At the time, air bearing
simulation (for thin film head design) was only getting a couple turn
arounds a month on SJR 370/195. We set it up on bldg15 3033 (slightly
less than half 195 MIPS) and they could get several turn arounds a
day.

A couple years later, when 3380 was about to ship, FE had a test
stream of 57 hardware-simulated errors that were likely to occur; in
all 57 cases, MVS was (still) crashing and in 2/3rds of the cases
there was no indication of what caused the failure
https://www.garlic.com/~lynn/2007.html#email801015

first thin film head was 3370
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

then used for 3380; original 3380 had 20 track spacings between each
data track, then cut the spacing in half for double the capacity, then
cut the spacing again for triple the capacity (3380K). The "father of
801/risc" then talks me into helping with a "wide" disk head design,
read/write 16 closely spaced data tracks in parallel (plus follow two
servo tracks, one on each side of 16 data track groupings). Problem
was data rate would have been 50mbytes/sec at a time when mainframe
channels were still 3mbytes/sec. However 40mbyte/sec disk arrays were
becoming common and Cray channel had been standardized as HIPPI.
https://en.wikipedia.org/wiki/HIPPI
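
Rough arithmetic on why the wide head implied that data rate (using
the 3380's ~3mbyte/sec transfer rate as the per-track figure; an
approximation, not a spec):

  tracks = 16
  per_track_mbytes = 3                 # roughly the 3380 single-track rate
  print(tracks * per_track_mbytes)     # ~48 mbyte/sec, vs 3mbyte/sec channels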

1988, IBM branch asks if I could help LLNL (national lab) standardize
some serial stuff they were working with, which quickly becomes fibre
channel standard ("FCS", initially 1gbit, full-duplex, got RS/6000
cards capable of 200mbytes/sec aggregate for use with 64-port FCS
switch). In 1990s, some serial stuff that POK had been working with
for at least the previous decade is released as ESCON (when it is
already obsolete, 17mbytes/sec). Then some POK engineers become
involved with FCS and define heavy weight protocol that significantly
reduces ("native") throughput, which ships as "FICON". Latest public
benchmark I've seen was z196 "Peak I/O" getting 2M IOPS using 104
FICON. About the same time a FCS is announced for E5-2600 blades
claiming over a million IOPS (two such FCS having higher throughput
than 104 FICON). Also IBM pubs recommended that SAPs (system assist
processors that do actual I/O) be held to 70% CPU (or around 1.5M
IOPS).
https://en.wikipedia.org/wiki/Fibre_Channel
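
The per-channel arithmetic behind that comparison, straight from the
figures quoted above:

  z196_iops, ficons = 2_000_000, 104
  per_ficon = z196_iops / ficons
  print(round(per_ficon))              # ~19,230 IOPS per FICON

  fcs_iops = 1_000_000                 # claimed for one native FCS (E5-2600)
  print(round(fcs_iops / per_ficon))   # one FCS ~ 52 FICONs; two exceed all 104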

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880, 3380, Data-streaming

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880, 3380, Data-streaming
Date: 16 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming

Were the problems with 3081 data-streaming 3mbyte channels, or with
earlier 370s? To allow 3mbyte 3380s to be used with 370 1.5mbyte
channels, there was the 3880 "Calypso" speed matching buffer &
(original) ECKD CCWs ... but it had enormous problems (old email from
07Sep1982 mentions a large number of severity ones, engineers on site
for the hardware problems ... but claims that the ECKD software was in
much worse shape).
https://www.garlic.com/~lynn/2007e.html#email820907b

Selector & block mux channels did an end-to-end handshake for every
byte transferred and aggregate channel length was capped at 200ft.
Data streaming channels (for 3mbyte/sec 3380s) did multiple byte
transfers for each end-to-end handshake and increased the aggregate
channel length to 400ft.

1978, bldg15 (also) got engineering 4341/E5 and in jan1979, a branch
office gets me to do a benchmark for national lab that was looking at
getting 70 for compute farm (sort of leading edge of the coming
cluster supercomputing tsunami).

decade later, last product we did at IBM was HA/6000, started out for
NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scaleup with national
labs (LLNL, LANL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (Oracle, Sybase, Ingres, Informix that had VAXCluster support
in same source base with Unix, I also do a distributed lock manager
supporting VAXCluster semantics to ease ATEX port). Early JAN92, have
meeting with Oracle CEO where IBM/AWD Hester tells Ellison that we
would have 16-system clusters by mid92 and 128-system clusters by
ye92. Then late Jan92, cluster scaleup is transferred for announce as
IBM Supercomputer (for technical/scientific *ONLY*) and we are told we
can't work with anything that has more than four processors, we leave
a few months later.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880, 3380, Data-streaming

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880, 3380, Data-streaming
Date: 16 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#26 IBM 3880, 3380, Data-streaming

Future System was completely different from 370 and was going to
completely replace it (internal politics was killing off 370 efforts
and the lack of new 370 stuff during the period is credited with
giving clone 370 makers their market foothold). When FS imploded,
there was a mad
rush to get stuff back into 370 product pipelines, including
quick&dirty 3033&3081
http://www.jfsowa.com/computer/memo125.htm

About the same time, the head of POK managed to convince corporate to
kill the vm370 project, shutdown the development group and transfer
all the people to POK for MVS/XA (Endicott managed to save the VM370
product mission for the mid-range, but had to recreate a development
group from scratch). Some of the people that went to POK developed the
primitive virtual machine VMTOOL (in 370/xa architecture, requiring
the SIE instruction to move in/out of virtual machine mode) in support
of MVS/XA development.

Then customers weren't moving to MVS/XA as fast as predicted; however,
Amdahl was having better success, able to run both MVS and MVS/XA
concurrently on the same machine with their (microcode hypervisor)
"Multiple Domain". As a result, VMTOOL was packaged 1st as VM/MA
(migration aid) and then VM/SF (system facility), able to run MVS and
MVS/XA concurrently on 3081. However, because VMTOOL and SIE were
originally never intended for production operation, and because of
limited microcode memory, SIE microcode had to be "paged" (part of the
3090 claim was that 3090 SIE was designed for performance from the
start).

Then POK decided they wanted a few hundred people to create VM/XA,
bringing VMTOOL up to the feature, function and performance of VM370
... the counter from Endicott was that a sysprog in IBM Rochester had
added full 370/XA support to VM/370 ... POK wins.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

some vmtool, vm/ma, vm/sf, vm/xa posts
https://www.garlic.com/~lynn/2025.html#20 Virtual Machine History
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024.html#121 IBM VM/370 and VM/XA
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2021c.html#56 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM WatchPad

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM WatchPad
Date: 16 Mar, 2025
Blog: Facebook

IBM WatchPad
https://en.m.wikipedia.org/wiki/IBM_WatchPad

... before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, kildall worked on IBM CP/67 (precursor to VM370) at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

... aka CP/M "Microprocessor" rather than "67"

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html

According to the New York Times, it was Opel who met with Bill Gates,
CEO of the then-small software firm Microsoft, to discuss the
possibility of using Microsoft PC-DOS OS for IBM's
about-to-be-released PC. Opel set up the meeting at the request of
Gates' mother, Mary Maxwell Gates. The two had both served on the
National United Way's executive committee.

... snip ...

then communication group was fiercely fighting off client/server and
distributed computing trying to preserve their dumb terminal paradigm
(aka PCs limited to 3270 emulation)

Late 80s, a senior disk engineer got a talk scheduled at the
communication group's internal annual world-wide conference,
supposedly on 3174 performance, but opened the talk with the statement
that the communication group was going to be responsible for the
demise of the disk division. The disk division was seeing data fleeing
mainframe datacenters to more distributed computing friendly
platforms, with a drop in disk sales. The disk division had come up
with a number of solutions, but they were constantly vetoed by the
communication group with their corporate strategic ownership of
everything that crossed datacenter walls. The GPD/Adstar software
executive partially compensated by investing in distributed computing
startups that would use IBM disks (and periodically asked us to drop
by his investments to lend a hand).

It wasn't just disks and a couple years later IBM has one of the
largest losses in the history of US companies and was being reorged
into the 13 "baby blues" in preparation for breakup of the company
(takeoff on "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.

The communication group was also performance kneecapping PS2
microchannel cards. IBM AWD workstation division had done their own
cards for the PC/RT (PC/AT 16bit bus), including 4mbit token-ring
card. For RS/6000 microchannel, they were told they couldn't do their
own cards, but had to use standard PS2 cards. It turns out the PS2
microchannel 16mbit token-ring card had lower card throughput than the
PC/RT 4mbit token-ring card (joke that PC/RT 4mbit TR server would
have higher throughput than RS/6000 16mbit TR server).

Mid-80s, communication group was trying to block release of mainframe
TCP/IP support. When they lost, they changed their strategy: since they
had corporate responsibility for everything that crossed datacenter
walls, it had to be released through them. What shipped got aggregate
44kbytes/sec using nearly whole 3090 processor. I then do the changes
to support RFC1044 and in some tuning tests at Cray Research between
Cray and 4341, got sustained 4341 channel throughput using only modest
amount of 4341 processor (something like 500 times increase in bytes
moved per instruction executed).

Communication Group preserving dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
IBM Cambridge Science Center
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

Learson Tries To Save Watson IBM

From: Lynn Wheeler <lynn@garlic.com>
Subject: Learson Tries To Save Watson IBM
Date: 17 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#13 Learson Tries To Save Watson IBM
other recent Financial Engineering
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering

Note AMEX and KKR were in competition for private-equity,
reverse-IPO(/LBO) buyout of RJR and KKR wins. Barbarians at the Gate
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away president of AMEX to help. Then
IBM has one of the largest losses in the history of US companies and
was preparing to break up the company when the board hires the former
president of AMEX as CEO to try and save the company, who uses some of
the same techniques used at RJR.
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

Stockman and financial engineering company
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:

IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.

pg465/loc10014-17:

Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.

... snip ...

(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate
Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind
paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st

The company has represented that its dividends and share repurchases
have come to a total of over $159 billion since 2000.

... snip ...

(2016) After Forking Out $110 Billion on Stock Buybacks, IBM
Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a
little?) shift focus needing it for redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud
Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
former AMEX president and IBM CEO
https://www.garlic.com/~lynn/submisc.html#gerstner
retirement/pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback

--
virtualization experience starting Jan1968, online at home since Mar1970

Some Career Highlights

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some Career Highlights
Date: 18 Mar, 2025
Blog: Facebook

Last product did at IBM was HA/CMP. It started out HA/6000 for the
NYTimes to migrate their newspaper system (ATEX) off VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster
support in same source base with Unix (I do a distributed lock
manager supporting VAXCluster semantics to ease the migration). We
did several marketing trips in Europe (a different city every day and
several customers/day) and Far East. One trip to Hong Kong, we were
riding elevator up in large bank building/skyscraper, with the local
marketing team, when a newly minted marketing rep in the back, asks if
I was the "wheeler" of the "wheeler scheduler", I said yes, he said we
studied you at the Univ. of Waterloo (I asked if there was any mention
of the joke I had included in the code).

As undergraduate in the 60s, univ had hired me fulltime responsible
for OS/360 and then CSC came out to install CP67 (3rd after CSC itself
and MIT Lincoln Labs) and I mostly got to play with it during my
dedicated weekend time (although 48hrs w/o sleep made Monday classes
hard). I redid a lot of CP67, including implementing dynamic adaptive
resource management ("wheeler scheduler") which a lot of IBM and
customers ran. After joining IBM one of my hobbies was enhanced
production operating systems for internal datacenters (one of the
first and long time customer was the world-wide online
sales&marketing support HONE systems). After decision to add
virtual memory to 370s, a VM370 development group was formed and in
the morph from CP67->VM370, lots of stuff was dropped and/or
greatly simplified. Starting with VM370R2, I was integrating lots of
stuff from my CP67 into VM370 for (internal) "CSC/VM" .... and the
SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
organization passed resolution asking that the VM370 "wheeler
scheduler" be released to customers.

Trivia: Early JAN1992 had HA/CMP meeting with Oracle CEO, where
IBM/AWD Hester tells Ellison that we would have 16-system clusters by
mid92 and 128-system clusters by ye92. Then late JAN1992, cluster
scaleup was transferred for announce as IBM Supercomputer (for
technical/scientific *ONLY*) and we were told we couldn't work on
anything with more than four processors (we leave IBM a few months
later).

Late 1992, IBM has one of the largest losses in the history of US
companies and was being reorganized into the 13 "baby blues" in
preparation for breaking up the company (take-off on "baby bell"
break-up a decade earlier):
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.

Note: 20yrs before 1992 loss, Learson tried (and failed) to block the
bureaucrats, careerists, and MBAs from destroying Watson
culture&legacy, pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

misc
https://www.enterprisesystemsmedia.com/mainframehalloffame
http://mvmua.org/knights.html
... and IBM Systems mag article gone 404
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
... other details
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
Dynamic Adaptive Resource Management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Some Career Highlights

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some Career Highlights
Date: 19 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#30 Some Career Highlights

Early 80s, I was introduced to John Boyd and would sponsor his
briefings at IBM (in the 50s, as instructor at the USAF weapons school,
he was considered possibly the best jet fighter pilot in the world; he
then went on to redo the original F-15 design, cutting weight nearly in
half, and was then responsible for the YF-16 and YF-17 that become the
F-16 and F-18). In 89/90, the commandant of the Marine Corps leverages
Boyd for a make-over of the Marine Corps (at the time IBM was
desperately in need of make-over). When Boyd passed in 1997, the USAF
had pretty much disowned him and it was the Marines at Arlington and his
effects go to the (Marine) Gray Research Center and library. The former
commandant continued to sponsor Boyd conferences at Marine Corps
University in Quantico. One year, the former commandant comes in after
lunch and speaks for two hrs, totally throwing the schedule off, but
nobody was going to complain. I'm sitting in the back opposite corner of
the room and when he was done, he makes a straight line for me; as he
was approaching, all I could think of was all the Marines I had offended
in the past and that somebody had set me up (he passed last April).

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Forget About Cloud Computing. On-Premises Is All the Rage Again

From: Lynn Wheeler <lynn@garlic.com>
Subject: Forget About Cloud Computing. On-Premises Is All the Rage Again
Date: 19 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#23 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again

Jan1979, got con'ed by IBM branch office into doing engineering 4341
benchmark for national lab looking at getting 70 4341s for compute
farm (sort of leading edge of cluster supercomputing tsunami). Decade
later (1988) last product did at IBM was HA/6000, originally for
NYTimes to move their newspaper system (ATEX) off VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had VAXCluster
support in same source base with Unix, I did a distributed lock
manager supporting VAXCluster semantics to ease the port).

Early Jan1992, there is HA/CMP meeting with Oracle CEO where
IBM/AWD/Hester tells Ellison that we would have 16-system clusters by
mid92 and 128-system clusters by ye92. Then late Jan1992, cluster
scale-up is transferred for announce as IBM supercomputer (for
technical/scientific "ONLY") and we are told we couldn't work on
anything with more than four processors (we leave IBM a few months
later).

Not long after, we are brought in as consultants for a small
client/server startup. Two former Oracle people (that had been in the
Ellison meeting) are there, responsible for something called "commerce
server", and they want to do payment transactions on the server; the
startup had also invented this stuff they called "SSL" they want to
use, it is now frequently called "electronic commerce". I had
responsibility for everything between "commerce" servers and the
payment networks.

Was also working with some other consultants that were doing stuff at
a nearby company called GOOGLE. As a service they were collecting all
web pages they could find on the Internet and supporting a search
service for finding things. The consultants were first using rotating
ip-addresses with DNS A-records for load balancing (but one of the
problems was that DNS responses tended to be cached locally and were
very poor at load-balancing). They then modified the GOOGLE boundary
routers to maintain information about back-end server workloads and
provide dynamic adaptive workload balancing.
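
Not the actual GOOGLE implementation -- just a sketch (python) of the
difference between the two approaches; server names and load figures
are made up:

  # DNS A-record rotation vs boundary-router dynamic adaptive balancing.
  # Everything here is illustrative, not the actual implementation.
  import itertools

  servers = ["be1", "be2", "be3", "be4"]

  # 1) DNS A-record rotation: the authoritative server hands out addresses
  #    round-robin, but resolvers cache answers, so one cached address can
  #    soak up a whole client population regardless of server load.
  rotation = itertools.cycle(servers)
  def dns_round_robin():
      return next(rotation)

  # 2) Boundary-router balancing: keep a current load figure per back-end
  #    server and steer each new connection to the least-loaded one.
  load = {s: 0 for s in servers}
  def report_load(server, value):      # back-ends periodically report load
      load[server] = value
  def route_connection():
      return min(load, key=load.get)

  report_load("be1", 80); report_load("be2", 15)
  report_load("be3", 40); report_load("be4", 55)
  print(dns_round_robin())    # -> "be1", regardless of load
  print(route_connection())   # -> "be2", the least-loaded back-end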

One of the issues was, as the number of servers exploded and they start
assembling their own as part of enormous megadatacenters, they had
also so dramatically reduced system costs that they could provision a
number of servers possibly ten times the normal workload, available
for peak "on-demand" requirements.

Based on work done for "e-commerce", I do a talk "Why Internet Isn't
Business Critical Dataprocessing" that (among others) Postel
(IETF/Internet RFC editor) sponsored at ISI/USC.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
e-commerce payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

some recent posts mentioning "Why Internet Isn't Business Critical Dataprocessing"
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024g.html#80 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#71 Netscape Ecommerce
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#25 Taligent
https://www.garlic.com/~lynn/2024g.html#20 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024f.html#47 Postel, RFC Editor, Internet
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#37 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022f.html#46 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#105 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career

--
virtualization experience starting Jan1968, online at home since Mar1970

3081, 370/XA, MVS/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3081, 370/XA, MVS/XA
Date: 20 Mar, 2025
Blog: Facebook

Future System was completely different from 370 and was going to
replace it (internal politics was killing off 370 efforts during FS
and the claim is that the lack of new 370 during the period is
responsible for giving the clone makers their market foothold; also
IBM marketing had to fine tune their FUD skills). Then when FS
implodes there is a mad rush to get stuff back into the 370 product
pipelines, including kicking off the quick&dirty 3033&3081.
http://www.jfsowa.com/computer/memo125.htm

308x were going to be multiprocessor only and the original two-processor
3081D had lower aggregate processing than a single-processor Amdahl. IBM
then quickly doubled the processor cache size for the 3081K ... which
was about the same aggregate MIPS as a single-processor Amdahl, although
MVS documents said that its 2-CPU multiprocessor support only had
1.2-1.5 times the throughput of a single processor, aka a 2-CPU 3081K
with the same aggregate MIPS as a single-processor Amdahl would only
have 1.5 times (or less) the throughput.

Also ACP/TPF (airline reservation/transaction) systems didn't have
multiprocessor support, and IBM was concerned that the whole market
would transition to the latest Amdahl single processor. Eventually IBM
did offer a 1-CPU 3083 (3081 with one of the processors removed) for the
ACP/TPF market.

There is a story of IBM having a flow sensor between the heat exchange
unit and the TCMs ... but not external to the heat exchange unit. One
customer lost flow external to the heat exchange unit ... so the only
sensor left was thermal; by the time the thermal tripped, it was too
late and all the TCMs fried. IBM then retrofitted flow sensors external
to the heat exchange unit.

Trivia: as undergraduate in the 60s, univ. hired me fulltime
responsible for OS/360, then before I graduate I'm hired fulltime into
small group in Boeing CFO office to help with the formation of Boeing
Computing Services (consolidate all dataprocessing into independent
business unit, including offering service to non-Boeing entities).

A decade ago, I was asked to track down the decision to add virtual
memory to all 370s and found a staff member to the executive making the
decision. Basically MVT storage management was so bad that region sizes
were being specified at four times larger than used ... so a typical
1mbyte 370/165 would only run four concurrent regions, insufficient to
keep the machine busy and justified. Going to a 16mbyte virtual address
space (SVS) allowed increasing the number of regions by a factor of four
(capped at 15 because of the 4bit storage protect keys) with little or
no paging (as systems got larger they ran into the 15 limit and spawned
MVS).
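
Back-of-envelope version of that arithmetic (python); the 256kbyte
"specified" region size is purely an illustrative assumption, not a
figure from the original study:

  real_memory   = 1024              # 1mbyte 370/165, in kbytes
  specified     = 256               # region size as specified (kbytes), assumed
  actually_used = specified // 4    # MVT forced ~4x over-specification

  mvt_regions = real_memory // specified             # -> 4 concurrent regions
  svs_regions = min(real_memory // actually_used,    # ~16 fit the same real memory
                    15)                              # capped at 15 by 4bit protect keys
  print(mvt_regions, svs_regions)                    # -> 4 15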

Turns out Boeing Huntsville had run into something similar with MVT.
They had gotten a two-processor 360/67 for TSS/360 with a lot of 2250s
for CAD work ... but TSS wasn't production, so they ran it as two
360/65s with MVT ... and ran into the MVT problem. They modified MVT
release 13 to run in virtual memory mode (but no paging), which
partially addressed MVT's storage management problems.

I had been writing articles in the 70s about needing an increasing
number of concurrently running applications. In the early 80s, I wrote
that since the beginning of 360, disk relative system performance had
declined by an order of magnitude (disks got 3-5 times faster, systems
got 40-50 times faster, so the only way to keep up is to have a huge
number of concurrent I/Os). Disk division executives took exception and
assigned the division performance group to refute the claim; after a
couple weeks, they came back and said I had slightly understated the
problem.
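
The order-of-magnitude figure is just the ratio of the two speedups;
quick check (python) with the numbers above:

  cpu_speedup  = 45    # systems got 40-50 times faster (midpoint)
  disk_speedup = 4     # disks got 3-5 times faster (midpoint)
  print(cpu_speedup / disk_speedup)   # ~11x decline in disk relative system performance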

Note: article about IBM finding that customers weren't converting
to MVS as planned
http://www.mxg.com/thebuttonman/boney.asp

something similar in the 80s with getting customers to migrate from
MVS->MVS/XA. Amdahl was having better success since they had
(microcode hypervisor) "multiple domain" (similar to LPAR/PRSM a
decade later on 3090) being able to run MVS & MVS/XA concurrently.

After FS imploded, the head of POK had also convinced corporate to
kill the vm/370 product, shut down the development group and transfer
all the people to POK for MVS/XA (Endicott managed to save the VM/370
product mission for the mid-range, but had to recreate a development
group from scratch) ... so there was no equivalent IBM capability for
running MVS/XA and MVS concurrently.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some posts mentioning Boeing Huntsville, CSA, MVS
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2012h.html#57 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2010m.html#16 Region Size - Step or Jobcard

The analysis of the decline in DASD relative system performance was
respun into a SHARE presentation on configuring filesystems for improved
throughput:

SHARE 63 Presentation B874

DASD Performance Review
8:30 August 16, 1984
Dr. Peter Lazarus

IBM Tie Line 543-3811
Area Code 408-463-3811
GPD Performance Evaluation
Department D18
Santa Teresa Laboratory
555 Bailey Avenue
San Jose, CA., 95150

.... snip ...

posts mentioning getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk

a few recent posts reference relative DASD performance
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#107 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#65 Where did CKD disks come from?
https://www.garlic.com/~lynn/2024f.html#9 Emulating vintage computers
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#109 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#92 IBM DASD 3380
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#6 Mainrame Channel Redrive

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370/125

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370/125
Date: 20 Mar, 2025
Blog: Facebook

1975, I got asked to try and get VM/370 running on a 256kbyte 125-II
that was in the office of a foreign shipping company in Manhattan (a
configuration that wasn't supported). There were two problems:

1) there was a microcode bug in the 125's 370 "long" instructions: it
prechecked ending addresses before executing (correct for all 360
instructions and most 370 instructions, but the "long" instructions were
supposed to execute incrementally until the address ran into a problem,
storage protect or end of memory). At boot, VM/370 executed MVCL to
clear memory and find the end of memory; since the precheck prevented
the instruction from executing at all, the machine effectively appeared
as if it had zero memory.
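
Toy model (python, not 370 microcode) of why prechecking the ending
address broke the memory sizing; the 256kbyte size is just the machine
described above:

  MEMORY_KB = 256   # actual real storage on the 125-II

  def mvcl_incremental(length_kb):
      # correct behavior: clear storage incrementally until the address
      # goes bad, leaving registers showing how much was actually done
      return min(length_kb, MEMORY_KB)

  def mvcl_precheck(length_kb):
      # buggy 125 microcode: reject the whole instruction up front if the
      # ending address is out of range -- nothing gets cleared at all
      return length_kb if length_kb <= MEMORY_KB else 0

  print(mvcl_incremental(16 * 1024))   # -> 256  (end of memory found)
  print(mvcl_precheck(16 * 1024))      # -> 0    (machine appears to have zero memory)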

2) the CP67->VM370 morph resulted in lots of kernel bloat ... well over
100kbytes, and VM370 wasn't supported on less than 384kbytes. I had done
some CP67 optimization, including reducing the fixed kernel size to
under 70kbytes ... so was asked to see how close I could get VM370 to
that for the 125.

Then the 125 group asked if I could do multiprocessor support for a
five-processor 125. The 115&125 had a nine-position memory bus for up to
nine microprocessors. All 115 microprocessors were the same (integrated
controllers, 370 cpu, etc) with the microprocessor running 370 microcode
getting about 80KIPS. The 125 was the same but had a faster
microprocessor for the one running 370 microcode, getting about
120KIPS. 125 systems rarely had more than four controller
microprocessors and the single 370 microprocessor, so at least four bus
positions were empty (five 120KIPS processors would get close to
600KIPS, .6MIPS).

At the time Endicott also asked me to help in doing the ECPS microcode
assist for 138/148 ... and I would also implement the same for the 125
multiprocessor, with some fancy tweaks that also moved a lot of the
multiprocessor support into microcode. Then Endicott objected to
releasing the 125 5-CPU machine since it would have higher throughput
than the 148. In the escalation meetings, I had to argue both sides of
the table ... and Endicott eventually won.

So I then did a 370 software implementation of the multiprocessor
support for internal VM370s, initially for HONE (internal world-wide
sales & marketing support online systems) so they could add 2nd
processors to their 168 and 158 systems.

posts mentioning 125 5-CPU multiprocessor
https://www.garlic.com/~lynn/submain.html#VAMPS
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
internal IBM CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

old archived post with initial analysis for 138/148 ECPS
https://www.garlic.com/~lynn/94.html#21

some recent posts mentioning ECPS
https://www.garlic.com/~lynn/2024e.html#33 IBM 138/148
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023f.html#57 Vintage IBM 370/125
https://www.garlic.com/~lynn/2023d.html#95 370/148 masthead/banner
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#24 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#79 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023b.html#64 Another 4341 thread
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022e.html#102 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#87 370/195
https://www.garlic.com/~lynn/2021k.html#38 IBM Boeblingen
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2021c.html#62 MAINFRAME (4341) History

--
virtualization experience starting Jan1968, online at home since Mar1970

3081, 370/XA, MVS/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3081, 370/XA, MVS/XA
Date: 20 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#33 370/XA, MVS/XA

Shortly after graduating and joining IBM, the 370/195 group cons me
into helping with multithreading the machine. The 195 had a 64-entry
pipeline and out-of-order execution, but no branch prediction and no
speculative execution (conditional branches drained the pipeline), so
most codes ran the machine at half throughput. Running two threads,
simulating a 2-CPU multiprocessor, could have higher aggregate
throughput (modulo MVT/MVS 2-CPU multiprocessor support only having
1.2-1.5 times the throughput of a single processor).
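
Rough arithmetic (python) behind the multithreading argument, using the
utilization and MVS multiprocessor figures above as illustrative values:

  single_thread_util = 0.5    # typical code ran the 195 at ~half throughput
  two_thread_util    = min(1.0, 2 * single_thread_util)  # second i-stream fills the pipeline
  hw_gain  = two_thread_util / single_thread_util        # ~2x potential in hardware
  mvs_gain = 1.35             # what MVT/MVS 2-CPU support actually delivered (1.2-1.5x)
  print(hw_gain, mvs_gain)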

This has Amdahl winning the battle to make ACS 360-compatible; shortly
after it gets killed, he leaves IBM and starts his own company. It also
mentions multithreading (and some of the ACS/360 features that show up
more than 20yrs later with ES/9000):
https://people.computing.clemson.edu/~mark/acs_end.html

Sidebar: Multithreading: In summer 1968, Ed Sussenguth investigated
making the ACS/360 into a multithreaded design by adding a second
instruction counter and a second set of registers to the
simulator. Instructions were tagged with an additional "red/blue" bit
to designate the instruction stream and register set; and, as was
expected, the utilization of the functional units increased since more
independent instructions were available.

... snip ...

However, new 195 work got shut down with the decision to add virtual
memory to all 370s (it wasn't worth the effort to add virtual memory to
the 195).

Then came the Future System effort, completely different from 370 and
going to replace it (during FS, internal politics was killing off 370
activities and the lack of new 370 during FS is credited with giving
clone 370 system makers their market foothold)
http://www.jfsowa.com/computer/memo125.htm

Later, after FS implodes and the mad rush to get stuff back into the
370 product pipelines (including kicking off the quick&dirty 3033&3081),
I was asked to help with a 16-CPU multiprocessor (and we con the 3033
processor engineers into working on it in their spare time, a lot more
interesting than remapping 168 logic to 20% faster chips). This was
after work on the 5-CPU 370/125 was killed and after doing 2-CPU VM370
for internal datacenters ... HONE 2-CPU was getting twice the throughput
of 1-CPU (compared to MVS 2-CPU getting only 1.2-1.5 times).
https://www.garlic.com/~lynn/2025b.html#34 IBM 370/125

Everybody thought 16-cpu 370 was really great until somebody told the
head of POK that it could be decades before the POK favorite son
operating system (MVS) had (effective) 16-CPU support (i.e. at the
time MVS docs claimed that MVS 2-CPU support only had 1.2-1.5 times
throughput of a 1-CPU ... multiprocessor overhead increased as number
of CPUs increased; POK doesn't ship a 16-CPU system until after turn
of century). The head of POK then invites some of us to never visit
POK again and directs the 3033 processor engineers, heads down and no
distractions.

SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

some recent posts mentioning 370/195 multithreading
https://www.garlic.com/~lynn/2025.html#32 IBM 3090
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#24 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024e.html#115 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024d.html#101 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#66 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#52 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024c.html#20 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2023f.html#89 Vintage IBM 709
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2022h.html#32 do some Americans write their 1's in this way ?
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#12 Computer Server Market
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#60 370/195
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2021k.html#46 Transaction Memory
https://www.garlic.com/~lynn/2021h.html#51 OoO S/360 descendants
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195

--
virtualization experience starting Jan1968, online at home since Mar1970

FAA ATC, The Brawl in IBM 1964

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: FAA ATC, The Brawl in IBM 1964
Date: 21 Mar, 2025
Blog: Facebook

FAA ATC, The Brawl in IBM 1964
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514

Two mid air collisions 1956 and 1960 make this FAA procurement
special. The computer selected will be in the critical loop of making
sure that there are no more mid-air collisions. Many in IBM want to
not bid. A marketing manager with but 7 years in IBM and less than one
year as a manager is the proposal manager. IBM is in midstep in coming
up with the new line of computers - the 360. Chaos sucks into the fray
many executives- especially the next chairman, and also the IBM
president. A fire house in Poughkeepsie N Y is home to the technical
and marketing team for 60 very cold and long days. Finance and legal
get into the fray after that.

... snip ...

Didn't deal with Fox & people while in IBM, but did a project with
them (after we left IBM) in the company they had formed (after they
left IBM).

The last project we did at IBM was HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
and had lots of dealings with the TA to the FSD president ... who was
also filling in 2nd shift writing ADA code for the latest IBM FAA
modernization project ... and we were also asked to review the overall
design.

HA/6000 was started 1988, originally for NYTimes to move their
newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
when started doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had VAXCluster
support in same source base with Unix). Early Jan1992, in meeting with
Oracle CEO, IBM/AWD/Hester told Ellison that we would have 16-system
clusters by mid92 and 128-system clusters by ye92.

Mid Jan92, gave FSD the latest update on HA/CMP scale-up ("MEDUSA") and
they decided to use it for gov. supercomputers, and the TA informed the
Kingston supercomputer group. Old email from him:


Date: Wed, 29 Jan 92 18:05:00
To: wheeler

MEDUSA uber alles...I just got back from IBM Kingston. Please keep me
personally updated on both MEDUSA and the view of ENVOY which you
have. Your visit to FSD was part of the swing factor...be sure to tell
the redhead that I said so. FSD will do its best to solidify the
MEDUSA plan in AWD...any advice there?

Regards to both Wheelers...

... snip ... top of post, old email index

Then a day or two later, cluster scale-up is transferred to Kingston for
announce as IBM Supercomputer (for technical/scientific *ONLY*) and we
were told we couldn't work on anything with more than four processors
(we leave IBM a few months later). Less than 3weeks later,
Computerworld news 17feb1992 ... IBM establishes laboratory to develop
parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

After leaving IBM, also did some consulting with Fundamental (&
Sequent); the FAA was using FLEX-ES (on Sequent) for 360 emulation; gone
404, but lives on at wayback machine
https://web.archive.org/web/20241009084843/http://www.funsoft.com/
https://web.archive.org/web/20240911032748/http://www.funsoft.com/index-technical.html
and Steve Chen (CTO at Sequent) before IBM bought them and shut it down
https://en.wikipedia.org/wiki/Sequent_Computer_Systems

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

past posts referencing FAA & "Brawl"
https://www.garlic.com/~lynn/2025.html#99 FAA And Simulated IBM Mainframe
https://www.garlic.com/~lynn/2024e.html#100 360, 370, post-370, multiprocessor
https://www.garlic.com/~lynn/2024d.html#12 ADA, FAA ATC, FSD
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS
https://www.garlic.com/~lynn/2024.html#114 BAL
https://www.garlic.com/~lynn/2023f.html#84 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2023d.html#82 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#73 Some Virtual Machine History
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022b.html#97 IBM 9020
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021i.html#20 FAA Mainframe
https://www.garlic.com/~lynn/2021f.html#9 Air Traffic System
https://www.garlic.com/~lynn/2021e.html#13 IBM Internal Network
https://www.garlic.com/~lynn/2021.html#42 IBM Rusty Bucket
https://www.garlic.com/~lynn/2019b.html#88 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2019b.html#73 The Brawl in IBM 1964

--
virtualization experience starting Jan1968, online at home since Mar1970

FAA ATC, The Brawl in IBM 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: FAA ATC, The Brawl in IBM 1964
Date: 21 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#36 FAA ATC, The Brawl in IBM 1964

trivia: I was doing stuff for disk bldg15/product test lab that got an
engineering 4341 in 1978. In Jan1979, branch office cons me into doing
benchmark for national lab that was looking at getting 70 4341s for a
compute farm (sort of the leading edge of the coming cluster
supercomputing tsunami).
https://www.garlic.com/~lynn/2001n.html#6000clusters2

GOVERNMENT AGENCIES GO CRAZY FOR RISC SYSTEM/6000 CLUSTERS (From Ed
Stadnick-Competitive Consultant)

Federal agencies have caught clustermania. Over the last six months,
an increasing number of sites have strung together workstations with
high-speed networks to form powerful replacements or augmentations to
their traditional supercomputers.

At least nine federally funded supercomputer centers recently have
installed workstation clusters, and virtually all of these clusters
are based on IBM Corp.'s RISC System/6000 workstation line.

Growing Interest - In fact, the interest in clusters at federal and
university sites has grown so much that IBM announced last month a
cluster product that it would service and support. IBM's basic cluster
configuration consists of at least two RISC System/6000 workstations,
AIX, network adapters, cables and software packages.

The interest in clusters caught us by surprise, said Irving
Wladawsky-Berger, IBM's assistant general manager of supercomputing
systems. "It is one of these events where the users figured out what
to do with our systems before we did."

Jeff Mohr, the chief scientist at High-Performance Technologies Inc.,
a federal systems integrator, said: "If you look at a Cray Y-MP 2E and
a high-end RISC System/6000... the price differential can be literally
40-to-1. But if you take a look at the benchmarks, particularly on
scalar problems, the differential can be about 5-to-1. So on certain
problems, clustering can be very, very effective."

Agencies that have these cluster include the National Science
Foundation at Cornell University and University of Illinois, DOE's Los
Alamos National Laboratory, FERMI and Livermore National Labs, and the
Naval Research Lab in Washington D.C.

Source: Federal Computer Week Date: May 11, 1992

... snip ...
and
https://www.garlic.com/~lynn/2001n.html#6000clusters3

other trivia: Early 80s, I got HSDT project, T1 and faster computer
links (both satellite and terrestrial). First long haul T1 was between
the IBM Los Gatos lab on the west coast and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston that had a boat load of Floating Point Systems
boxes (including 40mbyte/sec disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems

Cornell University, led by physicist Kenneth G. Wilson, made a
supercomputer proposal to NSF with IBM to produce a processor array of
FPS boxes attached to an IBM mainframe with the name lCAP.

... snip ...

Early on had been working with the NSF director and was supposed to get
$20M to interconnect the NSF Supercomputing Centers. Then congress cuts
the budget, some other things happen and eventually an RFP is released
(in part based on what we already had running), NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying the IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid). As
regional networks connect in, it becomes the NSFNET backbone, precursor
to the modern internet.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Computers in the 60s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Computers in the 60s
Date: 21 Mar, 2025
Blog: Facebook

took 2 credit hr intro to computers, at end of semester was hired to
rewrite 1401 MPIO in 360 assembler for 360/30. Univ was getting 360/67
for tss/360 replacing 709(tape->tape)/1401(unit record front end)
... 360/30 temporarily replacing 1401 pending arrival of
360/67. univ. shutdown datacenter on weekends and i was given the
whole place dedicated (although 48hrs w/o sleep made monday classes
hard). They gave me a bunch of hardware & software manuals and I got
to design and implement my own monitor, device drivers, interrupt
handlers, storage management, error recovery, etc; within a few weeks
had 2000 card program. Within a year, the 360/67 arrives and I was
hired fulltime responsible for os/360 (ran as 360/65, tss/360 didn't
come to production). Before I graduate, univ library got ONR grant to
do online catalogue, part of the money went for 2321 data cell. The
project was also selected as betatest for the original CICS product
... and CICS support and debugging was added to my tasks. CICS wouldn't
come up ... turns out CICS had hard coded some BDAM options (that
weren't covered in the documentation) and the library had built datasets
with a different set of options.

before I graduate, was hired into small group in the Boeing CFO's
office to help with the formation of Boeing Computing Services
(consolidate all dataprocessing into independent business unit,
including offering services to non-Boeing entities). I think the Renton
datacenter was the largest in the world, 360/65s arriving faster than
they could be installed, boxes constantly staged in hallways around the
machine room (joke that Boeing got 360/65s like other companies got key
punches). Lots of politics between the Renton director and the CFO, who
only had a 360/30 up at Boeing Field for payroll, although they enlarge
the machine room to install a 360/67 for me to play with when I wasn't
doing other stuff.

At the univ. CSC had come out to install CP67 (3rd after CSC itself
and MIT Lincoln Labs) which I mostly got to play with during my
weekend dedicated time. CP67 had come with 1052 & 2741 terminal
support with automagic terminal type identification using SAD CCW to
change port scanner type. Univ. had some ASCII TTY 33&35s ... so I
integrate ASCII terminal support in with the automagic terminal
type identification. I then wanted to have a single dial-in number for
all terminal types ("hunt group") ... didn't quite work; while the
scanner type could be changed, IBM had taken a short cut and hard wired
the port baud rate. Univ. then kicks off a clone controller product, a
channel interface board for an Interdata3 programmed to emulate an IBM
controller (with auto baud support). This was upgraded with an
Interdata4 for the channel interface and a cluster of Interdata3s for
port interfaces, which Interdata and later Perkin-Elmer sell (and four
of us are written up for some part of the IBM clone controller
business).
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
CICS &/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

some recent Univ 709, 1401, MPIO, 360/30, 360/67, Boeing CFO, Renton
posts
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#1 Large Datacenters
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"

--
virtualization experience starting Jan1968, online at home since Mar1970

FAA ATC, The Brawl in IBM 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: FAA ATC, The Brawl in IBM 1964
Date: 21 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#36 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2025b.html#37 FAA ATC, The Brawl in IBM 1964

CSC had tried to get 360/50 for hardware modifications to add virtual
memory support and implement virtual machine support, however all the
extra 360/50s were going to FAA ATC effort, so they had to settle for
a 360/40 to modify with virtual memory and they implemented
"CP/40". Then when 360/67 became available standard with virtual
memory, CP/40 morphs into CP/67 (precursor to VM370). I was
undergraduate at univ and fulltime responsible for OS/360 (running on
360/67 as 360/65), when CSC came out to install CP67 (3rd after CSC
itself and MIT Lincoln Labs) and I got to rewrite a lot of CP67
code. Six months later, CSC was having one week CP67 class in LA. I
arrive Sunday and am asked to teach the CP67 class, the CSC members
that were going to teach it had given notice on friday ... leaving for
one of CSC commercial CP67 online spinoffs.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

couple recent posts mentioning CP/40 and teaching one week CP/67 class
https://www.garlic.com/~lynn/2024f.html#40 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024.html#28 IBM Disks and Drums

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM APPN

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM APPN
Date: 22 Mar, 2025
Blog: Facebook

https://www.garlic.com/~lynn/2025.html#0 IBM APPN
https://www.garlic.com/~lynn/2025.html#1 IBM APPN
https://www.garlic.com/~lynn/2025.html#2 IBM APPN
https://www.garlic.com/~lynn/2025.html#12 IBM APPN
https://www.garlic.com/~lynn/2025.html#13 IBM APPN

same time involved in doing HSDT, T1 and faster computer links (both
terrestrial and satellite) ... mentioned in the earlier APPN posts

was asked to see about putting out a VTAM/NCP simulator done by a baby
bell on IBM Series/1 as an IBM Type-1 product ... basically compensating
for lots of SNA/VTAM shortcomings along with many new features (in
part encapsulating SNA traffic tunneled through real networking). Part
of my presentation at the fall 1986 SNA ARB (architecture review board)
meeting in Raleigh:
https://www.garlic.com/~lynn/99.html#67

also part of "baby bell" presentation at spring 1986 IBM user COMMON
meeting
https://www.garlic.com/~lynn/99.html#70

Lots of criticism of the ARB presentation; however the Series/1 data
came from the baby bell production operation and the 3725 data came from
the communication group HONE configurator (if something wasn't accurate,
they should correct their 3725 configurator).

The objective was to start porting it to RS/6000 after it first ships as
an IBM product. Several IBMers involved were well acquainted with the
communication group internal politics and attempted to provide
countermeasures to everything that might be tried, but what happened
next can only be described as reality/truth is stranger than fiction.

CSC trivia: one of my hobbies after joining IBM was enhanced
production operating systems for internal datacenters and HONE (after
CSC itself) was my first and long time customer. Also other CSC
co-workers tried hard to convince CPD that they should use the
(Series/1) Peachtree processor for 3705.

T1 trivia: FSD, for gov. accounts with failing 60s IBM
telecommunication controllers supporting T1, had come out with the
Zirpel T1 card for Series/1 (which hadn't been available to the "baby
bell").

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some recent posts mentioning Series/1 zirpel T1 card
https://www.garlic.com/~lynn/2025b.html#18 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#15 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#99 Terminals
https://www.garlic.com/~lynn/2024g.html#79 Early Email
https://www.garlic.com/~lynn/2024e.html#34 VMNETMAP
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023c.html#35 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1

--
virtualization experience starting Jan1968, online at home since Mar1970

AIM, Apple, IBM, Motorola

From: Lynn Wheeler <lynn@garlic.com>
Subject: AIM, Apple, IBM, Motorola
Date: 23 Mar, 2025
Blog: Facebook

The last product we did at IBM was HA/6000 approved by Nick Donofrio
in 1988 (before RS/6000 was announced) for the NYTimes to move their
newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had vaxcluster
support in same source base with unix). The S/88 product administrator
then starts taking us around to their customers and also has me do a
section for the corporate continuous availability strategy
document ... it gets pulled when both Rochester/AS400 and
POK/(high-end mainframe) complain they couldn't meet the requirements.

When HA/6000 started, the executive that we had first reported to
then goes over to head up Somerset for AIM (later leaves for SGI and
president of MIPS)
https://en.wikipedia.org/wiki/AIM_alliance
to do a single-chip 801/RISC for Power/PC, including the Motorola 88k
bus/cache enabling multiprocessor configurations.

Early Jan1992 have a meeting with Oracle CEO and IBM/AWD Hester tells
Ellison we would have 16-system clusters by mid92 and 128-system
clusters by ye92. Then late Jan92, cluster scale-up is transferred for
announce as IBM Supercomputer (for technical/scientific *ONLY*) and we
are told we can't work on anything with more than four processors (we
leave IBM a few months later).

1992, IBM has one of the largest losses in the history of US companies
and was in the process of being re-orged into the 13 "baby blues" in
preparation for breaking up the company (take off on the "baby bell"
breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.

Not long after leaving IBM, I was brought in as a consultant into a
small client/server startup. Two of the former Oracle people (that were
in the Hester/Ellison meeting) were there, responsible for something
they called "commerce server", and they wanted to do payment
transactions; the startup had also invented this technology they called
"SSL" they wanted to use, it is now frequently called "electronic
commerce" (or ecommerce).

I had complete responsibility for everything between "web servers" and
gateways to the financial industry payment networks. Payment network
trouble desks had 5min initial problem diagnoses ... all circuit
based. I had to do a lot of procedures, documentation and software to
bring packet-based internet up to that level. I then did a talk (based
on ecommerce work) "Why Internet Wasn't Business Critical
Dataprocessing" ... which Postel (Internet standards editor) sponsored
at ISI/USC.

Other Apple history, 80s, IBM and DEC co-sponsored MIT Project Athena
(X-windows, Kerberos, etc), each contributing $25M. Then IBM sponsored
CMU group ($50M) that did MACH, Camelot, Andrew widgets, Andrew
filesystem, etc. Apple brings back Jobs and the MAC is redone built on
CMU MACH.
https://en.wikipedia.org/wiki/Mach_(kernel)

1993, mainframe and multi-chip RIOS RS/6000 comparison (industry
benchmark, number program iterations compared to reference platform)

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU

RS6000/990 : 126MIPS; 16-system cluster: 2BIPS; 128-system cluster:
             16BIPS

During 90s, i86 implements pipelined, on-the-fly, hardware translation
of i86 instructions to RISC micro-ops for execution, largely negating
RISC throughput advantage. 1999, enabled multiprocessor, but still
single core chips

single IBM PowerPC 440 hits 1,000MIPS
single Pentium3 hits 2,054MIPS (twice PowerPC 440)

by comparison, mainframe Dec2000

z900: 16 processors, 2.5BIPS (156MIPS/proc)

then 2010, mainframe versus i86

E5-2600, two XEON 8core chips, 500BIPS (30BIPS/proc)
z196, 80 processors, 50BIPS (625MIPS/proc)

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ecommerce payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

some recent posts mentioning business critical dataprocessing
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#25 Taligent
https://www.garlic.com/~lynn/2024g.html#20 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024f.html#47 Postel, RFC Editor, Internet
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#37 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 70s & 80s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 70s & 80s
Date: 23 Mar, 2025
Blog: Facebook

Amdahl won the battle to make ACS 360-compatible. It was then
canceled; folklore is that executives felt it would advance the state
of the art too fast and IBM would lose control of the market. Amdahl leaves
IBM and starts his own computer company. More details including some
features that show up more than 20yrs later with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html

1972, Learson tries (and fails) to block the bureaucrats, careerists
and MBAs from destroying Watson culture and legacy; pg160-163, 30yrs
of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

IBM starts "Future System", completely different than 370 and was
going to completely replace it. During FS, internal politics were
killing off 370 efforts (claim that the lack of new 370 during FS
period is what gave clone 370 makes, including Amdahl, their market
foothold). When FS finally implodes, there is mad rush to get stuff
back into the 370 product pipelines, including kicking off quick&dirty
3033&3081 efforts
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

Future System ref, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

After joining IBM, one of my hobbies was enhanced production operating
systems for internal datacenters (online sales&marketing support HONE
was one of the first and long time customers). I also continued to
visit customers and attend user group meetings. The director of one of
the largest financial datacenters liked me to stop by periodically and
talk technology. At one point the IBM branch manager horribly offended
the customer and in retaliation they order an Amdahl system (a lone
Amdahl in a vast sea of blue; up until then Amdahl had been selling
into the technical/scientific/univ market and this would be the first
for "true blue" commercial). I was asked to go onsite for 6-12 months
(apparently to obfuscate why they were ordering an Amdahl machine). I
talk it over with the customer and then decline the IBM offer. I was
then told that the branch manager was a good sailing buddy of the IBM
CEO and if I didn't do it, I could forget career, promotions, and
raises.

3033 started out as 168 logic remapped to 20% faster chips, and 3081
was going to be multiprocessor-only using some left-over FS
technology. The 2-CPU 3081D benchmarked slower than the single-CPU
Amdahl, and on some benchmarks even slower than the 2-CPU 3033. They
quickly double the processor cache size for the 3081K, with about the
same aggregate MIPS as the single-CPU Amdahl (although MVS docs stated
that, because of 2-CPU overhead, a 2-CPU system only got 1.2-1.5 times
the throughput of a 1-CPU system, so Amdahl still had a distinct
edge).

I was introduced to John Boyd in the early 80s and would sponsor his
briefings at IBM. The Commandant of Marine Corps in 89/90 leverages
Boyd for make-over of the corps (when IBM was desperately in need of
make-over, at the time the two organizations had about the same number
of people). USAF pretty much had disowned Boyd when he passed in 1997
and it was the Marines that were at Arlington and Boyd's effects go to
Quantico. The (former) commandant (passed 20Mar2024) continued to
sponsor Boyd conferences for us at Marine Corps Univ, Quantico.

In 1992, IBM has one of the largest losses in the history of US
companies and was being re-organized into the 13 "baby blues" in
preparation for breaking up the company (take-off on the "baby bell"
breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup

some more detail
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 70s & 80s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 70s & 80s
Date: 23 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s

early 80s, I got the HSDT project, T1 and faster computer links (both
terrestrial and satellite) and some battles with the communication
group; note in the 60s IBM had a telecommunication controller
supporting T1, but the IBM move to SNA/VTAM in the 70s, with the
associated issues, seemed to cap controllers at 56kbit/sec links

mid-80s, I was asked to see about putting out a VTAM/NCP simulator
done by a baby bell on IBM Series/1 as an IBM Type-1 product
... basically compensating for lots of SNA/VTAM shortcomings along
with many new features (in part encapsulating SNA traffic tunneled
through real networking). Part of my presentation at the fall 1986 SNA ARB
(architecture review board) meeting in Raleigh:
https://www.garlic.com/~lynn/99.html#67
also part of "baby bell" presentation at spring 1986 IBM user COMMON
meeting
https://www.garlic.com/~lynn/99.html#70

Lots of criticism of the ARB presentation; however, the Series/1 data
came from the baby bell production operation and the 3725 data came
from the communication group's HONE configurator (if something wasn't
accurate, they should correct their 3725 configurator).

objective was to start porting it to RS/6000 after it first ships as
IBM product. Several IBMers involved were well acquainted with the
communication group internal politics and attempted to provide
countermeasure to everything that might be tried, but what happened
next can only be described as reality/truth is stranger than fiction.

T1 trivia: FSD, for gov. accounts with failing 60s IBM T1
telecommunication controllers, had come out with the Zirpel T1 cards
for Series/1 (which hadn't been available to the "baby bell").

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

recent posts mentioning Zirpel T1 card
https://www.garlic.com/~lynn/2025b.html#40 IBM APPN
https://www.garlic.com/~lynn/2025b.html#18 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#15 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#99 Terminals
https://www.garlic.com/~lynn/2024g.html#79 Early Email
https://www.garlic.com/~lynn/2024e.html#34 VMNETMAP
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023c.html#35 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 70s & 80s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 70s & 80s
Date: 23 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#43 IBM 70s & 80s

Depends on what the workload was. My brother was Apple regional
marketing rep and when he came into hdqtrs, I could be invited to
business dinners and could argue Mac design with developers (before
the Mac was announced). He also figured out how to remotely dial into
the S/38 that ran Apple to track manufacturing and shipment schedules.

I was doing some work for disk bldg14/engineering and bldg15/product
test ... bldg15 got an engineering 4341 in 1978, and in jan1979 an IBM
branch office cons me into doing a benchmark for a national lab
looking at getting 70 for a compute farm (sort of the leading edge of
the coming cluster supercomputing tsunami).

4300s sold into the same mid-range market as DEC VAX machines and in
about the same numbers in small unit orders. The big difference was
large corporations ordering hundreds of vm/4341s at a time for placing
out in departmental areas (sort of the leading edge of the coming
distributed computing tsunami). Inside IBM, conference rooms were
becoming scarce because so many had been converted to rooms for
departmental distributed vm/4341s.

MVS saw the big explosion in these departmental machines but was
locked out of the market since the only new non-datacenter DASD were
the fixed-block 3370s (and MVS never did FBA support). Eventually came
the 3375, w/CKD emulation. However it didn't do MVS much good; support
for departmental vm/4341s was measured in scores of machines per
person, while MVS still required scores of support personnel per
system.

aka 1000 distributed AS/400s supported by 10-20 people?

IBM 4341 announced 30jan1979, FCS 30Jun1979, replaced by 4381
announced 1983, as/400 announced jun1988, released aug1988 ... almost
decade later.

trivia: when I 1st transferred from CSC to SJR in the 70s, I got to
wander around IBM (and non-IBM) datacenters in silicon valley,
including disk bldg14/engineering and bldg15/product test across the
street. They were running 7x24, prescheduled, stand-alone testing and
mentioned they had recently tried MVS, but it had 15min MTBF (in that
environment) requiring manual re-ipl. I offered to rewrite the I/O
supervisor to be bullet proof and never fail, allowing any amount of
on-demand, concurrent testing, greatly improving productivity.
Downside was I started getting phone calls asking for help when they
had problems, and I had to increasingly play disk engineer; I also
worked some with the disk engineer that got the RAID patent. Later
when 3380s were about to ship, FE had a test package of 57 simulated
errors; in all 57 cases, MVS was still failing (requiring re-ipl) and
in 2/3rds of the cases with no indication of what caused the failure.

getting to play disk engineer in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

Earlier, all during FS, I continued to work on 370 and would
periodically ridicule what they were doing. I had done an internal
page-mapped filesystem (single level store) for CMS (that was never
released to customers) and would claim I learned what not to do from
TSS/360 (on which some of FS was modeled). Later I learned that
Rochester S/38 was sort of a significantly simplified FS followon
... part of the simplification was allowing scatter allocation across
all disks in the system (as a result, all disks had to be backed up as
a single entity, and any time there was a single disk failure
... common in the period ... it required replacing the failed disk and
restoring the complete filesystem). This became increasingly traumatic
as the number of disks in the system increased, so that S/38 became an
early RAID adopter (the problem wasn't seen in small single-disk
configurations). One of the final nails in the FS coffin was a study
by the Houston Science Center that if 370/195 applications were redone
for an FS machine built out of the fastest technology available, they
would have the throughput of a 370/145 (about a 30 times
slowdown). There was a lot of throughput headroom between available
technology and S/38 market throughput requirements.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

I was also working with Jim Gray and Vera Watson on original
SQL/relational, System/R and we were able to do tech transfer to
Endicott for SQL/DS "under the radar" while the company was
preoccupied with the next great DBMS, "EAGLE". When Jim Gray departed
IBM Research fall 1980 for Tandem, he was palming off some stuff on
me, including wanting me to help with the System/R joint project with
BofA, which was getting 60 VM/4341s for System/R (including further
reducing operations support for distributed VM/4341s). Later when
"EAGLE" imploded there was a request for how fast System/R could be
ported to MVS, which was eventually released as DB2, originally for
"decision support" only.

original SQL/relational, System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

Business Planning

From: Lynn Wheeler <lynn@garlic.com>
Subject: Business Planning
Date: 25 Mar, 2025
Blog: Facebook

Starting in the late 70s, we had lots of discussions about how
computer-use illiterate most of IBM was ... especially management and
executives, and what could we do to turn it around. Also 3270
terminals were part of fall planning and each one required VP
sign-off. Then there was a period of rapidly spreading rumors that
some senior executives might be using PROFS/EMAIL, and lots of other
executives and middle management began rerouting justified
technical/development 3270 deliveries to their own desks (as a status
symbol, to create a facade that they might be computer
literate). These were typically turned on in the morning and left on
all day with the VM logon logo, or in some cases the PROFS menu, being
burned into the screen ... while their admin people actually processed
their email. This management rerouting of 3270s and later large-screen
PCs (as status symbols) continued through the 80s.

1972, Learson had tried (and failed) to block the bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Future System
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and
*MAKE NO WAVES* under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat ... But because of the
heavy investment of face by the top management, F/S took years to
kill, although its wrong headedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous," recalls a former top executive

... snip ...

I was introduced to John Boyd in the early 80s and used to sponsor his
briefings at IBM. The Commandant of Marine Corps in 89/90 leverages
Boyd for make-over of the corps (same time IBM was desperately in need
of make-over, at the time the two organizations had about the same
number of people). Then 1992 (20yrs after Learson's attempt to save
the company), IBM has one of the largest losses in the history of US
companies and was being re-orged into the 13 "baby blues" in
preparation for breaking up the company (take off on "baby bell"
breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup

some more detail
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

trivia: in the wake of FS implosion, Endicott cons me into helping
with ECPS for virgil/tully (138/148), initial analysis:
https://www.garlic.com/~lynn/94.html#21

then I'm con'ed into presenting the 138/148 case to business planners
around the world. I find out that US "region" planners tended to
forecast whatever Armonk said was strategic (because that is how they
got promoted) ... and manufacturing was responsible for the inaccurate
regional forecasts. It was different in world-trade: countries ordered
and took delivery of forecasted machines, and business planners were
held accountable for problems. That gave rise to a joke in the US
about the Armonk habit of declaring "strategic" things that weren't
selling well (an excuse for sales/marketing incentives/promotions).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
John Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning 138/148 business planning in US regions and
world trade:
https://www.garlic.com/~lynn/2017b.html#2 IBM 1970s
https://www.garlic.com/~lynn/2016e.html#92 How the internet was invented
https://www.garlic.com/~lynn/2012d.html#70 Mainframe System 370
https://www.garlic.com/~lynn/2005g.html#16 DOS/360: Forty years

--
virtualization experience starting Jan1968, online at home since Mar1970

POK High-End and Endicott Mid-range

From: Lynn Wheeler <lynn@garlic.com>
Subject: POK High-End and Endicott Mid-range
Date: 26 Mar, 2025
Blog: Facebook

Early last decade, a customer asked me to track down the decision to
add virtual memory to all 370s, and I found a staff member to the
executive that made the decision. Basically, MVT storage management
was so bad that regions had to be specified four times larger than
used; as a result, a typical 1mbyte 370/165 only ran four regions
concurrently, insufficient to keep the system busy and justified.
Going to MVT mapped into a single 16mbyte virtual address space
(similar to running MVT in a CP67 16mbyte virtual machine) allowed the
number of concurrently running regions to be increased by a factor of
four (capped at 15 because of the 4bit storage protect key) with
little or no paging.
https://www.garlic.com/~lynn/2011d.html#73

However, as systems got bigger, more than 15 concurrently running
tasks were needed, hence the move to MVS, giving each task its own
16mbyte virtual address space. However, OS/360 was a heavily
pointer-passing API, so they map an 8mbyte MVS kernel image into every
16mbyte virtual address space, leaving 8mbytes for the task. Then,
because subsystems were given their own virtual address spaces, some
way was needed to pass information back&forth, and the common segment
("CSA") was created, mapped into every address space. However, CSA
space requirements were somewhat proportional to the number of
subsystems and the number of concurrent tasks, so CSA quickly became
the "common system area"; by 3033 it was frequently 5-6mbytes, leaving
2-3mbytes (but threatening to become 8, leaving nothing for tasks). As
a result, for MVS, a subset of 370/xa access registers was retrofitted
to 3033 as "dual address space mode" (by a person that shortly left
IBM for HP labs ... and was one of the major architects for Itanium)
so MVS subsystems could directly access a caller's address space.

The other issue was that with the increase in concurrently running
tasks and MVS bloat, 3033 throughput was increasingly limited with
only 16mbytes of real storage. Standard 370 had a 16bit page table
entry with 2 undefined bits. They co-opt those bits as a prefix to the
12bit real page number (4096 4kbyte pages, 16mbytes), making a 14bit
page number (allowing 16mbyte virtual addresses to map into 64mbytes
of real addresses, enabling 64mbyte real storage).
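
A quick back-of-the-envelope check of that page-number arithmetic
(illustrative sketch only, not from any 3033 documentation):

  # illustrative sketch: addressable real storage vs page-number bits
  PAGE_SIZE = 4096   # 4kbyte pages
  def real_storage_mbytes(page_number_bits):
      # addressable real page frames times page size, in mbytes
      return (2 ** page_number_bits) * PAGE_SIZE // (1024 * 1024)
  print(real_storage_mbytes(12))   # 12-bit page number -> 16 (mbytes)
  print(real_storage_mbytes(14))   # 12 + 2 co-opted bits -> 64 (mbytes)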

3081 initially ran 370 mode (and could do the 3033 hack for 64mbyte
real storage), but with MVS bloat things were increasingly becoming
desperate for MVS/XA and 31-bit addressing. 3081 did support "data
streaming" channels; i.e. selector and block mux channels did an
end-to-end handshake for every byte transferred, while "data
streaming" channels did multiple byte transfers per end-to-end
handshake, increasing max channel distance from 200ft to 400ft and
allowing 3mbyte/sec transfers, supporting 3880 controllers with
3mbyte/sec 3380 disks.

When FS imploded, Endicott cons me into helping with ECPS microcode
assist (originally 138/148, but then for 4300s also)
https://www.garlic.com/~lynn/94.html#21

and another group gets me to help with a 16-cpu, tightly-coupled 370
multiprocessor and we con the 3033 processor engineers into working on
it in their spare time (a lot more interesting than remapping 168
logic to 20% faster chips).

Everybody thought the 16-cpu effort was really great until somebody
tells the head of POK that it could be decades before the POK favorite
son operating system ("MVS") had (effective) 16-cpu support. At the
time, MVS docs stated that a 2-cpu system only had 1.2-1.5 times the
throughput of a single-cpu system (and the overhead increased as the
number of CPUs increased). The head of POK then invites some of us to
never visit POK again and directed the 3033 processor engineers "heads
down and no distractions" (note POK doesn't ship a 16-CPU
tightly-coupled machine until after the turn of the century).

Early 80s, I get permission to give talks on how ECPS was done at
user group meetings, including the monthly BAYBUNCH hosted by Stanford
SLAC. After the meetings, Amdahl attendees would grill me for
additional details. They said they had created MACROCODE mode,
370-like instructions that ran in microcode mode, in order to quickly
respond to the plethora of trivial 3033 microcode hacks required for
running the latest MVS. They were then in the process of using it to
implement a microcode hypervisor ("multiple domain"), sort of like
3090 LPAR/PRSM nearly a decade later. Customers then weren't
converting to MVS/XA as IBM planned, but Amdahl was having more
success because they could run MVS and MVS/XA concurrently on the same
machine.

Also after the implosion of FS, the head of POK had managed to
convince corporate to kill the VM370 product, shut down the
development group, and transfer all the people to POK for MVS/XA
(Endicott eventually manages to save the VM370 product mission for the
mid-range, but had to recreate a development group from scratch). The
group did manage to implement a very simplified 370/XA VMTOOL for
MVS/XA development (never intended for release to customers). However,
with the customers not moving to MVS/XA as planned (and Amdahl having
more success), the VMTOOL was packaged as VM/MA (migration aid) and
VM/SF (system facility), allowing MVS and MVS/XA to be run
concurrently on IBM machines.

Other trivia: 3081s were going to be multiprocessor only; the initial
2cpu 3081D aggregate MIPS was less than the 1cpu Amdahl, and some
benchmark throughput was even less than the 2cpu 3033. Processor
caches were quickly doubled for the 3081K, which had about the same
aggregate MIP-rate as the 1cpu Amdahl (but MVS had much lower
throughput because of the 2cpu overhead).

future systems posts
https://www.garlic.com/~lynn/submain.html#futuresys

other posts mentioning MVS, CSA, MVS/XA VMTOOL/ VM/MA, VM/SF, VM/XA, &
SIE
https://www.garlic.com/~lynn/2025b.html#27 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025.html#20 Virtual Machine History
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024.html#121 IBM VM/370 and VM/XA
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Datacenters

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Datacenters
Date: 27 Mar, 2025
Blog: Facebook

I had taken a 2 credit hr intro to fortran/computers and at the end
of the semester, I was hired to rewrite 1401 MPIO for a 360/30. The
univ was getting a 360/67 for tss/360 to replace the 709/1401 and got
the 360/30 temporarily pending the 360/67. Then when the 360/67 shows
up, I was hired fulltime, responsible for OS/360 (it ran as a 360/65;
tss/360 wasn't up to production). Then before I graduate, I was hired
fulltime into a small group in the Boeing CFO office to help with the
creation of Boeing Computer Services (consolidating all dataprocessing
into an independent business unit). I think the Renton datacenter was
the largest in the world, with 360/65s arriving faster than they could
be installed, boxes constantly staged in hallways around the machine
room (joke that Boeing was getting 360/65s like other companies got
keypunches).

After graduating, I leave the CFO office and join the IBM science
center; one of my hobbies was enhanced production operating systems
for internal datacenters, first CP67L, then CSC/VM. Then after
transferring to research on the west coast, I got to wander around IBM
(& non-IBM) datacenters in silicon valley including DASD
bldg14/engineering and bldg15/product test, across the street. They
were running 7x24, pre-scheduled, stand-alone testing and mentioned
that they had recently tried MVS, but it had 15min MTBF (requiring
manual re-ipl) in that environment. I offer to rewrite the I/O
supervisor to make it bullet proof and never fail, allowing any amount
of concurrent, on-demand testing, greatly improving productivity
(downside was I started getting phone calls any time they had any sort
of problem and I had to spend an increasing amount of time playing
disk engineer). There was a joke that I worked 4-shift weeks: 1st
shift in bldg28, 2nd shift in bldgs14/15, 3rd shift in bldg90, and 4th
shift (weekends) up at Palo Alto HONE. Later I also got part of a wing
in bldg29 for offices and labs.

Then bldg15, product test, got an engineering 3033 (the first outside
of POK processor engineering) for doing DASD testing. Testing only
took a percent or two of the processor, so we scrounge up a 3830 and a
3330 string to set up our own private online service (and run a 3270
coax under the street to my office in bldg28). At the time,
air-bearing simulation (part of designing the thin-film disk head,
originally used for 3370 FBA DASD) was getting a couple of turnarounds
a month on the SJR 370/195. We set it up on the 3033 in bldg15 (with
less than half the MIPS of the 195) and they could get several
turnarounds a day.

3272 (& 3277) had .086sec hardware response. Then 3274/3278 was
introduced with lots of the 3278 hardware moved back into the 3274
controller, cutting 3278 manufacturing costs and significantly driving
up coax protocol chatter ... increasing hardware response to
.3sec-.5sec depending on the amount of data (studies in the period
were showing .25sec response improved productivity). Letters to the
3278 product administrator complaining about interactive computing got
a response that the 3278 wasn't intended for interactive computing but
data entry (sort of an electronic keypunch).

The .086sec hardware response required .164sec system response (for
.25sec total response). The joke about the 3278 was that a time
machine was required to transmit responses into the past (in order to
get .25sec total response). I had several internal SJR/VM systems with
.11sec system response (SJR, STL, consolidated US HONE sales&marketing
support up in palo alto, etc).
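
A quick sketch of that response-time arithmetic (illustrative only,
using just the figures quoted above):

  # illustrative arithmetic: system-response budget after hardware response
  TARGET = 0.25                   # .25sec response shown to improve productivity
  for hw in (0.086, 0.3, 0.5):    # 3272/3277 vs 3274/3278 hardware response
      print(hw, round(TARGET - hw, 3))   # remaining system-response budget

A negative budget is the "time machine" joke: with 3274/3278 hardware
response alone exceeding .25sec, no system response could meet the
target.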

3270 did have a half-duplex problem: if you were typing away and hit
a key just as the screen was being updated, the keyboard would lock up
and you would have to stop and hit reset before continuing. Yorktown
had FIFO boxes made for the 3277: unplug the keyboard from the screen,
plug in the FIFO box, and plug the keyboard into the FIFO box (it
would hold characters whenever the screen was being written,
eliminating the lockup problem).

Later, IBM/PC 3277 emulator cards had 4-5 times the upload/download
throughput of 3278 emulator cards.

TSO/MVS users never noticed the 3278 issues, since they rarely ever
saw even 1sec system response. One of the MVS/TSO problems wasn't so
much TSO but OS/360's extensive use of multi-track search, which would
result in 1/3rd-second I/Os that lock up the device, the controller
(and all devices on that controller), and the channel (and all
controllers on that channel). SJR's 370/195 was then replaced with a
370/168 for MVS and a 370/158 for VM/370, with dual channel
connections to all 3830 DASD controllers, but controllers and strings
were categorized as VM/370-only and MVS-only.

One morning, bldg28 operations had mistakenly mounted an MVS 3330 on
a VM/370 string and within five minutes, operations started getting
irate calls from VM/370 users all over bldg28 asking what happened to
interactive response. It came down to the mismounted MVS 3330, and
operations said that they couldn't move it until 2nd shift. VM370
people then put up a one-pack VS1 3330 (highly optimized for running
under VM370) on an MVS string ... and even though the VS1 system was
running in a virtual machine on the heavily loaded 158, it was able to
bring the 168 MVS system to its knees ... alleviating some of the
response troubles for the VM370 users. Operations then said that they
would immediately move the mismounted MVS 3330, if we moved the VS1
3330.

Then in 1980, STL was bursting at the seams and moving 300 people (&
3270s) from the IMS group to an offsite bldg (just south of the main
plant site). They had tried "remote 3270" but found the human factors
totally unacceptable. I then get con'ed into doing channel-extender
support, placing channel-attached 3270 controllers at the offsite
bldg, which resulted in no perceptible difference between off-site and
inside-STL response. STL had been configuring their 168s with 3270
controllers spread across the same channels as the 3830 controllers.
It turns out the 168s for the offsite group saw a 10-15% improvement
in throughput. The issue was that IBM 3270 controllers had high
channel busy interfering with DASD I/O; the channel-extender radically
reduced channel busy for the same amount of 3270 terminal traffic.

trivia: before transferring to SJR, I had got to be good friends with
the 3033 processor engineers. After Future System implosion
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

I got asked to help with a 16-CPU SMP, tightly-coupled machine and we
con'ed the 3033 processor engineers into helping in their spare time
(a lot more interesting than remapping 168 logic to 20% faster chips).
Everybody thought it was great until somebody told the head of POK
that it could be decades before the POK favorite son operating system
("MVS") had (effective) 16-CPU support (MVS docs at the time stated
that 2-CPU operation only had 1.2-1.5 times the throughput of 1-CPU
because of enormous multiprocessor overhead), and he invites some of
us to never visit POK again and directs the 3033 processor engineers
heads down and no distractions ... POK doesn't ship a 16-CPU system
until after the turn of the century. Note in the morph from
CP67->VM370, lots of features had been dropped including
multiprocessor support. Starting with VM370R2, I start moving lots of
CP67 stuff back into VM370 (for CSC/VM). Then in early VM370R3, I put
in multiprocessor support, initially for the consolidated US HONE
operations in Palo Alto so they could add a 2nd processor to each of
their systems (and the 2-CPU systems were getting twice the throughput
of the previous 1-CPU systems).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
cp67l, csc/vm, sjr/vm, etc posts
https://www.garlic.com/~lynn/submisc.html#cscvm
playing disk engineer in bldgs 14/15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some undergraduate, 709/1401, MPIO, 360/67, Boeing CFO, Renton
datacenter posts
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

posts mentioning response and 3272/3277 3724/3278 comparison
https://www.garlic.com/~lynn/2025.html#127 3270 Controllers and Terminals
https://www.garlic.com/~lynn/2025.html#75 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2014h.html#106 TSO Test does not support 65-bit debugging?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Datacenters

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Datacenters
Date: 27 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters

1988, an IBM branch office asked if I could help LLNL (national lab)
standardize some serial stuff that they were working with. It quickly
becomes the "fibre channel standard" ("FCS", including some stuff that
I had done in 1980; initially 1gbit/sec, full-duplex, aggregate
200mbytes/sec). Then POK finally gets their stuff released with
ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Then
some POK engineers become involved with FCS and define a heavy-weight
protocol that significantly reduces throughput, which is released as
FICON. The most recent public benchmark that I can find is the z196
"Peak I/O" that got 2M IOPS using 104 FICON (20K IOPS/FICON). About
the same time, an FCS was announced for an E5-2600 server blade
claiming over a million IOPS (two such FCS having higher throughput
than 104 FICON). Note IBM docs say that SAPs (system assist processors
that do actual I/O) need to be kept to 70% CPU (or about 1.5M
IOPS). Note also that no CKD DASD has been made for decades, all being
simulated on industry-standard fixed-block disks (an extra simulation
layer of overhead for disk I/O).
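
A quick back-of-the-envelope sketch of that comparison (illustrative
arithmetic only, using just the figures quoted above):

  # illustrative arithmetic, figures from the text above
  ficon_total_iops = 2_000_000     # z196 "Peak I/O" benchmark
  ficon_channels = 104
  fcs_iops = 1_000_000             # "over a million IOPS" claimed for one FCS
  sap_cpu_cap = 0.70               # SAPs kept to 70% CPU
  print(ficon_total_iops // ficon_channels)    # ~19K IOPS per FICON channel
  print(2 * fcs_iops >= ficon_total_iops)      # two FCS match/exceed 104 FICON
  print(int(sap_cpu_cap * ficon_total_iops))   # ~1.4M, the ~1.5M practical cap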

1993, mainframe and multi-chip RIOS RS/6000 comparison (industry
benchmark, number program iterations compared to reference platform)

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990  : 126MIPS; 16-system cluster: 2BIPS; 128-system cluster:
              16BIPS

During 90s, i86 implements pipelined, on-the-fly, hardware translation
of i86 instructions to RISC micro-ops for execution, largely negating
RISC throughput advantage. 1999, enabled multiprocessor, but still
single core chips

single IBM PowerPC 440 hits 1,000MIPS
single Pentium3 hits 2,054MIPS (twice PowerPC 440)

by comparison, mainframe Dec2000

z900: 16 processors, 2.5BIPS (156MIPS/proc)

then 2010, mainframe (z196) versus i86 (E5-2600 server blade)

E5-2600, two XEON 8core chips, 500BIPS (30BIPS/proc)
z196, 80 processors, 50BIPS (625MIPS/proc)

multi-core chips ... each core is somewhat analogous to a large car
assembly building, each building with multiple assembly lines, some
specializing in types of vehicles; orders come in one end, are
assigned to a line, and come off the line into a large parking area
... and are released from the building in the same sequence that the
orders appeared. Sometimes a vehicle will reach a station that doesn't
have a part locally and can be pulled off the line while the part
request is sent to a remote warehouse. BIPS now is the number of
vehicles/hour (and less how long each vehicle takes to build).
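
To make that throughput-versus-latency point concrete, a tiny sketch
(illustrative only, with made-up numbers, not any real processor's
figures):

  # illustrative sketch: throughput vs latency, per the analogy above
  clock_hz = 5.0e9         # assumed clock rate
  latency_cycles = 20      # assumed cycles for one instruction end-to-end
  in_flight = 40           # assumed instructions concurrently "on the line"
  retired_per_cycle = min(in_flight / latency_cycles, 1.0)  # capped at issue width 1
  print(retired_per_cycle * clock_hz / 1e9, "BIPS")  # ~5 BIPS despite 20-cycle latency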

trivia: I was introduced to John Boyd in the early 80s and would
sponsor his briefings. He had a lot of stories to tell, including
being very vocal against the electronics across the trail; possibly as
punishment he was put in command of "spook base" (about the same time
I'm at Boeing). He also claimed it had the largest air-conditioned
bldg in that part of the world.
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
A Boyd biography claims that "spook base" was a $2.5B windfall for
IBM (about ten times the IBM systems in the Boeing Renton datacenter).

The Marine Corps Commandant in 89/90 leverages Boyd for a make-over
of the corps (at a time when IBM was desperately in need of a
make-over; also IBM and the corps had about the same number of
people). Then IBM has one
of the largest losses in the history of US companies and was being
reorganized into the 13 "baby blues" in preparation for breaking up
the company (take-off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.

FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

POK High-End and Endicott Mid-range

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: POK High-End and Endicott Mid-range
Date: 27 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#47 POK High-End and Endicott Mid-range

FS/Future System:
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

I continued to work on 370 all during FS and would periodically
ridicule what they were doing

Future System ref, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and
*MAKE NO WAVES* under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat ... But because of the
heavy investment of face by the top management, F/S took years to
kill, although its wrong headedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous," recalls a former top executive

... snip ...

one of the last nails in the FS coffin was a study by the Houston
Science Center that if 370/195 apps were redone for an FS machine made
out of the fastest technology available, they would have the
throughput of a 370/145 (about a 30 times slowdown)

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880, 3380, Data-streaming

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880, 3380, Data-streaming
Date: 28 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#26 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#27 IBM 3880, 3380, Data-streaming

Old email about 3880 speed matching Calypso (and Calypso ECKD) and
double density 3350
https://www.garlic.com/~lynn/2015f.html#email820111
https://www.garlic.com/~lynn/2007e.html#email820907b

also mentioned is MVS supporting FBA devices. I had offered to do FBA
support for the MVS group but got back an answer that even if I
provided fully integrated and tested support, I needed a business case
of $26M ($200M incremental sales) to cover the cost of education,
training and publications ... and since IBM was already selling every
disk made, sales would just switch from CKD to FBA for the same amount
of revenue.

posts about getting to play disk engineering in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disks
posts discussing DASD, CKD, FBA, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd

some posts mentioning $26M for MVS FBA support:
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#16 REXX and DUMPRX
https://www.garlic.com/~lynn/2024b.html#110 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#32 3081 TCMs
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2022f.html#85 IBM CKD DASD
https://www.garlic.com/~lynn/2021b.html#78 CKD Disks
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019b.html#25 Online Computer Conferencing
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018f.html#34 The rise and fall of IBM
https://www.garlic.com/~lynn/2018e.html#22 Manned Orbiting Laboratory Declassified: Inside a US Military Space Station
https://www.garlic.com/~lynn/2017j.html#88 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2016g.html#74 IBM disk capacity
https://www.garlic.com/~lynn/2016c.html#12 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015h.html#24 the legacy of Seymour Cray
https://www.garlic.com/~lynn/2015f.html#86 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2014e.html#8 The IBM Strategy
https://www.garlic.com/~lynn/2014b.html#18 Quixotically on-topic post, still on topic
https://www.garlic.com/~lynn/2014.html#94 Santa has a Mainframe!
https://www.garlic.com/~lynn/2013n.html#54 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2013i.html#2 IBM commitment to academia
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2013f.html#80 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013d.html#2 Query for Destination z article -- mainframes back to the future
https://www.garlic.com/~lynn/2013c.html#68 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013.html#40 Searching for storage (DASD) alternatives
https://www.garlic.com/~lynn/2012p.html#32 Search Google, 1960:s-style
https://www.garlic.com/~lynn/2012o.html#58 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#31 Regarding Time Sharing
https://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012g.html#19 Co-existance of z/OS and z/VM on same DASD farm
https://www.garlic.com/~lynn/2011j.html#57 Graph of total world disk space over time?
https://www.garlic.com/~lynn/2011e.html#44 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011b.html#47 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011.html#23 zLinux OR Linux on zEnterprise Blade Extension???

--
virtualization experience starting Jan1968, online at home since Mar1970

POK High-End and Endicott Mid-range

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: POK High-End and Endicott Mid-range
Date: 29 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#47 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#49 POK High-End and Endicott Mid-range

As part of ECPS 138/148, Endicott tried to get corporate permission to
allow them to preinstall VM370 on every 138/148 shipped (VS1, with VM
handshaking, ran faster under VM370 than on the bare machine)
... however with POK in the process of getting the VM/370 product
killed ... it was vetoed.

Endicott also cons me into going around the world presenting the
138/148 business case (and I learned some about the difference between
US region planners and WTC country planners ... including that much of
WTC was the mid-range market). Note also much of POK had been deeply
involved in the FS disaster ... including how VS2 R3 (MVS) would be
the FS operating system ... pieces of old email about the decision to
add virtual memory to all 370s.
https://www.garlic.com/~lynn/2011d.html#73

also as mentioned, one of the final nails in the FS coffin was the
analysis that if 370/195 apps were redone for an FS machine made out
of the fastest hardware technology available, they would have the
throughput of a 370/145 (a 30 times slowdown)

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts mentioning 138/148, ecps, endicott, vm370, virtual memory
for all 370s, POK getting VM370 product "killed"
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#10 IBM 37x5
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017d.html#83 Mainframe operating systems?
https://www.garlic.com/~lynn/2017c.html#80 Great mainframe history(?)
https://www.garlic.com/~lynn/2015b.html#39 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2011k.html#9 Was there ever a DOS JCL reference like the Brown book?
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2010d.html#78 LPARs: More or Less?
https://www.garlic.com/~lynn/2009r.html#51 "Portable" data centers
https://www.garlic.com/~lynn/2009r.html#38 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Modernization

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Modernization
Date: 29 Mar, 2025
Blog: Facebook

A review of the 1990 FAA ATC modernization: it supposedly had
hardware redundancies masking all failures ... greatly simplifying the
software ... a big one (at least) they missed was checking for human
mistakes (they needed to go back and correct infrastructure that had
been programmed for "correct operation" ... including the human part).

Mid-90s, the prediction was that mainframes were going away, and the
financial industry spent billions on new transaction support. A huge
amount of cobol (some from the 60s) did overnight settlement and had
since had some "real-time" transaction support added (with settlement
deferred/queued for the overnight window ... which was becoming a
major bottleneck). Billions were spent redoing it for straight-through
processing on large numbers of "killer micros" running in
parallel. They ignored warnings that the industry parallel libraries
they were using had 100 times the overhead of mainframe batch cobol
(totally swamping throughput) ... until major pilots went down in
flames, with predictions it would be decades/generations before it was
tried again.

posts mentioning FAA ATC modernization
https://www.garlic.com/~lynn/2025b.html#36 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS
https://www.garlic.com/~lynn/2024.html#114 BAL
https://www.garlic.com/~lynn/2023f.html#84 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2022b.html#97 IBM 9020
https://www.garlic.com/~lynn/2013n.html#76 A Little More on the Computer
https://www.garlic.com/~lynn/2012i.html#42 Simulated PDP-11 Blinkenlight front panel for SimH
https://www.garlic.com/~lynn/2007e.html#52 US Air computers delay psgrs
https://www.garlic.com/~lynn/2005c.html#17 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2002g.html#16 Why are Mainframe Computers really still in use at all?

posts mentioning overnight batch cobol and modernization for
straight-through processing
https://www.garlic.com/~lynn/2025.html#78 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#76 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2022g.html#69 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2018f.html#85 Douglas Engelbart, the forgotten hero of modern computing
https://www.garlic.com/~lynn/2017f.html#11 The Mainframe vs. the Server Farm: A Comparison
https://www.garlic.com/~lynn/2017d.html#39 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017.html#82 The ICL 2900

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Datacenters

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Datacenters
Date: 30 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#49 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#51 IBM Datacenters

3090 story trivia: The 3090 product administrator tracked me down
after 3090s had been out for a year. 3090 channels had been designed
to have only 3-5 "channel errors" aggregate across all customers per
year ... but there was an industry service that collected customer
EREP data from IBM and non-IBM (compatible) mainframes and published
summaries ... which showed (customer) 3090 channels had an aggregate
of 20 "channel errors".

Turns out when I had done channel-extender support in 1980 (originally
for IBM STL but also for IBM Boulder), and for various kinds of
transmission errors, I would simulate unit-check/channel-check
... kicking off channel program retry. While POK managed to veto
releasing my support to customers, a vendor replicated it and it was
running in some customer shops. The 3090 product administrator asked me if
I could do something ... I researched retry and showed that simulating
unit-check/IFCC (interface control check) effectively resulted in the
same channel program retry and got the vendor to change to simulating
"IFCC".

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

some past posts mentioning the incident
https://www.garlic.com/~lynn/2025.html#28 IBM 3090
https://www.garlic.com/~lynn/2024g.html#42 Back When Geek Humour Was A New Concept To Me
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#27 STL Channel Extender
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2016h.html#53 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2012l.html#25 X86 server
https://www.garlic.com/~lynn/2012e.html#54 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2010m.html#83 3270 Emulator Software

--
virtualization experience starting Jan1968, online at home since Mar1970

Planet Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: Planet Mainframe
Date: 30 Mar, 2025
Blog: Facebook

Planet Mainframe
https://planetmainframe.com/influential-mainframers-2024/lynn-wheeler/
taking votes for 2025

2024 Planet Mainframe BIO
https://planetmainframe.com/influential-mainframers-2024/lynn-wheeler/

some 2022 Linkedin posts

z/VM 50th part 1 through 8
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-7-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50-part-8-lynn-wheeler/

and then there is

Knights of VM
http://mvmua.org/knights.html
Mainframe Hall of Fame
https://www.enterprisesystemsmedia.com/mainframehalloffame
Mar/Apr '05 eserver article
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
Apr2009 Greater IBM Connect article
https://www.garlic.com/~lynn/ibmconnect.html

Reinventing Virtual Machines
https://cacm.acm.org/opinion/reinventing-virtual-machines/

original virtual machines, CP/40 and CP/67 at IBM Cambridge Science
Center
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

POK High-End and Endicott Mid-range

From: Lynn Wheeler <lynn@garlic.com>
Subject: POK High-End and Endicott Mid-range
Date: 31 Mar, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#47 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#49 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#51 POK High-End and Endicott Mid-range

3081 was some warmed over FS technology
http://www.jfsowa.com/computer/memo125.htm

The 370 emulator minus the FS microcode was eventually sold in 1980
as the IBM 3081. The ratio of the amount of circuitry in the 3081 to
its performance was significantly worse than other IBM systems of the
time; its price/performance ratio wasn't quite so bad because IBM had
to cut the price to be competitive. The major competition at the time
was from Amdahl Systems -- a company founded by Gene Amdahl, who left
IBM shortly before the FS project began, when his plans for the
Advanced Computer System (ACS) were killed. The Amdahl machine was
indeed superior to the 3081 in price/performance and spectaculary
superior in terms of performance compared to the amount of circuitry.]

... snip ...

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

from one of the 3033 processor engineers I worked with in their spare
time on a 16-CPU 370 (before the head of POK put a stop to it when he
heard that it could be decades before the POK favorite son operating
system ("MVS") would have (effective) 16-CPU support; POK doesn't ship
a 16-CPU machine until after the turn of the century); once 3033 was
out the door, they start on trout/3090
https://www.garlic.com/~lynn/2006j.html#email810630

of course it wasn't just paging SIE that was a 3081 problem.

smp, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

recent posts mentioning SIE, 16-cpu 370, 3033 processor engineers and
head of POK:
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#35 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#32 IBM 3090
https://www.garlic.com/~lynn/2024g.html#112 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#89 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#56 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024g.html#37 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#90 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024f.html#46 IBM TCM
https://www.garlic.com/~lynn/2024f.html#37 IBM 370/168
https://www.garlic.com/~lynn/2024f.html#36 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#17 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2021i.html#66 Virtual Machine Debugging

--
virtualization experience starting Jan1968, online at home since Mar1970

POK High-End and Endicott Mid-range

From: Lynn Wheeler <lynn@garlic.com>
Subject: POK High-End and Endicott Mid-range
Date: 01 Apr, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#47 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#49 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#51 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#55 POK High-End and Endicott Mid-range

The following discusses Amdahl winning the battle to make ACS 360
compatible ... folklore is it was killed because of fear that it would
advance the state of the art too fast and IBM would lose control of
the market ... Amdahl leaves IBM shortly afterwards ... it also
mentions ACS/360 features that show up with ES/9000 more than 20yrs
later
https://people.computing.clemson.edu/~mark/acs_end.html

Then during FS, internal politics was killing off 370 efforts, and the
lack of new 370s during FS is credited with giving clone 370 makers
(including Amdahl) their market foothold. When FS imploded there was a
mad rush to get stuff back into the 370 product pipelines, including
kicking off the quick&dirty 3033&3081 in parallel. One of the last
nails in the FS coffin was analysis by the IBM Houston Science Center
that if 370/195 apps were redone for an FS machine made out of the
fastest technology available, it would have the throughput of a
370/145 (about a 30 times slowdown).

Future System ref, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

 ... and perhaps most damaging, the old culture under Watson Snr and
Jr of free and vigorous debate was replaced with *SYCOPHANCY*
and *MAKE NO WAVES* under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat ... But because of the
heavy investment of face by the top management, F/S took years to
kill, although its wrong headedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous,"

... snip ...

Early 70s was seminal for IBM: Learson tried (and failed) to block the
bureaucrats, careerists, and MBAs from destroying Watson
culture/legacy. refs pg160-163
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

20 yrs later, IBM has one of the largest losses in the history of US
companies and was being re-orged into the 13 "baby blues" in
preparation for breaking up the company (take-off on "baby bell"
breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.

additional information
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

futuresys posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downturn, Downfall, Breakup

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downturn, Downfall, Breakup
Date: 03 Apr, 2025
Blog: Facebook

Early 70s was seminal for IBM, Learson tried (and failed) to block the
bureaucrats, careerists, and MBAs from destroying Watson
culture/legacy. refs pg160-163
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Then comes Future System, completely different from 370 and going to
completely replace 370. Internal politics during FS was killing off
370 efforts and the lack of new 370 products during FS is claimed to
have given the clone 370 system makers (including Amdahl, who had left
IBM shortly before FS, and after ACS/360 was killed), their market
foothold. Then the Future System disaster, from 1993 Computer Wars:
The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

... one of the last nails in the FS coffin was a study by the Houston
Science Center that if 370/195 apps were redone for an FS machine made
out of the fastest technology available, it would have the throughput
of a 370/145 (about a 30 times slowdown). When FS finally implodes,
there was a mad rush to get stuff back into the 370 product pipelines,
including the quick and dirty 3033&3081 efforts.
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

I continued to work on 360/370 all during FS and would periodically
ridicule what they were doing (including drawing analogy with long
playing cult film down at central sq), which wasn't exactly career
enhancing. Early 80s, I was introduced to John Boyd and would sponsor
his briefings at IBM. Then in 89/90, the Commandant of the Marine
Corps leverages Boyd for a make-over of the Corps (at a time when IBM
was desperately in need of make-over ... at that time, the Marine
Corps and IBM had approx. same number of people).

20 yrs after Learson's failed effort, IBM has one of the largest
losses in the history of US companies and was being re-orged into the
13 "baby blues" in preparation for breaking up the company (take-off
on "baby bell" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.

more information
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
end of ACS/360 reference
https://people.computing.clemson.edu/~mark/acs_end.html

Future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downturn, Downfall, Breakup

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downturn, Downfall, Breakup
Date: 04 Apr, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#57 IBM Downturn, Downfall, Breakup

.... after FS finally implodes, Endicott cons me into working on the
138/148 ECPS VM370 microcode assist (later also used for 4300s)
... and then going around the world presenting the business case to
planners (WTC saw better acceptance than US regions). Endicott then
tries to get
corporate to allow them to pre-install VM370 on every machine shipped
... but the head of POK was in process of convincing corporate to kill
the VM370 product (and shut down the development group, moving all the
people to POK for MVS/XA; Endicott did eventually acquire VM370
product mission for the mid-range, but had to recreate a development
group from scratch). Old archived post with initial ECPS analysis
https://www.garlic.com/~lynn/94.html#21

I was also roped into helping with a 16-CPU 370 multiprocessor and we con
the 3033 processor engineers into helping in their spare
time. Everybody thought it was great until somebody tells head of POK
that it could be decades before the POK favorite son operating system
("MVS") had (effective) 16-CPU support. At the time MVS documentation
claimed that 2-CPU support only had 1.2-1.5 throughput of single
processor system (note POK doesn't ship 16-CPU machine until after
turn of century). The head of POK then invites some of us to never
visit again and directs the 3033 processor engineers, heads down and
no distractions. trivia: after graduating and joining IBM, one of my
hobbies was enhanced production operating systems for internal
datacenters (and the new sales&marketing support HONE systems were
one of my first ... and long time customers). After the decision to
add virtual memory to all 370s, the decision was to also do VM370. In
the morph from CP67->VM370 lots of stuff was simplified and/or
dropped (including dropping multiprocessor support). Starting with
VM370R2, I start adding CP67 stuff back into VM370 ... and then for my
VM370R3-based CSC/VM, I add multiprocessor support back in (initially
for HONE, so they could add 2nd CPU to all their systems ... and their
2-CPU systems were getting twice the throughput of the single CPU).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp

Note AMEX and KKR were in competition for private-equity,
reverse-IPO(/LBO) buyout of RJR and KKR wins. Barbarians at the Gate
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away president of AMEX to help. Then
IBM has one of the largest losses in the history of US companies and
was preparing to break up the company when the board hires the former
president of AMEX as CEO to try and save the company, who uses some of
the same techniques used at RJR.
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

Stockman and financial engineering company
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:

IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.

pg465/loc10014-17:

Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.

... snip ...

(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate
Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st

The company has represented that its dividends and share repurchases
have come to a total of over $159 billion since 2000.

... snip ...

(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts
Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a
little?) shift focus needing it for redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud
Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
former AMEX president and IBM CEO
https://www.garlic.com/~lynn/submisc.html#gerstner
pensions posts
https://www.garlic.com/~lynn/submisc.html#pensions
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Retain and other online

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Retain and other online
Date: 04 Apr, 2025
Blog: Facebook

I took two credit hr intro to fortran/computers and at end of semester
was hired to reimplement 1401 MPIO in assembler for 360/30. The
univ. was getting 360/67 replacing the 709/1401 and 360/30 temporarily
replaced 1401, pending arrival of 360/67. The univ. shut down the
datacenter on weekends and I had the whole place dedicated, although
48hrs w/o sleep made Monday classes hard. I was given pile of hardware
& software manuals and got to design my own monitor, device drivers,
interrupt handlers, error recovery, storage management and within a
few weeks had 2000 card program. Then within a yr of taking intro
class, 360/67 arrived and I was hired fulltime responsible for OS/360
(ran as 360/65, tss/360 hadn't come to production operation). Student
fortran ran under second on 709 (tape->tape), but initially over a
minute on OS/360. I install HASP cutting the time in half. For
MFT-R11, I start doing carefully reorged stage2 SYSGEN to place
datasets and PDS members for optimized arm seek and multi-track
search, cutting another 2/3rds to 12.9secs (student fortran never got
better than 709 until I install Univ. of Waterloo WATFOR).

Then before I graduate, I'm hired fulltime into small group in the
Boeing CFO office to help with formation of Boeing Computer Services
(consolidate all dataprocessing into an independent business unit). I
think Renton was the largest IBM 360 datacenter in the world, 360/65s arriving
faster than they could be installed, boxes constantly staged in
hallways around machine room (some joke that Boeing was getting
360/65s like other companies got keypunches). Lots of politics between
Renton director and CFO who only had 360/30 up at Boeing Field for
payroll (although they enlarge the room for 360/67 for me to play
with, when I'm not doing other stuff).

Later in the early 80s, I'm introduced to John Boyd and would sponsor
his all day briefings at IBM. John had lots of stories including being
very vocal that the electronics across the trail wouldn't
work. Possibly as punishment he is put in command of "spook base"
(about the same time I'm at Boeing) ... some ref:
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
Boyd biography has "spook base" as a $2.5B "windfall" for IBM (ten times
Renton).

Both the Boeing and IBM teams had the story that on 360 announce,
Boeing placed an order that made the IBM rep the highest paid IBM
employee that year (in the days of straight commission). The next
year, IBM transitions
to "quota" and late January another Boeing order makes his quota for
the year. His quota is then "adjusted" and he leaves IBM.

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

recent posts mention 709/1401, MPIO, Boeing CFO, Renton
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#38 IBM Computers in the 60s
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#1 Large Datacenters
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Retain and other online

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Retain and other online
Date: 05 Apr, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online

The person that did VMSG (the PROFS group borrowed the VMSG code for
their email client) also did CMS parasite/story ... 3270 emulation
with a HLLAPI-like language, sort of like the IBM/PC ... but before
the IBM/PC ... old archived description with sample stories (story ran
in less than 8k bytes)
https://www.garlic.com/~lynn/2001k.html#35
story that automated RETAIN "PUT Bucket" retriever
https://www.garlic.com/~lynn/2001k.html#36

old email about internal development using parasite/story to automate
testcase drivers ... including HONE (APL-based) AIDS development (HONE
was online sales&marketing support, predating VM370/CMS and originally
on CP67/CMS) using it to automate some configurator operation
(mentions SEQUOYA, >400kbytes of APL code in every HONE APL workspace,
that automatically started to provide a tailored/canned interactive
environment for sales&marketing)
https://www.garlic.com/~lynn/2019d.html#email840117
reference to IMS using it for stress/regression testing
https://www.garlic.com/~lynn/2019d.html#email840117b

trivia: one of my hobbies after graduating and joining IBM (instead of
staying with Boeing CFO) was enhanced production operating systems for
internal datacenters and HONE was one of my first (back to CP67/CMS
days) and long time customer. US HONE datacenters were consolidated in
Palo Alto in the mid-70s (all the systems were merged into largest,
single-system image, shared-dasd complex in the world with fall-over
and load balancing, trivia, when FACEBOOK 1st moves into silicon
valley, it was into a new bldg built next door to the former US HONE
datacenter).

Note: after the decision to add virtual memory to all 370s, there was a
decision to do VM370 and in the morph from CP67->VM370 lots of
features/functions were simplified and/or dropped (including
multiprocessor support). For VM370R2, I started moving lots of CP67
back into VM370 and then for VM370R3-based CSC/VM, I added
multiprocessor support back in, initially for US HONE so they could
add a 2nd processor to all their systems. After the earthquake, US
HONE was replicated 1st in Dallas and then again in Boulder (as HONE
clones were sprouting up all over the world). Besides supporting HONE
(as a hobby), I was asked to go over to install the first HONE clones
in Paris (for EMEA) and Tokyo (for AFE) ... my first overseas trips
(vague memory that the YEN was over 300/dollar)

trivia: one of the emails mentions ITE (internal technical exchange)
... an annual, internal VM370 conference originally hosted by SJR in
the bldg28 auditorium. A previous post in this thread mentions John
Boyd; the 1st time I hosted his all day briefing, it was also in the
SJR bldg28 auditorium.

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

other recent posts mention parasite/story
https://www.garlic.com/~lynn/2025.html#90 Online Social Media
https://www.garlic.com/~lynn/2024f.html#91 IBM Email and PROFS
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023c.html#43 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981

--
virtualization experience starting Jan1968, online at home since Mar1970

Capitalism: A Six-Part Series

From: Lynn Wheeler <lynn@garlic.com>
Subject: Capitalism: A Six-Part Series
Date: 05 Apr, 2025
Blog: Facebook

Capitalism: A Six-Part Series
https://www.amazon.com/Capitalism-Six-Part-Noam-Chomsky/dp/B07BF2S4FS
https://www.amazon.com/Capitalism/dp/B07DHY1P2J

The father claimed he didn't know anything about Iran Contra because
he was fulltime deregulating the S&L industry ... causing the S&L
crisis
http://en.wikipedia.org/wiki/Savings_and_loan_crisis
with help from other members of his family
https://web.archive.org/web/20140213082405/https://en.wikipedia.org/wiki/Savings_and_loan_crisis#Silverado_Savings_and_Loan
and another
http://query.nytimes.com/gst/fullpage.html?res=9D0CE0D81E3BF937A25753C1A966958260
The S&L crisis had 30,000 criminal complaints and 1000 prison terms.

This century, the son's economic mess was 70 times larger than the
father's S&L crisis and proportionally should have had 2.1M criminal
complaints and 70k prison terms.

Corporations used to be for chartering projects in the public interest,
then they got it changed so that they could be run in private/self
interest
https://www.uclalawreview.org/false-profits-reviving-the-corporations-public-purpose/
... and even more recently got people rights under the 14th amendment
https://www.amazon.com/We-Corporations-American-Businesses-Rights-ebook/dp/B01M64LRDJ/
pgxiv/loc74-78:

Between 1868, when the amendment was ratified, and 1912, when a
scholar set out to identify every Fourteenth Amendment case heard by
the Supreme Court, the justices decided 28 cases dealing with the
rights of African Americans--and an astonishing 312 cases dealing with
the rights of corporations.

... snip ...

and increasing rights ever since.

S&L crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia
regulatory capture
https://www.garlic.com/~lynn/submisc.html#regulatory.capture

some posts mentioning capitalsim and piketty
https://www.garlic.com/~lynn/2021b.html#83 Capital in the Twenty-First Century
https://www.garlic.com/~lynn/2017h.html#1 OT:  book:  "Capital in the Twenty-First Century"
https://www.garlic.com/~lynn/2016c.html#65 A call for revolution
https://www.garlic.com/~lynn/2016c.html#53 Qbasic
https://www.garlic.com/~lynn/2014m.html#55 Piketty Shreds Marginal Productivity as Neoclassical Justification for Supersized Pay
https://www.garlic.com/~lynn/2014f.html#14 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2012o.html#73 These Two Charts Show How The Priorities Of US Companies Have Gotten Screwed Up

--
virtualization experience starting Jan1968, online at home since Mar1970

Capitalism: A Six-Part Series

From: Lynn Wheeler <lynn@garlic.com>
Subject: Capitalism: A Six-Part Series
Date: 05 Apr, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#61 Capitalism: A Six-Part Series

note that John Foster Dulles played a major role rebuilding the German
economy, industry, and military from the 20s up through the early 40s
https://www.amazon.com/Brothers-Foster-Dulles-Allen-Secret-ebook/dp/B00BY5QX1K/
loc865-68:

In mid-1931 a consortium of American banks, eager to safeguard their
investments in Germany, persuaded the German government to accept a
loan of nearly $500 million to prevent default. Foster was their
agent. His ties to the German government tightened after Hitler took
power at the beginning of 1933 and appointed Foster's old friend
Hjalmar Schacht as minister of economics.

loc905-7:

Foster was stunned by his brother's suggestion that Sullivan &
Cromwell quit Germany. Many of his clients with interests there,
including not just banks but corporations like Standard Oil and
General Electric, wished Sullivan & Cromwell to remain active
regardless of political conditions.

loc938-40:

At least one other senior partner at Sullivan & Cromwell, Eustace
Seligman, was equally disturbed. In October 1939, six weeks after the
Nazi invasion of Poland, he took the extraordinary step of sending
Foster a formal memorandum disavowing what his old friend was saying
about Nazism

... snip ...

June1940, Germany had a victory celebration at the NYC Waldorf-Astoria
with major industrialists. Lots of them were there to hear how to do
business with the Nazis
https://www.amazon.com/Man-Called-Intrepid-Incredible-Narrative-ebook/dp/B00V9QVE5O/

In something of a replay of the Nazi celebration, after the war 5000
industrialists and corporations from across the US had a conference at
the Waldorf-Astoria, and in part because they had gotten such a bad
reputation for the depression and for supporting the Nazis, as part of
attempting to refurbish their horribly corrupt and venal image, they
approved a major propaganda campaign to equate Capitalism with
Christianity.
https://www.amazon.com/One-Nation-Under-God-Corporate-ebook/dp/B00PWX7R56/

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Retain and other online

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Retain and other online
Date: 05 Apr, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online

after the 23jun69 unbundling ... starting to charge for (application)
software, SE services, maint, etc. At the time SE training was as part
of a large group at the customer site ... but they couldn't figure out
how not to charge for trainee SEs at customers.

Eventually, HONE was started for branch office SEs to practice online
with guest operating systems running in CP67 virtual machines.

Besides doing CP67 (and a bunch of other things), CSC (Cambridge
Scientific Center) also ported APL\360 to CP67/CMS as CMS\APL
... redoing storage management for multi-mbyte demand page workspaces
(instead of 16kbyte swapped) and adding APIs for system services (like
file i/o) enabling real world apps.

HONE then started using CMS\APL for online sales and marketing support
AIDS ... which comes to dominate all HONE activity ... and online CP67
guest operating system practice just dwindles away.

23jun69 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
hone posts
https://www.garlic.com/~lynn/subtopic.html#hone
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downturn, Downfall, Breakup

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downturn, Downfall, Breakup
Date: 05 Apr, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#57 IBM Downturn, Downfall, Breakup
https://www.garlic.com/~lynn/2025b.html#58 IBM Downturn, Downfall, Breakup

I was blamed for online computer conferencing on the (non-SNA)
internal network in the late 70s and early 80s. It really took off
spring of 1981 when I distributed a trip report of a visit to Jim Gray
at Tandem. There were only about 300 that directly participated but
claims that 25,000 were reading. Folklore was that when the corporate
executive committee was told, 5 of 6 wanted to fire me. Some of the
results were officially supported software and sanctioned/moderated
discussion groups; also a researcher was paid to sit in the back of my
office for 9 months taking notes on how I communicated, face-to-face,
telephone, etc., and got copies of all my incoming/outgoing email and
logs of all instant messages (results were IBM research reports,
papers, conference talks, books, and a Stanford PhD, joint with
language and computer AI, Winograd was advisor on the computer
side). And from IBM
Jargon (copy here)
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

... snip ...

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Supercomputer Datacenters

From: Lynn Wheeler <lynn@garlic.com>
Subject: Supercomputer Datacenters
Date: 06 Apr, 2025
Blog: Facebook

LANL (national lab) was behind standardization of the Cray channel as HIPPI
https://en.wikipedia.org/wiki/HIPPI
Then there was some work on serial HIPPI (still 800mbit/sec)

Then in 1988, the IBM branch office asks if I could help LLNL (national
lab) get some serial stuff they were working with standardized, which
quickly becomes FCS (fibre channel standard, including some stuff I had
done in 1980; initially 1gbit/sec, full-duplex, aggregate
200mbyte/sec). By 1992, there were FCS microchannel cards for RS/6000
(and IBM disk division AdStaR w/PACHEKO-C FCS in disk arrays)
https://en.wikipedia.org/wiki/Fibre_Channel

Later POK releases some serial stuff they had been playing with for the
previous decade, with ES/9000 as ESCON (when it was already obsolete,
17mbyte/sec).

Quite a bit later, POK engineers become involved with FCS and define a
protocol that significantly reduces FCS throughput, eventually
released as FICON. The latest public benchmark I can find was z196
"Peak I/O" which got 2M IOPS using 104 FICON. About the same time an
FCS was announced for E5-2600 server blades claiming over a million
IOPS (two such native FCS having higher throughput than 104 FICON).
Note also IBM pubs recommend that SAPs (system assist processors that
do actual I/O) be kept to 70% (or 1.5M IOPS). Also no real CKD DASD
have been made for decades, all being emulated on industry standard
fixed block disks (with an extra layer of simulation).

mid-80s IBM 3090 HIPPI trivia: there were attempts to sell 3090s into
the compute intensive market ... but that required HIPPI I/O support
... and 3090 was stuck at 3mbyte/sec data-streaming channels. Some
engineers hack the 3090 expanded-store bus (extra memory where
synchronous instructions moved 4k pages between expanded memory and
processor memory) to attach HIPPI I/O devices ... and used the
synchronous instructions with special expanded-store addresses to
perform HIPPI I/O (sort of like PC "peek/poke").

FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

trivia: the UCSD supercomputer center was operated by General Atomics
... which was also marketing the LANL supercomputer archive system as
"DataTree" (and the LLNL supercomputer filesystem as "UniTree").
Congress was pushing national labs to commercialize advanced
technology making the US more competitive. NCAR also spun off their
filesystem as "Mesa Archival".

DataTree and UniTree: software for file and storage management
https://ieeexplore.ieee.org/document/113582

In 1988, I got the HA/6000 product, originally for the NYTimes to move their
newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that have VAXCluster
support in same source base with UNIX). Besides HA/CMP as cluster
supercomputer (later, cluster scale-up was transferred for announce as
IBM Supercomputer and we were told we couldn't work on anything with
more than four processors), we also had Unitree (LLNL LINCS) and
(NCAR) Mesa Archival porting to HA/CMP.

In the 80s, NCAR had started off with an IBM 4341 and NSC HYPERCHANNEL
... each NSC box had up to four 50mbit LAN interfaces; there were NSC
boxes for most mainframe channels and telco T1 and T3, as well as an
IBM mainframe channel emulator. NCAR had NSC mainframe
channel-interface boxes for the IBM 4341 (NSC A220) and their
supercomputers, and IBM channel emulator boxes (NSC A515) for
attaching IBM DASD controllers (and drives). The IBM DASD controllers
had two channel interfaces that attached directly to a 4341 channel
and to an NSC A515 emulated channel. Supercomputers would send the
4341 file record read/write requests (over HYPERCHANNEL). For a read,
the 4341 would first check that the file was staged to disk; if not,
it would do I/O to copy it from tape to disk. It would then download a
channel program into the appropriate NSC channel box (A515) and return
to the requesting supercomputer the information for it to directly
invoke the (A515 downloaded) channel program, resulting in direct I/O
transfer between the IBM DASD and the supercomputer ("NAS" network
attached storage, with "3rd party transfer"). In the spin-off to "Mesa
Archival", the 4341 function was being moved to HA/CMP.
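
Roughly: small control messages go to the 4341, bulk data moves
directly between the DASD and the supercomputer. A minimal sketch of
that control flow (Python used purely as pseudocode; the class and
method names are hypothetical, not the actual NCAR/NSC software):

  class A515Box:
      """Stands in for the NSC A515 channel emulator in front of the DASD."""
      def __init__(self):
          self.programs = {}
          self.next_id = 0

      def download_channel_program(self, extents):
          # the 4341 pre-loads a channel program describing DASD extents
          self.next_id += 1
          self.programs[self.next_id] = extents
          return self.next_id

      def invoke(self, handle):
          # invoked directly by the supercomputer: data flows
          # DASD -> HYPERCHANNEL -> supercomputer, bypassing the 4341
          return self.programs[handle]

  class FileServer4341:
      """The 4341: stages tape->disk and hands out channel-program
      handles, but never carries the file data itself."""
      def __init__(self, a515):
          self.a515 = a515
          self.staged = {}   # filename -> DASD extents already on disk
          self.tape = {"model.dat": ["cyl 100-140", "cyl 200-210"]}

      def read_request(self, filename):
          if filename not in self.staged:          # stage from tape if needed
              self.staged[filename] = self.tape[filename]
          return self.a515.download_channel_program(self.staged[filename])

  # supercomputer side: one small control exchange, then direct transfer
  a515 = A515Box()
  server = FileServer4341(a515)
  handle = server.read_request("model.dat")   # request over HYPERCHANNEL
  print(a515.invoke(handle))                  # "3rd party" data transfer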

other trivia: the IBM communication group was trying to block IBM
mainframe TCP/IP support. When that failed, they changed tactics; since
they had corporate strategic ownership of everything that crossed
datacenter walls, it had to be released through them. What shipped got
44kbytes/sec aggregate using nearly a whole 3090 processor. I then did
support for RFC1044 and in some tuning tests at Cray Research, between
Cray and 4341, got 4341 sustained channel throughput using only a
modest amount of 4341 CPU (something like 500 times improvement in
bytes moved per instruction executed).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3101 Glass Teletype and "Block Mode"

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3101 Glass Teletype and "Block Mode"
Date: 07 Apr, 2025
Blog: Facebook

Within a year of taking the two credit hr intro to fortran/computers,
a 360/67 was installed at the univ, replacing the 709/1401 for
tss/360, which never came to fruition, and I was hired fulltime
responsible for os/360. Then CSC came out to install CP67 (3rd
installation after CSC itself and MIT Lincoln Labs) and I mostly
played with it during my dedicated 48hr weekend window. Initially I
rewrote a lot of CP67 to improve an OS/360 test jobstream (running
322secs on the real machine) from 856secs to 435secs (cut CP67 CPU
from 534secs to 113secs). CP67 originally came with 2741 and 1052
terminal support (and "magic" terminal type identification), and since
the univ had some tty/ascii, I added tty terminal support (integrated
with the magic terminal type support). I then wanted to have a single
dial-in phone number for all terminal types, which only worked if the
baud rates were all the same ... IBM had taken a short-cut and
hardwired the baud rate for each port. The univ then kicks off a clone
controller project, building a channel interface board for an
Interdata/3 programmed to simulate the IBM telecommunication
controller with the addition that it did dynamic baud rate (see the
sketch below). This was then upgraded with an Interdata/4 for the
channel interface and a cluster of Interdata/3s for the port
interfaces (and four of us are written up as responsible for some part
of the clone controller business)
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
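
A minimal sketch of the dynamic baud-rate idea (not the actual
Interdata microcode; the rates and timings here are illustrative
assumptions): the classic approach is to time the start bit of the
first character a dialed-in terminal sends and map that to the nearest
standard line speed:

  # hypothetical rates of interest for a single dial-in pool of that era
  STANDARD_BAUDS = [110, 134.5, 150, 300, 1200]

  def detect_baud(start_bit_seconds):
      """Given the measured start-bit width in seconds, return the
      closest standard baud rate (bit time = 1/baud)."""
      measured = 1.0 / start_bit_seconds
      return min(STANDARD_BAUDS, key=lambda b: abs(b - measured))

  # a 134.5-baud terminal has ~7.4ms bit times, a 110-baud TTY ~9.1ms,
  # easily distinguished by a hardware timer on the line
  print(detect_baud(0.0074))   # -> 134.5
  print(detect_baud(0.0091))   # -> 110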

After graduating and joining IBM, I got a 2741 at home (Mar1970),
eventually replaced with an (ASCII) 300baud CDI Miniterm, then late
70s, a 1200baud IBM Topaz/3101 (initially a mod1, simple glass
teletype). I track down a contact in Fujisawa for the "mod2" and they
sent ROMs to upgrade mod1 to mod2 (which included "block mode" support).

There was (VM370) ascii->3270 simulation for the IBM home terminal
program that would leverage 3101 "block mode" ... later upgraded to
"PCTERM" when the IBM/PC was released (supporting string compression
and string caching).

ASCII trivia: originally, 360s were supposed to be ASCII machines,
however the unit record gear wasn't ready, so they initially
("temporarily") shipped as EBCDIC with BCD gear ... the biggest
computer goof:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

some topaz/3101 "block mode" posts
https://www.garlic.com/~lynn/2024g.html#84 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2023f.html#91 Vintage 3101
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2022d.html#28 Remote Work
https://www.garlic.com/~lynn/2021i.html#68 IBM ITPS
https://www.garlic.com/~lynn/2017h.html#12 What is missing?
https://www.garlic.com/~lynn/2014i.html#11 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014h.html#77 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014e.html#49 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2013k.html#16 Unbuffered glass TTYs?
https://www.garlic.com/~lynn/2012m.html#25 Singer Cartons of Punch Cards
https://www.garlic.com/~lynn/2010b.html#27 Happy DEC-10 Day
https://www.garlic.com/~lynn/2008m.html#37 Baudot code direct to computers?
https://www.garlic.com/~lynn/2006y.html#31 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2006y.html#24 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2006y.html#4 Why so little parallelism?
https://www.garlic.com/~lynn/2006y.html#0 Why so little parallelism?
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2004e.html#8 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2003n.html#7 3270 terminal keyboard??
https://www.garlic.com/~lynn/2003c.html#35 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#34 difference between itanium and alpha
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#1 ASR33/35 Controls
https://www.garlic.com/~lynn/2001b.html#12 Now early Arpanet security
https://www.garlic.com/~lynn/2000g.html#17 IBM's mess (was: Re: What the hell is an MSX?)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23Jun1969 Unbundling and HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23Jun1969 Unbundling and HONE
Date: 07 Apr, 2025
Blog: Facebook

The 23jun1969 IBM unbundling announcement started charging for
(application) software, SE services, maint., etc. SE training used to
include being part of a large group at the customer's site. However,
with unbundling they couldn't figure out how not to charge for the
onsite trainee SEs. The solution was branch office SEs practicing with
online CP67 "HONE" datacenters running guest operating systems in
virtual machines.

After I graduate and join IBM Cambridge Science Center, one of my
hobbies was enhanced production operating systems for internal
datacenters, and HONE was one of my 1st (and long time)
customers. Cambridge Science Center then ported APL\360 to CP67/CMS as
CMS\APL, which included redoing storage management for demand paged
multi-mbyte workspaces (instead of 16kbyte swapped) and APIs for system
services (like file I/O, enabling lots of real world applications).
HONE then starts offering APL-based sales&marketing support
applications, which came to dominate all HONE activity (and guest
operating system practice just withered away).

Then with the decision to add virtual memory to all 370s, there was
also a decision to do VM370 and some of the science center splits off
and takes over the Boston Programming Center on the 3rd flr ... and in
the morph of CP67->VM370 lots of stuff was simplified or dropped. I
then start adding a bunch of CP67 into VM370R2 for my internal CSC/VM and US
HONE datacenters move to 370. The Palo Alto Science Center does
APL\CMS for VM370/CMS as well as the APL microcode assist for 370/145
(claiming APL throughput of 370/168) and US HONE consolidates all
their datacenters across the back parking lot from PASC (trivia: when
facebook 1st moves into silicon valley, it is in a new bldg built next
door to the former consolidated US HONE datacenter).

US HONE systems are enhanced to the largest loosely-coupled,
single-system image, shared DASD with fall-over and load-balancing
across the
complex and I upgrade CSC/VM to VM370R3-base with addition of CP67
multiprocessor support (initially for HONE so they can add a 2nd CPU
to all systems). After the California earthquake, the US HONE
datacenter is replicated in Dallas and then a 3rd in Boulder (while
other HONE clones were cropping up all over the world). Nearly all
HONE (US and world-wide) offerings were APL-based (by far the largest
use across the world).

Trivia: PASC had done enhanced code optimization, initially for the
internal FORTQ ... eventually released as FORTRAN HX. For some extremely compute
intensive HONE APL-based calculations, a few of the APL applications
were modified to have APL invoke FORTQ versions of the code.

Other trivia: APL purists were criticizing the CMS\APL API for system
services and eventually counter with shared variables ... and APL\CMS
morphs into APLSV, followed by VS/APL.

4341 was better than twice the MIPS of a 360/65 ... and extraordinary
price/performance. I was using an early engineering VM/4341 when a
branch office hears about it in jan1979 (well before 1st customer
ship) and cons me into doing a benchmark for a national lab that was
looking at getting 70 for a compute farm (leading edge of the coming cluster
supercomputing tsunami).

In the 80s, VM/4300s were selling into the same mid-range market as
DEC VAX/VMS and in similar numbers for single/small unit orders. The big
difference was large corporations ordering hundreds of VM/4300s at a
time for distributing out in departmental areas (leading edge of the
coming distributed computing tsunami). Inside IBM, so many conference
rooms were being converted to VM/4300 rooms, that conference rooms
became scarce.

23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23Jun1969 Unbundling and HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23Jun1969 Unbundling and HONE
Date: 07 Apr, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#67 IBM 23Jun1969 Unbundling and HONE

by mid-70s, most mainframe orders were required to be 1st run through
HONE configurators.

trivia: a co-worker at the science center had done an APL-based
analytical system model ... which was made available on HONE as the
Performance Predictor (the branch enters customer configuration and
workload information and then can ask what-if questions about the
effect of
changing configuration and/or workloads). The consolidated US HONE
single-system-image uses a modified version of performance predictor
to make load balancing decisions.

I also use it for 2000 automated benchmarks (taking 3 months elapsed
time), in preparation for my initial VM370 "charged for" kernel add-on
release to customers (i.e. during unbundling, the case was made that
kernel software was still free; then with the rise of 370 clone makers
during FS and after FS implodes, the decision was made to transition
to charging for all kernel software, and pieces of my internal CSC/VM
were selected as the initial guinea pig; after the transition to
charging for all kernel software completed, the OCO-wars started).

The first 1000 benchmarks have manually specified configuration and
workload profiles that are uniformly distributed across known
observations of real live systems (with 100 extreme combinations
outside real systems). Before each benchmark, the modified
performance predictor predicts benchmark performance and then
compares the results with its prediction (and saves all values). The
2nd 1000 automated benchmarks have configuration and workload profiles
specified by the modified performance predictor (searching for
possible problem combinations).
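
A minimal sketch of that two-phase scheme (a Python toy with
hypothetical names and a stand-in model, not the actual APL
Performance Predictor): predict each benchmark, run it, compare and
save, then let the model pick the second round of profiles from where
prediction and measurement diverged most:

  import random

  def predictor(profile):
      # stand-in for the analytical model: predicted relative throughput
      return 1.0 / (1.0 + profile["users"] / profile["memory_mb"])

  def run_benchmark(profile):
      # stand-in for actually running a measured workload
      return predictor(profile) * random.uniform(0.85, 1.15)

  def run_phase(profiles):
      # predict, run, compare, and save all values for each benchmark
      results = []
      for p in profiles:
          results.append((p, predictor(p), run_benchmark(p)))
      return results

  def pick_phase2(results, count):
      # let the model "search for problem combinations": here, simply
      # perturb the profiles where prediction and measurement diverged most
      worst = sorted(results, key=lambda r: abs(r[1] - r[2]), reverse=True)
      candidates = []
      for p, _, _ in worst[: max(1, count // 10)]:
          for _ in range(10):
              q = dict(p)
              q["users"] = max(1, int(q["users"] * random.uniform(0.5, 2.0)))
              candidates.append(q)
      return candidates[:count]

  # phase 1: profiles spread across observed real systems (plus extremes)
  profiles = [{"users": u, "memory_mb": m}
              for u in (10, 40, 80, 320) for m in (256, 512, 1024)]
  first = run_phase(profiles)
  # phase 2: profiles chosen from the phase-1 discrepancies
  second = run_phase(pick_phase2(first, 20))
  print(len(first), len(second))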

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark

some recent posts mentioning automated benchmarking, performance
predictor, hone
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing
https://www.garlic.com/~lynn/2024c.html#6 Testing
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2023g.html#43 Wheeler Scheduler
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#33 Copyright Software
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc

--
virtualization experience starting Jan1968, online at home since Mar1970

Amdahl Trivia

From: Lynn Wheeler <lynn@garlic.com>
Subject: Amdahl Trivia
Date: 07 Apr, 2025
Blog: Facebook

Amdahl trivia: in the 60s Amdahl had won the battle to make ACS 360
compatible ... then it was killed (folklore is that they were afraid
that it would advance the state of the art too fast and IBM might
lose control of the market) and Amdahl leaves IBM ... this ref has
some ACS/360 features that show up more than 20yrs later with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html

Then FS, completely different than 370 and going to completely
replace it (and internal politics was killing off 370 efforts; the
lack of new 370s during the period gave clone 370 makers their market
foothold, including Amdahl). When FS implodes there is a mad rush to
get stuff back into the 370 product pipelines, including kicking off
the quick&dirty 3033 & 3081 in parallel. 3081 was some redone FS, a
huge number of circuits for the performance (some claim it was enough
circuits to build 16 168s, a major motivation for TCMs, to cram all
those circuits into a smaller physical area).
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

The initial offering was the two-processor 3081D, whose aggregate MIPS
was less than an Amdahl single processor; they then double the 3081
processor cache sizes for the 3081K, bringing it up to about the same
as an Amdahl single processor (however MVS docs said 2-CPU only got
1.2-1.5 times the throughput of 1-CPU because of MVS two-processor
overhead ... and 3084 throughput was significantly less than an Amdahl
two processor).

Also in the wake of FS implosion, I get asked to help with 16-CPU 370
SMP and we con the 3033 processor engineers into helping in their
spare time (a lot more interesting than remapping 168 logic to 20%
faster chips). Everybody thought it was great until somebody tells the
head of POK that it could be decades before the POK favorite son
operating system ("MVS") had (effective) 16-processor support
(i.e. because its 2-cpu overhead was so great); he then invites some
of us to never visit POK again and directs the 3033 processor
engineers, "heads down and no distractions". Note POK doesn't ship a
16-CPU SMP until after the turn of the century. Contributing was that
the head of POK was in the process of convincing corporate to kill the
VM370 product, shut down the development group and move all the people
to POK for MVS/XA (Endicott eventually manages to save the VM370
product mission for the mid-range, but had to recreate a VM370
development group from scratch).

trivia: In the morph of CP67->VM370 they simplify and/or drop lots
of stuff (including CP67 SMP support). I start moving CP67 stuff back
into VM370R2 for my internal CSC/VM. Then for VM370R3-based CSC/VM, I
add multiprocessor support back in, initially for US HONE so they can
add a 2nd processor to each system (which would get twice the
throughput of the 1-CPU system).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Kernel Histories

From: Lynn Wheeler <lynn@garlic.com>
Subject: Kernel Histories
Date: 08 Apr, 2025
Blog: Facebook

The joke was that some Kingston (OS/360) MFT people went to Boca to
reinvent MFT for the Series/1 ("RPS") ... later an IBM San Jose
Research physics summer student did Series/1 EDX

trivia: also before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP67/CMS at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

lots of folklore on USENET AFC (alt.folklore.computers) that the
person responsible for DEC VMS went to m'soft and was responsible for NT.

Some of the MIT CTSS/7094 people went to the 5th floor for MULTICS
(some of the Bell Labs people return home and did a simplified MULTICS
as "UNIX"), others went to the IBM science center on the 4th flr,
modified a 360/40 with virtual memory and did CP40/CMS. It morphs into
CP67/CMS when the 360/67, standard with virtual memory, became
available (precursor to vm370/cms).

IBM and DEC both contributed $25M to MIT funding Project Athena
... then IBM funded $50M to CMU that did MACH (unix work-alike),
Camelot (IBM underwrites the Camelot spinoff as Transarc and then buys
Transarc outright), Andrew, etc. CMU MACH was used for NeXT and then
brought over to Apple (with UCB BSD unix-alike)
https://www.operating-system.org/betriebssystem/_english/bs-darwin.htm

By the related UNIX design Mac OS X profits from the protected memory
area and established preemptive multitasking. The Kernel consists of 5
components. Includet are the Mach Mikrokernel with the BSD subsystem,
file system, network ability and the I/O Kit

... snip ...

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23Jun1969 Unbundling and HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23Jun1969 Unbundling and HONE
Date: 09 Apr, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025b.html#67 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#68 IBM 23Jun1969 Unbundling and HONE

in the morph of CP67->VM370, they simplify and/or drop a lot of
features (including multiprocessor support); then for the VM370R3
version of my internal CSC/VM, I add multiprocessor support back in,
initially for HONE so they can add a 2nd processor to every system
(largest APL operation in the world)

note: lots of schedulers made decisions based on the most recent
events of some sort ... as an undergraduate in the 60s, I did dynamic
adaptive resource management for CP67 (which IBM released as part of
the standard scheduler; universities used to refer to it as the
"wheeler scheduler"). It was one of the things simplified/dropped in
the morph from CP67->VM370 ... but I put it back in for internal
CSC/VM (and ran it at HONE). Turns out the APL hack to lock up other
systems didn't work with mine ... and it wasn't just an APL-only hack
... it was possible to do it with a CMS EXEC script.
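
A minimal sketch of the general dynamic adaptive / fair-share idea
(hypothetical names and a toy model, not the actual CP67 or CSC/VM
scheduler code): dispatch is driven by decayed recent consumption
relative to each user's share, rather than by whatever single event
happened most recently:

  DECAY = 0.9   # how quickly old consumption is forgotten each interval

  class User:
      def __init__(self, name, share):
          self.name = name
          self.share = share        # entitled fraction of the machine
          self.consumed = 0.0       # decayed recent CPU consumption

      def charge(self, cpu_seconds):
          self.consumed += cpu_seconds

      def end_interval(self):
          self.consumed *= DECAY    # age the history at interval end

  def next_to_dispatch(users):
      # run whoever is furthest below their entitled share
      return min(users, key=lambda u: u.consumed / u.share)

  users = [User("HONE1", 0.5), User("CMSUSER", 0.25), User("APLHOG", 0.25)]
  users[2].charge(10.0)                 # a looping workspace burns CPU ...
  print(next_to_dispatch(users).name)   # ... so someone else runs: HONE1
  for u in users:
      u.end_interval()                  # decay history before next decision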

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67l, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

--
virtualization experience starting Jan1968, online at home since Mar1970



--
previous, index - home