List of Archived Posts

2025 Newsgroup Postings (03/01 - 05/10)

Financial Engineering
Large Datacenters
Why VAX Was the Ultimate CISC and Not RISC
Clone 370 System Makers
Why VAX Was the Ultimate CISC and Not RISC
RDBMS, SQL/DS, DB2, HA/CMP
2301 Fixed-Head Drum
Why VAX Was the Ultimate CISC and Not RISC
The joy of FORTRAN
HSDT
IBM Token-Ring
IBM Token-Ring
IBM 3880, 3380, Data-streaming
Learson Tries To Save Watson IBM
IBM Token-Ring
IBM Token-Ring
IBM VM/CMS Mainframe
IBM VM/CMS Mainframe
IBM VM/CMS Mainframe
IBM VM/CMS Mainframe
IBM San Jose and Santa Teresa Lab
IBM San Jose and Santa Teresa Lab
IBM San Jose and Santa Teresa Lab
Forget About Cloud Computing. On-Premises Is All the Rage Again
Forget About Cloud Computing. On-Premises Is All the Rage Again
IBM 3880, 3380, Data-streaming
IBM 3880, 3380, Data-streaming
IBM 3880, 3380, Data-streaming
IBM WatchPad
Learson Tries To Save Watson IBM
Some Career Highlights
Some Career Highlights
Forget About Cloud Computing. On-Premises Is All the Rage Again
3081, 370/XA, MVS/XA
IBM 370/125
3081, 370/XA, MVS/XA
FAA ATC, The Brawl in IBM 1964
FAA ATC, The Brawl in IBM 1964
IBM Computers in the 60s
FAA ATC, The Brawl in IBM 1964
IBM APPN
AIM, Apple, IBM, Motorola
IBM 70s & 80s
IBM 70s & 80s
IBM 70s & 80s
Business Planning
POK High-End and Endicott Mid-range
IBM Datacenters
IBM Datacenters
POK High-End and Endicott Mid-range
IBM 3880, 3380, Data-streaming
POK High-End and Endicott Mid-range
Mainframe Modernization
IBM Datacenters
Planet Mainframe
POK High-End and Endicott Mid-range
POK High-End and Endicott Mid-range
IBM Downturn, Downfall, Breakup
IBM Downturn, Downfall, Breakup
IBM Retain and other online
IBM Retain and other online
Capitalism: A Six-Part Series
Capitalism: A Six-Part Series
IBM Retain and other online
IBM Downturn, Downfall, Breakup
Supercomputer Datacenters
IBM 3101 Glass Teletype and "Block Mode"
IBM 23Jun1969 Unbundling and HONE
IBM 23Jun1969 Unbundling and HONE
Amdahl Trivia
Kernel Histories
IBM 23Jun1969 Unbundling and HONE
Cluster Supercomputing
Cluster Supercomputing
Cluster Supercomputing
Armonk, IBM Headquarters
Corporate Network
Corporate Network
IBM Downturn
IBM 3081
IBM 3081
IBM 3081
IBM 3081
Mainframe System Meter
IBM 3081
An Ars Technica history of the Internet, part 1
Packet network dean to retire
Packet network dean to retire
Technology Competitiveness
Packet network dean to retire
IBM AdStar
IBM AdStar
IBM AdStar
IBM AdStar
IBM AdStar
MVT to VS2/SVS
OSI/GOSIP and TCP/IP
Open Networking with OSI
Heathkit
Heathkit
IBM Future System, 801/RISC, S/38, HA/CMP
IBM Financial Engineering (again)
IBM AdStar
IBM Downturn, Downfall, Breakup
IBM S/88
IBM S/88
IBM 23Jun1969 Unbundling and HONE
IBM 23Jun1969 Unbundling and HONE
System Throughput and Availability
System Throughput and Availability
System Throughput and Availability
System Throughput and Availability II
System Throughput and Availability II
CERN WWW, Browsers and Internet
ROLM, HSDT
SHARE, MVT, MVS, TSO
IBM Downturn, Downfall, Breakup
SHARE, MVT, MVS, TSO
IBM 168 And Other History
Too Much Bombing, Not Enough Brains
HSDT, SNA, VTAM, NCP
MVT to VS2/SVS
VM370/CMS and MVS/TSO
VM370/CMS and MVS/TSO
MOSAIC
IBM z17

Financial Engineering

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Financial Engineering
Date: 01 Mar, 2025
Blog: Facebook
The last product we did was HA/6000, approved by Nick Donofrio in 1988 (before RS/6000 was announced), originally for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with Unix. The S/88 product administrator then starts taking us around to their customers and also has me do a section for the corporate continuous availability strategy document ... it gets pulled when both Rochester/AS400 and POK (high-end mainframe) complain they couldn't meet the requirements.

Early Jan1992, have a meeting with the Oracle CEO, where IBM/AWD Hester tells Ellison we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).

1992, IBM has one of the largest losses in the history of US companies and was in the process of being re-orged into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.

Not long after leaving IBM, I was brought in as a consultant to a small client/server startup. Two of the former Oracle people (that we had worked with on HA/CMP cluster scale-up) were there, responsible for something they called "commerce server", and they wanted to do payment transactions. The startup had also invented this technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce" (or ecommerce).

I had complete responsibility for everything between "web servers" and the gateways to the financial industry payment networks. Payment network trouble desks had a 5min initial problem diagnosis requirement ... all circuit based. I had to do a lot of procedures, documentation and software to bring the packet-based internet up to that level. I then did a talk (based on the ecommerce work), "Why Internet Wasn't Business Critical Dataprocessing" ... which Postel (Internet standards editor) sponsored at ISI/USC.

Stockman and IBM financial engineering company:
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall Street momo traders. It was actually a stock buyback contraption on steroids. During the five years ending in fiscal 2011, the company spent a staggering $67 billion repurchasing its own shares, a figure that was equal to 100 percent of its net income.

pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82 billion, or 122 percent, of net income over this five-year period. Likewise, during the last five years IBM spent less on capital investment than its depreciation and amortization charges, and also shrank its constant dollar spending for research and development by nearly 2 percent annually.
... snip ...

(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases have come to a total of over $159 billion since 2000.

(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a little?) shift focus needing it for redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
ecommerce gateways
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

Large Datacenters

From: Lynn Wheeler <lynn@garlic.com>
Subject: Large Datacenters
Date: 01 Mar, 2025
Blog: Facebook
I had taken a 2credit-hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in 360 assembler for the 360/30. The univ was getting a 360/67 for tss/360 to replace the 709/1401 and got a 360/30 temporarily (replacing the 1401) pending availability of the 360/67. The univ shut down the datacenter on weekends and I got the whole place dedicated (although 48hrs w/o sleep made Mondays hard). I was given a bunch of hardware & software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... and within a few weeks had a 2000 card assembler program. The 360/67 arrives within a year of taking the intro class and I was hired fulltime responsible for os/360 (tss/360 never came to production).

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world, with 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff). Then when I graduate, instead of staying with the CFO, I join the IBM science center.

I was introduced to John Boyd in the early 80s and would sponsor his briefings at IBM. He had lots of stories, including being very vocal that the electronics across the trail wouldn't work. Possibly as punishment he was put in command of "spook base" (Boyd would say it had the largest air conditioned bldg in that part of the world) about the same time I'm at Boeing.
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
Access to the environmentally controlled building was afforded via the main security lobby that also doubled as an airlock entrance and changing-room, where twelve inch-square pidgeon-hole bins stored individually name-labeled white KEDS sneakers for all TFA personnel. As with any comparable data processing facility of that era, positive pressurization was necessary to prevent contamination and corrosion of sensitive electro-mechanical data processing equipment. Reel-to-reel tape drives, removable hard-disk drives, storage vaults, punch-card readers, and inumerable relays in 1960's-era computers made for high-maintainence systems. Paper dust and chaff from fan-fold printers and the teletypes in the communications vault produced a lot of contamination. The super-fine red clay dust and humidity of northeast Thailand made it even more important to maintain a well-controlled and clean working environment.

Maintenance of air-conditioning filters and chiller pumps was always a high-priority for the facility Central Plant, but because of the 24-hour nature of operations, some important systems were run to failure rather than taken off-line to meet scheduled preventative maintenance requirements. For security reasons, only off-duty TFA personnel of rank E-5 and above were allowed to perform the housekeeping in the facility, where they constantly mopped floors and cleaned the consoles and work areas. Contract civilian IBM computer maintenance staff were constantly accessing the computer sub-floor area for equipment maintenance or cable routing, with the numerous systems upgrades, and the underfloor plenum areas remained much cleaner than the average data processing facility. Poisonous snakes still found a way in, causing some excitement, and staff were occasionally reprimanded for shooting rubber bands at the flies during the moments of boredom that is every soldier's fate. Consuming beverages, food or smoking was not allowed on the computer floors, but only in the break area outside. Staff seldom left the compound for lunch. Most either ate C-rations, boxed lunches assembled and delivered from the base chow hall, or sandwiches and sodas purchased from a small snack bar installed in later years.

... snip ...

Boyd biography says "spook base" was a $2.5B "windfall" for IBM (ten times Renton).

In 89/90 the Commandant of Marine Corps leverages Boyd for make-over of the corps (at a time when IBM was desperately in need of make-over).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
1992 IBM has one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bells" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.

Boyd passes in 1997; the USAF had pretty much disowned him, it was the Marines at Arlington, and his effects go to Quantico. The 89/90 commandant continued to sponsor regular Boyd-themed conferences at Marine Corps Univ. In one, the (former) commandant wanders in after lunch and speaks for two hrs (totally throwing the schedule off, but nobody complains). I'm in the back corner of the room and when he is done, he makes a beeline straight for me (and all I could think of was that I had been set up by Marines I've offended in the past, including the former head of the DaNang datacenter and later Quantico).

IBM Cambridge Science Center
https://www.garlic.com/~lynn/subtopic.html#545tech
Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

some recent 709/1401, MPIO, 360/67, univ, Boeing CFO, Renton posts
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360

--
virtualization experience starting Jan1968, online at home since Mar1970

Why VAX Was the Ultimate CISC and Not RISC

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why VAX Was the Ultimate CISC and Not RISC
Newsgroups: comp.arch
Date: Sat, 01 Mar 2025 18:29:50 -1000
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
IBM tried to commercialize it in the ROMP in the IBM RT PC; Wikipedia says: "The architectural work on the ROMP began in late spring of 1977, as a spin-off of IBM Research's 801 RISC processor ... The first examples became available in 1981, and it was first used commercially in the IBM RT PC announced in January 1986. ... The delay between the completion of the ROMP design, and introduction of the RT PC was caused by overly ambitious software plans for the RT PC and its operating system (OS)." And IBM then designed a new RISC, the RS/6000, which was released in 1990.

ROMP was originally for a DISPLAYWRITER follow-on ... running the CP.r operating system and the PL.8 programming language. ROMP was a minimal 801 and didn't have supervisor/problem mode ... at the time the claim was that PL.8 would only generate correct code and CP.r would only load/execute correct programs. They claimed 40bit addressing ... 32bit addresses ... but the top four bits selected one of 16 "segment registers" that contained 12bit segment-identifiers ... aka 28bit segment displacement plus 12bit segment-id (40bits) ... and any inline code could change a segment register value as easily as it could load any general register.
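
To make that concrete, here is a minimal C sketch of the addressing described above (illustrative only, not ROMP microcode or CP.r code; register contents and names are made up):

/* Minimal sketch of the ROMP addressing described above: a 32-bit effective
 * address whose top 4 bits select one of 16 segment registers, each holding
 * a 12-bit segment-id; the id plus the remaining 28-bit displacement gives
 * the "40-bit" virtual address.  All values here are hypothetical. */
#include <stdint.h>
#include <stdio.h>

static uint16_t seg_regs[16];            /* 12-bit segment-ids, one per register */

static uint64_t romp_virtual(uint32_t ea)
{
    uint32_t segreg = ea >> 28;          /* top 4 bits pick a segment register   */
    uint32_t disp   = ea & 0x0FFFFFFF;   /* low 28 bits are the displacement     */
    uint64_t segid  = seg_regs[segreg] & 0x0FFF;   /* 12-bit segment-id          */
    return (segid << 28) | disp;         /* 12 + 28 = 40-bit virtual address     */
}

int main(void)
{
    seg_regs[3] = 0x5A5;                 /* hypothetical segment-id in register 3 */
    printf("%010llx\n", (unsigned long long)romp_virtual(0x31234567u));
    return 0;
}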

When the DISPLAYWRITER follow-on was canceled, they pivoted to the UNIX workstation market and got the company that had done the AT&T unix port to the IBM/PC for PC/IX ... to do AIX. Now ROMP needed supervisor/problem mode, and inline code could no longer change segment register values ... it needed a supervisor call.

Folklore is they also had 200 PL.8 programmers and needed something for them to do, so they gen'ed an abstract virtual machine system ("VRM", implemented in PL.8) and had the AIX port done to the abstract virtual machine definition (instead of the real hardware) ... claiming that the combined effort would be less (total effort) than having the outside company do the AIX port to the real hardware (also putting in a lot of IBM SNA communication support).

The IBM Palo Alto group had been working on a UCB BSD port to 370, but was redirected to do it instead to the bare ROMP hardware ... doing it with enormously fewer resources than the VRM+AIX+SNA effort.

The move to RS/6000 & RIOS (large multi-chip) doubled the 12bit segment-id to a 24bit segment-id (some left-over description talked about it being 52bit addressing), eliminated the VRM ... and added in some amount of BSDisms.

AWD had done their own cards for the PC/RT (16bit AT) bus, including a 4mbit token-ring card. Then for the RS/6000 microchannel, AWD was told they couldn't do their own cards, but had to use PS2 microchannel cards. The communication group was fiercely fighting off client/server and distributed computing and had seriously performance knee-capped the PS2 cards, including the ($800) 16mbit token-ring card (which had lower card throughput than the PC/RT 4mbit TR card). There was a joke that a PC/RT 4mbit TR server would have higher throughput than a RS/6000 16mbit TR server. There was also a joke that the RS6000/730 with VMEbus was a way to work around corporate politics and be able to install high-performance workstation cards.

We got the HA/6000 project in 1988 (approved by Nick Donofrio), originally for NYTimes to move their newspaper system off VAXCluster to RS/6000. I rename it HA/CMP.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with Unix. The S/88 product administrator then starts taking us around to their customers and also has me do a section for the corporate continuous availability strategy document ... it gets pulled when both Rochester/AS400 and POK (high-end mainframe) complain they couldn't meet the requirements.

Early Jan1992, have a meeting with the Oracle CEO, where IBM/AWD Hester tells Ellison we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later). Contributing was the mainframe DB2 DBMS group complaining that if we were allowed to continue, we would be at least five years ahead of them.

Neither ROMP nor RIOS supported bus/cache consistency for multiprocessor operation. The executive we reported to went over to head up ("AIM" - Apple, IBM, Motorola) Somerset for single chip 801/risc ... which also adopts the Motorola 88k bus, enabling multiprocessor configurations. He later leaves Somerset for president of (SGI-owned) MIPS.

trivia: I also had the HSDT project (started in early 80s), T1 and faster computer links, both terrestrial and satellite ... which included a custom-designed TDMA satellite system done on the other side of the Pacific ... and put in a 3-node system: two 4.5M dishes, one in San Jose and one at Yorktown Research (hdqtrs, east coast), and a 7M dish in Austin (where much of the RIOS design was going on). San Jose also got an EVE, a superfast hardware VLSI logic simulator (scores of times faster than existing simulation) ... and it was claimed that Austin being able to use the EVE in San Jose helped bring RIOS in a year early.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

--
virtualization experience starting Jan1968, online at home since Mar1970

Clone 370 System Makers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Clone 370 System Makers
Date: 02 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#121 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#122 Clone 370 System Makers

Note: several years after the Amdahl incident (and being told goodbye to career, promotions, raises), I wrote an IBM "speakup" about being underpaid, with some supporting documents. Got a written reply from the head of HR that after a detailed review of my whole career, I was being paid exactly what I was supposed to be paid. I then made a copy of the original "speakup" and the head of HR's reply and wrote a cover stating that I had recently been asked to help interview some number of students that would shortly be graduating, for positions in a new group that I would be technically directing ... and found out that they were being offered starting salaries that were 1/3rd more than I was currently making. I never got a written reply, but a few weeks later I got a 33% raise (putting me on a level playing field with the new graduate hires). Several people then reminded me that "Business Ethics" was an oxymoron.

some past posts mentioning the speakup
https://www.garlic.com/~lynn/2023c.html#89 More Dataprocessing Career
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2022h.html#24 Inventing the Internet
https://www.garlic.com/~lynn/2022f.html#42 IBM Bureaucrats
https://www.garlic.com/~lynn/2022e.html#59 IBM CEO: Only 60% of office workers will ever return full-time
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#125 IBM Clone Controllers
https://www.garlic.com/~lynn/2021j.html#39 IBM Registered Confidential
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021i.html#82 IBM Downturn
https://www.garlic.com/~lynn/2021h.html#61 IBM Starting Salary
https://www.garlic.com/~lynn/2021e.html#15 IBM Internal Network
https://www.garlic.com/~lynn/2021d.html#86 Bizarre Career Events
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021b.html#12 IBM "811", 370/xa architecture
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2017.html#78 IBM Disk Engineering
https://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014h.html#81 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014c.html#65 IBM layoffs strike first in India; workers describe cuts as 'slaughter' and 'massive'
https://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
https://www.garlic.com/~lynn/2012k.html#28 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2011g.html#12 Clone Processors
https://www.garlic.com/~lynn/2011g.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
https://www.garlic.com/~lynn/2010c.html#82 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2009h.html#74 My Vintage Dream PC
https://www.garlic.com/~lynn/2007j.html#94 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#83 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#75 IBM Unionization
https://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer

--
virtualization experience starting Jan1968, online at home since Mar1970

Why VAX Was the Ultimate CISC and Not RISC

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why VAX Was the Ultimate CISC and Not RISC
Newsgroups: comp.arch
Date: Sun, 02 Mar 2025 09:03:53 -1000
Robert Swindells <rjs@fdy2.co.uk> writes:
You could look at the MIT Lisp Machine, it used basically the same chips as a VAX 11/780 but was a pipelined load/store architecture internally.

re:
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC

from long ago and far away:
Date: 79/07/11 11:00:03
To: wheeler

i heard a funny story: seems the MIT LISP machine people proposed that IBM furnish them with an 801 to be the engine for their prototype. B.O. Evans considered their request, and turned them down.. offered them an 8100 instead! (I hope they told him properly what they thought of that)

... snip ... top of post, old email index

... trivia: Evans had asked my wife to review/audit the 8100 (it had a really slow, anemic processor) and shortly later it was canceled ("decommitted").

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

misc past posts with same email
https://www.garlic.com/~lynn/2023e.html#84 memory speeds, Solving the Floating-Point Conundrum
https://www.garlic.com/~lynn/2006t.html#9 32 or even 64 registers for x86-64?
https://www.garlic.com/~lynn/2006o.html#45 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006c.html#3 Architectural support for programming languages
https://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics

--
virtualization experience starting Jan1968, online at home since Mar1970

RDBMS, SQL/DS, DB2, HA/CMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: RDBMS, SQL/DS, DB2, HA/CMP
Date: 02 Mar, 2025
Blog: Facebook
Vern Watts responsible for IMS
https://www.vcwatts.org/ibm_story.html

SQL/Relational started in 1974 at San Jose Research (on the main plant site) as System/R, implemented on VM370.

Some of the MIT CTSS/7094 people went to the 5th flr to do Multics; others went to the IBM Cambridge Science Center ("CSC") on the 4th flr, did virtual machines (initially CP40/CMS on a 360/40 with virtual memory hardware mods, morphing into CP67/CMS when the 360/67 standard with virtual memory becomes available), the internal network, invented GML in 1969, lots of online apps. When the decision was made to add virtual memory to all 370s, some of the CSC people split off and take over the IBM Boston Programming Center on the 3rd flr for the VM370 development group (and CP67/CMS morphs into VM370/CMS).

Multics releases the 1st relational RDBMS (non-SQL) in June 1976
https://www.mcjones.org/System_R/mrds.html

STL (since renamed SVL) didn't appear until 1977; it was originally going to be called Coyote after the convention of naming for the closest Post Office. However, that spring the San Francisco Coyote Organization demonstrated on the steps of the capitol and it was quickly decided to choose a different name (prior to the opening), eventually the closest cross street. Vern and IMS move up from the LA area to STL. It was the same year that I transferred from CSC to San Jose Research and would work on some of System/R with Jim Gray and Vera Watson. There was some amount of criticism from the IMS group about System/R, including that the index required lots more I/O and double the disk space.

First SQL/RDBMS ships, Oracle
https://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Oracle.html

STL was in the process of doing the next great DBMS, "EAGLE", and we were able to do technology transfer to Endicott (under the "radar", while the company was preoccupied with "EAGLE") for SQL/DS
https://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-SQL_DS.html

Later "EAGLE" implodes and there is a request for how fast could System/R be ported to MVS ... eventually released as DB2, originally for decision-support.
https://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-DB2.html

Trivia: Jim Gray departs for Tandem fall 1980, palming off some things on me. Our last product at IBM was HA/6000, starting 1988, originally for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with Unix. The S/88 product administrator then starts taking us around to their customers and also has me do a section for the corporate continuous availability strategy document ... it gets pulled when both Rochester/AS400 and POK (high-end mainframe) complain they couldn't meet the requirements.

Early Jan1992, have a meeting with the Oracle CEO, where IBM/AWD Hester tells Ellison we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

posts mentioning HA/CMP, S/88, Continuous Availability Strategy document:
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#119 Consumer and Commercial Computers
https://www.garlic.com/~lynn/2025.html#106 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#104 Mainframe dumps and debugging
https://www.garlic.com/~lynn/2025.html#89 Wang Terminals (Re: old pharts, Multics vs Unix)
https://www.garlic.com/~lynn/2025.html#57 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#24 IBM Mainframe Comparison
https://www.garlic.com/~lynn/2024g.html#82 IBM S/38
https://www.garlic.com/~lynn/2024g.html#5 IBM Transformational Change
https://www.garlic.com/~lynn/2024f.html#67 IBM "THINK"
https://www.garlic.com/~lynn/2024f.html#36 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#25 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024f.html#3 Emulating vintage computers
https://www.garlic.com/~lynn/2024e.html#138 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#55 Article on new mainframe use
https://www.garlic.com/~lynn/2024d.html#12 ADA, FAA ATC, FSD
https://www.garlic.com/~lynn/2024d.html#4 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#105 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#79 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#60 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024c.html#54 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#7 Testing
https://www.garlic.com/~lynn/2024c.html#3 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#84 IBM DBMS/RDBMS
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024b.html#29 DB2
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#93 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#115 IBM RAS
https://www.garlic.com/~lynn/2023f.html#72 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#38 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2022b.html#55 IBM History
https://www.garlic.com/~lynn/2021d.html#53 IMS Stories
https://www.garlic.com/~lynn/2021.html#3 How an obscure British PC maker invented ARM and changed the world

--
virtualization experience starting Jan1968, online at home since Mar1970

2301 Fixed-Head Drum

From: Lynn Wheeler <lynn@garlic.com>
Subject: 2301 Fixed-Head Drum
Date: 05 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#112 2301 Fixed-Head Drum
https://www.garlic.com/~lynn/2025.html#113 2301 Fixed-Head Drum
https://www.garlic.com/~lynn/2025.html#115 2301 Fixed-Head Drum

late 70s, I tried to get a 2305-like "multiple exposure" feature (aka multiple subchannel addresses, where the controller could do real-time scheduling of requests queued at the different subchannel addresses) for the 3350 fixed-head feature, so I could do (paging) data transfer overlapped with 3350 arm seek. There was a group in POK doing "VULCAN", an electronic disk ... and they got the 3350 "multiple exposure" work vetoed. Then VULCAN was told that IBM was selling every memory chip it made as (higher markup) processor memory ... and VULCAN was canceled, however by then it was too late to resurrect 3350 multiple exposure (and went ahead with the non-IBM 1655).
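
A back-of-envelope C sketch of what the multiple-exposure feature would have bought (timings are made-up illustrations, not measured 3350 numbers):

/* Back-of-envelope sketch (all timings hypothetical, not measured 3350
 * numbers): with a single subchannel address, paging transfers queued for
 * the fixed-head area sit behind an arm seek in progress; with a second
 * exposure, the controller can run those transfers during the seek. */
#include <stdio.h>

int main(void)
{
    double seek_ms = 25.0;    /* assumed arm seek + its following data transfer */
    double xfer_ms =  5.0;    /* assumed fixed-head page transfer (no seek)     */
    int    pages   =  4;      /* paging requests queued for the fixed heads     */

    /* single exposure: everything serialized behind the seek request */
    double single = seek_ms + pages * xfer_ms;

    /* multiple exposure: page transfers overlap the seek; whichever
     * finishes last determines the elapsed time */
    double overlapped = pages * xfer_ms;
    double multiple   = (overlapped > seek_ms) ? overlapped : seek_ms;

    printf("single exposure: %.0fms elapsed, multiple exposure: %.0fms elapsed\n",
           single, multiple);
    return 0;
}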

trivia: after the decision to add virtual memory to all 370s, some of the science center (4th flr) splits off and takes over the IBM Boston Programming Center (3rd flr) for the VM370 development group (morphing CP67 into VM370). At the same time there was a joint effort between Endicott and the Science Center to add 370 virtual machines to CP67 ("CP67H": the new 370 instructions and the different format for 370 virtual memory). When that was done, there were then further CP67 mods for CP67I, which ran on the 370 architecture (running in CP67H 370 virtual machines for a year before the first engineering 370 with virtual memory was ready to test ... by trying to IPL CP67I). As more and more 370s w/virtual memory became available, three engineers from San Jose came out to add 3330 and 2305 device support to CP67I for CP67SJ. CP67SJ was in regular use inside IBM, even after VM370 became available.

CSC posts:
https://www.garlic.com/~lynn/subtopic.html#545tech
getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

recent posts mentioning cp/67h, cp/67i cp/67sj
https://www.garlic.com/~lynn/2025.html#122 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#121 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#10 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#88 Virtual Machines
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM

--
virtualization experience starting Jan1968, online at home since Mar1970

Why VAX Was the Ultimate CISC and Not RISC

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why VAX Was the Ultimate CISC and Not RISC
Newsgroups: comp.arch
Date: Thu, 06 Mar 2025 16:11:15 -1000
John Levine <johnl@taugh.com> writes:
I'm not so sure. The IBM Fortran H compiler used a lot of the 360's instruction set and it is my recollection that even the dmr C compiler would generate memory to memory instructions when appropriate. The PL.8 compiler generated code for 5 architectures including S/360 and 68K, and I think I read somewhere that its S/360 code was considrably better than the native PL/I compilers.

I get the impression that they found that once you have a reasonable number of registers, like 16 or more, the benefit of complex instructions drops because you can make good use of the values in the registers.


re:
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025b.html#4 Why VAX Was the Ultimate CISC and Not RISC

long ago and far away ... comparing Pascal systems to a Pascal front-end with PL.8 back-end (the 3033 is a 370 of about 4.5MIPS)
Date: 8 August 1981, 16:47:28 EDT
To: wheeler

the 801 group here has run a program under several different PASCAL "systems". The program was about 350 statements and basically "solved" SOMA (block puzzle..). Although this is only one test, and all of the usual caveats apply, I thought the numbers were interesting... The numbers given in each case are EXECUTION TIME ONLY (Virtual on 3033).

6m 30 secs               PERQ (with PERQ's Pascal compiler, of course)
4m 55 secs               68000 with PASCAL/PL.8 compiler at OPT 2
0m 21.5 secs             3033 PASCAL/VS with Optimization
0m 10.5 secs             3033 with PASCAL/PL.8 at OPT 0
0m 5.9 secs              3033 with PASCAL/PL.8 at OPT 3

... snip ... top of post, old email index

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

The joy of FORTRAN

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: The joy of FORTRAN
Newsgroups: alt.folklore.computers, comp.os.linux.misc
Date: Fri, 07 Mar 2025 06:46:48 -1000
cross@spitfire.i.gajendra.net (Dan Cross) writes:
VAX was really meant to unify the product line, offering PDP-10 class performance in something that was architecturally descended from the PDP-11, which remained attractive at the low end or embedded/industrial applications.

DEC in the 80s and 90s had a very forward-looking vision of distributed computing; sadly they botched it on the business side.


re:
https://www.garlic.com/~lynn/2024e.html#142 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#144 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#145 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#2 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#7 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#16 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#17 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#22 stacks are not hard, The joy of FORTRAN-like languages
https://www.garlic.com/~lynn/2025.html#124 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#125 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#131 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#132 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#133 The joy of FORTRAN

IBM 4300s competed with VAX in the mid-range market and sold in approx the same numbers in small unit orders ... the big difference was large corporations with orders for hundreds of vm/4300s (in at least one case almost 1000) at a time, for placing out in departmental areas (sort of the leading edge of the distributed computing tsunami). Old afc post with a decade of VAX sales, sliced&diced by year, model, US/non-US:
https://www.garlic.com/~lynn/2002f.html#0

Inside IBM, conference rooms were becoming scarce since so many were being converted to vm4341 rooms. IBM was expecting to see the same explosion in 4361/4381 orders (as 4331/4341), but by the 2nd half of the 80s the market was moving to workstations and large PCs. 30yrs of PC market share (the original articles were separate URLs, now condensed to a single web page, with the original URLs remapped to displacements):
https://arstechnica.com/features/2005/12/total-share/

I got availability of an early engineering 4341 in 1978 and an IBM branch heard about it and in jan1979 conned me into doing a national lab benchmark (60s cdc6600 "rain/rain4" fortran); the lab was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). Then BofA was getting 60 VM/4341s for a distributed System/R (original SQL/relational) pilot.

upthread mentioned doing HA/CMP (targeted for both technical/scientific and commercial) cluster scale-up (and then it is transferred for announce as IBM Supercomputer for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors.

801/risc (PC/RT, RS/6000) didn't have coherent cache so didn't have SMP scale-up ... only scale-up method was cluster ...

1993 large mainframe compared to RS/6000
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS; 16-system: 2BIPS (16x126MIPS); 128-system: 16BIPS (128x126MIPS)

The executive we reported to went over to head up AIM/Somerset to do the single-chip power/pc ... and picked up the Motorola 88k bus ... so it could then do SMP configs (and/or clusters of SMPs).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

HSDT

From: Lynn Wheeler <lynn@garlic.com>
Subject: HSDT
Date: 07 Mar, 2025
Blog: Facebook
long winded:

In the early 80s, got the IBM HSDT project, T1 and faster computer links (terrestrial and satellite) and some amount of conflict with the communication group (note: in the 60s, IBM had the 2701 controller that supported T1 computer links, but going into the 70s with the uptake of SNA/VTAM, issues appeared to cap controller links at 56kbits/sec). I was working with the NSF director and was to get $20M to interconnect the NSF Supercomputer Centers. Then congress cuts the budget, some other things happen, and eventually an RFP is released (in part based on what we already have running).

NSF 28Mar1986 Preliminary Announcement
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore was that when the corporate executive committee was told, 5of6 wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, the winning bid becomes the NSFNET backbone, precursor to the modern internet.

in between, NSF was asking me to do presentations at some current and/or possible future NSF Supercomputer locations (old archived email)
https://www.garlic.com/~lynn/2011b.html#email850325
https://www.garlic.com/~lynn/2011b.html#email850325b
https://www.garlic.com/~lynn/2011b.html#email850326
https://www.garlic.com/~lynn/2011b.html#email850402
https://www.garlic.com/~lynn/2015c.html#email850408
https://www.garlic.com/~lynn/2011c.html#email850425
https://www.garlic.com/~lynn/2011c.html#email850425b
https://www.garlic.com/~lynn/2006w.html#email850607
https://www.garlic.com/~lynn/2006t.html#email850930
https://www.garlic.com/~lynn/2011c.html#email851001
https://www.garlic.com/~lynn/2011b.html#email851106
https://www.garlic.com/~lynn/2011b.html#email851114
https://www.garlic.com/~lynn/2006t.html#email860407
https://www.garlic.com/~lynn/2007.html#email860428
https://www.garlic.com/~lynn/2007.html#email860428b
https://www.garlic.com/~lynn/2007.html#email860430

had some exchanges with Melinda (at princeton)
https://www.leeandmelindavarian.com/Melinda#VMHist

from or to Melinda/Princeton (pucc)
https://www.garlic.com/~lynn/2007b.html#email860111
https://www.garlic.com/~lynn/2007b.html#email860113
https://www.garlic.com/~lynn/2007b.html#email860114
https://www.garlic.com/~lynn/2011b.html#email860217
https://www.garlic.com/~lynn/2011b.html#email860217b
https://www.garlic.com/~lynn/2011c.html#email860407

related
https://www.garlic.com/~lynn/2011c.html#email850426
https://www.garlic.com/~lynn/2006t.html#email850506
https://www.garlic.com/~lynn/2007b.html#email860124

earlier, an IBM branch brings me into the Berkeley "10M" (telescope) effort, looking at doing remote viewing
https://www.garlic.com/~lynn/2004h.html#email830804
https://www.garlic.com/~lynn/2004h.html#email830822
https://www.garlic.com/~lynn/2004h.html#email830830
https://www.garlic.com/~lynn/2004h.html#email841121
https://www.garlic.com/~lynn/2011b.html#email850409
https://www.garlic.com/~lynn/2004h.html#email860519

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 08 Mar, 2025
Blog: Facebook
The communication group's dumb 3270s (and PC 3270 emulators) had point-to-point coax from the machine room to each terminal. Several large corporations were starting to exceed building load limits from the weight of all that coax, so they needed a much lighter and easier-to-manage solution ... hence CAT (shielded twisted pair) wiring and token-ring LAN technology (trivia: my wife was co-inventor on an early token passing patent used for the IBM Series/1 "chat ring").

The IBM workstation division did their own cards for the PC/RT (16bit PC/AT bus), including a 4mbit token-ring card. Then for the RS/6000 w/microchannel, they were told they couldn't do their own cards and had to use PS2 microchannel cards. The communication group was fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm and install base) and had severely performance-kneecapped the microchannel cards. The PS2 microchannel 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card (the joke was a PC/RT 4mbit T/R server would have higher throughput than a RS/6000 16mbit T/R server) ... the PS2 microchannel 16mbit T/R card design point was something like 300 dumb-terminal stations sharing a single LAN.

The new IBM Almaden research bldg had been heavily provisioned with IBM wiring, but they found a $69 10mbit ethernet card had higher throughput than the $800 16mbit T/R card (over the same IBM wiring) ... and the 10mbit ethernet LAN also had higher aggregate throughput and lower latency. For the card price difference for 300 stations, they could get several high-performance TCP/IP routers with channel interfaces, a dozen or more ethernet interfaces, along with FDDI and telco T1 & T3 options.

A 1988 ACM SIGCOMM article analyzed a 30-station ethernet getting aggregate 8.5mbit throughput, dropping to an effective 8mbit throughput when all device drivers were put in a low-level loop constantly transmitting minimum-size packets.

posts about communication group dumb terminal strategies
https://www.garlic.com/~lynn/subnetwork.html#terminal
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

recent posts mentioning token-ring:
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025.html#106 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#97 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#96 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#23 IBM NY Buildings
https://www.garlic.com/~lynn/2024g.html#101 IBM Token-Ring versus Ethernet
https://www.garlic.com/~lynn/2024g.html#53 IBM RS/6000
https://www.garlic.com/~lynn/2024g.html#18 PS2 Microchannel
https://www.garlic.com/~lynn/2024f.html#42 IBM/PC
https://www.garlic.com/~lynn/2024f.html#39 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024f.html#6 IBM (Empty) Suits
https://www.garlic.com/~lynn/2024e.html#138 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#102 Rise and Fall IBM/PC
https://www.garlic.com/~lynn/2024e.html#81 IBM/PC
https://www.garlic.com/~lynn/2024e.html#71 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#64 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#56 IBM SAA and Somers
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024d.html#30 Future System and S/38
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#57 IBM Mainframe, TCP/IP, Token-ring, Ethernet
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2024c.html#47 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024c.html#33 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#47 OS2
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024b.html#0 Assembler language and code optimization
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 08 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring

other trivia: In early 80s, I got HSDT, T1 and faster computer links (both terrestrial and satellite) with lots of conflict with SNA/VTAM org (note in the 60s, IBM had 2701 controller that supported T1 links, but the transition to SNA/VTAM in the 70s and associated issues appeared to cap all controllers at 56kbit/sec links).

2nd half of the 80s, I was on Greg Chesson's XTP TAB and there were some gov. operations involved ... so we took XTP "HSP" to the ISO-chartered ANSI X3S3.3 for standardization ... eventually being told that ISO only did network standards work on things that corresponded to OSI ... and "HSP" didn't, because 1) it supported internetworking, not in OSI, sitting between layers 3&4 (network & transport), 2) it bypassed the layer 3/4 interface, and 3) it went directly to the LAN MAC interface, also not in OSI, sitting in the middle of layer 3. There was a joke that while IETF required two interoperable implementations for standards progression, ISO didn't even require a standard to be implementable.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880, 3380, Data-streaming

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880, 3380, Data-streaming
Date: 08 Mar, 2025
Blog: Facebook
when I transfer to San Jose Research in the 2nd half of the 70s, I was allowed to wander around IBM (& non-IBM) datacenters in silicon valley, including disk bldg14/engineering & bldg15/product test across the street. They were running 7x24, prescheduled, stand-alone testing and mentioned they had recently tried MVS for concurrent testing, but it had a 15min MTBF (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor to make it bullet proof and never fail, so they could do any amount of on-demand, concurrent testing, greatly improving productivity. The downside was they started calling me anytime they had a problem and I had to increasingly spend time playing disk engineer.

Bldg15 got early engineering systems for I/O product testing, including the 1st engineering 3033 outside the POK processor development flr. Testing was only taking a couple percent of the 3033 CPU, so we scrounge up a 3830 and a 3330 string and set up our own private online service. One morning I get a call asking what I had done over the weekend to completely destroy online response and throughput. I said nothing, and asked what had they done. They say nothing, but eventually it comes out that somebody had replaced the 3830 with an early 3880. Problem was the 3880 had replaced the really fast 3830 horizontal-microcode processor with a really slow vertical-microcode processor (the only way it could handle 3mbyte/sec transfer was when switched to data-streaming protocol channels: instead of an end-to-end handshake for every byte transferred, it transferred multiple bytes per end-to-end handshake). There was then something like six months of microcode hacks to try to do a better job of masking how slow the 3880 actually was.
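
To illustrate the handshake-per-byte versus data-streaming difference (with made-up numbers, not actual channel timings):

/* Rough model (numbers are hypothetical, not from channel specs) of the
 * protocol difference described above: with an end-to-end handshake per
 * byte, cable round-trip time gates every byte moved; data streaming sends
 * a run of bytes per handshake, so the same cable sustains a higher rate. */
#include <stdio.h>

int main(void)
{
    double rtt_us  = 0.4;   /* assumed end-to-end handshake round trip (us)  */
    double byte_us = 0.3;   /* assumed raw time to clock one byte (us)       */
    double run     = 16.0;  /* bytes moved per handshake when data streaming */

    double per_byte_mb  = 1.0 / (rtt_us + byte_us);          /* Mbyte/sec */
    double streaming_mb = run / (rtt_us + run * byte_us);    /* Mbyte/sec */

    printf("handshake per byte: %.1f Mbyte/sec, data streaming: %.1f Mbyte/sec\n",
           per_byte_mb, streaming_mb);
    return 0;
}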

Then the 3090 was going to have all data-streaming channels; they initially figured that the 3880 was just like the 3830 but with data-streaming 3mbyte/sec transfer, and configured the number of channels based on that assumption to meet target system throughput. When they found out how bad the channel-busy increase was (unable to totally mask it), they realized they would have to significantly increase the number of channels (which required an extra TCM; they semi-facetiously claimed they would bill the 3880 group for the extra 3090 manufacturing cost).

Bldg15 also got an early engineering 4341 in 1978 ... and with some tweaking of the 4341 integrated channels, it was fast enough to handle 3380 3mbyte/sec data streaming testing (the 303x channel directors were slow 158 engines with just the 158 integrated channel microcode and no 370 microcode). To otherwise allow 3380 3mbyte/sec to be attached to 370 block-mux 1.5mbyte/sec channels, the 3880 "Calypso" speed matching buffer & ECKD channel programs were created.

Other trivia: people doing air bearing thin-film head simulation were only getting a couple turn arounds/month on the SJR 370/195. We set things up on the bldg15 3033 where they could get multiple turn arounds/day (even though the 3033 was not quite half the MIPS of the 195).
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_3370

trivia: there haven't been any CKD DASD made for decades, all being simulated on industry standard fixed-block devices (dating back to the 3375 on 3370, and seen in the 3380 records/track formulas where record size is rounded up to the 3380 fixed cell size).

posts getting to play disk engineering in 14&15:
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

posts mentioning Calypso and ECKD
https://www.garlic.com/~lynn/2024g.html#3 IBM CKD DASD
https://www.garlic.com/~lynn/2024c.html#74 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2023d.html#117 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2018.html#81 CKD details
https://www.garlic.com/~lynn/2015g.html#15 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2015f.html#89 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2015f.html#86 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2012o.html#64 Random thoughts: Low power, High performance
https://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2010h.html#30 45 years of Mainframe
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2009p.html#11 Secret Service plans IT reboot
https://www.garlic.com/~lynn/2007e.html#40 FBA rant

--
virtualization experience starting Jan1968, online at home since Mar1970

Learson Tries To Save Watson IBM

From: Lynn Wheeler <lynn@garlic.com>
Subject: Learson Tries To Save Watson IBM
Date: 08 Mar, 2025
Blog: Facebook
Learson tried (& failed) to block the bureaucrats, careerists, and MBAs from destroying Watson culture&legacy, pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

20 yrs later, IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 09 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring
https://www.garlic.com/~lynn/2025b.html#11 IBM Token-Ring

In the 60s, there were a couple of commercial online CP67-based spin-offs of the science center; also the science center network morphs into the corporate internal network (the technology also used for the corporate-sponsored univ BITNET).
https://en.wikipedia.org/wiki/BITNET

Quote from one of the 1969 inventors of "GML" at the Science Center
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

Science-Center/corporate network was larger than ARPANET/Internet from just about the beginning until sometime mid/late 80s (about the time it was forced to move to SNA/VTAM). At the 1Jan1983 morph of ARPANET to internetworking, there were approx 100 IMPs and 255 hosts ... at a time when the internal network was rapidly approaching 1000 nodes. I've periodically commented that ARPANET was somewhat limited by the requirement for IMPs and associated approvals. Somewhat equivalent for the corporate network was the requirement that all links be encrypted, plus various gov. resistance, especially when links crossed national boundaries. Old archive post with list of corporate locations that added one or more nodes during 1983:
https://www.garlic.com/~lynn/2006k.html#8

After decision was made to add virtual memory to all IBM 370s, CP67 morphs into VM370 ... and TYMSHARE is providing commercial online VM370 services
https://en.wikipedia.org/wiki/Tymshare
and in Aug1976 started offering its CMS-based online computer conferencing for free to the (user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as "VMSHARE" ... archives here
http://vm.marist.edu/~vmshare
accessed via Tymnet:
https://en.wikipedia.org/wiki/Tymnet
after M/D (McDonnell Douglas) buys TYMSHARE in the early 80s and discontinues some number of things, the VMSHARE service is moved to a univ computer.

co-worker at science center responsible for early CP67-based wide-area network and early days of the corporate internal network through much of the 70s
https://en.wikipedia.org/wiki/Edson_Hendricks

Trivia: a decade after "GML" was invented, it morphs into ISO standard "SGML", and after another decade morphs into "HTML" at CERN; first webserver in the US is at CERN-sister institution, Stanford SLAC on their VM370 system:
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
commercial, online virtual machine based services
https://www.garlic.com/~lynn/submain.html#online
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

1000th node globe

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 10 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring
https://www.garlic.com/~lynn/2025b.html#11 IBM Token-Ring
https://www.garlic.com/~lynn/2025b.html#14 IBM Token-Ring

also on the OSI subject:

OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."
... snip ...

On the 60s 2701 T1 subject, IBM FSD (Federal Systems Division) had some number of gov. customers with 2701s that were failing in the 80s, and came up with a (special bid) "T1 Zirpel" card for the IBM Series/1.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning "OSI: The Internet That Wasn't"
https://www.garlic.com/~lynn/2025.html#33 IBM ATM Protocol?
https://www.garlic.com/~lynn/2025.html#13 IBM APPN
https://www.garlic.com/~lynn/2024b.html#113 EBCDIC
https://www.garlic.com/~lynn/2024b.html#99 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2013j.html#65 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2013j.html#64 OSI: The Internet That Wasn't

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VM/CMS Mainframe

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VM/CMS Mainframe
Date: 10 Mar, 2025
Blog: Facebook
Predated VM370, originally 60s CP67 wide-area science center network (RSCS/VNET) .... comment by one of the cambridge science center inventors of GML in 1969 ...
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

It then morphs into the corporate internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s, about the time internal network was forced to convert to SNA/VTAM) ... technology also used for the corporate sponsored univ BITNET
https://en.wikipedia.org/wiki/BITNET
when decision was made to add virtual memory to all 370s, then CP67 morphs into VM370.

Co-worker at science center responsible for RSCS/VNET
https://en.wikipedia.org/wiki/Edson_Hendricks

RSCS/VNET used the CP internal synchronous "diagnose" interface to the spool file system, transferring 4Kbyte blocks ... on a large, loaded system, spool file contention could limit it to 6-8 4k blocks/sec ... or 24k-32k bytes (240k-320k bits). I got HSDT in early 80s, with T1 and faster computer links (and lots of battles with the communication group; aka 60s IBM had the 2701 controller, but the 70s transition to SNA/VTAM and its issues capped controllers at 56kbit/sec links) .... supporting T1 links needed 3mbits (300kbytes) for each RSCS/VNET full-duplex T1. I did a rewrite of the CP spool file system in VS/Pascal, running in a virtual address space, supporting an asynchronous interface, contiguous allocation, write-behind, and read-ahead, able to provide RSCS/VNET with multi-mbyte/sec throughput.
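
Rough arithmetic sketch (Python; the figures are the approximate ones above, and the 10-bits-per-byte convention for line overhead is my assumption) showing how far the spool-limited 6-8 blocks/sec fell short of feeding even one full-duplex T1:

# rough arithmetic: spool-limited RSCS/VNET vs one full-duplex T1
# figures approximate; 10 bits/byte assumed for line overhead
BLOCK_BYTES = 4096                    # spool transfers were 4Kbyte blocks
T1_BITS = 1_544_000                   # T1 line rate, one direction
FULL_DUPLEX_BITS = 2 * T1_BITS        # both directions driven concurrently

for blocks_per_sec in (6, 8):         # observed range on a large, loaded system
    byte_rate = blocks_per_sec * BLOCK_BYTES
    bit_rate = byte_rate * 10         # 10 bits/byte convention
    print(f"{blocks_per_sec} blocks/sec -> ~{byte_rate // 1000}kbytes/sec "
          f"(~{bit_rate // 1000}kbits/sec), about "
          f"{FULL_DUPLEX_BITS / bit_rate:.0f}x short of a full-duplex T1")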

Also, releasing the internal mainframe TCP/IP (implemented in VS/Pascal) was being blocked by the communication group. When that eventually was overturned, they changed their strategy ... because the communication group had corporate strategic ownership of everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 CPU. I did RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like 500 times the throughput in bytes moved per instruction executed). Later in the 90s, the communication group hired a silicon valley contractor to implement TCP/IP support directly in VTAM. What he demo'ed had TCP running much faster than LU6.2. He was then told that everybody knows a "proper" TCP/IP implementation is much slower than LU6.2, and they would only be paying for a "proper" implementation.

trivia: The Pisa Science Center had done "SPM" for CP67 (an inter virtual machine protocol, a superset of the later VM370 VMCF, IUCV and SMSG combination) which was ported to internal VM370 ... and was also supported by the product RSCS/VNET (even though "SPM" never shipped to customers). Late 70s, there was a multi-user spacewar client/server game done using "SPM" between CMS 3270 users and the server ... and since RSCS/VNET supported the protocol, users didn't have to be on the same system as the server. An early problem was people started doing robot players that beat human players (and the server was modified to increase power use non-linearly as the interval between user moves dropped below a human threshold).
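
Purely illustrative sketch (the original server code is long gone; the threshold and quadratic curve are invented for the example) of a non-linear move cost that makes machine-speed robot play self-defeating:

# illustrative only: charge moves non-linearly once the interval between a
# player's moves drops below a human-reaction threshold (values invented)
HUMAN_THRESHOLD = 0.25      # seconds between moves a human might manage
BASE_COST = 1.0             # energy charged for a "human speed" move

def move_cost(interval_secs):
    """Energy charged for a move issued interval_secs after the previous one."""
    if interval_secs >= HUMAN_THRESHOLD:
        return BASE_COST
    speedup = HUMAN_THRESHOLD / interval_secs
    return BASE_COST * speedup ** 2     # grows with the square of the speed-up

for dt in (1.0, 0.25, 0.1, 0.01):
    print(f"move every {dt:5.2f}s -> energy cost {move_cost(dt):8.1f}")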

some VM (customer/product) history at Melinda's site
https://www.leeandmelindavarian.com/Melinda#VMHist

trivia: most of the JES2 network code came from HASP (which had "TUCC" in cols 68-71 of the source); problem was it defined network nodes in unused entries of the 255-entry pseudo spool device table ... typically leaving room for 160-180 definitions ... and somewhat intermixed network fields with job control fields in the header. RSCS/VNET had a clean layered implementation, so was able to do a JES2 emulation driver w/o much trouble. However the internal corporate network had quickly/early passed 256 nodes, and JES2 would trash traffic where the origin or destination wasn't in the local table ... so JES2 systems typically had to be restricted to boundary nodes behind protective RSCS/VNET nodes. Also, because of the intermixing of fields, traffic between JES2 systems at different release levels could crash the destination MVS system. As a result, a large body of RSCS/VNET JES2 emulation driver code grew up that understood different origin and destination JES2 formats and adjusted fields for the directly connected JES2 destination.
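
Toy sketch of the general idea behind those emulation drivers: canonicalize whatever the sending side produced, then re-emit the header with exactly the fields the directly-connected JES2 destination's release expects. The field names and "release" layouts below are invented for illustration, not real JES2/NJE header formats.

# toy illustration of per-destination header reformatting; field names and
# "release" layouts are invented, not real JES2/NJE header formats
CANONICAL_FIELDS = ("origin", "destination", "jobname")

RELEASE_LAYOUTS = {                     # what each hypothetical release expects
    "rel1": ("origin", "destination", "jobname"),
    "rel2": ("destination", "origin", "jobname", "priority"),
}

def to_canonical(fields):
    """Keep only fields every release level understands."""
    return {k: fields[k] for k in CANONICAL_FIELDS if k in fields}

def for_destination(fields, dest_release):
    """Re-emit header with exactly the fields the next-hop release expects."""
    canon = to_canonical(fields)
    return {k: canon.get(k, "") for k in RELEASE_LAYOUTS[dest_release]}

incoming = {"origin": "VNET1", "destination": "POKMVS", "jobname": "PAYROLL",
            "oldrel_extra": "field that could crash a different release"}
print(for_destination(incoming, "rel2"))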

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HASP, ASP, JES2, JES3, NJE, NJI posts
https://www.garlic.com/~lynn/submain.html#hasp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VM/CMS Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VM/CMS Mainframe
Date: 10 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#16 IBM VM/CMS Mainframe

Aka ... some of the MIT CTSS/7094 people went to the 5th flr for MULTICS and others went to the IBM science center on the 4th flr and did virtual machines (initially CP40 on a 360/40 with virtual memory hardware mods, morphing into CP67 when the 360/67, standard with virtual memory, became available), the internal network, lots of online apps, inventing "GML" in 1969, etc. When the decision was made to add virtual memory to all 370s, some of the people split off from CSC and took over the IBM Boston Programming Center on the 3rd flr for VM370 (and the cambridge monitor system becomes the conversational monitor system).

I had taken a 2 credit hr intro to fortran/computers and at the end of the semester was hired to do some 360 assembler on a 360/30. The univ was getting a 360/67 for tss/360, replacing 709/1401; it got a 360/30 temporarily replacing the 1401 until the 360/67 arrived (the univ shut down the datacenter on weekends and I had it all dedicated, but 48hrs w/o sleep made monday classes hard). Within a year of taking the intro class, the 360/67 came in and I was hired fulltime responsible for os/360 (tss/360 didn't make it to production) and I still had the whole datacenter dedicated weekends. CSC came out to install CP67 (3rd install after CSC itself and MIT Lincoln Labs) and I rewrote large amounts of CP67 code ... as well as adding TTY/ASCII terminal support (all picked up and shipped by CSC). Tale of CP67 across the tech sq quad at the MIT Urban lab ... my TTY/ASCII support had a max line length of 80 chars ... they did a quick hack for 1200 chars (a new ASCII device down at Harvard) but didn't catch all the dependencies ... and CP67 crashed 27 times in a single day.
https://www.multicians.org/thvv/360-67.html
other history by Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning Urban lab and 27 crashes
https://www.garlic.com/~lynn/2024g.html#92 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024c.html#16 CTSS, Multicis, CP67/CMS
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2023f.html#66 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#61 The Most Important Computer You've Never Heard Of
https://www.garlic.com/~lynn/2022c.html#42 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022.html#127 On why it's CR+LF and not LF+CR [ASR33]
https://www.garlic.com/~lynn/2016e.html#78 Honeywell 200
https://www.garlic.com/~lynn/2015c.html#57 The Stack Depth
https://www.garlic.com/~lynn/2013c.html#30 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2010c.html#40 PC history, was search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2006c.html#28 Mount DASD as read-only
https://www.garlic.com/~lynn/2004j.html#47 Vintage computers are better than modern crap !

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VM/CMS Mainframe

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VM/CMS Mainframe
Date: 11 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#16 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#17 IBM VM/CMS Mainframe

In the 60s, IBM had the 2701 that supported T1; then in the 70s move to SNA/VTAM, issues capped the controllers at 56kbits. I got the HSDT project in the early 80s, T1 and faster computer links (both terrestrial and satellite) and lots of conflict with the communication group. Mid-80s, the communication group prepared a report for the corporate executive committee that customers wouldn't be needing T1 before sometime in the 90s. What they had done was a survey of 37x5 "fat pipes", multiple parallel 56kbit links treated as a single logical link ... a declining number of customers with 2-5 parallel links, dropping to zero by 6 or 7. What they didn't know (or didn't want to tell the corporate executive committee) was that the typical telco tariff for a T1 was about the same as six or seven 56kbit links. A trivial HSDT survey found 200 customers with T1 links; they had just moved to non-communication-group hardware & software (mostly non-IBM, but for gov. customers with failing 2701s, FSD had Zirpel T1 cards for Series/1s).
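
Back-of-the-envelope arithmetic (Python sketch; the tariff ratio is the one from the text, everything else is straightforward) behind the fat-pipe survey blind spot:

# approximate tariff arithmetic behind the "fat pipe" survey blind spot
LINK_56K = 56_000
T1 = 1_544_000
T1_PRICE_IN_56K_LINKS = 6        # typical tariff: T1 ~ price of 6-7 56kbit links

capacity_ratio = T1 / LINK_56K
print(f"T1 capacity ~ {capacity_ratio:.0f} x 56kbit links")
print(f"T1 price    ~ {T1_PRICE_IN_56K_LINKS} x 56kbit links")
print(f"so past ~{T1_PRICE_IN_56K_LINKS} parallel links, one T1 gives roughly "
      f"{capacity_ratio / T1_PRICE_IN_56K_LINKS:.0f}x the capacity for the money")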

Later in the 80s, the communication group had the 3737 that ran a T1 link: a whole boatload of Motorola 68k processors and memory, with a mini-VTAM emulation simulating a CTCA to the real local host VTAM. The 3737 would immediately reflect ACKs to the local host (to keep transmission flowing) before transmitting the traffic to the remote 3737, where the process was reversed to the remote host. The trouble was the host VTAM would hit its max outstanding transmission long before ACKs started coming back; even with short-haul, terrestrial T1, the latency of the returning ACKs resulted in VTAM only being able to use a trivial amount of the T1. HSDT had early on gone to dynamic adaptive rate-based pacing, easily adapting to much higher transmission rates than T1, including much longer latency satellite links (and gbit terrestrial cross-country links).
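
The underlying issue is the classic bandwidth-delay product: to keep a T1 busy, the sender must have (line rate x round-trip time) bytes outstanding before the first ACK returns. A small sketch, with the round-trip times and the "few kbytes" outstanding-data limit as illustrative assumptions only (not measured VTAM values):

# bandwidth-delay arithmetic: bytes that must be in flight (un-ACKed) to keep
# a T1 busy; round-trip times and the outstanding-data limit are illustrative
T1_BYTES_PER_SEC = 1_544_000 / 8

for label, rtt in (("short-haul terrestrial", 0.02),
                   ("cross-country terrestrial", 0.06),
                   ("geosync satellite hop", 0.50)):
    needed = T1_BYTES_PER_SEC * rtt        # bandwidth * round-trip delay
    print(f"{label:26s} rtt={rtt * 1000:4.0f}ms -> "
          f"~{needed / 1024:5.1f} kbytes must be outstanding")

# if the host will only allow a few kbytes outstanding before stalling for
# ACKs, usable throughput is roughly (allowed / needed) of the T1 -- hence the
# 3737's local ACK spoofing, and HSDT's rate-based pacing instead of windows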

Trivia: 1988, IBM branch office asks me if I could help LLNL standardize some serial stuff they were working with, which quickly becomes fibre-channel standard ("FCS", including some stuff I had done in 1980, initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate). Eventually IBM releases their serial channel with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define a protocol that radically limits throughput, eventually released as FICON. The most recent public benchmark I've found was z196 "Peak I/O" getting 2M IOPS using 104 FICONs (20K IOPS/FICON). At the same time there was (native) FCS announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICONs).

Even if SNA was saturating a T1, it would be about 150kbytes/sec (late 80s w/o the 3737 spoofing host VTAM, lucky to be 10kbytes/sec) ... HSDT saturating cross-country 80s native 1gbit FCS would be 100mbytes/sec ... IBM 3380 is 3mbyte/sec ... so it would need a 33-drive 3380 disk RAID at both ends. Native FCS 3590 tape was 42mbyte/sec (with 3:1 compression).
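
Quick check of that aggregate-throughput arithmetic (figures approximate, taken from the paragraph above):

# quick check of the aggregate throughput comparison (approximate figures)
SNA_T1 = 150_000            # ~150kbytes/sec if SNA actually saturated a T1
FCS_1GBIT = 100_000_000     # ~100mbytes/sec saturating native 1gbit FCS
DISK_3380 = 3_000_000       # 3mbyte/sec per 3380

print("FCS vs saturated SNA/T1:", FCS_1GBIT // SNA_T1, "times")       # ~666x
print("3380s needed to feed 1gbit FCS:", FCS_1GBIT // DISK_3380)      # ~33 drives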

2005 TS1120, IBM & non-IBM, "native" data transfer up to 104mbytes/sec (up to 1.5tbytes at 3:1 compressed)
https://asset.fujifilm.com/www/us/files/2020-03/71d28509834324b81a79d77b21af8977/359X_Data_Tape_Seminar.pdf

other trivia: Internal mainframe tcp/ip implementation was done in vs/pascal ... and mid-80s the communication group was blocking release. When that got overturned, they changed their tactic and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 CPU. I then did the changes to support RFC1044 and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like 500 times improvement in bytes moved per instruction executed).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
FICON and FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

some old 3737 email:
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2018f.html#email880715
https://www.garlic.com/~lynn/2011g.html#email881005

some posts mentioning 3737:
https://www.garlic.com/~lynn/2025.html#35 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024f.html#116 NASA Shuttle & SBS
https://www.garlic.com/~lynn/2024e.html#95 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024e.html#91 When Did "Internet" Come Into Common Use
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023d.html#120 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023c.html#57 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2022c.html#80 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2021j.html#32 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#31 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#16 IBM SNA ARB
https://www.garlic.com/~lynn/2021h.html#49 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021d.html#14 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#97 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#83 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2019d.html#117 IBM HONE
https://www.garlic.com/~lynn/2019c.html#35 Transition to cloud computing
https://www.garlic.com/~lynn/2019b.html#16 Tandem Memo
https://www.garlic.com/~lynn/2018f.html#110 IBM Token-RIng
https://www.garlic.com/~lynn/2018b.html#9 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017.html#57 TV Show "Hill Street Blues"
https://www.garlic.com/~lynn/2016b.html#82 Qbasic - lies about Medicare
https://www.garlic.com/~lynn/2015g.html#42 20 Things Incoming College Freshmen Will Never Understand
https://www.garlic.com/~lynn/2015e.html#31 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015e.html#2 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015d.html#47 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014j.html#66 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014b.html#46 Resistance to Java
https://www.garlic.com/~lynn/2013n.html#16 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013j.html#66 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2012o.html#47 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012m.html#24 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design
https://www.garlic.com/~lynn/2012j.html#89 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2012j.html#87 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2012g.html#57 VM Workshop 2012
https://www.garlic.com/~lynn/2012f.html#92 How do you feel about the fact that India has more employees than US?
https://www.garlic.com/~lynn/2012d.html#20 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2011p.html#103 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011g.html#77 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011g.html#75 We list every company in the world that has a mainframe computer

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VM/CMS Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VM/CMS Mainframe
Date: 11 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#16 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#17 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#18 IBM VM/CMS Mainframe

trivia: source code card sequence numbers were used by the source update system (CMS update program). As an undergraduate in the 60s, I was changing so much code that I created the "$" convention ... a preprocessor (to the update program) would generate the sequence numbers for new statements before passing a work/temp file to the update command. After joining the science center and the decision to add virtual memory to all 370s, a joint project with Endicott was to 1) add virtual 370 machine support to CP67 (running on a real 360/67) and 2) modify CP67 to run on a virtual memory 370 ... which included implementing multi-level source update (originally done in EXEC, recursively applying source updates) ... it was running in a CP67 370 virtual machine for a year before the 1st engineering 370 (w/virtual memory) was operational (ipl'ing the 370 CP67 was used to help verify that machine)
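
A toy sketch of sequence-number-driven source updates applied one level at a time; the control format here is invented for the example and is not the real CMS UPDATE control-card syntax:

# toy multi-level source update keyed by sequence numbers; control format
# invented for the example (not the real CMS UPDATE control-card syntax)
def apply_update(source, update):
    """source: list of (seqno, text); update: list of ops applied in order."""
    out = list(source)
    for op, seq, payload in update:
        if op == "D":                     # delete the card with this seqno
            out = [(n, t) for n, t in out if n != seq]
        elif op == "I":                   # insert new (seqno, text) cards after seqno
            idx = next(i for i, (n, _) in enumerate(out) if n == seq) + 1
            out[idx:idx] = payload
    return out

base = [(100, "LOAD  R1,X"), (200, "ADD   R1,Y"), (300, "STORE R1,Z")]
level1 = [("I", 200, [(210, "ADD   R1,W")])]      # first update level
level2 = [("D", 100, None)]                       # second level, applied on top

src = base
for upd in (level1, level2):                      # "multi-level": apply in order
    src = apply_update(src, upd)
for seqno, text in src:
    print(f"{text:<20s}{seqno:08d}")              # seqno in the trailing columns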

trivia: mid-80s, got a request from Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist
asking for a copy of the original multi-level source update done in exec. I had triple-redundant tapes of archived files from the 60s&70s ... and was able to pull it off an archive tape. It was fortunate, because not long after, Almaden Research had an operational problem mounting random tapes as scratch and I lost nearly a dozen tapes ... including all three replicated tapes with the 60s&70s archive.


Internet trivia: one of the people that worked on the multi-level update implementation at CSC was an MIT student ... who later went on to do DNS.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts mentioning CSC, MIT student, multi-level update
https://www.garlic.com/~lynn/2024b.html#74 Internet DNS Trivia
https://www.garlic.com/~lynn/2019c.html#90 DNS & other trivia
https://www.garlic.com/~lynn/2017i.html#76 git, z/OS and COBOL
https://www.garlic.com/~lynn/2014e.html#35 System/360 celebration set for ten cities; 1964 pricing for oneweek
https://www.garlic.com/~lynn/2013f.html#73 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013e.html#85 Sequence Numbrs (was 32760?
https://www.garlic.com/~lynn/2013b.html#61 Google Patents Staple of '70s Mainframe Computing
https://www.garlic.com/~lynn/2011p.html#49 z/OS's basis for TCP/IP
https://www.garlic.com/~lynn/2007k.html#33 Even worse than UNIX

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose and Santa Teresa Lab

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose and Santa Teresa Lab
Date: 13 Mar, 2025
Blog: Facebook
IBM convention was to name labs after the closest post office ... which was "Coyote" ... the name was quickly changed after a spring demonstration on the capitol steps by the San Francisco women's "COYOTE" union. By 1980, STL was bursting at the seams and 300 people (and terminals) from the IMS organization were being moved to an offsite bldg (just south of the main plant site) with dataprocessing back to the STL machine room.

I had transferred to SJR (bldg28 on the plant site) and got to wander around IBM (and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, prescheduled, stand-alone testing and mentioned that they had recently tried MVS but it had 15min MTBF (requiring manual re-ipl). I offered to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of on-demand testing ... greatly improving productivity. Downside was I would get sucked into any kind of problem they might have and had to increasingly play disk engineer.

Then in 1980, STL cons me into doing channel-extender support for the IMS people being moved offsite. They had tried "remote 3270" and found human factors totally unacceptable ... channel-extender allowed channel attached 3270 controllers to be placed at the offsite bldg, resulting in no perceived difference in human factors between offsite and inside STL.

Then they found that the systems with channel-extenders had 10-15% greater throughput than systems w/o. STL had spread all the channel attached 3270 controllers across all block-mux channels with 3830 disk controllers. The channel-extender boxes had significantly less channel busy (than native channel attached 3270 controllers) for same amount of 3270 terminal traffic ... improving disk I/O throughput.

In SJR, I worked with Jim Gray and Vera Watson on the original SQL/relational DBMS, System/R ... and while STL (and the rest of the company) was preoccupied with the next, new, greatest DBMS "EAGLE", managed to do tech transfer (under the "radar") to Endicott for SQL/DS. Then when Jim left IBM for Tandem in fall of 1980, he palms off (onto me) DBMS consulting with the STL IMS group (Vern Watts)
https://www.vcwatts.org/ibm_story.html

Then when "EAGLE" implodes, request was made for how fast could System/R be ported to MVS ... which eventually ships as "DB2" (originally for decision support only).

getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
channel extender support
https://www.garlic.com/~lynn/submisc.html#channel.extender
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

STL T3 microwave to bldg12

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose and Santa Teresa Lab

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose and Santa Teresa Lab
Date: 14 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#20 IBM San Jose and Santa Teresa Lab

BofA was also getting 60 vm/4341s for System/R

RIP
https://www.mercurynews.com/obituaries/vernice-lee-watts/
for some reason, one of a couple connections still on linkedin

Note, also did similar channel-extender for Boulder ... then got HSDT project in early 80s, T1 and faster computer links (both terrestrial and satellite) and many conflicts with the communication group. Note in 60s, IBM had 2701 controller supporting T1, then in 70s the move to SNA/VTAM and the issues appeared to cap controllers at 56kbits/sec.

HSDT's first long-haul T1 satellite link was between the Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston. Los Gatos had a Collins digital radio (C-band microwave) to San Jose bldg12 (similar to the STL microwave pictured previously). Both Kingston and San Jose had T3 C-band 10M satellite dishes. Later HSDT got its own Ku-band TDMA satellite system with 4.5M dishes in Los Gatos and Yorktown and a 7M dish in Austin (and I got part of a Los Gatos wing with offices and labs).

Before research moved up the hill to Almaden, bldg28 had earthquake remediation ... adding a new bldg around the old bldg. Then bldg14 got earthquake remediation and engineering (temporarily) moved to bldg86 (offsite, near the moved IMS group). Bldg86 engineering also got an EVE (Endicott Verification Engine, custom hardware used to verify VLSI chip designs, something like 50,000 times faster than software on a 3033). Did a T1 circuit from Los Gatos to bldg12 to bldg86 ... so Austin could use the EVE to verify RIOS (the chip set for RS/6000); claims are it helped bring RIOS in a year early. Since then bldgs 12, 15, 28, 29 and several others have all been plowed under.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
channel extender support
https://www.garlic.com/~lynn/submisc.html#channel.extender
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose and Santa Teresa Lab

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose and Santa Teresa Lab
Date: 14 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#20 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025b.html#21 IBM San Jose and Santa Teresa Lab

After Future System imploded, there was a mad rush to get products back into the 370 product pipelines, including the quick&dirty 3033&3081
http://www.jfsowa.com/computer/memo125.htm

(and before transferring to SJR on the west coast), I was con'ed into helping with a 16-CPU SMP 370 and we got the 3033 processor engineers to help in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips), which everybody thought was really great until somebody tells the head of POK that it could be decades before POK's favorite son operating system (MVS) had (effective) 16-CPU support (IBM MVS pubs claimed 2-CPU support only got 1.2-1.5 times the throughput of a single processor); POK doesn't ship a 16-CPU machine until the turn of the century. Then the head of POK invites some of us to never visit POK again, and directs the 3033 processor engineers: heads down and no distractions.

After transferring to SJR, bldg15 (across the street) gets the 1st engineering 3033 outside POK processor engineering for I/O testing (testing only takes a percent or two of CPU, so we scrounge up a 3830 and a string of 3330s for a private online service). Then in 1978, they also get an engineering 4341 and in Jan1979, a branch office cons me into doing a 4341 benchmark for a national lab looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).

disclaimer: last product we did at IBM was HA/6000, started out for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scaleup with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scaleup with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with Unix (I also do a distributed lock manager supporting VAXCluster semantics to ease the ATEX port). Early JAN92, have a meeting with Oracle CEO where IBM/AWD Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan92, cluster scaleup is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work with anything that has more than four processors; we leave a few months later.

1993 mainframe/RS6000 (industry benchmark; no. program iterations compared to reference platform)
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS; 16-system: 2BIPS; 128-system: 16BIPS


The executive we had been reporting to moves over to head up Somerset/AIM (Apple, IBM, Motorola), doing a single-chip 801/RISC power/pc ... also using the Motorola 88k RISC bus, enabling SMP multiprocessor configurations. However, the new generation of i86/Pentium had i86 instructions hardware-pipelined and translated to RISC micro-ops (on the fly) for actual execution (negating the RISC throughput advantage compared to i86).
• 1999 single IBM PowerPC 440 hits 1,000MIPS
• 1999 single Pentium3 hits 2,054MIPS (twice PowerPC 440)
• Dec2000 z900, 16 processors, 2.5BIPS (156MIPS/proc)

• 2010 E5-2600, two XEON 8core chips, 500BIPS (30BIPS/proc)
• 2010 z196, 80 processors, 50BIPS (625MIPS/proc)


Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HA/CMP processor
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

Forget About Cloud Computing. On-Premises Is All the Rage Again

From: Lynn Wheeler <lynn@garlic.com>
Subject: Forget About Cloud Computing. On-Premises Is All the Rage Again
Date: 15 Mar, 2025
Blog: Facebook
Forget About Cloud Computing. On-Premises Is All the Rage Again. From startups to enterprise, companies are lowering costs and regaining control over their operations
https://towardsdatascience.com/forget-about-cloud-computing-on-premises-is-all-the-rage-again/

90s, cluster computing started to become the "rage" ... similar technologies for both cloud and supercomputing .... large scale assembly of commodity parts for (at most) 1/3rd the price of brand name computers. Then some brand name vendors started doing "white box" assembly of commodity parts for customer on-site cluster computing (at reduced price). A decade ago, industry news was claiming open system server part vendors were shipping at least half their product directly to large cloud computing operations (that would assemble their own systems), and IBM sells off its brand name open system server business. A large cloud operation can have multiple score megadatacenters around the world, each megadatacenter with at least half a million blade servers.

These operations had so radically reduced their server costs that things like power consumption were increasingly becoming the major cost, and they were putting heavy pressure on server part makers to optimize computing power consumption ... threatening to move to chips optimized for battery operation (reduced individual system peak compute power, compensated for by a larger number of systems providing equivalent aggregate computation at lower aggregate power consumption). System costs had been so radically reduced that any major improvement in part power consumption could easily justify swapping out old systems for new.

There were stories of cloud operations providing a service where a credit card could spin up an on-demand cluster supercomputer that would rank among the largest in the world.

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

Forget About Cloud Computing. On-Premises Is All the Rage Again

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Forget About Cloud Computing. On-Premises Is All the Rage Again
Date: 16 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#23 Forget About Cloud Computing. On-Premises Is All the Rage Again

I had taken a 2 credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in 360 assembler for the 360/30. The univ shut down the datacenter on the weekend and I had the whole datacenter dedicated, although 48hrs w/o sleep made monday classes hard. The univ was getting a 360/67 for tss/360, replacing 709/1401, and got a 360/30 temporarily replacing the 1401 pending arrival of the 360/67. The 360/67 arrived within a yr of my taking the intro class and I was hired fulltime responsible for os/360 (tss/360 never came to production). Then CSC came out to install CP67 (3rd install after CSC itself and MIT Lincoln Labs) and I mostly played with it during my dedicated weekend time.

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with consolidating all dataprocessing into an independent business unit (including offering services to non-Boeing entities). I think the Renton datacenter was the largest in the world, with 360/65s arriving faster than they could be installed, boxes constantly being staged in the hallways around the machine room (joke that Boeing was getting 360/65s like other companies got keypunches; a precursor to cloud megadatacenters). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I wasn't doing other stuff).

During the 60s, there were also two spin-offs of CSC that began offering CP67/CMS commercial online services (specializing in the wall street financial industry). This was in the period when IBM rented/leased 360 computers and charges were based on the "system meter" ... which ran whenever the CPU(s) and/or any I/O channels were busy (CPU(s) and all channels had to be idle for 400ms before the system meter stopped). To reduce the IBM billing and people costs, CSC and the commercial spinoffs modified CP67 for offshift "dark room" operation with no humans present, and terminal channel programs that allowed channels to stop (but were instantly "on" whenever any characters arrived), as part of 7x24 availability (sort of a cloud equivalent of systems that go dormant drawing no power when idle, but are instantly operational "on-demand"). Trivia: long after IBM had switched to selling computers, IBM's "MVS" operating system still had a 400ms timer event that would have guaranteed that the system meter never stopped.
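
A small simulation sketch of the system-meter rule (the 400ms figure is from the text; the tick size and the sample busy patterns are made up) showing why polling-style channel programs kept the meter running while the "dark room" changes let it stop:

# toy simulation of the rental "system meter" in 10ms ticks: it runs while
# CPU or any channel is busy, and only stops after 400ms of total idle
TICK = 0.01
IDLE_GRACE_TICKS = round(0.400 / TICK)

def metered_seconds(busy, horizon_secs):
    """busy(t) -> True if CPU or any channel is busy at time t (seconds)."""
    metered = idle_run = 0
    meter_on = False
    for i in range(round(horizon_secs / TICK)):
        if busy(i * TICK):
            meter_on, idle_run = True, 0
        else:
            idle_run += 1
            if idle_run >= IDLE_GRACE_TICKS:
                meter_on = False
        if meter_on:
            metered += 1
    return metered * TICK

# one second of work then idle, versus a 100ms poll every 350ms
print(metered_seconds(lambda t: t < 1.0, 10.0))            # ~1.4s metered
print(metered_seconds(lambda t: (t % 0.35) < 0.10, 10.0))  # ~10s: never idle 400ms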

IBM CSC
https://www.garlic.com/~lynn/subtopic.html#545tech
online computer services posts
https://www.garlic.com/~lynn/submain.html#online
cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880, 3380, Data-streaming

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880, 3380, Data-streaming
Date: 16 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming

When I transferred to SJR on the west coast, I got to wander around IBM (and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, pre-scheduled, stand-alone testing and mentioned that they had recently tried MVS, but it had 15min MTBF (in that environment) requiring manual re-ipl. I offered to rewrite the I/O system to make it bullet proof and never fail, allowing any amount of on-demand testing, greatly improving productivity. Bldg15 then got the 1st engineering 3033 (outside POK cpu engineering) and since testing only used a percent or two of the CPU, we scrounge up a 3830 and a 3330 string for our own, private online service. At the time, air bearing simulation (for thin film head design) was only getting a couple turn arounds a month on the SJR 370/195. We set it up on the bldg15 3033 (slightly less than half the 195 MIPS) and they could get several turn arounds a day.

A couple years later, when 3380s were about to ship, FE had a test stream of 57 simulated hardware errors that were likely to occur; in all 57 cases, MVS was (still) crashing, and in 2/3rds of the cases there was no indication of what caused the failure
https://www.garlic.com/~lynn/2007.html#email801015

first thin film head was 3370
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

then used for 3380; the original 3380 had 20 track spacings between each data track, then the spacing was cut in half for double the capacity, then cut again for triple the capacity (3380K). The "father of 801/risc" then talks me into helping with a "wide" disk head design, to read/write 16 closely spaced data tracks in parallel (plus follow two servo tracks, one on each side of the 16 data track grouping). Problem was the data rate would have been 50mbytes/sec at a time when mainframe channels were still 3mbytes/sec. However, 40mbyte/sec disk arrays were becoming common and the Cray channel had been standardized as HIPPI.
https://en.wikipedia.org/wiki/HIPPI
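
Quick arithmetic behind the 50mbyte/sec figure, assuming roughly the 3380's 3mbyte/sec per-track transfer rate and counting only the 16 data tracks (the two servo tracks are followed, not read as data):

# approximate arithmetic for the "wide head" aggregate data rate
PER_TRACK = 3_000_000      # roughly the 3380's 3mbyte/sec per-track rate
DATA_TRACKS = 16           # read/written in parallel; 2 servo tracks only followed

aggregate = PER_TRACK * DATA_TRACKS
print(f"~{aggregate / 1e6:.0f} mbyte/sec aggregate, "
      f"{aggregate / 3_000_000:.0f}x a 3mbyte/sec mainframe channel")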

1988, IBM branch asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes fibre channel standard ("FCS", initially 1gbit, full-duplex, got RS/6000 cards capable of 200mbytes/sec aggregate for use with 64-port FCS switch). In 1990s, some serial stuff that POK had been working with for at least the previous decade is released as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define heavy weight protocol that significantly reduces ("native") throughput, which ships as "FICON". Latest public benchmark I've seen was z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time a FCS is announced for E5-2600 blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs recommended that SAPs (system assist processors that do actual I/O) be held to 70% CPU (or around 1.5M IOPS).
https://en.wikipedia.org/wiki/Fibre_Channel

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880, 3380, Data-streaming

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880, 3380, Data-streaming
Date: 16 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming

Were the problems with the 3081's data-streaming 3mbyte channels, or with earlier 370s? To allow 3mbyte 3380s to be used with 370 1.5mbyte channels, there was the 3880 "Calypso" speed matching buffer & the (original) ECKD CCWs ... but it had enormous problems (old email from 07Sep1982 mentions a large number of severity ones and engineers on site for the hardware problems ... but claims that the ECKD software was in much worse shape).
https://www.garlic.com/~lynn/2007e.html#email820907b

Selector & block mux channels did an end-to-end handshake for every byte transferred and aggregate channel length was capped at 200ft. Data streaming channels (for 3mbyte/sec 3380s) did a multiple-byte transfer for each end-to-end handshake and increased aggregate channel length to 400ft.
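
A rough model (propagation speed, per-handshake overhead and the 3mbyte/sec clock cap are all assumed, illustrative values) of why per-byte handshaking limits both the data rate and the cable length, while data streaming relaxes both:

# illustrative model: per-byte handshake vs data streaming; propagation speed,
# handshake overhead and the 3mbyte/sec clock cap are assumed values
PROP_SECS_PER_FT = 1.5e-9        # ~1.5ns/ft signal propagation (assumed)
HANDSHAKE_OVERHEAD = 100e-9      # fixed per-handshake electronics delay (assumed)
CLOCK_LIMIT = 3e6                # channel's own clocked rate, bytes/sec

def bytes_per_sec(cable_ft, bytes_per_handshake):
    """Handshake-limited transfer rate, capped by the channel clock."""
    round_trip = 2 * cable_ft * PROP_SECS_PER_FT + HANDSHAKE_OVERHEAD
    return min(bytes_per_handshake / round_trip, CLOCK_LIMIT)

for ft in (200, 400):
    print(f"{ft}ft cable: per-byte handshake {bytes_per_sec(ft, 1) / 1e6:4.2f} "
          f"mbyte/sec, data streaming {bytes_per_sec(ft, 64) / 1e6:4.2f} mbyte/sec")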

1978, bldg15 (also) got an engineering 4341/E5 and in jan1979, a branch office gets me to do a benchmark for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami)

decade later, last product we did at IBM was HA/6000, started out for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scaleup with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scaleup with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with Unix (I also do a distributed lock manager supporting VAXCluster semantics to ease the ATEX port). Early JAN92, have a meeting with Oracle CEO where IBM/AWD Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan92, cluster scaleup is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work with anything that has more than four processors; we leave a few months later.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880, 3380, Data-streaming

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880, 3380, Data-streaming
Date: 16 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#26 IBM 3880, 3380, Data-streaming

Future System was completely different from 370 and was going to completely replace it (internal politics during FS was killing off 370 efforts, and the lack of new 370 stuff during the period is credited with giving the clone 370 makers their market foothold). When FS imploded, there was a mad rush to get stuff back into the 370 product pipelines, including the quick&dirty 3033&3081
http://www.jfsowa.com/computer/memo125.htm

About the same time, the head of POK managed to convince corporate to kill the vm370 project, shut down the development group and transfer all the people to POK for MVS/XA (Endicott managed to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch). Some of the people that went to POK developed the primitive virtual machine VMTOOL (in 370/xa architecture; it required the SIE instruction to move in/out of virtual machine mode) in support of MVS/XA development.

Then customers weren't moving to MVS/XA as fast as predicted; however, Amdahl was having better success, being able to run both MVS and MVS/XA concurrently on the same machine with their (microcode hypervisor) "Multiple Domain". As a result, VMTOOL was packaged 1st as VM/MA (migration aid) and then VM/SF (system facility), able to run MVS and MVS/XA concurrently on 3081. However, because VMTOOL and SIE were originally never intended for production operation, and because of limited microcode memory, the SIE microcode had to be "paged" (part of the 3090 claim was that 3090 SIE was designed for performance from the start).

Then POK decided they wanted a few hundred people to create VM/XA, bringing VMTOOL up to the feature, function and performance of VM370 ... the counter from Endicott was that a sysprog at IBM Rochester had added full 370/XA support to VM/370 ... POK wins.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

some vmtool, vm/ma, vm/sf, vm/xa posts
https://www.garlic.com/~lynn/2025.html#20 Virtual Machine History
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024.html#121 IBM VM/370 and VM/XA
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2021c.html#56 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM WatchPad

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM WatchPad
Date: 16 Mar, 2025
Blog: Facebook
IBM WatchPad
https://en.m.wikipedia.org/wiki/IBM_WatchPad

... before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, kildall worked on IBM CP/67 (precursor to VM370) at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

... aka CP/M "Microprocessor" rather than "67"

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.
... snip ...

then communication group was fiercely fighting off client/server and distributed computing trying to preserve their dumb terminal paradigm (aka PCs limited to 3270 emulation)

late 80s, a senior disk engineer got a talk scheduled at the communication group's internal annual world-wide conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing data fleeing mainframe datacenters to more distributed-computing-friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions, but they were constantly vetoed by the communication group (with their corporate strategic ownership of everything that crossed datacenter walls). The GPD/Adstar software executive partially compensated by investing in distributed computing startups that would use IBM disks (and periodically asked us to drop by his investments to lend a hand).

It wasn't just disks; a couple years later IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company (a takeoff on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

The communication group was also performance kneecapping PS2 microchannel cards. IBM AWD workstation division had done their own cards for the PC/RT (PC/AT 16bit bus), including a 4mbit token-ring card. For the RS/6000 microchannel, they were told they couldn't do their own cards, but had to use the standard PS2 cards. It turns out the PS2 microchannel 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card (joke that a PC/RT 4mbit TR server would have higher throughput than an RS/6000 16mbit TR server).

Mid-80s, the communication group was trying to block the release of mainframe TCP/IP support. When they lost, they changed their strategy: since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then do the changes to support RFC1044 and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times increase in bytes moved per instruction executed).

Communication Group preserving dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
IBM Cambridge Science Center
https://www.garlic.com/~lynn/subtopic.html#545tech
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

Learson Tries To Save Watson IBM

From: Lynn Wheeler <lynn@garlic.com>
Subject: Learson Tries To Save Watson IBM
Date: 17 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#13 Learson Tries To Save Watson IBM
other recent Financial Engineering
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering

Note AMEX and KKR were in competition for private-equity, reverse-IPO(/LBO) buyout of RJR and KKR wins. Barbarians at the Gate
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away president of AMEX to help. Then IBM has one of the largest losses in the history of US companies and was preparing to breakup the company when the board hires the former president of AMEX as CEO to try and save the company, who uses some of the same techniques used at RJR.
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

Stockman and financial engineering company
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall Street momo traders. It was actually a stock buyback contraption on steroids. During the five years ending in fiscal 2011, the company spent a staggering $67 billion repurchasing its own shares, a figure that was equal to 100 percent of its net income.

pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82 billion, or 122 percent, of net income over this five-year period. Likewise, during the last five years IBM spent less on capital investment than its depreciation and amortization charges, and also shrank its constant dollar spending for research and development by nearly 2 percent annually.
... snip ...

(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases have come to a total of over $159 billion since 2000.
... snip ...

(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a little?) shift focus needing it for redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
former AMEX president and IBM CEO
https://www.garlic.com/~lynn/submisc.html#gerstner
retirement/pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback

--
virtualization experience starting Jan1968, online at home since Mar1970

Some Career Highlights

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some Career Highlights
Date: 18 Mar, 2025
Blog: Facebook
Last product I did at IBM was HA/CMP. It started out as HA/6000 for the NYTimes to migrate their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in same source base with Unix (I do a distributed lock manager supporting VAXCluster semantics to ease the migration). We did several marketing trips in Europe (a different city every day and several customers/day) and the Far East. On one trip to Hong Kong, we were riding the elevator up in a large bank building/skyscraper with the local marketing team, when a newly minted marketing rep in the back asks if I was the "wheeler" of the "wheeler scheduler". I said yes; he said we studied you at the Univ. of Waterloo (I asked if there was any mention of the joke I had included in the code).

As undergraduate in the 60s, univ had hired me fulltime responsible for OS/360 and then CSC came out to install CP67 (3rd after CSC itself and MIT Lincoln Labs) and I mostly got to play with it during my dedicated weekend time (although 48hrs w/o sleep made Monday classes hard). I redid a lot of CP67, including implementing dynamic adaptive resource management ("wheeler scheduler") which a lot of IBM and customers ran. After joining IBM one of my hobbies was enhanced production operating systems for internal datacenters (one of the first and long time customer was the world-wide online sales&marketing support HONE systems). After decision to add virtual memory to 370s, a VM370 development group was formed and in the morph from CP67->VM370, lots of stuff was dropped and/or greatly simplified. Starting with VM370R2, I was integrating lots of stuff from my CP67 into VM370 for (internal) "CSC/VM" .... and the SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
organization passed resolution asking that the VM370 "wheeler scheduler" be released to customers.

Trivia: Early JAN1992 had HA/CMP meeting with Oracle CEO, where IBM/AWD Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late JAN1992, cluster scaleup was transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).

Late 1992, IBM has one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company (take-off on "baby bell" break-up a decade earlier):
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.

Note: 20yrs before 1992 loss, Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying Watson culture&legacy, pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

misc
https://www.enterprisesystemsmedia.com/mainframehalloffame
http://mvmua.org/knights.html
... and IBM Systems mag article gone 404
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
... other details
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
Dynamic Adaptive Resource Management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

Some Career Highlights

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some Career Highlights
Date: 19 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#30 Some Career Highlights

Early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM (in 50s, instructor at USAF weapons school, he was considered possibly the best jet fighter pilot in the world, then he went on to redo the original F-15 design, cutting weight nearly in half, then responsible for YF16 and YF17 that become F-16 and F-18). In 89/90, the commandant of the Marine Corps leverages Boyd for a make-over of the Marine Corps (at the time IBM was desperately in need of make-over). When Boyd passed in 1997, the USAF had pretty much disowned him and it was the Marines at Arlington, and his effects go to the (Marine Corps) Gray research center and library. The former commandant continued to sponsor Boyd conferences at Marine Corps University in Quantico. One year, the former commandant comes in after lunch and speaks for two hrs, totally throwing the schedule off, but nobody was going to complain. I'm sitting in the back opposite corner of the room and when he was done, he makes a straight line for me; as he was approaching, all I could think of was all the Marines I had offended in the past and that somebody had set me up (he passed last April).

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Forget About Cloud Computing. On-Premises Is All the Rage Again

From: Lynn Wheeler <lynn@garlic.com>
Subject: Forget About Cloud Computing. On-Premises Is All the Rage Again
Date: 19 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#23 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again

Jan1979, got con'ed by IBM branch office into doing engineering 4341 benchmark for national lab looking at getting 70 4341s for compute farm (sort of leading edge of cluster supercomputing tsunami). A decade later (1988), the last product I did at IBM was HA/6000, originally for NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in same source base with Unix (I did a distributed lock manager supporting VAXCluster semantics to ease the port).

Early Jan1992, there is HA/CMP meeting with Oracle CEO where IBM/AWD/Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan1992, cluster scale-up is transferred for announce as IBM supercomputer (for technical/scientific "ONLY") and we are told we couldn't work on anything with more than four processors (we leave IBM a few months later).

Not long after, we are brought in as consultants to a small client/server startup. Two former Oracle people (that had been in the Ellison meeting) are there responsible for something called "commerce" server and they want to do payment transactions on the server; the startup had also invented this stuff they called "SSL" they want to use, the result now frequently called "electronic commerce". I had responsibility for everything between "commerce" servers and the payment networks.

Was also working with some other consultants that were doing stuff at a nearby company called GOOGLE. As a service they were collecting all the web pages they could find on the Internet and supporting a search service for finding things. The consultants first were using rotating ip-addresses with DNS A-records for load balancing (but one of the problems was that DNS responses tended to be cached locally and so were very poor at load-balancing). They then modified the GOOGLE boundary routers to maintain information about back-end server workloads and provide dynamic adaptive workload balancing.
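
A minimal sketch (not their code) of the difference between the two approaches: rotating through DNS A-records versus having the boundary dispatch each request to the back end currently reporting the lightest load. The server names and load numbers below are made up, purely for illustration:

import itertools

# Hypothetical back-end servers with a last-reported load figure
# (e.g. requests in flight); names and numbers are made up.
servers = {"be1": 12, "be2": 3, "be3": 47}

# 1) Rotating DNS A-records: callers just cycle through the address list.
#    Local DNS caching means many clients can end up stuck on one entry.
rotation = itertools.cycle(servers)
def pick_round_robin():
    return next(rotation)

# 2) Dynamic adaptive balancing at the boundary: route each new request
#    to whichever back end currently reports the lightest load.
def pick_least_loaded():
    return min(servers, key=servers.get)

for _ in range(3):
    print("round-robin:", pick_round_robin(), " least-loaded:", pick_least_loaded())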

One of the issues was that as the number of servers exploded and they started assembling their own as part of enormous megadatacenters, they had also so dramatically reduced system costs that they could provision a number of servers possibly ten times the normal workload, available for peak "on-demand" requirements.

Based on work done for "e-commerce", I do a talk "Why Internet Isn't Business Critical Dataprocessing" that (among others) Postel (IETF/Internet RFC editor) sponsored at ISI/USC.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
e-commerce payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

some recent posts mentioning "Why Internet Isn't Business Critical Dataprocessing"
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024g.html#80 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#71 Netscape Ecommerce
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#25 Taligent
https://www.garlic.com/~lynn/2024g.html#20 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024f.html#47 Postel, RFC Editor, Internet
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#37 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022f.html#46 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#105 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career

--
virtualization experience starting Jan1968, online at home since Mar1970

3081, 370/XA, MVS/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3081, 370/XA, MVS/XA
Date: 20 Mar, 2025
Blog: Facebook
Future System was completely different from 370 and was going to replace it (internal politics was killing off 370 efforts during FS and claim is the lack of new 370 during the period is responsible for giving the clone makers their market foothold; also IBM marketing had to fine tune their FUD skills). Then when FS implodes there is mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081.
http://www.jfsowa.com/computer/memo125.htm

308x were going to be multiprocessor only and the original two processor 3081D had lower aggregate processing than the single processor Amdahl. IBM then quickly doubled the processor cache size for the 3081K ... which was about the same aggregate MIPS as the single processor Amdahl. However, MVS documents said that its 2-CPU multiprocessor support only had 1.2-1.5 times the throughput of a single processor, so a 2-CPU 3081K with the same aggregate MIPS as a single processor Amdahl only got .6-.75 the throughput of the Amdahl single processor (under MVS).
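
The arithmetic behind that comparison, as a small sketch; the single-processor Amdahl throughput is just taken as 1.0 and the 1.2-1.5 multiprocessor factor is the one quoted above:

# Let the single-processor Amdahl throughput be 1.0.
amdahl_1cpu = 1.0

# 2-CPU 3081K had about the same *aggregate* MIPS, i.e. each 3081K
# processor is roughly half the Amdahl processor.
k_cpu = amdahl_1cpu / 2

# MVS 2-CPU support only delivered 1.2-1.5 times a single processor
# (rather than 2x), so effective 3081K throughput under MVS:
for mp_factor in (1.2, 1.5):
    print(f"MP factor {mp_factor}: 3081K ~{k_cpu * mp_factor:.2f} "
          f"of the Amdahl single processor")
# prints ~0.60 and ~0.75, the .6-.75 figure above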

Also ACP/TPF (airline, reservation, transaction) systems didn't have multiprocessor support, and IBM was concerned that the whole market would transition to the latest Amdahl single processor. Eventually IBM did offer a 1-CPU 3083 (3081 with one of the processors removed) for the ACP/TPF market.

There is a story of IBM having a flow sensor between the heat exchange unit and the TCMs ... but not external to the heat exchange unit. One customer lost flow external to the heat exchange unit, so the only sensor left was thermal; by the time the thermal tripped, it was too late and all the TCMs fried. IBM then retrofitted flow sensors external to the heat exchange unit.

Trivia: as undergraduate in the 60s, univ. hired me fulltime responsible for OS/360, then before I graduate I'm hired fulltime into small group in Boeing CFO office to help with the formation of Boeing Computing Services (consolidate all dataprocessing into independent business unit, including offering service to non-Boeing entities).

A decade ago, I was asked to track down the decision to add virtual memory to all 370s and found staff to executive making the decision. Basically MVT storage management was so bad that region sizes were being specified at four times larger than used ... so a typical 1mbyte 370/165 would only run four concurrent regions, insufficient to keep the machine busy and justified. Going to 16mbyte virtual memory (SVS) allowed increasing the number of regions by a factor of four (capped at 15 because of the 4-bit storage protect keys) with little or no paging (as systems got larger they ran into the 15 limit, which spawned MVS, giving each region its own address space).
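
The region arithmetic, as a small sketch; the 256kbyte specified-region size is a made-up illustration, while the 4x over-specification, 1mbyte real storage, and 15-region key limit are from the description above:

real_storage_kb = 1024            # typical 1mbyte 370/165
region_spec_kb  = 256             # hypothetical specified region size (illustration only)
overspecify     = 4               # regions specified ~4x larger than actually used
region_used_kb  = region_spec_kb // overspecify   # ~64kbytes actually touched

# MVT: regions occupy real storage at their full specified size
mvt_regions = real_storage_kb // region_spec_kb             # -> 4 concurrent regions

# SVS: regions laid out in 16mbyte virtual memory at specified size, but real
# storage only has to hold what each region actually touches (little/no paging)
svs_regions = min(real_storage_kb // region_used_kb, 15)    # -> 15 (4-bit key cap)

print(mvt_regions, svs_regions)   # 4 under MVT, ~4x more (capped at 15) under SVS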

Turns out Boeing Huntsville had run into something similar with MVT. They had gotten a two processor 360/67 for TSS/360 with lots of 2250s for CAD work ... but TSS/360 wasn't production, so they ran it as two 360/65s with MVT ... and ran into the MVT problem. They modified MVT release 13 to run in virtual memory mode (but no paging), which partially addressed MVT's storage management problems.

I had been writing articles in the 70s about needing increasing numbers of concurrently running applications. In the early 80s, I wrote that since the beginning of 360s, disk relative system performance had declined by an order of magnitude (disks got 3-5 times faster, systems got 40-50 times faster, so the only way to keep up is to have huge numbers of concurrent i/o). Disk division executives took exception and assigned the division performance group to refute the claim; after a couple weeks, they came back and said I had slightly understated the problem.
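
The relative decline is just the ratio of the two improvement factors; a quick sketch using the ranges above:

# disks got 3-5x faster while systems got 40-50x faster
for disk_x, system_x in ((3, 40), (5, 50)):
    decline = system_x / disk_x
    print(f"disks {disk_x}x faster, systems {system_x}x faster: "
          f"disk relative system performance down ~{decline:.0f}x")
# prints roughly 13x and 10x, i.e. about an order of magnitude decline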

Note: article about IBM finding that customers weren't converting to MVS as planned
http://www.mxg.com/thebuttonman/boney.asp

something similar in the 80s with getting customers to migrate from MVS->MVS/XA. Amdahl was having better success since they had (microcode hypervisor) "multiple domain" (similar to LPAR/PRSM a decade later on 3090) being able to run MVS & MVS/XA concurrently.

After FS imploded, the head of POK had also convinced corporate to kill the vm/370 product, shutdown the development group and transfer all people to POK for MVS/XA (Endicott managed to save the VM/370 product mission for the mid-range, but had to recreate a development group from scratch) ... so there was no equivalent IBM capability for running MVS/XA and MVS concurrently

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some posts mentioning Boeing Huntsville, CSA, MVS
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2012h.html#57 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2010m.html#16 Region Size - Step or Jobcard

The analysis of DASD relative system performance was respun into a SHARE presentation on configuring filesystems for improved throughput:


SHARE 63 Presentation B874

DASD Performance Review
8:30 August 16, 1984
Dr. Peter Lazarus

IBM Tie Line 543-3811
Area Code 408-463-3811
GPD Performance Evaluation
Department D18
Santa Teresa Laboratory
555 Bailey Avenue
San Jose, CA., 95150

.... snip ...

posts mentioning getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk

a few recent posts reference relative DASD performance
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#107 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#65 Where did CKD disks come from?
https://www.garlic.com/~lynn/2024f.html#9 Emulating vintage computers
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#109 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#92 IBM DASD 3380
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#6 Mainrame Channel Redrive

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370/125

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370/125
Date: 20 Mar, 2025
Blog: Facebook
1975, I got asked to try and get VM/370 running on 256kbyte 125-II that was in the office of a foreign shipping company in Manhattan (not supported). There were two problems,

1) there was a microcode bug in the 125's 370 "long" instructions: the microcode prechecked ending addresses before executing the instruction (fine for all 360 instructions and most 370 instructions, but the "long" instructions are defined to execute incrementally until an address runs into a problem, storage protect or end of memory). At boot, VM/370 executed MVCL to clear memory and find the end of memory; since the precheck prevented the instruction from executing at all, the machine effectively appeared as if it had zero memory.

2) CP67->VM370 resulted in lots of kernel bloat ... well over 100kbytes and not supported on less than 384Kbytes. I had done some CP67 optimization including reducing fixed kernel size to under 70kbytes ... so I was asked to see how close I could get VM370 to that for the 125.

Then the 125 group asked if I could do multiprocessor support for a five processor 125. The 115&125 had a nine position memory bus for up to nine microprocessors. All 115 microprocessors were the same (integrated controllers, 370 cpu, etc), with the microprocessor running 370 microcode getting about 80KIPS. The 125 was the same but had a faster microprocessor for the one running 370 microcode, getting about 120KIPS. 125 systems rarely had more than four controller microprocessors and the single 370 microprocessor, so at least four bus positions were empty (five 120KIPS microprocessors would get close to 600KIPS aggregate, .6MIPS).

At the time Endicott also asked me to help in doing the ECPS microcode assist for 138/148 ... and I would also implement the same for the 125 multiprocessor, with some fancy tweaks that moved a lot of the multiprocessor support into microcode as well. Then Endicott objected to releasing the 125 5-CPU processor since it would have higher throughput than the 148. In the escalation meetings, I had to argue for both sides of the table ... and Endicott eventually won.

So I then did a 370 software implementation of the multiprocessor support for internal VM370s, initially for HONE (internal world-wide sales & marketing support online systems) so they could add 2nd processors to their 168 and 158 systems.

posts mentioning 125 5-CPU multiprocessor
https://www.garlic.com/~lynn/submain.html#VAMPS
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
internal IBM CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

old archived post with initial analysis for 138/148 ECPS
https://www.garlic.com/~lynn/94.html#21

some recent posts mentioning ECPS
https://www.garlic.com/~lynn/2024e.html#33 IBM 138/148
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023f.html#57 Vintage IBM 370/125
https://www.garlic.com/~lynn/2023d.html#95 370/148 masthead/banner
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#24 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#79 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023b.html#64 Another 4341 thread
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022e.html#102 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#87 370/195
https://www.garlic.com/~lynn/2021k.html#38 IBM Boeblingen
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2021c.html#62 MAINFRAME (4341) History

--
virtualization experience starting Jan1968, online at home since Mar1970

3081, 370/XA, MVS/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3081, 370/XA, MVS/XA
Date: 20 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#33 370/XA, MVS/XA

Shortly after graduating and joining IBM, the 370/195 group cons me into helping with multithreading the machine. The 195 had a 64 entry pipeline and out-of-order execution, but no branch prediction and no speculative execution (conditional branches drained the pipeline), so most codes ran the machine at half throughput. Running two threads, simulating a 2-CPU multiprocessor, could have higher aggregate throughput (modulo MVT/MVS 2-CPU multiprocessor support only having 1.2-1.5 times the throughput of a single processor, aka 3081K only .6-.75 the throughput of Amdahl single processor).
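
A toy model (nothing like the real 195 hardware) of why a second instruction stream helps when conditional branches drain the pipeline: while one stream is waiting for the pipeline to refill, the other stream's independent instructions can keep issuing. The drain penalty and branch frequency below are made-up parameters, purely for illustration:

import random
random.seed(1)

DRAIN = 8            # cycles lost refilling after a conditional branch (made up)
BRANCH_RATE = 0.15   # fraction of instructions that are conditional branches (made up)
CYCLES = 100_000

def run(threads):
    stall = [0] * threads            # per-stream cycles left before it can issue again
    issued = 0
    for _ in range(CYCLES):
        for t in range(threads):
            if stall[t] == 0:
                issued += 1                       # issue one instruction this cycle
                if random.random() < BRANCH_RATE:
                    stall[t] = DRAIN              # this stream's pipeline drains
                break                             # at most one instruction issued per cycle
        stall = [max(0, s - 1) for s in stall]
    return issued / CYCLES

print(f"1 instruction stream:  ~{run(1):.2f} instructions/cycle")
print(f"2 instruction streams: ~{run(2):.2f} instructions/cycle")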

This has Amdahl winning the battle to make ACS 360-compatible; shortly after it gets killed, he leaves IBM and starts his own company. It also mentions multithreading (and some of the ACS/360 features that show up more than 20yrs later with ES/9000)
https://people.computing.clemson.edu/~mark/acs_end.html
Sidebar: Multithreading: In summer 1968, Ed Sussenguth investigated making the ACS/360 into a multithreaded design by adding a second instruction counter and a second set of registers to the simulator. Instructions were tagged with an additional "red/blue" bit to designate the instruction stream and register set; and, as was expected, the utilization of the functional units increased since more independent instructions were available.
... snip ...

However, new 195 work got shut down with the decision to add virtual memory to all 370s (wasn't worth the effort to add virtual memory to the 195).

Then came Future System effort, completely different from 370 and was going to replace 370 (during FS, internal politics was killing off 370 activities and the lack of new 370 during FS is credited with giving clone 370 system makers their market foothold)
http://www.jfsowa.com/computer/memo125.htm

Later after FS implodes and the mad rush to get stuff back into the 370 product pipelines (including kicking off quick&dirty 3033&3081), I was asked to help with a 16-CPU multiprocessor (and we con the 3033 processor engineers into working on it in their spare time, lot more interesting than remapping 168 logic to 20% faster chips) ... this was after work on the 5-CPU 370/125 was killed and while doing 2-CPU VM370 for internal datacenters ... HONE 2-CPU was getting twice the throughput of 1-CPU (compared to MVS 2-CPU getting only 1.2-1.5 times).
https://www.garlic.com/~lynn/2025b.html#34 IBM 370/125

Everybody thought 16-cpu 370 was really great until somebody told the head of POK that it could be decades before the POK favorite son operating system (MVS) had (effective) 16-CPU support (i.e. at the time MVS docs claimed that MVS 2-CPU support only had 1.2-1.5 times throughput of a 1-CPU ... multiprocessor overhead increased as number of CPUs increased; POK doesn't ship a 16-CPU system until after turn of century). The head of POK then invites some of us to never visit POK again and directs the 3033 processor engineers, heads down and no distractions.

SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

some recent posts mentioning 370/195 multithreading
https://www.garlic.com/~lynn/2025.html#32 IBM 3090
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#24 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024e.html#115 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024d.html#101 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#66 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#52 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024c.html#20 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2023f.html#89 Vintage IBM 709
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2022h.html#32 do some Americans write their 1's in this way ?
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#12 Computer Server Market
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#60 370/195
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2021k.html#46 Transaction Memory
https://www.garlic.com/~lynn/2021h.html#51 OoO S/360 descendants
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195

--
virtualization experience starting Jan1968, online at home since Mar1970

FAA ATC, The Brawl in IBM 1964

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: FAA ATC, The Brawl in IBM 1964
Date: 21 Mar, 2025
Blog: Facebook
FAA ATC, The Brawl in IBM 1964
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514
Two mid air collisions 1956 and 1960 make this FAA procurement special. The computer selected will be in the critical loop of making sure that there are no more mid-air collisions. Many in IBM want to not bid. A marketing manager with but 7 years in IBM and less than one year as a manager is the proposal manager. IBM is in midstep in coming up with the new line of computers - the 360. Chaos sucks into the fray many executives- especially the next chairman, and also the IBM president. A fire house in Poughkeepsie N Y is home to the technical and marketing team for 60 very cold and long days. Finance and legal get into the fray after that.
... snip ...

Didn't deal with Fox & people while in IBM, but did a project with them (after we left IBM) in the company they had formed (after they left IBM).

The last project we did at IBM was HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
and had lots of dealings with TA to the FSD president ... who was also filling in 2nd shift writing ADA code for the latest IBM FAA modernization project ... and we were also asked to review the overall design.

HA/6000 was started 1988, originally for NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when started doing technical/scientific cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in same source base with Unix. Early Jan1992, in meeting with Oracle CEO, IBM/AWD/Hester told Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92.

Mid Jan92, gave FSD the latest update on HA/CMP scale-up ("MEDUSA") and they decided to use it for gov. supercomputers and the TA informed the Kingston supercomputer group. old email from him:
Date: Wed, 29 Jan 92 18:05:00
To: wheeler

MEDUSA uber alles...I just got back from IBM Kingston. Please keep me personally updated on both MEDUSA and the view of ENVOY which you have. Your visit to FSD was part of the swing factor...be sure to tell the redhead that I said so. FSD will do its best to solidify the MEDUSA plan in AWD...any advice there?

Regards to both Wheelers...

... snip ... top of post, old email index

Then day or two later, cluster scale-up is transferred to Kingston for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later). Less than 3weeks later, Computerworld news 17feb1992 ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

After leaving IBM, also did some consulting with Fundamental (& Sequent); FAA was using FLEX-ES (on Sequent) for 360 emulation; gone 404, but lives on at wayback machine
https://web.archive.org/web/20241009084843/http://www.funsoft.com/
https://web.archive.org/web/20240911032748/http://www.funsoft.com/index-technical.html
and Steve Chen (CTO at Sequent) before IBM bought them and shut it down
https://en.wikipedia.org/wiki/Sequent_Computer_Systems

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

past posts referencing FAA & "Brawl"
https://www.garlic.com/~lynn/2025.html#99 FAA And Simulated IBM Mainframe
https://www.garlic.com/~lynn/2024e.html#100 360, 370, post-370, multiprocessor
https://www.garlic.com/~lynn/2024d.html#12 ADA, FAA ATC, FSD
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS
https://www.garlic.com/~lynn/2024.html#114 BAL
https://www.garlic.com/~lynn/2023f.html#84 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2023d.html#82 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#73 Some Virtual Machine History
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022b.html#97 IBM 9020
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021i.html#20 FAA Mainframe
https://www.garlic.com/~lynn/2021f.html#9 Air Traffic System
https://www.garlic.com/~lynn/2021e.html#13 IBM Internal Network
https://www.garlic.com/~lynn/2021.html#42 IBM Rusty Bucket
https://www.garlic.com/~lynn/2019b.html#88 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2019b.html#73 The Brawl in IBM 1964

--
virtualization experience starting Jan1968, online at home since Mar1970

FAA ATC, The Brawl in IBM 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: FAA ATC, The Brawl in IBM 1964
Date: 21 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#36 FAA ATC, The Brawl in IBM 1964

trivia: I was doing stuff for disk bldg15/product test lab that got an engineering 4341 in 1978. In Jan1979, branch office cons me into doing benchmark for national lab that was looking at getting 70 4341s for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).
https://www.garlic.com/~lynn/2001n.html#6000clusters2
GOVERNMENT AGENCIES GO CRAZY FOR RISC SYSTEM/6000 CLUSTERS (From Ed Stadnick-Competitive Consultant)

Federal agencies have caught clustermania. Over the last six months, an increasing number of sites have strung together workstations with high-speed networks to form powerful replacements or augmentations to their traditional supercomputers.

At least nine federally funded supercomputer centers recently have installed workstation clusters, and virtually all of these clusters are based on IBM Corp.'s RISC System/6000 workstation line.

Growing Interest - In fact, the interest in clusters at federal and university sites has grown so much that IBM announced last month a cluster product that it would service and support. IBM's basic cluster configuration consists of at least two RISC System/6000 workstations, AIX, network adapters, cables and software packages.

The interest in clusters caught us by surprise, said Irving Wladawsky-Berger, IBM's assistant general manager of supercomputing systems. "It is one of these events where the users figured out what to do with our systems before we did."

Jeff Mohr, the chief scientist at High-Performance Technologies Inc., a federal systems integrator, said: "If you look at a Cray Y-MP 2E and a high-end RISC System/6000... the price differential can be literally 40-to-1. But if you take a look at the benchmarks, particularly on scalar problems, the differential can be about 5-to-1. So on certain problems, clustering can be very, very effective."

Agencies that have these cluster include the National Science Foundation at Cornell University and University of Illinois, DOE's Los Alamos National Laboratory, FERMI and Livermore National Labs, and the Naval Research Lab in Washington D.C.

Source: Federal Computer Week Date: May 11, 1992

... snip ... and
https://www.garlic.com/~lynn/2001n.html#6000clusters3

other trivia: Early 80s, I got HSDT project, T1 and faster computer links (both satellite and terrestrial). First long haul T1 was between the IBM Los Gatos lab on the west coast and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston that had a boat load of Floating Point Systems boxes (including 40mbyte/sec disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems
Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.
... snip ...

Early on had been working with the NSF director and was supposed to get $20M to interconnect the NSF Supercomputing Centers. Then congress cuts the budget, some other things happen and eventually a RFP is released (in part based on what we already had running), NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Computers in the 60s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Computers in the 60s
Date: 21 Mar, 2025
Blog: Facebook
took 2 credit hr intro to computers, at end of semester was hired to rewrite 1401 MPIO in 360 assembler for 360/30. Univ was getting 360/67 for tss/360 replacing 709(tape->tape)/1401(unit record front end) ... 360/30 temporarily replacing 1401 pending arrival of 360/67. Univ. shutdown datacenter on weekends and I was given the whole place dedicated (although 48hrs w/o sleep made Monday classes hard). They gave me a bunch of hardware & software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc; within a few weeks had 2000 card program. Within a year, the 360/67 arrives and I was hired fulltime responsible for os/360 (ran as 360/65, tss/360 didn't come to production). Before I graduate, univ library got ONR grant to do online catalogue, part of the money went for 2321 data cell. The project was also selected for betatest for the original CICS product ... and CICS support and debugging was added to tasks. CICS wouldn't come up ... turns out CICS had hard coded some BDAM options (that weren't covered in the documentation) and the library had built datasets with a different set of options.

before I graduate, was hired into small group in the Boeing CFO's office to help with the formation of Boeing Computing Services (consolidate all dataprocessing into independent business unit, including offering services to non-Boeing entities). I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (joke that Boeing got 360/65s like other companies got key punches). Lots of politics between Renton director and CFO, who only had a 360/30 up at Boeing field for payroll, although they enlarge the machine room to install a 360/67 for me to play with when I wasn't doing other stuff.

At the univ. CSC had come out to install CP67 (3rd after CSC itself and MIT Lincoln Labs) which I mostly got to play with during my weekend dedicated time. CP67 had come with 1052 & 2741 terminal support, with automagic terminal type identification using SAD CCW to change the port scanner type. Univ. had some ASCII TTY 33&35s ... so I integrate ASCII terminal support in with the automagic terminal type. I then wanted to have a single dial-in number for all terminal types ("hunt group") ... didn't quite work; while the scanner type could be changed, IBM had taken a short cut and hard wired the port baud rate. Univ. then kicks off clone controller product, a channel interface board for Interdata3 programmed to emulate IBM controller (with auto baud support). This was upgraded with Interdata4 for the channel interface and a cluster of Interdata3s for port interfaces, which Interdata and later Perkin-Elmer sell (and four of us are written up for some part of IBM clone controller business).
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
CICS &/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

some recent Univ 709, 1401, MPIO, 360/30, 360/67, Boeing CFO, Renton posts
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#1 Large Datacenters
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"

--
virtualization experience starting Jan1968, online at home since Mar1970

FAA ATC, The Brawl in IBM 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: FAA ATC, The Brawl in IBM 1964
Date: 21 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#36 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2025b.html#37 FAA ATC, The Brawl in IBM 1964

CSC had tried to get a 360/50 for hardware modifications to add virtual memory support and implement virtual machine support; however all the extra 360/50s were going to the FAA ATC effort, so they had to settle for a 360/40 to modify with virtual memory and they implemented "CP/40". Then when the 360/67 became available standard with virtual memory, CP/40 morphs into CP/67 (precursor to VM370). I was undergraduate at univ and fulltime responsible for OS/360 (running on 360/67 as 360/65) when CSC came out to install CP67 (3rd after CSC itself and MIT Lincoln Labs) and I got to rewrite a lot of CP67 code. Six months later, CSC was having a one week CP67 class in LA. I arrive Sunday and am asked to teach the CP67 class; the CSC members that were going to teach it had given notice on Friday ... leaving for one of the CSC commercial CP67 online spinoffs.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

couple recent posts mentioning CP/40 and teaching one week CP/67 class
https://www.garlic.com/~lynn/2024f.html#40 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024.html#28 IBM Disks and Drums

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM APPN

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM APPN
Date: 22 Mar, 2025
Blog: Facebook

https://www.garlic.com/~lynn/2025.html#0 IBM APPN
https://www.garlic.com/~lynn/2025.html#1 IBM APPN
https://www.garlic.com/~lynn/2025.html#2 IBM APPN
https://www.garlic.com/~lynn/2025.html#12 IBM APPN
https://www.garlic.com/~lynn/2025.html#13 IBM APPN

same time involved in doing HSDT, T1 and faster computer links (both terrestrial and satellite) ... mentioned in the earlier APPN posts

was asked to see about putting out a VTAM/NCP simulator done by a baby bell on IBM Series/1 as an IBM Type-1 product ... basically compensating for lots of SNA/VTAM shortcomings along with many new features (in part encapsulating SNA traffic tunneled through real networking). Part of my presentation at the fall 1986 SNA ARB (architecture review board) meeting in Raleigh:
https://www.garlic.com/~lynn/99.html#67

also part of "baby bell" presentation at spring 1986 IBM user COMMON meeting
https://www.garlic.com/~lynn/99.html#70

Lots of criticism of the ARB presentation, however the Series/1 data came from baby bell production operation, the 3725 came from the communication group HONE configurator (if something wasn't accurate, they should correct their 3725 configurator).

objective was to start porting it to RS/6000 after it first ships as IBM product. Several IBMers involved were well acquainted with the communication group internal politics and attempted to provide countermeasures to everything that might be tried, but what happened next can only be described as reality/truth is stranger than fiction.

CSC trivia: one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters and HONE (after CSC itself) was my first and long time customer. Also other CSC co-workers tried hard to convince CPD that they should use the (Series/1) Peachtree processor for 3705.

T1 trivia: FSD for gov. accounts with failing 60s IBM telecommunication controllers supporting T1, had come out with the Zirpel T1 cards for Series/1 (which hadn't been available to the "baby bell").

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some recent posts mentioning Series/1 zirpel T1 card
https://www.garlic.com/~lynn/2025b.html#18 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#15 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#99 Terminals
https://www.garlic.com/~lynn/2024g.html#79 Early Email
https://www.garlic.com/~lynn/2024e.html#34 VMNETMAP
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023c.html#35 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1

--
virtualization experience starting Jan1968, online at home since Mar1970

AIM, Apple, IBM, Motorola

From: Lynn Wheeler <lynn@garlic.com>
Subject: AIM, Apple, IBM, Motorola
Date: 23 Mar, 2025
Blog: Facebook
The last product we did at IBM was HA/6000 approved by Nick Donofrio in 1988 (before RS/6000 was announced) for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in same source base with Unix. The S/88 product administrator then starts taking us around to their customers and also has me do a section for the corporate continuous availability strategy document ... it gets pulled when both Rochester/AS400 and POK/(high-end mainframe) complain they couldn't meet the requirements.

When HA/6000 started, the executive that we had first reported to then goes over to head up Somerset for AIM (later leaves for SGI and president of MIPS)
https://en.wikipedia.org/wiki/AIM_alliance
to do a single chip 801/RISC for Power/PC, including Motorola 88k bus/cache enabling multiprocessor configurations

Early Jan1992 have a meeting with Oracle CEO and IBM/AWD Hester tells Ellison we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).

1992, IBM has one of the largest losses in the history of US companies and was in the process of being re-orged into the 13 "baby blues" in preparation for breaking up the company (take off on the "baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

Not long after leaving IBM, I was brought in as consultant into small client/server startup, two of the former Oracle people (that were in the Hester/Ellison meeting) were there responsible for something they called "commerce server" and they wanted to do payment transactions, the startup had also invented this technology they called "SSL" they wanted to use, it is now frequently called "electronic commerce" (or ecommerce).

I had complete responsibility for everything between "web servers" and gateways to the financial industry payment networks. Payment network trouble desks had 5min initial problem diagnoses ... all circuit based. I had to do a lot of procedures, documentation and software to bring packet-based internet up to that level. I then did a talk (based on ecommerce work) "Why Internet Wasn't Business Critical Dataprocessing" ... which Postel (Internet standards editor) sponsored at ISI/USC.

Other Apple history: in the 80s, IBM and DEC co-sponsored MIT Project Athena (X-windows, Kerberos, etc), each contributing $25M. Then IBM sponsored a CMU group ($50M) that did MACH, Camelot, Andrew widgets, Andrew filesystem, etc. Apple brings back Jobs and the MAC is redone, built on CMU MACH.
https://en.wikipedia.org/wiki/Mach_(kernel)

1993, mainframe and multi-chip RIOS RS/6000 comparison (industry benchmark, number program iterations compared to reference platform)
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS; 16-system: 2BIPS; 128-system: 16BIPS


During 90s, i86 implements pipelined, on-the-fly, hardware translation of i86 instructions to RISC micro-ops for execution, largely negating RISC throughput advantage. 1999, enabled multiprocessor, but still single core chips
• single IBM PowerPC 440 hits 1,000MIPS
• single Pentium3 hits 2,054MIPS (twice PowerPC 440)


by comparison, mainframe Dec2000
• z900 : 16 processors, 2.5BIPS (156MIPS/proc)

then 2010, mainframe versus i86
• E5-2600, two XEON 8core chips, 500BIPS (30BIPS/proc)
• z196, 80 processors, 50BIPS (625MIPS/proc)


HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ecommerce payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

some recent posts mentioning business critical dataprocessing
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#25 Taligent
https://www.garlic.com/~lynn/2024g.html#20 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024f.html#47 Postel, RFC Editor, Internet
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#37 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 70s & 80s

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 70s & 80s
Date: 23 Mar, 2025
Blog: Facebook
Amdahl won the battle to make ACS 360-compatible. It was then canceled, folklore being that executives felt it would advance the state of the art too fast and IBM would lose control of the market. Amdahl leaves IBM and starts his own computer company. More details, including some features that show up more than 20yrs later with ES/9000:
https://people.computing.clemson.edu/~mark/acs_end.html

1972, Learson tries (and fails) to block the bureaucrats, careerists and MBAs from destroying Watson culture and legacy; pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

IBM starts "Future System", completely different than 370 and was going to completely replace it. During FS, internal politics were killing off 370 efforts (claim that the lack of new 370 during FS period is what gave clone 370 makes, including Amdahl, their market foothold). When FS finally implodes, there is mad rush to get stuff back into the 370 product pipelines, including kicking off quick&dirty 3033&3081 efforts
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

Future System ref, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive
... snip ...

After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters (online sales&marketing support HONE was one of the first and long time customers). I also continued to visit customers and attend user group meetings. The director of one of the largest financial datacenters liked me to stop by periodically and talk technology. At one point the IBM branch manager horribly offended the customer and in retaliation they ordered an Amdahl system (a lone Amdahl in a vast sea of blue; up until then Amdahl had been selling into technical/scientific/univ and this would be the first for "true blue" commercial). I was asked to go onsite for 6-12 months (apparently to obfuscate why they were ordering an Amdahl machine). I talked it over with the customer and then declined the IBM offer. I was then told that the branch manager was a good sailing buddy of the IBM CEO and if I didn't do it, I could forget career, promotions, raises.

3033 started out as 168 logic remapped to 20% faster chips, and 3081 was going to be multiprocessor-only using some left-over FS technology. The 2-CPU 3081D benchmarked slower than the Amdahl single-cpu and on some benchmarks even slower than the 2-cpu 3033. They quickly doubled the processor cache size for the 3081K, with about the same aggregate MIPS as the single-CPU Amdahl (although MVS docs said a 2-CPU system only got 1.2-1.5 times the throughput of a 1-CPU system, aka the 3081K only got .6-.75 the throughput of the Amdahl single processor, so Amdahl still had a distinct edge).
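
A quick sketch of that arithmetic (mine, in Python; the 1.2-1.5 factors are the MVS documentation numbers cited above):

amdahl_1cpu = 1.0        # normalize Amdahl single-CPU throughput to 1.0
aggregate_3081k = 1.0    # 3081K aggregate MIPS about the same as the Amdahl

for two_cpu_factor in (1.2, 1.5):
    per_cpu_efficiency = two_cpu_factor / 2.0     # 0.6 or 0.75 of raw capacity
    effective_3081k = aggregate_3081k * per_cpu_efficiency
    print(f"2-CPU factor {two_cpu_factor}: 3081K delivers "
          f"{effective_3081k / amdahl_1cpu:.2f} of Amdahl 1-CPU throughput")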

I was introduced to John Boyd in the early 80s and would sponsor his briefings at IBM. The Commandant of Marine Corps in 89/90 leverages Boyd for make-over of the corps (when IBM was desperately in need of make-over, at the time the two organizations had about the same number of people). USAF pretty much had disowned Boyd when he passed in 1997 and it was the Marines that were at Arlington and Boyd's effects go to Quantico. The (former) commandant (passed 20Mar2024) continued to sponsor Boyd conferences for us at Marine Corps Univ, Quantico.

In 1992, IBM had one of the largest losses in the history of US companies and was being re-organized into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

some more detail
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 70s & 80s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 70s & 80s
Date: 23 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s

early 80s, got HSDT project, T1 and faster computer links (both terrestrial and satellite) and some battles with the communication group; note 60s IBM had a telecommunication controller supporting T1, but IBM's move to SNA/VTAM in the 70s, with the associated issues, seemed to cap controllers at 56kbit/sec links

mid-80s, was asked to see about putting out a VTAM/NCP simulator done by a baby bell on IBM Series/1 as an IBM Type-1 product ... basically compensating for lots of SNA/VTAM shortcomings along with many new features (in part encapsulating SNA traffic tunneled through real networking). Part of my presentation at the fall 1986 SNA ARB (architecture review board) meeting in Raleigh:
https://www.garlic.com/~lynn/99.html#67
also part of "baby bell" presentation at spring 1986 IBM user COMMON meeting
https://www.garlic.com/~lynn/99.html#70

Lots of criticism of the ARB presentation; however, the Series/1 data came from the baby bell production operation and the 3725 data came from the communication group HONE 3725 configurator (if something wasn't accurate, they should correct their 3725 configurator).

The objective was to start porting it to RS/6000 after it first shipped as an IBM product. Several IBMers involved were well acquainted with the communication group's internal politics and attempted to provide countermeasures to everything that might be tried, but what happened next can only be described as reality/truth stranger than fiction.

T1 trivia: FSD, for gov. accounts with failing 60s IBM T1 telecommunication controllers, had come out with the Zirpel T1 cards for Series/1 (which hadn't been available to the "baby bell").

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

recent posts mentioning Zirpel T1 card
https://www.garlic.com/~lynn/2025b.html#40 IBM APPN
https://www.garlic.com/~lynn/2025b.html#18 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#15 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#99 Terminals
https://www.garlic.com/~lynn/2024g.html#79 Early Email
https://www.garlic.com/~lynn/2024e.html#34 VMNETMAP
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023c.html#35 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 70s & 80s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 70s & 80s
Date: 23 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#43 IBM 70s & 80s

Depends on what the workload was. My brother was Apple regional marketing rep and when he came into hdqtrs, I could be invited to business dinners and could argue mac design with developers (before the mac was announced). He also figured out how to remote dial into the S/38 that ran Apple to track manufacturing and shipment schedules.

I was doing some work for disk bldg14/engineering and bldg15/product test ... when bldg15 got an engineering 4341 in 1978, and in jan1979 the IBM branch office cons me into doing a benchmark for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).

4300s sold into same mid-range market as DEC VAX machines and in about same numbers in small unit orders. Big difference was large corporations with orders for hundreds of vm/4341s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Inside IBM, conference rooms were becoming scarce because so many had been converted to departmental distributed vm/4341s.

MVS saw the big explosion in these departmental machines but was locked out of the market since the only new non-datacenter DASD were the fixed-block 3370s (and MVS never did FBA support). Eventually came the 3375, with CKD emulation. However it didn't do MVS much good; support for departmental vm/4341s was measured in scores of machines per support person, while MVS was still scores of support personnel per system.

aka 1000 distributed AS/400s supported by 10-20 people?

IBM 4341 announced 30jan1979, FCS 30Jun1979, replaced by 4381 announced 1983, as/400 announced jun1988, released aug1988 ... almost decade later.

trivia: when I 1st transferred from CSC to SJR in the 70s, I got to wander around IBM (and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, prescheduled, stand-alone testing and mentioned they had recently tried MVS, but it had 15min MTBF (in that environment) requiring manual re-ipl. I offered to rewrite the I/O supervisor to be bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. Downside was I started getting phone calls asking for help when they had problems, and I had to increasingly play disk engineer; I also worked some with the disk engineer that got the RAID patent. Later, when 3380s were about to ship, FE had a test package of 57 simulated errors; in all 57 cases, MVS was still failing (requiring re-ipl) and in 2/3rds of the cases with no indication of what caused the failure.

getting to play disk engineer in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

Earlier, all during FS, I continued to work on 370 and would periodically ridicule what they were doing. I had done an internal page-mapped filesystem (single level store) for CMS (that was never released to customers) and would claim I learned what not to do from TSS/360 (on which some of FS was modeled). Later I learned that the Rochester S/38 was sort of a significantly simplified FS follow-on ... part of the simplification was allowing scatter allocation across all disks in the system; as a result, all disks had to be backed up as a single entity and any single-disk failure ... common in the period ... required replacing the failed disk and restoring the complete filesystem. This became increasingly traumatic as the number of disks in a system increased, so S/38 became an early RAID adopter (the problem wasn't seen in small single-disk configurations). One of the final nails in the FS coffin was analysis by the Houston Science Center that if 370/195 applications were redone for an FS machine built out of the fastest technology available, they would have the throughput of a 370/145 (about a 30 times slowdown). There was a lot of throughput headroom between available technology and S/38 market throughput requirements.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

I was also working with Jim Gray and Vera Watson on the original SQL/relational, System/R, and we were able to do tech transfer to Endicott for SQL/DS "under the radar" while the company was preoccupied with the next great DBMS, "EAGLE". When Jim Gray departed IBM Research fall 1980 for Tandem, he was palming off some stuff on me, including wanting me to help with the System/R joint project with BofA, which was getting 60 VM/4341s for System/R (including further reducing operations support for distributed VM/4341s). Later when "EAGLE" imploded, there was a request for how fast System/R could be ported to MVS, which was eventually released as DB2, originally for "decision support" only.

original SQL/relational, System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

Business Planning

From: Lynn Wheeler <lynn@garlic.com>
Subject: Business Planning
Date: 25 Mar, 2025
Blog: Facebook
Starting in the late 70s, we had lots of discussions about how computer illiterate most of IBM was ... especially management and executives, and what could be done to turn it around. Also, 3270 terminals were part of fall planning and each one required VP sign-off. Then there was a period with rapidly spreading rumors that some senior executives might be using PROFS/EMAIL, and lots of other executives and middle management began rerouting justified technical/development 3270 deliveries to their desks (as a status symbol, to create a facade that they might be computer literate). These were typically turned on in the morning and left on all day with the VM logon logo or in some cases the PROFS menu being burned into the screen ... while their admin people actually processed their email. This management rerouting of 3270s and later large-screen PCs (as status symbols) continued through the 80s.

1972, Learson had tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Future System
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive
... snip ...

I was introduced to John Boyd in the early 80s and used to sponsor his briefings at IBM. The Commandant of the Marine Corps in 89/90 leverages Boyd for a make-over of the corps (same time IBM was desperately in need of make-over; at the time the two organizations had about the same number of people). Then 1992 (20yrs after Learson's attempt to save the company), IBM had one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

some more detail
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

trivia: in the wake of FS implosion, Endicott cons me into helping with ECPS for virgil/tully (138/148), initial analysis:
https://www.garlic.com/~lynn/94.html#21

then I'm con'ed into presenting the 138/148 case to business planners around the world. Find out that US "region" planners tended to forecast whatever Armonk said was strategic (because that is how they got promoted) ... and manufacturing was responsible for inaccurate regional forecasts. Different in world-trade: countries ordered and took delivery of forecasted machines and business planners were held accountable for problems. That gave rise to the joke in the US about Armonk's habit of declaring "strategic" things that weren't selling well (an excuse for sales/marketing incentives/promotions).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
John Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

some posts mentioning 138/148 business planning in US regions and world trade:
https://www.garlic.com/~lynn/2017b.html#2 IBM 1970s
https://www.garlic.com/~lynn/2016e.html#92 How the internet was invented
https://www.garlic.com/~lynn/2012d.html#70 Mainframe System 370
https://www.garlic.com/~lynn/2005g.html#16 DOS/360: Forty years

--
virtualization experience starting Jan1968, online at home since Mar1970

POK High-End and Endicott Mid-range

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: POK High-End and Endicott Mid-range
Date: 26 Mar, 2025
Blog: Facebook
Early last decade, a customer asked me to track down the decision to add virtual memory to all 370s and I found a staff member to the executive that made the decision. Basically MVT storage management was so bad that regions had to be specified four times larger than used; as a result, a typical 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the system busy and justified. Going to MVT mapped to a single 16mbyte virtual address space (similar to running MVT in a CP67 16mbyte virtual machine) allowed the number of concurrently running regions to be increased by a factor of four (capped at 15 because of the 4bit storage protect key) with little or no paging.
https://www.garlic.com/~lynn/2011d.html#73
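
A rough sketch of that arithmetic (my own illustration in Python; the 64KB "actual use" region size is a made-up number, only the 4x over-specification, 1MB real storage, and 4-bit protect key cap come from the description above):

real_storage_kb = 1024                             # typical 1MB 370/165
region_actual_use_kb = 64                          # hypothetical working size of a region
region_specified_kb = 4 * region_actual_use_kb     # MVT storage mgmt forced ~4x over-spec

mvt_regions = real_storage_kb // region_specified_kb    # ~4 concurrently running regions
svs_regions = min(4 * mvt_regions, 15)                  # 4x more, capped by 4-bit protect key

print(mvt_regions, svs_regions)                    # 4 15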

However, as systems got bigger, they needed more than 15 concurrently running tasks and moved to MVS, giving each task its own 16mbyte virtual address space. However, OS/360 was heavily pointer-passing API and they map an 8mbyte image of the MVS kernel into every 16mbyte virtual address space, leaving 8mbytes for each task. Then, because subsystems were given their own virtual address spaces, some way was needed to pass information back&forth and the common segment ("CSA") was created, mapped into every address space. However, CSA space requirement was somewhat proportional to the number of subsystems and the number of concurrent tasks, so CSA quickly became the "common system area"; by 3033 it was frequently 5-6mbytes, leaving 2-3mbytes (but threatening to become 8, leaving nothing for tasks). As a result, for MVS, a subset of 370/xa access registers was retrofitted to 3033 as "dual address space mode" (by a person that shortly left IBM for HP labs ... and was one of the major architects for Itanium) so MVS subsystems could directly access the caller's address space.

The other issue was that with the increase in concurrently running tasks and MVS bloat, 3033 throughput was increasingly limited with only 16mbytes of real storage. Standard 370 had a 16bit page table entry with 2 undefined bits. They co-opt those bits as a prefix to the 12-bit page number of 4096-byte pages (16mbytes) for a 14-bit page number (allowing mapping of 16mbyte virtual addresses into 64mbyte real addresses, enabling 64mbytes of real storage).
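
A small sketch of the page-number arithmetic (mine, in Python; it only illustrates the bit counts described above, not the actual 3033 microcode):

PAGE_SIZE = 4096   # byte pages; 12-bit byte offset within a page

print((2**12 * PAGE_SIZE) // (1024 * 1024))   # 16 -> 16MB real with a 12-bit frame number
print((2**14 * PAGE_SIZE) // (1024 * 1024))   # 64 -> 64MB real with a 14-bit frame number

def real_address(frame_number_14bit, byte_offset_12bit):
    # illustrative composition of a 26-bit real address after translation
    return (frame_number_14bit << 12) | byte_offset_12bit

print(hex(real_address(0x3FFF, 0xFFF)))       # 0x3ffffff, top of a 64MB real address space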

3081 initially ran 370 mode (and could do the 3033 hack for 64mbyte real storage), but with MVS bloat things were increasingly becoming desperate for MVS/XA and 31-bit addressing. 3081 did support "data streaming" channels: selector and block mux channels did an end-to-end handshake for every byte transferred, while "data streaming" channels did multiple byte transfers per end-to-end handshake, increasing max channel distance from 200ft to 400ft and enabling 3mbyte/sec transfers, supporting 3880 controllers with 3mbyte/sec 3380 disks.

When FS imploded, Endicott cons me into helping with ECPS microcode assist (originally 138/148, but then for 4300s also)
https://www.garlic.com/~lynn/94.html#21

and another group gets me to help with a 16-cpu, tightly-coupled 370 multiprocessor and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips).

Everybody thought the 16-cpu effort was really great until somebody tells the head of POK that it could be decades before the POK favorite son operating system ("MVS") had (effective) 16-cpu support. At the time, MVS docs said a 2-cpu system only had 1.2-1.5 times the throughput of a single-cpu system (and overhead increased as the number of CPUs increased). Then the head of POK invites some of us to never visit POK again and directed the 3033 processor engineers "heads down and no distractions" (note POK doesn't ship 16-CPU tightly-coupled until after the turn of the century).

Early 80s, I get permission to give talks on how ECPS was done at user group meetings, including the monthly BAYBUNCH hosted by Stanford SLAC. After the meetings, Amdahl attendees would grill me for additional details. They said they had created MACROCODE mode, 370-like instructions that ran in microcode mode, in order to quickly respond to the plethora of trivial 3033 microcode hacks required for running the latest MVS. They were then in the process of using it to implement a microcode hypervisor ("multiple domain"), sort of like 3090 LPAR/PRSM nearly a decade later. Customers then weren't converting to MVS/XA as IBM planned, but Amdahl was having more success because they could run MVS and MVS/XA concurrently on the same machine.

Also after the implosion of FS, the head of POK had managed to convince corporate to kill VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch). The group did manage to implement a very simplified 370/XA VMTOOL for MVS/XA development (never intended for release to customers). However, with the customers not moving to MVS/XA as planned (and Amdahl having more success), the VMTOOL was packaged as VM/MA (migration aid) and VM/SF (system facility) allowing MVS and MVS/XA to be run concurrently on IBM machines.

Other trivia: 3081s were going to be multiprocessor only; the initial 2-cpu 3081D aggregate MIPS was less than a 1-cpu Amdahl and some benchmark throughput was even less than a 2-cpu 3033. The processor caches were quickly doubled for the 3081K, which had about the same aggregate MIP-rate as a 1-cpu Amdahl (but MVS had much lower throughput because of the 2-cpu overhead).

future systems posts
https://www.garlic.com/~lynn/submain.html#futuresys

other posts mentioning MVS, CSA, MVS/XA VMTOOL/ VM/MA, VM/SF, VM/XA, & SIE
https://www.garlic.com/~lynn/2025b.html#27 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025.html#20 Virtual Machine History
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024.html#121 IBM VM/370 and VM/XA
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Datacenters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Datacenters
Date: 27 Mar, 2025
Blog: Facebook
I had taken a 2 credit hr intro to fortran/computers and at the end of the semester, I was hired to rewrite 1401 MPIO for 360/30. The univ was getting a 360/67 for tss/360 to replace 709/1401 and got a 360/30 temporarily pending the 360/67. Then when the 360/67 shows up, I was hired fulltime responsible for OS/360 (it ran as a 360/65, tss/360 wasn't up to production). Then before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the creation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (joke that Boeing was getting 360/65s like other companies got keypunches).

After graduating, I leave the CFO office and join the IBM science center. One of my hobbies was enhanced production operating systems for internal datacenters, first CP67L, then CSC/VM; then after transferring to research on the west coast, I got to wander around IBM (& non-IBM) datacenters in silicon valley including DASD bldg14/engineering and bldg15/product test, across the street. They were running 7x24, pre-scheduled, stand-alone testing and mentioned that they had recently tried MVS, but it had a 15min MTBF (requiring manual re-ipl) in that environment. I offer to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of concurrent, on-demand testing, greatly improving productivity (downside was I started getting phone calls any time they had any sort of problem and I had to spend an increasing amount of time playing disk engineer). There was a joke that I worked 4-shift weeks: 1st shift in 28, 2nd shift in 14/15, 3rd shift in 90, and 4th shift (weekends) up at Palo Alto HONE. Later I also got part of a wing in bldg29 for office and labs.

Then bldg15, product test, got an engineering 3033 (first outside POK processor engineering) for doing DASD testing. Testing only took a percent or two of the processor, so we scrounge up a 3830 and 3330 string for setting up our own private online service (and run a 3270 coax under the street to my office in 28). At the time, air-bearing simulation (part of designing the thin film disk head, originally used for 3370 FBA DASD) was getting a couple turn-arounds a month on the SJR 370/195. We set it up on the 3033 in bldg15 (with less than half the MIPS of the 195) and they could get several turn-arounds a day.

The 3272 (& 3277) had .086sec hardware response. Then 3274/3278 was introduced with lots of 3278 hardware moved back into the 3274 controller, cutting 3278 manufacturing costs and significantly driving up coax protocol chatter ... increasing hardware response to .3-.5sec depending on the amount of data (in the period, studies were showing .25sec response improved productivity). Letters to the 3278 product administrator complaining about interactive computing got a response that the 3278 wasn't intended for interactive computing but data entry (sort of an electronic keypunch).

The .086sec hardware response left .164sec for system response (to hit .25sec total). The joke about the 3278 was that a time machine was required to transmit responses into the past (in order to get .25sec response). I had several internal SJR/VM systems with .11sec system response (SJR, STL, consolidated US HONE sales&marketing support up in Palo Alto, etc).
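
A trivial worked version of the response arithmetic above (my own sketch in Python):

target_total = 0.25   # sec, the quarter-second target from the cited studies

for name, hw_response in (("3272/3277", 0.086), ("3274/3278 (best case)", 0.3)):
    system_budget = target_total - hw_response
    print(f"{name}: hardware {hw_response:.3f}s -> system budget {system_budget:+.3f}s")

# 3272/3277: +0.164s budget (achievable; internal SJR/VM systems ran ~0.11s)
# 3274/3278: negative budget, hence the time-machine joke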

3270 did have half-duplex problem, if typing away and hit key just as screen was being updated, keyboard would lockup and would have to stop and hit reset before continue. Yorktown had FIFO boxes made for 3277, unplug the keyboard from the screen, plug in the FIFO box and plug the keyboard into the FIFO box (it would hold chars in the FIFO box whenever screen was being written, eliminating the lockup problem).

Later, IBM/PC 3277 emulator cards had 4-5 times the upload/download throughput of 3278 emulator cards.

TSO/MVS users never noticed the 3278 issues, since they rarely ever saw even 1sec system response. One of the MVS/TSO problems wasn't so much TSO but OS/360's extensive use of multi-track search, which would result in 1/3rd-second I/Os that lock up the device, the controller (and all devices on that controller), and the channel (and all controllers on that channel). SJR's 370/195 was then replaced with a 370/168 for MVS and a 370/158 for VM/370, with dual channel connections to all 3830 DASD controllers, but controllers and strings were categorized as VM/370-only and MVS-only.

One morning, bldg28 operations had mistakenly mounted an MVS 3330 on a VM/370 string and within five minutes, operations started getting irate calls from VM/370 users all over bldg28 asking what happened to interactive response. It came down to the mismounted MVS 3330, and operations said that they couldn't move it until 2nd shift. The VM370 people then put up a one-pack VS1 3330 (highly optimized for running under VM370) on an MVS string ... and even though the VS1 system was running in a virtual machine on the heavily loaded 158, it was able to bring the 168 MVS system to its knees ... alleviating some of the response troubles for the VM370 users. Operations then said that they would immediately move the mismounted MVS 3330, if we moved the VS1 3330.

Then 1980, STL was bursting at the seams and moving 300 people (& 3270s) from the IMS group to an offsite bldg (just south of the main plant site). They had tried "remote 3270" but found the human factors totally unacceptable. I then get con'ed into doing channel extender support, placing channel-attached 3270 controllers at the offsite bldg, which resulted in no perceptible difference in offsite and inside-STL response. STL had been configuring their 168s with 3270 controllers spread across all channels with 3830 controllers. It turns out the 168s for the offsite group saw a 10-15% improvement in throughput. The issue was that IBM 3270 controllers had high channel busy interfering with DASD I/O; the channel-extender radically reduced channel busy for the same amount of 3270 terminal traffic.

trivia: before transferring to SJR, I had got to be good friends with the 3033 processor engineers. After Future System implosion
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

I got asked to help with a 16-CPU SMP, tightly-coupled machine and we con'ed the 3033 processor engineers into helping in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before the POK favorite son operating system ("MVS") had (effective) 16-CPU support (MVS docs at the time said 2-CPU operation only had 1.2-1.5 times the throughput of 1-CPU because of enormous multiprocessor overhead), and he invites some of us to never visit POK again and directs the 3033 processor engineers heads down and no distractions ... POK doesn't ship a 16-CPU system until after the turn of the century. Note in the morph from CP67->VM370, lots of features had been dropped, including multiprocessor support. Starting with VM370R2, I start moving lots of CP67 stuff back into VM370 (for CSC/VM). Then in early VM370R3, I put in multiprocessor support, initially for consolidated US HONE operations in Palo Alto so they could add a 2nd processor to each of their systems (and the 2-CPU systems were getting twice the throughput of the previous 1-CPU).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
cp67l, csc/vm, sjr/vm, etc posts
https://www.garlic.com/~lynn/submisc.html#cscvm
playing disk engineer in bldgs 14/15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some undergraduate, 709/1401, MPIO, 360/67, Boeing CFO, Renton datacenter posts
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

posts mentioning response and 3272/3277 vs 3274/3278 comparison
https://www.garlic.com/~lynn/2025.html#127 3270 Controllers and Terminals
https://www.garlic.com/~lynn/2025.html#75 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2014h.html#106 TSO Test does not support 65-bit debugging?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Datacenters

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Datacenters
Date: 27 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters

1988, IBM branch office asked if I could help LLNL (national lab) standardize some serial stuff that they were working with. It quickly becomes "fibre channel standard" ("FCS", including some stuff that I had done in 1980; initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec). Then POK finally gets their stuff released with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy-weight protocol that significantly reduces throughput, which is released as FICON. The most recent public benchmark that I can find is the z196 "Peak I/O" that got 2M IOPS using 104 FICON (20K IOPS/FICON). About the same time, an FCS was announced for the E5-2600 server blade claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Note IBM docs say that SAPs (system assist processors that do actual I/O) need to be kept to 70% CPU (or about 1.5M IOPS). Note also no CKD DASD have been made for decades, all being simulated on industry standard fixed-block disks (an extra simulation layer of overhead for disk I/O).
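
Back-of-envelope version of those I/O numbers (my own sketch in Python, using only the figures quoted above):

peak_io_iops = 2_000_000          # z196 "Peak I/O" benchmark figure
ficon_channels = 104
print(peak_io_iops / ficon_channels)        # ~19,230 IOPS per FICON

native_fcs_iops = 1_000_000       # contemporaneous FCS claim for an E5-2600 blade
print(2 * native_fcs_iops > peak_io_iops)   # True: two FCS outrun 104 FICON

print(0.70 * peak_io_iops)        # ~1.4M IOPS, the practical cap with SAPs held to ~70% CPU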

1993, mainframe and multi-chip RIOS RS/6000 comparison (industry benchmark, number of program iterations compared to reference platform)
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS; 16-system: 2BIPS; 128-system: 16BIPS


During 90s, i86 implements pipelined, on-the-fly, hardware translation of i86 instructions to RISC micro-ops for execution, largely negating RISC throughput advantage. 1999, enabled multiprocessor, but still single core chips
• single IBM PowerPC 440 hits 1,000MIPS
• single Pentium3 hits 2,054MIPS (twice PowerPC 440)


by comparison, mainframe Dec2000
• z900: 16 processors, 2.5BIPS (156MIPS/proc)

then 2010, mainframe (z196) versus i86 (E5-2600 server blade)
• E5-2600, two XEON 8core chips, 500BIPS (30BIPS/proc)
• z196, 80 processors, 50BIPS (625MIPS/proc)


multi-core chips ... each core is somewhat analogous to a large car assembly building, each building with multiple assembly lines, some specializing in types of vehicles; orders come in one end, are assigned to a line, and come off the line into a large parking area ... and are released from the building in the same sequence that the orders appeared. Sometimes a vehicle will reach a station that doesn't have a part locally and can be pulled off the line while the part request is sent to a remote warehouse. BIPS now is the number of vehicles/hour (and less how long each vehicle takes to build).
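
A tiny illustration of that throughput-vs-latency point (my own sketch in Python; the width, latency and clock numbers are made up, not from any specific chip):

retire_width = 4                   # hypothetical instructions retired per cycle per core
clock_ghz = 3.0                    # hypothetical clock
instruction_latency_cycles = 20    # hypothetical time any one "vehicle" spends in the building

# steady-state throughput depends on width * clock, not on per-instruction latency
bips = retire_width * clock_ghz
print(bips)                        # 12 BIPS/core, even though each instruction takes 20 cycles end-to-end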

trivia: I was introduced to John Boyd in the early 80s and would sponsor his briefings. He had a lot of stories to tell, including being very vocal against electronics across the trail; possibly as punishment, he was put in command of "spook base" (about the same time I'm at Boeing), which he claimed had the largest air conditioned bldg in that part of the world.
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
Boyd's biography claims that "spook base" was a $2.5B windfall for IBM (about ten times the IBM systems in the Boeing Renton datacenter).

The Marine Corps Commandant in 89/90 leverages Boyd for a make-over of the corps (at a time when IBM was desperately in need of make-over; also IBM and the corps had about the same number of people). Then IBM had one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.

FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

POK High-End and Endicott Mid-range

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: POK High-End and Endicott Mid-range
Date: 27 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#47 POK High-End and Endicott Mid-range

FS/Future System:
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

I continued to work on 370 all during FS and would periodically ridicule what they were doing

Future System ref, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive
... snip ...

one of the last nails in the FS coffin was a study by the Houston Science Center that if 370/195 apps were redone for an FS machine made out of the fastest technology available, they would have the throughput of a 370/145 (about 30 times slowdown)

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880, 3380, Data-streaming

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880, 3380, Data-streaming
Date: 28 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#26 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#27 IBM 3880, 3380, Data-streaming

Old email about 3880 speed matching Calypso (and Calypso ECKD) and double density 3350
https://www.garlic.com/~lynn/2015f.html#email820111
https://www.garlic.com/~lynn/2007e.html#email820907b

also mentioned is MVS supporting FBA devices. I had offered to do the support for the MVS group but got back an answer that even if I provided fully integrated and tested code, I needed a business case of $26M ($200M incremental sales) to cover the cost of education, training and publications ... and since IBM was already selling every disk made, sales would just switch from CKD to FBA for the same amount of revenue.

posts about getting to play disk engineering in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disks
posts discussing DASD, CKD, FBA, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd

some posts mentioning $26M for MVS FBA support:
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#16 REXX and DUMPRX
https://www.garlic.com/~lynn/2024b.html#110 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#32 3081 TCMs
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2022f.html#85 IBM CKD DASD
https://www.garlic.com/~lynn/2021b.html#78 CKD Disks
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019b.html#25 Online Computer Conferencing
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018f.html#34 The rise and fall of IBM
https://www.garlic.com/~lynn/2018e.html#22 Manned Orbiting Laboratory Declassified: Inside a US Military Space Station
https://www.garlic.com/~lynn/2017j.html#88 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2016g.html#74 IBM disk capacity
https://www.garlic.com/~lynn/2016c.html#12 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015h.html#24 the legacy of Seymour Cray
https://www.garlic.com/~lynn/2015f.html#86 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2014e.html#8 The IBM Strategy
https://www.garlic.com/~lynn/2014b.html#18 Quixotically on-topic post, still on topic
https://www.garlic.com/~lynn/2014.html#94 Santa has a Mainframe!
https://www.garlic.com/~lynn/2013n.html#54 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2013i.html#2 IBM commitment to academia
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2013f.html#80 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013d.html#2 Query for Destination z article -- mainframes back to the future
https://www.garlic.com/~lynn/2013c.html#68 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013.html#40 Searching for storage (DASD) alternatives
https://www.garlic.com/~lynn/2012p.html#32 Search Google, 1960:s-style
https://www.garlic.com/~lynn/2012o.html#58 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#31 Regarding Time Sharing
https://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012g.html#19 Co-existance of z/OS and z/VM on same DASD farm
https://www.garlic.com/~lynn/2011j.html#57 Graph of total world disk space over time?
https://www.garlic.com/~lynn/2011e.html#44 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011b.html#47 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011.html#23 zLinux OR Linux on zEnterprise Blade Extension???

--
virtualization experience starting Jan1968, online at home since Mar1970

POK High-End and Endicott Mid-range

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: POK High-End and Endicott Mid-range
Date: 29 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#47 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#49 POK High-End and Endicott Mid-range

As part of ECPS 138/148, Endicott tried to get corporate permission to preinstall VM370 on every 138/148 shipped (VS1 with VM handshaking ran faster under VM370 than on the bare machine) ... however with POK in the process of getting the VM/370 product killed ... it was vetoed.

Endicott also cons me into going around the world presenting the 138/148 business case (and I learned some about the difference between US region planners and WTC country planners ... including that much of WTC was the mid-range market). Note also much of POK had been deeply involved in the FS disaster ... including how VS2 R3 (MVS) would be the FS operating system ... pieces of old email about the decision to add virtual memory to all 370s:
https://www.garlic.com/~lynn/2011d.html#73

also as mentioned, one of the final nails in the FS coffin was analysis that if 370/195 apps were redone for an FS machine made out of the fastest hardware technology available, they would have the throughput of a 370/145 (a 30 times slowdown)

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts mentioning 138/148, ecps, endicott, vm370, virtual memory for all 370s, POK getting VM370 product "killed"
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#10 IBM 37x5
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017d.html#83 Mainframe operating systems?
https://www.garlic.com/~lynn/2017c.html#80 Great mainframe history(?)
https://www.garlic.com/~lynn/2015b.html#39 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2011k.html#9 Was there ever a DOS JCL reference like the Brown book?
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2010d.html#78 LPARs: More or Less?
https://www.garlic.com/~lynn/2009r.html#51 "Portable" data centers
https://www.garlic.com/~lynn/2009r.html#38 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Modernization

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Modernization
Date: 29 Mar, 2025
Blog: Facebook
A review of the 1990 FAA ATC modernization: it supposedly had hardware redundancies masking all failures ... greatly simplifying software ... a big one (at least) they missed was checking for human mistakes (needed to go back and correct infrastructure programmed assuming "correct operation" ... including the human part).

Mid-90s, the prediction was that mainframes were going away and the financial industry spent billions on new transaction support. A huge amount of cobol (some from the 60s) did the overnight settlement and had since had some "real-time" transaction support added (with settlement deferred/queued for the overnight window ... which was becoming a major bottleneck). Billions were spent redoing it for straight-through processing on large numbers of "killer micros" running in parallel. They ignored warnings that the industry parallel libraries they were using had 100 times the overhead of mainframe batch cobol (totally swamping throughput) ... until major pilots went down in flames, with predictions it would be decades/generations before it was tried again.
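
A rough break-even sketch of that warning (mine, in Python; the per-micro capacity is a made-up illustration, only the 100x overhead factor comes from the description above):

overhead_factor = 100     # cited: parallel-library pathlength vs batch cobol
mainframe_capacity = 1.0  # mainframe raw capacity, normalized to 1
micro_capacity = 0.05     # hypothetical: each "killer micro" at 5% of the mainframe's raw rate

# each micro settles (0.05 / 100) "cobol-equivalent" transactions per unit time
micros_to_match = mainframe_capacity / (micro_capacity / overhead_factor)
print(micros_to_match)    # 2000 micros just to match the mainframe, before any scaling losses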

posts mentioning FAA ATC modernization
https://www.garlic.com/~lynn/2025b.html#36 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS
https://www.garlic.com/~lynn/2024.html#114 BAL
https://www.garlic.com/~lynn/2023f.html#84 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2022b.html#97 IBM 9020
https://www.garlic.com/~lynn/2013n.html#76 A Little More on the Computer
https://www.garlic.com/~lynn/2012i.html#42 Simulated PDP-11 Blinkenlight front panel for SimH
https://www.garlic.com/~lynn/2007e.html#52 US Air computers delay psgrs
https://www.garlic.com/~lynn/2005c.html#17 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2002g.html#16 Why are Mainframe Computers really still in use at all?

posts mentioning overnight batch cobol and modernization for straight-through processing
https://www.garlic.com/~lynn/2025.html#78 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#76 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2022g.html#69 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2018f.html#85 Douglas Engelbart, the forgotten hero of modern computing
https://www.garlic.com/~lynn/2017f.html#11 The Mainframe vs. the Server Farm: A Comparison
https://www.garlic.com/~lynn/2017d.html#39 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017.html#82 The ICL 2900

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Datacenters

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Datacenters
Date: 30 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#49 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#51 IBM Datacenters

3090 story trivia: The 3090 product administrator tracked me down after 3090s had been out for a year. 3090 channels had been designed to have only 3-5 "channel errors" aggregate across all customers per year ... but there was an industry service that collected customer EREP data from IBM and non-IBM (compatible) mainframes and published summaries ... which showed (customer) 3090 channels had an aggregate of 20 "channel errors".

Turns out when I had done channel-extender support in 1980 (originally for IBM STL but also for IBM Boulder), and for various kinds of transmission errors, I would simulate unit-check/channel-check ... kicking off channel program retry. While POK managed to veto releasing my support to customers, a vendor replicated it and it was running in some customer shops. 3090 product administrator asked me if I could do something ... I researched retry and showed that simulating unit-check/IFCC (interface control check) effectively resulted in the same channel program retry and got the vendor to change to simulating "IFCC".

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

some past posts mentioning the incident
https://www.garlic.com/~lynn/2025.html#28 IBM 3090
https://www.garlic.com/~lynn/2024g.html#42 Back When Geek Humour Was A New Concept To Me
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#27 STL Channel Extender
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2016h.html#53 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2012l.html#25 X86 server
https://www.garlic.com/~lynn/2012e.html#54 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2010m.html#83 3270 Emulator Software

--
virtualization experience starting Jan1968, online at home since Mar1970

Planet Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: Planet Mainframe
Date: 30 Mar, 2025
Blog: Facebook
Planet Mainframe
https://planetmainframe.com/influential-mainframers-2024/lynn-wheeler/
taking votes for 2025

2024 Planet Mainframe BIO
https://planetmainframe.com/influential-mainframers-2024/lynn-wheeler/

some 2022 Linkedin posts

z/VM 50th part 1 through 8
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-7-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50-part-8-lynn-wheeler/

and then there is

Knights of VM
http://mvmua.org/knights.html
Mainframe Hall of Fame
https://www.enterprisesystemsmedia.com/mainframehalloffame
Mar/Apr '05 eserver article
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
Apr2009 Greater IBM Connect article
https://www.garlic.com/~lynn/ibmconnect.html

Reinventing Virtual Machines
https://cacm.acm.org/opinion/reinventing-virtual-machines/

original virtual machines, CP/40 and CP/67 at IBM Cambridge Science Center
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

POK High-End and Endicott Mid-range

From: Lynn Wheeler <lynn@garlic.com>
Subject: POK High-End and Endicott Mid-range
Date: 31 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#47 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#49 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#51 POK High-End and Endicott Mid-range

3081 was some warmed over FS technology
http://www.jfsowa.com/computer/memo125.htm
The 370 emulator minus the FS microcode was eventually sold in 1980 as the IBM 3081. The ratio of the amount of circuitry in the 3081 to its performance was significantly worse than other IBM systems of the time; its price/performance ratio wasn't quite so bad because IBM had to cut the price to be competitive. The major competition at the time was from Amdahl Systems -- a company founded by Gene Amdahl, who left IBM shortly before the FS project began, when his plans for the Advanced Computer System (ACS) were killed. The Amdahl machine was indeed superior to the 3081 in price/performance and spectacularly superior in terms of performance compared to the amount of circuitry.
... snip ...

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

Old email from one of the 3033 processor engineers I worked with in their spare time on a 16-CPU 370 (before the head of POK put a stop to it when he heard that it could be decades before the POK favorite son operating system ("MVS") would have effective 16-CPU support; POK doesn't ship a 16-CPU machine until after the turn of the century). Once 3033 was out the door, they start on trout/3090:
https://www.garlic.com/~lynn/2006j.html#email810630

Of course it wasn't just the paging of SIE microcode that was a 3081 problem.

smp, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

recent posts mentioning SIE, 16-cpu 370, 3033 processor engineers and head of POK:
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#35 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#32 IBM 3090
https://www.garlic.com/~lynn/2024g.html#112 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#89 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#56 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024g.html#37 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#90 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024f.html#46 IBM TCM
https://www.garlic.com/~lynn/2024f.html#37 IBM 370/168
https://www.garlic.com/~lynn/2024f.html#36 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#17 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2021i.html#66 Virtual Machine Debugging

--
virtualization experience starting Jan1968, online at home since Mar1970

POK High-End and Endicott Mid-range

From: Lynn Wheeler <lynn@garlic.com>
Subject: POK High-End and Endicott Mid-range
Date: 01 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#47 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#49 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#51 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#55 POK High-End and Endicott Mid-range

The following discusses Amdahl winning the battle to make ACS 360-compatible ... folklore is that it was then killed because of fear it would advance the state of the art too fast and IBM would lose control of the market ... Amdahl leaves IBM shortly after ... it also mentions ACS/360 features that show up with ES/9000 more than 20yrs later:
https://people.computing.clemson.edu/~mark/acs_end.html

Then during FS, internal politics was killing off 370 efforts, and the lack of new 370 products during the FS period is credited with giving the clone 370 makers (including Amdahl) their market foothold. When FS imploded there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. One of the last nails in the FS coffin was analysis by the IBM Houston Science Center that if 370/195 apps were redone for an FS machine made out of the fastest technology available, they would have the throughput of a 370/145 (about a 30 times slowdown).

Future System ref, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrongheadedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous,"
... snip ...

early 70s was seminal for IBM, Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy. refs pg160-163
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

20 yrs later, IBM has one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company (take-off on "baby bell" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

additional information
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

futuresys posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downturn, Downfall, Breakup

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downturn, Downfall, Breakup
Date: 03 Apr, 2025
Blog: Facebook
Early 70s was seminal for IBM, Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy. refs pg160-163
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Then comes Future System, completely different from 370 and intended to completely replace 370. Internal politics during FS was killing off 370 efforts, and the lack of new 370 products during FS is claimed to have given the clone 370 system makers (including Amdahl, who had left IBM shortly before FS, after ACS/360 was killed) their market foothold. On the Future System disaster, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrongheadedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive
... snip ...

... one of the last nails in the FS coffin was a study by the Houston Science Center that if 370/195 apps were redone for an FS machine made out of the fastest technology available, they would have the throughput of a 370/145 (about a 30 times slowdown). When FS finally implodes, there was a mad rush to get stuff back into the 370 product pipelines, including the quick and dirty 3033&3081 efforts.
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

I continued to work on 360/370 all during FS and would periodically ridicule what they were doing (including drawing an analogy with a long-playing cult film down at Central Sq), which wasn't exactly career enhancing. In the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. Then in 89/90, the Commandant of the Marine Corps leverages Boyd for a make-over of the Marine Corps (at a time when IBM was desperately in need of a make-over ... at that time, the Marine Corps and IBM had approx. the same number of people).

20 yrs after Learson's failed effort, IBM has one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company (take-off on "baby bell" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.

more information
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
end of ACS/360 reference
https://people.computing.clemson.edu/~mark/acs_end.html

Future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downturn, Downfall, Breakup

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downturn, Downfall, Breakup
Date: 04 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#57 IBM Downturn, Downfall, Breakup

.... after FS finally implodes, Endicott cons me into working on the 138/148 ECPS VM370 microcode assist (later also used for the 4300s) ... and then going around the world presenting the business case to planners (WTC saw better acceptance than the US regions). Endicott then tries to get corporate to allow them to pre-install VM370 on every machine shipped ... but the head of POK was in the process of convincing corporate to kill the VM370 product (and shut down the development group, moving all the people to POK for MVS/XA; Endicott did eventually acquire the VM370 product mission for the mid-range, but had to recreate a development group from scratch). Old archived post with the initial ECPS analysis:
https://www.garlic.com/~lynn/94.html#21

I was also roped into helping with a 16-CPU 370 multiprocessor and we con the 3033 processor engineers into helping in their spare time. Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system ("MVS") had (effective) 16-CPU support. At the time MVS documentation claimed that 2-CPU support only had 1.2-1.5 times the throughput of a single processor system (note: POK doesn't ship a 16-CPU machine until after the turn of the century). The head of POK then invites some of us to never visit again and directs the 3033 processor engineers, heads down and no distractions.

trivia: after graduating and joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters (and the new sales&marketing support HONE systems were one of my first ... and long time customers). After the decision to add virtual memory to all 370s, there was also the decision to do VM370. In the morph from CP67->VM370 lots of stuff was simplified and/or dropped (including multiprocessor support). Starting with VM370R2, I start adding CP67 stuff back into VM370 ... and then for my VM370R3-based CSC/VM, I add multiprocessor support back in (initially for HONE, so they could add a 2nd CPU to all their systems ... and their 2-CPU systems were getting twice the throughput of the single-CPU systems).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp

Note AMEX and KKR were in competition for private-equity, reverse-IPO(/LBO) buyout of RJR and KKR wins. Barbarians at the Gate
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away the president of AMEX to help. Then IBM has one of the largest losses in the history of US companies and was preparing to break up the company when the board hires the former president of AMEX as CEO to try and save the company, who uses some of the same techniques used at RJR.
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

Stockman and financial engineering company
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall Street momo traders. It was actually a stock buyback contraption on steroids. During the five years ending in fiscal 2011, the company spent a staggering $67 billion repurchasing its own shares, a figure that was equal to 100 percent of its net income.

pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82 billion, or 122 percent, of net income over this five-year period. Likewise, during the last five years IBM spent less on capital investment than its depreciation and amortization charges, and also shrank its constant dollar spending for research and development by nearly 2 percent annually.
... snip ...

(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases have come to a total of over $159 billion since 2000.
... snip ...

(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a little?) shift focus needing it for redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
former AMEX president and IBM CEO
https://www.garlic.com/~lynn/submisc.html#gerstner
pensions posts
https://www.garlic.com/~lynn/submisc.html#pensions
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Retain and other online

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Retain and other online
Date: 04 Apr, 2025
Blog: Facebook
I took a two credit hr intro to fortran/computers and at the end of the semester was hired to reimplement 1401 MPIO in assembler for the 360/30. The univ. was getting a 360/67 replacing the 709/1401, and the 360/30 temporarily replaced the 1401 pending arrival of the 360/67. The univ. shutdown the datacenter on weekends and I had the whole place dedicated, although 48hrs w/o sleep made Monday classes hard. I was given a pile of hardware & software manuals and got to design my own monitor, device drivers, interrupt handlers, error recovery, storage management and within a few weeks had a 2000 card program. Then within a yr of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for OS/360 (ran as 360/65, tss/360 hadn't come to production operation). Student fortran ran under a second on the 709 (tape->tape), but initially over a minute on OS/360. I install HASP, cutting the time in half. For MFT-R11, I start doing carefully reorged stage2 SYSGEN to place datasets and PDS members for optimized arm seek and multi-track search, cutting another 2/3rds to 12.9secs (student fortran never got better than the 709 until I install Univ. of Waterloo WATFOR).
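
Purely as illustration of the placement idea (not the actual stage2 SYSGEN ordering; the dataset sizes and reference frequencies below are made up), a small Python sketch of why ordering matters: putting the most frequently referenced datasets adjacent to each other reduces the frequency-weighted average arm travel:

# hypothetical datasets: (name, cylinders occupied, relative reference frequency)
datasets = [("SYS1.LINKLIB", 40, 55), ("SYS1.SVCLIB", 10, 20),
            ("SYS1.PROCLIB", 5, 15), ("SYS1.MACLIB", 30, 5),
            ("SYS1.SAMPLIB", 15, 1)]

def avg_seek(order):
    # lay the datasets out in the given order and compute the frequency-weighted
    # expected distance between two successive references (a crude seek model)
    start, pos = 0, {}
    for name, size, _ in order:
        pos[name] = start + size / 2.0
        start += size
    total = sum(f for _, _, f in order)
    return sum(f1 * f2 * abs(pos[n1] - pos[n2])
               for n1, _, f1 in order for n2, _, f2 in order) / (total * total)

alphabetical = sorted(datasets)                         # naive placement
by_frequency = sorted(datasets, key=lambda d: -d[2])    # hottest datasets adjacent
print(round(avg_seek(alphabetical), 1), "cylinders average arm travel")
print(round(avg_seek(by_frequency), 1), "cylinders average arm travel")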

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think Renton was the largest IBM 360 datacenter in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (some joke that Boeing was getting 360/65s like other companies got keypunches). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the room for a 360/67 for me to play with, when I'm not doing other stuff).

Later in the early 80s, I'm introduced to John Boyd and would sponsor his all day briefings at IBM. John had lots of stories including being very vocal that the electronics across the trail wouldn't work. Possibly as punishment he is put in command of "spook base" (about the same time I'm at Boeing) ... some ref:
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
Boyd biography has "spook base" a $2.5B "windfall" for IBM (ten times Renton).

Both the Boeing and IBM teams had the story that on the 360 announce, Boeing placed an order that made the IBM rep the highest paid IBM employee that year (in the days of straight commission). The next year, IBM transitions to "quota" and in late January another Boeing order makes his quota for the year. His quota is then "adjusted" and he leaves IBM.

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

recent posts mention 709/1401, MPIO, Boeing CFO, Renton
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#38 IBM Computers in the 60s
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#1 Large Datacenters
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Retain and other online

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Retain and other online
Date: 05 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online

The person that did VMSG (the PROFS group borrowed the VMSG code for their email client) also did CMS parasite/story ... 3270 emulation with an HLLAPI-like language, sort of like the IBM/PC ... but before the IBM/PC ... old archived description with sample stories ("story" ran in less than 8k bytes)
https://www.garlic.com/~lynn/2001k.html#35
story that automated RETAIN "PUT Bucket" retriever
https://www.garlic.com/~lynn/2001k.html#36

old email about internal development using parasite/story to automate testcase drivers ... including HONE (APL-based) AIDS development (HONE was online sales&marketing support, predating VM370/CMS, originally with CP67/CMS) using it to automate some configurator operation (mentions SEQUOYA, >400kbytes of APL-code in every HONE APL workspace, that automatically started to provide the tailored/canned interactive environment for sales&marketing)
https://www.garlic.com/~lynn/2019d.html#email840117
reference to IMS using it for stress/regression testing
https://www.garlic.com/~lynn/2019d.html#email840117b

trivia: one of my hobbies after graduating and joining IBM (instead of staying with the Boeing CFO) was enhanced production operating systems for internal datacenters and HONE was one of my first (back to CP67/CMS days) and long time customers. US HONE datacenters were consolidated in Palo Alto in the mid-70s (all the systems were merged into the largest single-system-image, shared-DASD complex in the world with fall-over and load balancing; trivia: when FACEBOOK 1st moves into silicon valley, it was into a new bldg built next door to the former US HONE datacenter).

Note: after the decision to add virtual memory to all 370s, there was a decision to do VM370 and in the morph from CP67->VM370 lots of features/functions were simplified and/or dropped (including multiprocessor support). For VM370R2, I started moving lots of CP67 back into VM370 and then for VM370R3-based CSC/VM, I added multiprocessor support back in, initially for US HONE so they could add a 2nd processor to all their systems. After the California earthquake, US HONE was replicated 1st in Dallas and then again in Boulder (as HONE clones were sprouting up all over the world). Besides supporting HONE (as a hobby), I was asked to go over to install the first HONE clones in Paris (for EMEA) and Tokyo (for AFE) ... my first overseas trips (vague memory that the YEN was over 300/dollar).

trivia: one of the emails mentions ITE (Internal Technical Exchange) ... an annual, internal VM370 conference originally hosted by SJR in the bldg28 auditorium. The previous post in this thread mentions John Boyd; the 1st time I hosted his all-day briefing, it was also in the SJR bldg28 auditorium.

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

other recent posts mention parasite/story
https://www.garlic.com/~lynn/2025.html#90 Online Social Media
https://www.garlic.com/~lynn/2024f.html#91 IBM Email and PROFS
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023c.html#43 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981

--
virtualization experience starting Jan1968, online at home since Mar1970

Capitalism: A Six-Part Series

From: Lynn Wheeler <lynn@garlic.com>
Subject: Capitalism: A Six-Part Series
Date: 05 Apr, 2025
Blog: Facebook
Capitalism: A Six-Part Series
https://www.amazon.com/Capitalism-Six-Part-Noam-Chomsky/dp/B07BF2S4FS
https://www.amazon.com/Capitalism/dp/B07DHY1P2J

The father claimed he didn't know anything about Iran Contra because he was fulltime deregulating the S&L industry ... causing the S&L crisis
http://en.wikipedia.org/wiki/Savings_and_loan_crisis
with help from other members of his family
https://web.archive.org/web/20140213082405/https://en.wikipedia.org/wiki/Savings_and_loan_crisis#Silverado_Savings_and_Loan
and another
http://query.nytimes.com/gst/fullpage.html?res=9D0CE0D81E3BF937A25753C1A966958260
The S&L crisis had 30,000 criminal referrals and 1000 prison terms.

This century, the son's economic mess was 70 times larger than the father's S&L crisis and proportionally should have had 2.1M criminal referrals and 70k prison terms.

Corporations used to be chartered for projects in the public interest; then they got it changed so that they could be run in private/self interest
https://www.uclalawreview.org/false-profits-reviving-the-corporations-public-purpose/
... and even more recently got "people" rights under the 14th amendment
https://www.amazon.com/We-Corporations-American-Businesses-Rights-ebook/dp/B01M64LRDJ/
pgxiv/loc74-78:
Between 1868, when the amendment was ratified, and 1912, when a scholar set out to identify every Fourteenth Amendment case heard by the Supreme Court, the justices decided 28 cases dealing with the rights of African Americans--and an astonishing 312 cases dealing with the rights of corporations.
... snip ...

and increasing rights ever since.

S&L crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia
regulatory capture
https://www.garlic.com/~lynn/submisc.html#regulatory.capture

some posts mentioning capitalism and Piketty
https://www.garlic.com/~lynn/2021b.html#83 Capital in the Twenty-First Century
https://www.garlic.com/~lynn/2017h.html#1 OT: book: "Capital in the Twenty-First Century"
https://www.garlic.com/~lynn/2016c.html#65 A call for revolution
https://www.garlic.com/~lynn/2016c.html#53 Qbasic
https://www.garlic.com/~lynn/2014m.html#55 Piketty Shreds Marginal Productivity as Neoclassical Justification for Supersized Pay
https://www.garlic.com/~lynn/2014f.html#14 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2012o.html#73 These Two Charts Show How The Priorities Of US Companies Have Gotten Screwed Up

--
virtualization experience starting Jan1968, online at home since Mar1970

Capitalism: A Six-Part Series

From: Lynn Wheeler <lynn@garlic.com>
Subject: Capitalism: A Six-Part Series
Date: 05 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#61 Capitalism: A Six-Part Series

note that John Foster Dulles played a major role in rebuilding Germany's economy, industry, and military from the 20s up through the early 40s
https://www.amazon.com/Brothers-Foster-Dulles-Allen-Secret-ebook/dp/B00BY5QX1K/
loc865-68:
In mid-1931 a consortium of American banks, eager to safeguard their investments in Germany, persuaded the German government to accept a loan of nearly $500 million to prevent default. Foster was their agent. His ties to the German government tightened after Hitler took power at the beginning of 1933 and appointed Foster's old friend Hjalmar Schacht as minister of economics.

loc905-7:
Foster was stunned by his brother's suggestion that Sullivan & Cromwell quit Germany. Many of his clients with interests there, including not just banks but corporations like Standard Oil and General Electric, wished Sullivan & Cromwell to remain active regardless of political conditions.

loc938-40:
At least one other senior partner at Sullivan & Cromwell, Eustace Seligman, was equally disturbed. In October 1939, six weeks after the Nazi invasion of Poland, he took the extraordinary step of sending Foster a formal memorandum disavowing what his old friend was saying about Nazism
... snip ...

June1940, Germany had a victory celebration at the NYC Waldorf-Astoria with major industrialists. Lots of them were there to hear how to do business with the Nazis
https://www.amazon.com/Man-Called-Intrepid-Incredible-Narrative-ebook/dp/B00V9QVE5O/

In a somewhat replay of the Nazi celebration, after the war 5000 industrialists and corporations from across the US had a conference at the Waldorf-Astoria; in part because they had gotten such a bad reputation for the depression and for supporting the Nazis, and as part of attempting to refurbish their horribly corrupt and venal image, they approved a major propaganda campaign to equate Capitalism with Christianity.
https://www.amazon.com/One-Nation-Under-God-Corporate-ebook/dp/B00PWX7R56/

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Retain and other online

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Retain and other online
Date: 05 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online

After the 23jun69 unbundling ... starting to charge for (application) software, SE services, maint, etc. At the time SE training included being part of a large group at the customer site ... but they couldn't figure out how not to charge for trainee SEs at customers.

The eventual solution was HONE, which started out for branch office SEs to practice online with guest operating systems running in CP67 virtual machines.

Besides doing CP67 (and a bunch of other things), CSC (Cambridge Scientific Center) also ported APL\360 to CP67/CMS as CMS\APL ... redoing storage management for multi-mbyte demand-paged workspaces (instead of 16kbyte swapped workspaces) and adding APIs for system services (like file i/o), enabling real world apps.

HONE then started using CMS\APL for online sales and marketing support AIDS ... which come to dominate all HONE activity ... and the online CP67 guest operating system practice just dwindles away.

23jun69 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
hone posts
https://www.garlic.com/~lynn/subtopic.html#hone
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downturn, Downfall, Breakup

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downturn, Downfall, Breakup
Date: 05 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#57 IBM Downturn, Downfall, Breakup
https://www.garlic.com/~lynn/2025b.html#58 IBM Downturn, Downfall, Breakup

I was blamed for online computer conferencing on the (non-SNA) internal network in the late 70s and early 80s. It really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem. There were only about 300 that directly participated, but claims were that 25,000 were reading. Folklore was that when the corporate executive committee was told, 5 of 6 wanted to fire me. Some of the results were officially supported software and sanctioned/moderated discussion groups; also a researcher was paid to sit in the back of my office for 9 months taking notes on how I communicated (face-to-face, telephone, etc), and got copies of all my incoming/outgoing email and logs of all instant messages (results were IBM research reports, papers, conference talks, books, and a Stanford PhD, joint with language and computer AI; Winograd was advisor on the computer side). And from IBM Jargon (copy here):
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... snip ...

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Supercomputer Datacenters

From: Lynn Wheeler <lynn@garlic.com>
Subject: Supercomputer Datacenters
Date: 06 Apr, 2025
Blog: Facebook
LANL (national lab) was behind the standardization of the Cray channel as HIPPI
https://en.wikipedia.org/wiki/HIPPI
Then some work on serial HIPPI (still 800mbit/sec)

Then in 1988, an IBM branch office asks if I could help LLNL (national lab) get some serial stuff they were working with standardized, which quickly becomes FCS (fibre channel standard, including some stuff I had done in 1980; initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec). By 1992, there were FCS microchannel cards for RS/6000 (and the IBM disk division AdStaR w/PACHEKO-C FCS in disk arrays)
https://en.wikipedia.org/wiki/Fibre_Channel

Later POK releases some serial stuff they had been playing with for the previous decade, with ES/9000 as ESCON (when it was already obsolete, 17mbyte/sec).

Quite a bit later, POK engineers become involved with FCS and define a protocol that significantly reduces FCS throughput, eventually released as FICON. The latest public benchmark I can find was the z196 "Peak I/O", which got 2M IOPS using 104 FICON. About the same time a FCS was announced for E5-2600 server blades claiming over a million IOPS (two such native FCS having higher throughput than 104 FICON). Note also that IBM pubs recommend SAPs (system assist processors that do the actual I/O) be kept to 70% (or 1.5M IOPS). Also, no real CKD DASD have been made for decades, all being emulated on industry standard fixed-block disks (an extra layer of simulation).
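
Purely as illustration, a little Python arithmetic using just the figures quoted above (nothing beyond simple division of the post's own numbers):

# z196 "Peak I/O" benchmark vs the contemporary native FCS claim
z196_peak_iops  = 2_000_000     # 2M IOPS
ficon_count     = 104           # FICON channels used in that benchmark
native_fcs_iops = 1_000_000     # claimed for a single FCS on E5-2600 blades

iops_per_ficon = z196_peak_iops / ficon_count
print(round(iops_per_ficon))                      # ~19,231 IOPS per FICON
print(round(native_fcs_iops / iops_per_ficon))    # one native FCS ~ 52 FICON
print(2 * native_fcs_iops, "vs", z196_peak_iops)  # two native FCS vs all 104 FICON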

mid-80s IBM 3090 HIPPI trivia: there were attempts to sell 3090s into the compute intensive market ... but that required HIPPI I/O support ... and the 3090 was stuck at 3mbyte/sec data-streaming channels. Some engineers hack the 3090 expanded-store bus (extra memory where synchronous instructions moved 4k pages between expanded-memory and processor-memory) to attach HIPPI I/O devices ... and used the synchronous instructions with special expanded-store addresses to perform HIPPI I/O (sort of like PC "peek/poke").

FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

trivia: the UCSD supercomputer center was operated by General Atomics ... which was also marketing the LANL supercomputer archive system as "Datatree" (and the LLNL supercomputer filesystem as "Unitree"). Congress was pushing the national labs to commercialize advanced technology to make the US more competitive. NCAR also spun off their filesystem as "Mesa Archival".

DataTree and UniTree: software for file and storage management
https://ieeexplore.ieee.org/document/113582

In 1988, I got HA/6000 product, originally for NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when I start doing technical/scientific cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix, which have VAXCluster support in the same source base with UNIX). Besides HA/CMP as cluster supercomputer (later, cluster scale-up was transferred for announce as IBM Supercomputer and we were told we couldn't work on anything with more than four processors), we also had Unitree (LLNL LINCS) and (NCAR) Mesa Archival porting to HA/CMP.

In the 80s, NCAR had started off with an IBM 4341 and NSC HYPERCHANNEL ... each NSC box had up to four 50mbit LAN interfaces; there were NSC boxes for most mainframe channels and telco T1 and T3, as well as an IBM mainframe channel emulator. NCAR had NSC mainframe channel-interface boxes for the IBM 4341 (NSC A220) and their supercomputers, and IBM channel emulator boxes (NSC A515) for attaching IBM DASD controllers (and drives). The IBM DASD controllers had two channel interfaces, attached directly to a 4341 channel and to an NSC A515 emulated channel.

Supercomputers would send the 4341 file record read/write requests (over HYPERCHANNEL). For a read, the 4341 would first check that the file was staged to disk; if not, it would do the I/O to copy it from tape to disk. It would then download a channel program into the appropriate NSC channel box (A515) and return to the requesting supercomputer the information for it to directly invoke the (A515-downloaded) channel program, resulting in direct data transfer between the IBM DASD and the supercomputer ("NAS" network attached storage, with "3rd party transfer"). In the spin-off to "Mesa Archival", the 4341 function was being moved to HA/CMP.
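
Purely as illustration, a minimal Python sketch of that staging/"3rd party transfer" flow (all function and structure names here are hypothetical stand-ins, not actual NCAR or NSC software):

staged_files = {"run42.dat"}          # files already staged from tape to DASD

def copy_tape_to_disk(name):
    print("4341: staging", name, "from tape to DASD")
    staged_files.add(name)

def build_channel_program(name):
    # stand-in for building the real DASD channel program (seek/search/read CCWs)
    return {"file": name, "ccws": ["SEEK", "SEARCH ID EQ", "READ DATA"]}

def a515_download(program):
    # stand-in for loading the channel program into the NSC A515 (emulated channel) box
    print("4341: downloading channel program for", program["file"], "into A515")
    return {"handle": id(program), "program": program}

def handle_read_request(name):
    # 4341 side: request arrives over HYPERCHANNEL
    if name not in staged_files:
        copy_tape_to_disk(name)
    return a515_download(build_channel_program(name))

def supercomputer_read(name):
    # supercomputer side: get the handle, then drive the transfer itself;
    # data flows DASD -> A515 -> HYPERCHANNEL directly, bypassing the 4341
    handle = handle_read_request(name)
    print("supercomputer: invoking downloaded channel program", handle["handle"])

supercomputer_read("model-output.dat")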

other trivia: the IBM communication group was trying to block IBM mainframe TCP/IP support. When that failed, they changed tactics; since they had corporate strategic ownership of everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did the support for RFC1044 and in some tuning tests at Cray Research, between a Cray and a 4341, got 4341 sustained channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3101 Glass Teletype and "Block Mode"

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3101 Glass Teletype and "Block Mode"
Date: 07 Apr, 2025
Blog: Facebook
Within a year of taking a two credit hr intro to fortran/computers, a 360/67 was installed at the univ (replacing the 709/1401) for tss/360, which never came to fruition, and I was hired fulltime responsible for os/360. Then CSC came out to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly played with it during my dedicated 48hr weekend window. Initially I rewrote a lot of CP67 to improve running OS/360 in a virtual machine: an OS/360 test jobstream that ran 322secs on the real machine initially ran 856secs under CP67; I got that down to 435secs (cutting CP67 CPU overhead from 534secs to 113secs).

CP67 originally came with 2741 and 1052 terminal support (and "magic" terminal type identification), and since the univ had some tty/ascii terminals, I added tty terminal support (integrated with the magic terminal type identification). I then wanted a single dial-in phone number for all terminal types, which only worked if the baud rates were all the same ... IBM had taken a short-cut and hard-wired the baud rate for each port. The univ then kicks off a clone controller project, building a channel interface board for an Interdata/3 programmed to simulate the IBM telecommunication controller, with the addition of dynamic baud rate determination (see the sketch below). This was later upgraded to an Interdata/4 for the channel interface with a cluster of Interdata/3s for the port interfaces (and four of us are written up as responsible for some part of the clone controller business)
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
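
Purely as illustration of the general auto-baud idea mentioned above (not the actual Interdata channel-interface code), a small Python sketch: time the incoming start bit and pick the nearest standard line speed, so terminals of different speeds can share one dial-in number:

STANDARD_BAUDS = [110, 134.5, 150, 300, 600, 1200]   # common rates of the era

def guess_baud(start_bit_seconds):
    # one bit time is 1/baud; invert the measured start-bit width
    # and snap to the closest standard rate
    measured = 1.0 / start_bit_seconds
    return min(STANDARD_BAUDS, key=lambda b: abs(b - measured))

print(guess_baud(0.0091))   # ~9.1ms start bit -> 110 baud (TTY)
print(guess_baud(0.0074))   # ~7.4ms -> 134.5 baud (2741)
print(guess_baud(0.0033))   # ~3.3ms -> 300 baud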

After graduating and joining IBM, I got a 2741 at home (Mar1970), eventually replaced with an (ASCII) 300baud CDI Miniterm, and then in the late 70s a 1200baud IBM Topaz/3101 (initially a mod1, simple glass teletype). I track down a contact in Fujisawa for the "mod2" and they send ROMs to upgrade the mod1 to a mod2 (which included "block mode" support).

There was a (VM370) ascii->3270 simulation for the IBM home terminal program that would leverage 3101 "block mode" ... later upgraded for "PCTERM" when the IBM/PC was released (supporting string compression and string caching).

ASCII trivia: originally, 360s were supposed to be ASCII machines; however the unit record gear wasn't ready, so they initially ("temporarily") shipped as EBCDIC with BCD gear ... the biggest computer goof:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

some topaz/3101 "block mode" posts
https://www.garlic.com/~lynn/2024g.html#84 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2023f.html#91 Vintage 3101
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2022d.html#28 Remote Work
https://www.garlic.com/~lynn/2021i.html#68 IBM ITPS
https://www.garlic.com/~lynn/2017h.html#12 What is missing?
https://www.garlic.com/~lynn/2014i.html#11 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014h.html#77 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014e.html#49 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2013k.html#16 Unbuffered glass TTYs?
https://www.garlic.com/~lynn/2012m.html#25 Singer Cartons of Punch Cards
https://www.garlic.com/~lynn/2010b.html#27 Happy DEC-10 Day
https://www.garlic.com/~lynn/2008m.html#37 Baudot code direct to computers?
https://www.garlic.com/~lynn/2006y.html#31 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2006y.html#24 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2006y.html#4 Why so little parallelism?
https://www.garlic.com/~lynn/2006y.html#0 Why so little parallelism?
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2004e.html#8 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2003n.html#7 3270 terminal keyboard??
https://www.garlic.com/~lynn/2003c.html#35 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#34 difference between itanium and alpha
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#1 ASR33/35 Controls
https://www.garlic.com/~lynn/2001b.html#12 Now early Arpanet security
https://www.garlic.com/~lynn/2000g.html#17 IBM's mess (was: Re: What the hell is an MSX?)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23Jun1969 Unbundling and HONE

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23Jun1969 Unbundling and HONE
Date: 07 Apr, 2025
Blog: Facebook
The 23jun1969 IBM unbundling announcement started charging for (application) software, SE services, maint., etc. SE training used to include being part of a large group at the customer's site; however with unbundling they couldn't figure out how not to charge for the onsite trainee SEs. The solution was branch office SEs practicing with guest operating systems running in virtual machines at the online CP67 "HONE" datacenters.

After I graduate and join the IBM Cambridge Science Center, one of my hobbies was enhanced production operating systems for internal datacenters and HONE was one of my 1st (and long time) customers. The Cambridge Science Center then ported APL\360 to CP67/CMS as CMS\APL, which included redoing storage management for demand-paged multi-mbyte workspaces (instead of 16kbyte swapped workspaces) and APIs for system services (like file I/O), enabling lots of real world applications. HONE then starts offering APL-based sales&marketing support applications which came to dominate all HONE activity (and the guest operating system practice just withered away).

Then with the decision to add virtual memory to all 370s, there was also a decision to do VM370, and some of the science center splits off and takes over the Boston Programming Center on the 3rd flr ... in the morph of CP67->VM370 lots of stuff was simplified or dropped. I then start adding a bunch of CP67 back into VM370R2 for my internal CSC/VM and the US HONE datacenters move to 370. The Palo Alto Science Center does APL\CMS for VM370/CMS as well as the APL microcode assist for the 370/145 (claiming APL throughput of a 370/168), and US HONE consolidates all their datacenters across the back parking lot from PASC (trivia: when facebook 1st moves into silicon valley, it is in a new bldg built next door to the former consolidated US HONE datacenter).

The US HONE systems are enhanced into the largest loosely-coupled, single-system-image, shared-DASD complex in the world, with fall-over and load-balancing across the complex, and I upgrade CSC/VM to a VM370R3 base with the addition of CP67 multiprocessor support (initially for HONE so they can add a 2nd CPU to all systems). After the California earthquake, the US HONE datacenter is replicated 1st in Dallas and then with a 3rd in Boulder (while other HONE clones were cropping up all over the world). Nearly all HONE (US and world-wide) offerings were APL-based (by far the largest use of APL in the world).

Trivia: PASC had done an enhanced code optimizer, initially as the internal FORTQ ... eventually released as FORTRAN HX. For some extremely compute intensive HONE APL-based calculations, a few of the APL applications were modified to have APL invoke FORTQ versions of the code.

Other trivia: APL purists were criticizing the CMS\APL APIs for system services and eventually counter with shared variables ... and APL\CMS morphs into APLSV followed by VS/APL.

The 4341 was better than twice the MIPS of a 360/65 ... with extraordinary price/performance. I was using an early engineering VM/4341 when a branch office hears about it in Jan1979 (well before 1st customer ship) and cons me into doing a benchmark for a national lab that was looking at getting 70 for a compute farm (leading edge of the coming cluster supercomputing tsunami).

In the 80s, VM/4300s were selling into the same mid-range market as DEC VAX/VMS and in similar numbers for single/small-unit orders. The big difference was large corporations ordering hundreds of VM/4300s at a time for distribution out into departmental areas (leading edge of the coming distributed computing tsunami). Inside IBM, so many conference rooms were being converted into VM/4300 rooms that conference rooms became scarce.

23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23Jun1969 Unbundling and HONE

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23Jun1969 Unbundling and HONE
Date: 07 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#67 IBM 23Jun1969 Unbundling and HONE

by mid-70s, most mainframe orders were required to be 1st run through HONE configurators.

trivia: A co-worker at the science center had done an APL-based analytical system model ... which was made available on HONE as the Performance Predictor (the branch enters customer configuration and workload information and can then ask what-if questions about the effect of changing configuration and/or workloads). The consolidated US HONE single-system-image complex uses a modified version of the Performance Predictor to make load-balancing decisions.

I also use it for 2000 automated benchmarks (taking 3 months elapsed time) in preparation for my initial VM370 "charged for" kernel add-on release to customers (i.e. with unbundling, the case was made that kernel software should still be free; then with the rise of the 370 clone makers during FS and after FS implodes, the decision was made to transition to charging for all kernel software, and pieces of my internal CSC/VM were selected as the initial guinea pig; after the transition to charging for all kernel software completed, the OCO-wars started).

The first 1000 benchmarks have manually specified configuration and workload profiles that are uniformly distributed across known observations of real live systems (plus 100 extreme combinations outside anything observed on real systems). Before each benchmark, the modified Performance Predictor predicts the benchmark performance, and afterwards compares the measured results with its prediction (saving all values). The 2nd 1000 automated benchmarks have configuration and workload profiles specified by the modified Performance Predictor itself (searching for possible problem combinations).
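
Purely as illustration, a minimal Python sketch of that predict-then-validate loop (the toy model and all names here are hypothetical stand-ins for the APL-based Performance Predictor and the real benchmark driver):

import random

def predict(config):
    # stand-in for the analytical model's throughput prediction
    return config["users"] * 0.9

def run_benchmark(config):
    # stand-in for actually running the configured, loaded system
    return config["users"] * 0.85 * random.uniform(0.9, 1.1)

results = []

# Phase 1: ~1000 profiles uniformly spread across observed real-system ranges
# (plus some extreme combinations outside anything observed)
for _ in range(1000):
    config = {"users": random.randint(10, 400), "real_mbytes": random.choice([2, 4, 8, 16])}
    predicted = predict(config)
    measured = run_benchmark(config)
    results.append((config, predicted, measured))       # saved: model vs reality

# Phase 2: the model drives, probing around the profiles where prediction
# and measurement diverged the most (searching for problem combinations)
worst = sorted(results, key=lambda r: abs(r[1] - r[2]), reverse=True)[:50]
for config, _, _ in worst:
    probe = dict(config, users=max(10, config["users"] + random.randint(-20, 20)))
    results.append((probe, predict(probe), run_benchmark(probe)))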

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark

some recent posts mentioning automated benchmarking, performance predictor, hone
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing
https://www.garlic.com/~lynn/2024c.html#6 Testing
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2023g.html#43 Wheeler Scheduler
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#33 Copyright Software
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc

--
virtualization experience starting Jan1968, online at home since Mar1970

Amdahl Trivia

From: Lynn Wheeler <lynn@garlic.com>
Subject: Amdahl Trivia
Date: 07 Apr, 2025
Blog: Facebook
Amdahl trivia: in the 60s, Amdahl had won the battle to make ACS 360-compatible; then it was killed (folklore is they were afraid it would advance the state of the art too fast and IBM might lose control of the market) and Amdahl leaves IBM ... the following ref has some ACS/360 features that show up more than 20yrs later with ES/9000:
https://people.computing.clemson.edu/~mark/acs_end.html

Then comes FS, completely different from 370 and going to completely replace it (internal politics was killing off 370 efforts, and with little in the way of new 370s during the period, this gave the clone 370 makers, including Amdahl, their market foothold). When FS implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 in parallel. The 3081 was some redone FS technology, with a huge number of circuits relative to its performance (some claim it was enough circuits to build 16 168s; a major motivation for TCMs was to cram all those circuits into a smaller physical area).
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

The initial machine was the two-processor 3081D, whose aggregate MIPS was less than the Amdahl single processor; they then doubled the 3081 processor cache sizes for the 3081K, bringing it up to about the same as the Amdahl single processor (however, MVS docs said 2-CPU only got 1.2-1.5 times the throughput of 1-CPU because of MVS two-processor overhead ... and 3084 throughput was significantly less than an Amdahl two-processor).

Also in the wake of the FS implosion, I get asked to help with a 16-CPU 370 SMP and we con the 3033 processor engineers into helping in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system ("MVS") had (effective) 16-processor support (i.e. its 2-CPU overhead was that great); he then invites some of us to never visit POK again and directs the 3033 processor engineers, "heads down and no distractions". Note POK doesn't ship a 16-CPU SMP until after the turn of the century. Contributing was that the head of POK was in the process of convincing corporate to kill the VM370 product, shut down the development group and move all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission for the mid-range, but had to recreate a VM370 development group from scratch).

trivia: In the morph of CP67->VM370 they simplify and/or drop lots of stuff (including CP67 SMP support). I start moving CP67 stuff back into VM370R2 for my internal CSC/VM. Then for VM370R3-based CSC/VM, I add multiprocessor support back in, initially for US HONE so they can add a 2nd processor to each system (getting twice the throughput of a 1-CPU system).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Kernel Histories

From: Lynn Wheeler <lynn@garlic.com>
Subject: Kernel Histories
Date: 08 Apr, 2025
Blog: Facebook
The joke was that some Kingston (OS/360) MFT people went to Boca to reinvent MFT for the Series/1 ("RPS") ... later, an IBM San Jose Research physics summer student did Series/1 EDX

trivia: also before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP67/CMS at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

lots of folklore on USENET AFC that the person responsible for DEC VMS went to m'soft to be responsible for NT.

some of the MIT CTSS/7094 people went to the 5th floor for MULTICS (some of the Bell Labs people later return home and do a simplified MULTICS as "UNIX"), others went to the IBM science center on the 4th flr, modified a 360/40 with virtual memory and did CP40/CMS. It morphs into CP67/CMS when the 360/67 standard with virtual memory became available (precursor to VM370/CMS).

IBM and DEC both contributed $25M to MIT funding Project Athena ... then IBM funded CMU with $50M; CMU did MACH (unix work-alike), Camelot (IBM underwrites the Camelot spinoff as Transarc and then buys Transarc outright), Andrew, etc. CMU MACH was used for NeXT and then brought over to Apple (with UCB BSD unix-alike)
https://www.operating-system.org/betriebssystem/_english/bs-darwin.htm
By the related UNIX design Mac OS X profits from the protected memory area and established preemptive multitasking. The Kernel consists of 5 components. Includet are the Mach Mikrokernel with the BSD subsystem, file system, network ability and the I/O Kit
... snip ...

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23Jun1969 Unbundling and HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23Jun1969 Unbundling and HONE
Date: 09 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#67 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#68 IBM 23Jun1969 Unbundling and HONE

in the morph of CP67->VM370, they simplify and/or drop a lot of features (including multiprocessor support); then for the VM370R3 version of my internal CSC/VM, I add multiprocessor support back in, initially for HONE so they can add a 2nd processor to every system (largest APL operation in the world)

note: lots of schedulers made decisions based on the most recent events of some sort ... as an undergraduate in the 60s, I did dynamic adaptive resource management for CP67 (which IBM released as part of the standard scheduler; universities used to refer to it as the "wheeler scheduler"). It was one of the things simplified/dropped in the morph from CP67->VM370 ... but I put it back in for internal CSC/VM (and it ran at HONE). Turns out the APL hack to lock up other systems didn't work with mine ... and it wasn't just an APL-only hack ... it was possible to do it with a CMS EXEC script.
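
A minimal sketch (purely illustrative; not the actual CP67/VM370 scheduler code, and the names, weights and half-life are hypothetical) of the general idea of dynamic adaptive resource management: schedule on a decayed measure of recent resource consumption rather than on a single most-recent event, so a CPU-looping user (APL or CMS EXEC) sinks in dispatch priority instead of locking everybody else out.

import time

# Hypothetical illustration only -- not the CP67 "wheeler scheduler".
# Each user keeps an exponentially decayed measure of recent CPU consumption;
# dispatch order favors the least recent consumption, so a looping user
# sinks in priority instead of monopolizing the system.

HALF_LIFE = 4.0  # seconds; assumed decay constant for "recent" consumption

class User:
    def __init__(self, name):
        self.name = name
        self.decayed_cpu = 0.0           # decayed recent CPU seconds
        self.last_update = time.time()

    def charge(self, cpu_seconds):
        """Decay old consumption, then add the new CPU charge."""
        now = time.time()
        elapsed = now - self.last_update
        self.decayed_cpu *= 0.5 ** (elapsed / HALF_LIFE)
        self.decayed_cpu += cpu_seconds
        self.last_update = now

def next_to_dispatch(runnable):
    """Pick the runnable user with the least recent consumption."""
    return min(runnable, key=lambda u: u.decayed_cpu)

# toy usage: "looper" burns CPU every pass, "trivial" barely uses any
looper, trivial = User("looper"), User("trivial")
for _ in range(10):
    looper.charge(0.100)    # 100ms CPU per pass
    trivial.charge(0.001)   # 1ms CPU per pass
print(next_to_dispatch([looper, trivial]).name)   # -> "trivial"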

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67l, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

--
virtualization experience starting Jan1968, online at home since Mar1970

Cluster Supercomputing

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Cluster Supercomputing
Date: 11 Apr, 2025
Blog: Facebook
I was working with (IBM) engineering 4341s and then in Jan1979 (well before 1st customer ship), an IBM branch office finds out and cons me into doing a benchmark for a national lab looking to get 70 for a compute farm (leading edge of the coming cluster supercomputing tsunami). A decade later I get the HA/6000 project (1988), originally for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when I start doing technical/scientific cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres ... those that had VAXCluster support in the same source base with Unix).

Early Jan1992, have meeting with Oracle CEO, and IBM/AWD executive Hester tells Ellison that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92 (code name "Medusa"). Mid-Jan give presentation to IBM FSD (federal system division) on HA/CMP work with national labs. They make decision to go with HA/CMP for federal supercomputers. Old email from technical assistant to the FSD president saying that he just returned from telling IBM Kingston Supercomputer group that FSD was going with HA/CMP.
Date: Wed, 29 Jan 92 18:05:00
To: wheeler

MEDUSA uber alles...I just got back from IBM Kingston. Please keep me personally updated on both MEDUSA and the view of ENVOY which you have. Your visit to FSD was part of the swing factor...be sure to tell the redhead that I said so. FSD will do its best to solidify the MEDUSA plan in AWD...any advice there?

Regards to both Wheelers...

... snip ... top of post, old email index

Within a day or two, we are told that cluster scale-up is being transferred to Kingston for announce as IBM Supercomputer (technical/scientific *ONLY*) and we aren't allowed to work with anything that has more than four systems (we leave IBM a few months later). Then 17feb1992, Computerworld news ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

other articles
https://www.garlic.com/~lynn/2001n.html#6000clusters1
https://www.garlic.com/~lynn/2001n.html#6000clusters2
https://www.garlic.com/~lynn/2001n.html#6000clusters3

IBM SP1 references 8-16 nodes/frame and up to four frames (8-64nodes) Feb 2, 1993
https://www.tech-insider.org/mainframes/research/1993/0202.html
PERFORMANCE: Computing power increases with the number of frames included in the system. Based on the number of frames, four major configurations of the SP1 are being offered. These configurations provide customers with modular processing power ranging from 1 to 8 gigaFLOPS. The peak performance per individual processor node is 125 megaFLOPS. A single frame houses eight to 16 processor nodes, the redundant power supply and an optional high performance switch. The SP1 can accommodate up to four frames.
... snip ...

Experiences with the IBM SP1 (June 1993, 128 nodes)
https://www.osti.gov/biblio/10176909
https://experts.illinois.edu/en/publications/experiences-with-the-ibm-sp1
One of the first IBM parallel processing computers - the SP1TM - and the largest, with 128 nodes, was installed in 1993 at Argonne National Laboratory.
... snip ...

IBM Scalable POWERparallel
https://en.wikipedia.org/wiki/IBM_Scalable_POWERparallel

part of old archived comp.arch discussion
https://www.garlic.com/~lynn/2006w.html#40 Why so little parallelism?
https://www.garlic.com/~lynn/2006w.html#41 Why so little parallelism?

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
some archived 43xx system email
https://www.garlic.com/~lynn/lhwemail.html#4341
some archive MEDUSA email
https://www.garlic.com/~lynn/lhwemail.html#medusa

--
virtualization experience starting Jan1968, online at home since Mar1970

Cluster Supercomputing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cluster Supercomputing
Date: 11 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#72 Cluster Supercomputing

Early 70s, during the IBM Future System effort, internal politics was killing off 370 efforts and the lack of new 370 products during the period is credited with giving the 370 clone makers their market foothold.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

then when FS implodes there is mad rush to get stuff back into the 370 product pipeline, including kicking off quick&dirty 3033&3081 efforts in parallel.

I get roped into helping with a 16-CPU shared-memory, tightly-coupled multiprocessor, and we con the 3033 processor engineers into helping in their spare time (lot more interesting than remapping 370/168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK (high-end 370 mainframe) that it could be decades before the POK favorite son operating system ("MVS") has (effective) 16-CPU support. The head of POK then invites some of us to never visit POK again (and directs the 3033 processor engineers "heads down and no distractions")

One of the referenced articles (in the initial post) quotes IBM (early 90s) saying that it could shortly be showing a (mainframe) 32-CPU multiprocessor ... but POK doesn't ship even a 16-CPU until after the turn of the century.

Same time in 1988 that I got the HA/6000 project, the IBM branch office asks if I could help LLNL standardize some serial stuff that they had been working with, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980; initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec) ... and then we have FCS cards (for RS/6000) & a 64-port non-blocking switch by 1992. Then POK releases their serial stuff for ES/9000 (when it is already obsolete, 17mbyte/sec)

Note the following are industry MIPS benchmarks based on count of program iterations (compared to an industry reference platform), not actual count of instructions (a quick arithmetic check of the per-CPU figures follows these comparisons):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS; 16-system: 2BIPS; 128-system: 16BIPS


During 90s, i86 implements pipelined, on-the-fly, hardware translation of i86 instructions to RISC micro-ops for execution, largely negating RISC throughput advantage. 1999, enabled multiprocessor, but still single core chips
• single IBM PowerPC 440 hits 1,000MIPS
• single Pentium3 hits 2,054MIPS (twice PowerPC 440)


by comparison, mainframe Dec2000
• z900: 16 processors, 2.5BIPS (156MIPS/CPU)

then 2010, mainframe (z196) versus i86 (E5-2600 server blade)
• E5-2600, two XEON 8core chips, 500BIPS (30BIPS/CPU)
• z196, 80 processors, 50BIPS (625MIPS/CPU)
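
The per-CPU figures above are just the aggregate rate divided by the processor count; a quick arithmetic check (numbers copied from the comparisons above, nothing else assumed):

# per-CPU rate = aggregate rate / number of CPUs (figures from the comparisons above)
def per_cpu_mips(aggregate_mips, cpus):
    return aggregate_mips / cpus

print(per_cpu_mips(408, 8))          # ES/9000-982: 51 MIPS/CPU
print(per_cpu_mips(2_500, 16))       # z900, Dec2000: ~156 MIPS/CPU
print(per_cpu_mips(50_000, 80))      # z196, 2010: 625 MIPS/CPU
print(per_cpu_mips(500_000, 16))     # E5-2600 blade, two 8-core XEONs: ~31,250 MIPS (~30 BIPS) per core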


Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Cluster Supercomputing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cluster Supercomputing
Date: 11 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#72 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#73 Cluster Supercomputing

Early 80s, I got the HSDT project, T1 and faster computer links (both terrestrial and satellite), lots of conflict with the communication group (note IBM had the 2701 controller in the 60s that supported T1, but the move to SNA/VTAM in the 70s and associated issues capped telecommunication controllers at 56kbits/sec). First long haul satellite T1 was between the Los Gatos lab on the west coast and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
IBM E&S lab in Kingston that had a boat load of Floating Point Systems boxes (including 40mbyte/sec disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems
Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.
... snip ...

Was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputing centers; congress cuts the budget, some other things happened and finally an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (lots of reasons inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Armonk, IBM Headquarters

From: Lynn Wheeler <lynn@garlic.com>
Subject: Armonk, IBM Headquarters
Date: 11 Apr, 2025
Blog: Facebook
note 1972, Learson tries (but fails) to block bureaucrats, careerists and MBAs from destroying Watson culture/legacy, refs pg160-163
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Some '81 about "Tandem Memos" in this post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

1992, IBM had one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company (take-off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

Corporate Network

From: Lynn Wheeler <lynn@garlic.com>
Subject: Corporate Network
Date: 12 Apr, 2025
Blog: Facebook
A co-worker was responsible for the science center CP67 wide-area network, which morphs into the corporate internal network (and the technology used for the corporate sponsored univ BITNET). Comment by another co-worker (one of the inventors of GML at the science center in 1969; a decade later it morphs into the ISO standard SGML and after another decade morphs into HTML at CERN):
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

... CP67 morphs into VM370 ... and 1st webserver in the US was CERN sister installation Stanford SLAC ... on their VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

The Science Center network, then morphing into the corporate internal network, was larger than the arpanet/internet from just about the beginning until sometime mid/late 80s (about the time the corporate network was forced to convert to SNA/VTAM). At the time of the great cutover to internetworking (1jan1983), there were approx. 100 IMP network nodes and 255 hosts ... while the corporate network was rapidly approaching 1000. I've commented that one of the inhibitors to arpanet was the requirement for IMP approval&connection. Somewhat of an internal network equivalent was that all links had to be encrypted (and there was gov. agency resistance, especially when links crossed national boundaries); a 1985 link encryptor vendor claimed that the corporate internal network had more than half of all link encryptors in the world.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Corporate Network

From: Lynn Wheeler <lynn@garlic.com>
Subject: Corporate Network
Date: 12 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#76 Corporate Network

SJR had first (IBM) gateway to CSNET, fall '82
https://www.garlic.com/~lynn/2021f.html#email821022
CSNET advisory about cut-over to internetworking
https://www.garlic.com/~lynn/2021f.html#email821230
CSNET comment about cut-over
https://www.garlic.com/~lynn/2021f.html#email830202

Later BITNET and CSNET merge
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/CSNET
as CREN
https://en.wikipedia.org/wiki/Corporation_for_Research_and_Educational_Networking

trivia: 1981, I got the HSDT project, T1 and faster computer links and lots of conflict with the corporate communication product group (note, 60s, IBM had the 2701 telecommunication controller that had T1 support; then the move to SNA/VTAM and associated issues capped controllers at 56kbits/sec). HSDT was working with the NSF director and was supposed to get $20M to interconnect the NSF Supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downturn

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downturn
Date: 12 Apr, 2025
Blog: Facebook
Had already left, but continued to get email ... one joke: would the last person to leave POK please turn out the lights ... somewhat a take-off on the Seattle billboard asking the last person to leave to please turn out the Seattle lights, during the Boeing downturn in the 1st part of the 70s (as an undergraduate, I had worked fulltime in a small group in the Boeing CFO office helping with consolidating all dataprocessing into an independent business unit; after graduation, I left and joined IBM).

1992, IBM had one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company (take-off on "baby bell" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.

Note the following are industry MIPS benchmarks based on count of program iterations (compared to an industry reference platform), not actual count of instructions:
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS; 16-system: 2BIPS; 128-system: 16BIPS


During 90s, Somerset/AIM does single chip 801/risc with Motorola cache/bus enabling multiprocessor configurations and i86 implements pipelined, on-the-fly, hardware translation of i86 instructions to RISC micro-ops for execution, largely negating RISC throughput advantage. 1999, enabled multiprocessor, but still single core chips
• single IBM PowerPC 440 hits 1,000MIPS
• single Pentium3 hits 2,054MIPS (twice PowerPC 440)


by comparison, mainframe Dec2000
• z900: 16 processors, 2.5BIPS (156MIPS/CPU)

then 2010, mainframe (z196) versus i86 (E5-2600 server blade)
• E5-2600, two XEON 8core chips, 500BIPS (30BIPS/CPU)
• z196, 80 processors, 50BIPS (625MIPS/CPU)

Note 1988, IBM branch asks if I could help LLNL standardize some serial stuff they were working with which quickly becomes fibre-channel standard ("FCS", initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec, including some stuff I had done in 1980). POK eventually releases their serial stuff with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). We have RS/6000 full/native FCS cards and switch by 1992.

Then some POK engineers become involved in FCS and define a heavy-weight protocol that significantly reduces throughput, which is eventually released as FICON. The latest public benchmark I can find is a z196 "Peak I/O" benchmark that got 2M IOPS using 104 FICON. About the same time, an FCS was released for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Note IBM pubs recommend SAPs (system assist processors that do the actual I/O) be kept to 70% CPU (or about 1.5M IOPS).
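
Dividing out the quoted numbers shows the per-adapter gap (back-of-envelope only; the 2M IOPS over 104 FICON and the million-IOPS FCS claim are as quoted above, the rest is just arithmetic):

# back-of-envelope per-adapter IOPS from the figures quoted above
z196_peak_iops = 2_000_000      # z196 "Peak I/O" benchmark
ficon_count = 104
fcs_claimed_iops = 1_000_000    # single FCS claim for E5-2600 server blades

iops_per_ficon = z196_peak_iops / ficon_count
print(round(iops_per_ficon))                     # ~19,231 IOPS per FICON
print(round(fcs_claimed_iops / iops_per_ficon))  # ~52x a single FICON
# so two such FCS (over 2M IOPS) exceed the 104-FICON "Peak I/O" result,
# and the recommended 70% SAP cap (~1.5M IOPS) is lower still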

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

some archived posts mention POK (& Boeing) "turn out lights"
https://www.garlic.com/~lynn/2017j.html#89 The 1970s engineering recession
https://www.garlic.com/~lynn/2017f.html#51 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2015g.html#63 [Poll] Computing favorities
https://www.garlic.com/~lynn/2014m.html#143 LEO
https://www.garlic.com/~lynn/2013i.html#47 Making mainframe technology hip again
https://www.garlic.com/~lynn/2012h.html#57 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2012g.html#66 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012.html#57 The Myth of Work-Life Balance

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3081

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3081
Date: 13 Apr, 2025
Blog: Facebook
Late 60s, Amdahl wins the battle to make ACS 360-compatible ... it is then killed, folklore was worries that it would advance the state-of-the-art too fast and IBM lose control of the market ... and Amdahl leaves IBM; the ref has some ACS/360 features that show up 20+yrs later in ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html

Followed in the early 70s by the Future System project, non-370 and intended to completely replace 370 ... internal politics killing off 370 efforts and the lack of new 370s is what gives the clone 370 makers (including Amdahl) their market foothold. Then FS implodes and there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 in parallel. 3033 starts off remapping 168 logic to 20% faster chips; 3081 is warmed-over FS technology with an enormous number of circuits, the claim was enough to have built 16 168s ... motivating TCMs in order to pack all those circuits in a reasonable physical volume.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive

... snip ...

prior to FS, Learson had tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying Watson culture&legacy refs pg160-163
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Initial 3081D (2-CPU) was slower than the Amdahl single processor; they quickly double the processor cache size, bringing the 3081K up to about the same aggregate MIPS as the single-CPU Amdahl ... although MVS docs said that 2-CPU (3081K) support only has 1.2-1.5 times the throughput of a single processor system (w/half the MIPS). 3084 was two 3081Ks lashed together and much, much slower than the Amdahl 2-CPU.

In the wake of the FS failure, I had gotten roped into helping with a 16-CPU multiprocessor design (that everybody thought was great) and we con the 3033 processor engineers into helping in their spare time (lot more interesting than remapping 168 logic to 20% faster chips). Then somebody informs the head of POK that it could be decades before POK's favorite son operating system ("MVS") had (effective) 16-CPU support (2-CPU support started out as 1.2-1.5 times a single processor, with MVS multiprocessor overhead greatly increasing as the number of processors increased; POK doesn't ship a 16-CPU system until after the turn of the century), and the head of POK invites some of us to never visit POK again and directs the 3033 processor engineers to heads down and no distractions.

trivia: I got HSDT project early 80s, T1 and faster computer links and lots of battles with communication group (note: in 60s, IBM had 2701 telecommunication controller supporting T1 links but 70s move to SNA/VTAM and associated issues limited controllers to 56kbit/sec links). My 1st long haul T1 link was between Los Gatos lab on west coast and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston (on the east coast) that had a boat load of Floating Point System boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems

I also got asked to turn a "baby bell" PU4/PU5 emulation done on an IBM S/1 into a type-1 IBM product. The S1 NCP/VTAM emulated owning all the resources and used cross-domain support to the host VTAMs ... and included support for T1 and faster links ... with significantly better features, function, throughput, performance, and price/performance than the official IBM NCP/VTAM (could have eight S/1s interconnected with a "chat ring" in redundant, no-single-point-of-failure operation). What the internal politics did to kill the effort can only be described as truth is stranger than fiction.

part of my S1 PU4/PU5 presentation at Raleigh SNA/ARB meeting
https://www.garlic.com/~lynn/99.html#67
part of baby bell S1 PU4/PU5 presentation at COMMON (user group) meeting
https://www.garlic.com/~lynn/99.html#70

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

some recent posts mentioning ACS/360, Amdahl, and 3081
https://www.garlic.com/~lynn/2025b.html#69 Amdahl Trivia
https://www.garlic.com/~lynn/2025b.html#57 IBM Downturn, Downfall, Breakup
https://www.garlic.com/~lynn/2025b.html#56 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s
https://www.garlic.com/~lynn/2025.html#100 Clone 370 Mainframes
https://www.garlic.com/~lynn/2025.html#53 Canned Software and OCO-Wars
https://www.garlic.com/~lynn/2025.html#32 IBM 3090
https://www.garlic.com/~lynn/2024g.html#32 What is an N-bit machine?
https://www.garlic.com/~lynn/2024g.html#26 IBM Move From Leased To Sales
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024e.html#109 Seastar and Iceberg
https://www.garlic.com/~lynn/2024e.html#100 360, 370, post-370, multiprocessor
https://www.garlic.com/~lynn/2024e.html#65 Amdahl
https://www.garlic.com/~lynn/2024e.html#37 Gene Amdahl
https://www.garlic.com/~lynn/2024e.html#18 IBM Downfall and Make-over
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024d.html#100 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2024.html#11 How IBM Stumbled onto RISC

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3081

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3081
Date: 13 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081

Part of the reason for getting sucked into the 16-CPU effort: after graduating and joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters, and the sales&marketing support HONE was one of the first (and long time) customers. Then in the decision to add virtual memory to all 370s and bring out the VM370 product, the morph of CP67->VM370 simplifies and/or drops lots of CP67 (including multiprocessor support). For VM370R2 I start adding CP67 stuff back in for my internal CSC/VM. Then for VM370R3-based CSC/VM, I add multiprocessor support back in, originally for HONE so they could add a 2nd CPU to each system (and they started getting twice the throughput of single-CPU systems; optimized support and hacks for cache-affinity got higher cache-hit and MIP rates, offsetting SMP overhead).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3081

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3081
Date: 13 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081
https://www.garlic.com/~lynn/2025b.html#80 IBM 3081

IBM IMS hot-standby was a supporter of the S/1 PU4/PU5. VTAM had heavy-weight session establishment; in an IMS hot-standby configuration with a large number of ATM machines, IMS fall-over took minutes, but even on a 3090, VTAM could take over an hour to get the sessions back up. The S1 PU4/PU5 could keep "shadow" sessions established with the fall-over machines, so VTAM take-over is immediate.

other trivia: mid-80s, the IBM communication group was trying to block release of IBM mainframe TCP/IP support (fighting off client/server and distributed computing). When that didn't work, they changed tactics; since they had corporate strategic ownership of everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did support for RFC1044 and in some tuning tests at Cray Research, between a Cray and a 4341, got sustained 4341 channel throughput, using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).

For HSDT, did dynamic adaptive rate-based pacing (as opposed to window-size-based pacing) and could handle most link speeds and link round-trip latency (even >T1 geostationary satellite "double-hop"; west coast up, east coast down, east coast up, EU down ... and then back).
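
A simplified sketch of the difference (assumptions only, not the HSDT implementation): window-based pacing can have at most W packets outstanding per round trip, so throughput collapses as round-trip time grows, while rate-based pacing spaces packets by an interval derived from the target rate, independent of RTT, and adapts that rate from feedback.

# simplified contrast between window-based and rate-based pacing
# (illustrative only; packet size, window and RTT values are assumptions)

PKT_BITS = 8 * 1024          # assume 1KB packets

def window_throughput(window_pkts, rtt_sec):
    """Window pacing: at most window_pkts unacked per round trip."""
    return window_pkts * PKT_BITS / rtt_sec        # bits/sec

def rate_interval(target_bps):
    """Rate pacing: space packet sends so the long-run rate is target_bps."""
    return PKT_BITS / target_bps                   # seconds between sends

def adapt_rate(current_bps, loss_seen):
    """Toy feedback loop: back off on loss, probe upward otherwise."""
    return current_bps * 0.8 if loss_seen else current_bps * 1.05

# ~30ms terrestrial RTT vs ~1sec double-hop geosync satellite RTT, 8-packet window
print(window_throughput(8, 0.030))    # ~2.2 mbits/sec
print(window_throughput(8, 1.0))      # ~66 kbits/sec ... window pacing collapses
print(rate_interval(1_500_000))       # ~5.5ms between packets for a T1 rate, any RTT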

Second half of the 80s, the communication group finally comes out with the 3737 for (a single) T1 link support (20yrs after the IBM 2701 T1); it had a boat load of Motorola 68k processors and a huge amount of memory ... simulating a host CTCA VTAM. It would immediately "ACK" traffic to the local host VTAM (trying to keep the host VTAM from exhausting its "window") and then use its own protocols to the remote 3737. For full-duplex, short-haul terrestrial (minimal round-trip latency) T1 (1.5mbits transfer, 3mbits/sec aggregate), the 3737 peaked around 2mbits/sec aggregate.
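
A rough sketch of that local-ACK idea (hypothetical Python, not 3737 microcode): acknowledge to the local host VTAM immediately so its small window keeps refilling, buffer the frames, and move them to the remote box under a separate box-to-box protocol.

from collections import deque

# rough illustration of "local ACK" spoofing (not actual 3737 behavior or code)

class LocalAckSpoofer:
    def __init__(self, send_to_peer):
        self.pending = deque()            # frames buffered for the long-haul link
        self.send_to_peer = send_to_peer  # box-to-box protocol send function

    def from_host_vtam(self, frame):
        """Buffer the frame and ACK immediately so the host VTAM window stays open."""
        self.pending.append(frame)
        return "ACK"

    def drain_to_peer(self):
        """Forward buffered frames to the remote box on its own protocol."""
        while self.pending:
            self.send_to_peer(self.pending.popleft())

# toy usage
sent = []
box = LocalAckSpoofer(sent.append)
for n in range(3):
    assert box.from_host_vtam("frame%d" % n) == "ACK"
box.drain_to_peer()
print(sent)    # ['frame0', 'frame1', 'frame2']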

some old archived 3737 email
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2018f.html#email880715
https://www.garlic.com/~lynn/2011g.html#email881005

loosely-coupled (mainframe for cluster, also some mention IMS hot-standby) posts
https://www.garlic.com/~lynn/submain.html#shareddata
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3081

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3081
Date: 13 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081
https://www.garlic.com/~lynn/2025b.html#80 IBM 3081
https://www.garlic.com/~lynn/2025b.html#81 IBM 3081

3880 trivia: after transferring from the science center to San Jose Research in the 2nd half of the 70s, I got to wander around IBM (& non-IBM) datacenters in silicon valley, including disk engineering (bldg14) and product test (bldg15) across the street. They were running 7x24, prescheduled, stand-alone testing and mentioned that they had recently tried MVS but it had 15min MTBF (in that environment). I offer to rewrite the I/O supervisor to make it bullet proof and never fail so they could do any amount of on-demand, concurrent testing ... greatly improving productivity. Bldg15 then gets the 1st engineering 3033 outside POK processor engineering (the guys I worked with on 16-cpu; trivia: once 3033 was out the door, they start on trout/3090). Product testing was only taking a percent or two of CPU, so we scrounge up a 3830 controller and a 3330 string for our own private online service (and run 3270 coax under the street to my office).

One of the problems was that they got in the habit of calling me whenever they had a problem. The 3880 controller ... while it could do a 3mbyte/sec data-streaming channel, it had a really slow vertical mcode processor and lots more channel busy doing anything other than strict data transfer. We then get an engineering 4341 and with some tweaking can do 3880/3380 3mbyte/sec data-streaming testing (the 303x channel director was an old 158 engine with just the integrated channel mcode and no 370 mcode).

Initially trout/3090 configured the number of channels for target throughput assuming the 3880 was the same as the (fast) 3830 controller (but with 3mbyte/sec data streaming) ... when they find out how bad it was, they realized that they would need to significantly increase the number of channels to achieve target throughput (which required an extra TCM; they joked that they would bill the 3880 group for the increase in 3090 manufacturing cost). Note marketing eventually respins the increase in channels.

trout/3090 trivia: originally was going to have a 4331 running highly modified VM370R6 for 3092 service processor (with all the screens done in CMS IOS3270) ... they then upgrade it to a pair of 4361s (w/VM370R6), each requiring 3370 FBA.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
playing disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe System Meter

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe System Meter
Date: 14 Apr, 2025
Blog: Facebook
Charges were based on the system meter that ran whenever the CPU and/or any channel was busy. In the 60s there were two commercial online CP67 spin-offs of the Cambridge Scientific Center (which had a hardware-modified 360/40 with virtual memory and developed CP40; it morphs into CP67 when the 360/67 standard with virtual memory became available, precursor to VM370 after the decision to add virtual memory to all 370s).

CSC and the commercial spin-offs did a lot of work on 7x24 availability, including dark-room operation with no humans around, and special terminal channel programs that allowed channels to go idle ... but instant on whenever characters were arriving.

Note all CPUs and channels had to be idle for at least 400ms before the system meter would stop. Long after IBM had switched from rent/lease to sales (and charges were no longer based on the system meter), MVT/VS2/MVS still had a 400ms system timer event that guaranteed the system meter never stopped.
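
The arithmetic behind that (a toy check using the 400ms figures from the text; the 1ms timer-service cost is an assumption): the meter needs at least 400ms of continuous all-idle time to stop, and a recurring 400ms timer event, which itself burns a little CPU, keeps every idle gap just under the threshold.

# the meter stops only after at least 400ms of continuous all-idle time;
# a recurring 400ms timer event (which itself burns a little CPU) keeps
# every idle gap just under the threshold, so the meter never stops
METER_STOP_IDLE_MS = 400
TIMER_INTERVAL_MS = 400
TIMER_SERVICE_MS = 1        # assumed: even a trivial timer pop uses some CPU

longest_idle_gap = TIMER_INTERVAL_MS - TIMER_SERVICE_MS    # 399ms
print(longest_idle_gap >= METER_STOP_IDLE_MS)              # False -> meter keeps running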

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

past posts mentioning system meter
https://www.garlic.com/~lynn/2024g.html#59 Cloud Megadatacenters
https://www.garlic.com/~lynn/2024g.html#55 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024g.html#23 IBM Move From Leased To Sales
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#61 IBM Mainframe System Meter
https://www.garlic.com/~lynn/2024c.html#116 IBM Mainframe System Meter
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2023d.html#78 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022g.html#93 No, I will not pay the bill
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#115 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#23 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#2 IBM Games
https://www.garlic.com/~lynn/2022d.html#108 System Dumps & 7x24 operation
https://www.garlic.com/~lynn/2021i.html#94 bootstrap, was What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021b.html#3 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2019d.html#19 Moonshot - IBM 360 ?
https://www.garlic.com/~lynn/2019b.html#66 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#111 Online Timsharing
https://www.garlic.com/~lynn/2017i.html#65 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017.html#21 History of Mainframe Cloud
https://www.garlic.com/~lynn/2016h.html#47 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016b.html#86 Cloud Computing
https://www.garlic.com/~lynn/2016b.html#17 IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?
https://www.garlic.com/~lynn/2015c.html#103 auto-reboot
https://www.garlic.com/~lynn/2015b.html#18 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2014m.html#113 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2014l.html#56 This Chart From IBM Explains Why Cloud Computing Is Such A Game-Changer
https://www.garlic.com/~lynn/2014g.html#85 Costs of core
https://www.garlic.com/~lynn/2014e.html#8 The IBM Strategy
https://www.garlic.com/~lynn/2014e.html#4 Can the mainframe remain relevant in the cloud and mobile era?
https://www.garlic.com/~lynn/2012l.html#47 I.B.M. Mainframe Evolves to Serve the Digital World

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3081

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3081
Date: 14 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081
https://www.garlic.com/~lynn/2025b.html#80 IBM 3081
https://www.garlic.com/~lynn/2025b.html#81 IBM 3081
https://www.garlic.com/~lynn/2025b.html#82 IBM 3081

3033 story: also in the wake of the FS implosion, the head of POK manages to convince corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott manages to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch). IBM was under a mandate that orders had to be shipped in the sequence that they were received, and the 1st order was for VM370 (story is that while the VM370 3033 left the shipping dock first, the route was fiddled so an MVS 3033 order arrived and was installed first; pubs were carefully written that an MVS 3033 was the first installed)

Besides the 16-CPU work, Endicott also cons me into helping with the 138/148 ECPS microcode assist (and then accompanying them around the world presenting the business case to US&WTC planners/forecasters). Endicott had also tried to get permission to ship VM370 pre-installed on every 138/148 shipped. However, with POK in the process of getting corporate to kill the VM370 product, that was vetoed.

some posts mentioning POK killing vm370, 3033, & ecps
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#112 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#81 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#37 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers

--
virtualization experience starting Jan1968, online at home since Mar1970

An Ars Technica history of the Internet, part 1

From: Lynn Wheeler <lynn@garlic.com>
Subject: An Ars Technica history of the Internet, part 1.
Date: 15 Apr, 2025
Blog: Facebook
An Ars Technica history of the Internet, part 1. In our new 3-part series, we remember the people and ideas that made the Internet.
https://arstechnica.com/gadgets/2025/04/a-history-of-the-internet-part-1-an-arpa-dream-takes-form/

Picture looks like a Selectric typewriter, not a 2741
https://en.wikipedia.org/wiki/IBM_2741
https://www.columbia.edu/cu/computinghistory/2741.html
The URL ref to the picture at ibm.com seems to have gone away; here it is at the wayback machine:
https://web.archive.org/web/20121003021322/http://www-03.ibm.com/ibm/history/ibm100/images/icp/Z491903Y91074L07/us__en_us__ibm100__selectric__selectric_2__900x746.jpg

trivia: as an undergraduate, I was hired fulltime responsible for IBM OS/360 and had a 2741 in my office (60s), then also got one at home when I graduated and joined the IBM Cambridge Science Center. Note some of the MIT CTSS/7094 people went to the 5th flr for multics, others went to IBM CSC on the 4th flr and did virtual machines. A co-worker at the science center was responsible for the CP67-based science center wide-area network ... that morphs into the corporate internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s). At the time of the 1jan1983 conversion to internetworking, ARPANET had approx 100 IMPs and 255 hosts, while the internal network was about to pass 1000 (spread all over the world, although corporate required all links be encrypted and there were periodic battles with various gov. agencies, especially when links crossed international boundaries).

The univ got a 360/67 replacing a 709/1401 for TSS/360, but ran it as a 360/65 with OS/360. Then CSC came out to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly got to play with it during my dedicated weekend window (48hrs w/o sleep made monday classes hard). CP67 came with 1052 and 2741 terminal support (with automagic terminal-type identification, switching the terminal-type port scanner with the SAD CCW). The univ. had some TTY terminals and I add TTY/ASCII support integrated with the automagic terminal support (trivia: when the terminal controller TTY support arrived, it came in a Heathkit box). I then wanted to have a single dial-in phone number ("hunt group") for all terminal types ... didn't quite work, since each port's terminal type could be switched but the port line speed had been hard-wired. The univ. then starts a clone controller project: an IBM channel interface board for an Interdata/3 programmed to emulate an IBM controller with the addition of line autobaud support. This is later upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces (and four of us are written up for some part of the IBM clone controller business).
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

CSC had picked up my TTY support and distributed it with CP67. Van Vleck has story about CP67 installed at MIT USL (in bldg across tech sq quad)
http://www.multicians.org/thvv/360-67.html
I had done a hack with one-byte line lengths. Some user down at Harvard got some ASCII device that wasn't a TTY, and Tom patched CP67 to increase the ASCII max line length to 1200, which resulted in CP67 crashing 27 times in a single day.

Before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with consolidating all dataprocessing into an independent business unit. I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field (for payroll), although they enlarge the machine room for a 360/67 for me to play with (when I wasn't doing other stuff).

After I joined CSC, one of my hobbies was enhanced production operating systems for internal datacenters ... and the online sales&marketing support HONE was one of the first (and long time) customers (as HONE clones started sprouting up all over the world). Then with the IBM decision to add virtual memory to all 370s, part of the science center splits off and takes over the IBM Boston Programming Center on the 3rd flr and morphs CP67 into VM370 (although simplifying and/or dropping lots of stuff). I then start moving CP67 stuff into the VM370R2-base for my internal CSC/VM ... then upgraded to the VM370R3-base, including adding multiprocessor support back in, originally for US HONE so they could add a 2nd processor to each system.

Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Packet network dean to retire

From: Lynn Wheeler <lynn@garlic.com>
Subject: Packet network dean to retire
Date: 17 Apr, 2025
Blog: Facebook
Packet network dean to retire
https://www.facebook.com/groups/internetoldfarts/posts/1218380263142523/
https://www.facebook.com/groups/internetoldfarts/posts/1218380263142523/?comment_id=1218594379787778
Citation: Computerworld, 22 August 1988,
https://archive.org/details/computerworld2234unse/page/47/mode/1up

Early 80s, got the HSDT project ... T1 and faster computer links (terrestrial and satellite) and battles with the IBM communication group (60s, IBM had the 2701 telecommunication controller that supported T1 links ... the 70s transition to SNA/VTAM w/associated issues capped controller links at 56kbits/sec). HSDT's 1st long haul T1 link was between the Los Gatos lab on the west coast and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston (on the east coast) that had a boat load of Floating Point System boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems

Was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released, in part based on the T1 links we already had running. NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). By 1988, NSFnet was operational and as regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

I had been ridiculing their "T1 link implementation" ... they had 440kbit/sec links and, possibly to look like T1, they had T1 trunks with telco multiplexors running multiple 440kbit/sec links. Then came the call for the T3 upgrade. For the proposal, possibly figuring to cut my ridicule, I was asked to be the red team and people from a few labs around the world were the blue team. At the final review, I presented 1st and then the blue team. Within 10mins, the executive running the show pounded on the table and said he would lay down in front of a garbage truck before he let anything but the blue team proposal go forward.

I had a workstation in a (non-IBM) booth at Interop '88, at immediate right angles to the SUN booth where Case was demo'ing SNMP, and conned him into installing it on my workstation. This was at a time when the government was mandating the replacement of Internet/TCPIP with OSI ... and there were several OSI booths at Interop '88.

I was on Chesson's XTP TAB and there were a number of gov/mil operations involved, so we took it as "HSP" to (ISO chartered) ANSI X3S3.3 (for layer 4&3 protocols). We eventually got told that ISO had a requirement that it only do standards that conformed to OSI. XTP/HSP didn't conform because 1) it supported internetworking (which didn't exist in OSI), 2) it bypassed the layer 4/3 interface, and 3) it went directly to the LAN MAC interface (which didn't exist in OSI, sitting somewhere in the middle of layer 3). There was a joke that while IETF required at least two interoperable implementations to progress in standards, ISO didn't even require a standard be implementable.

trivia: co-worker at the science center was responsible for the CP67-based scientific center wide-area network which morphs into the internal corporate network (larger than ARPANET/Internet from just about the beginning until sometime mid/late 80s, about the time it was forced to convert to SNA/VTAM) ... technology also used for the corporate sponsored univ BITNET. At the time of the 1Jan1983 internetworking cut-over, there were 100 IMPs and 255 hosts ... while internal network was about to pass 1000 nodes (all over the world, one of the limiting factors was corporate required all links be encrypted and there was periodic gov resistance especially when links cross national boundaries). Edson (passed Aug2020)
https://en.wikipedia.org/wiki/Edson_Hendricks
Some of Ed's history of failing to get it converted to TCP/IP:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Late 80s, Congress got on a program to improve US competitiveness and push gov operations to commercialize advanced technology. Part of it became the NII, national information infrastructure & testbed
https://en.m.wikipedia.org/wiki/National_Information_Infrastructure
The National Information Infrastructure (NII) was the product of the High Performance Computing Act of 1991. It was a telecommunications policy buzzword, which was popularized during the Clinton Administration under the leadership of Vice-President Al Gore.[1]
... snip ...

Was attending NII meetings at LLNL. The feds wanted testbed participants to do it on their own nickel ... got some of it back when Singapore invited all the US participants to be part of the fully funded Singapore NII testbed.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Interop '88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Packet network dean to retire

From: Lynn Wheeler <lynn@garlic.com>
Subject: Packet network dean to retire
Date: 17 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#86 Packet network dean to retire

we transfer from science center on east coast to San Jose Research on west coast; SJR had first (IBM) gateway to CSNET, fall '82
https://www.garlic.com/~lynn/2021f.html#email821022
CSNET advisory about cut-over to internetworking
https://www.garlic.com/~lynn/2021f.html#email821230
CSNET comment about cut-over
https://www.garlic.com/~lynn/2021f.html#email830202

Later BITNET and CSNET merge
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/CSNET
as CREN
https://en.wikipedia.org/wiki/Corporation_for_Research_and_Educational_Networking

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet (& earn) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Technology Competitiveness

From: Lynn Wheeler <lynn@garlic.com>
Subject: Technology Competitiveness
Date: 18 Apr, 2025
Blog: Facebook
In the 70s, Japanese cars were increasingly being bought in the US. Congress passed import quotas, which greatly increased US makers' sales & profits; the profits were supposed to have been used to remake US industry to be more competitive, but instead they were just pocketed. In the early 80s, articles in a WashDC newspaper called for a 100% unearned-profit tax on US car makers. The Japanese car makers also determined that under the quotas they could sell as many high-end market cars as inexpensive entry automobiles ... further reducing downward price pressure on US car makers. The US problem was that without an increase in worker wages, the avg US car price required purchase loans going from 36 months to 60 to 72 ... however lending institutions required warranties to be extended for the life of the loan ... losses on the extended warranties forced US auto makers to improve manufacturing quality.

In 1990, US auto makers had the "C4" taskforce to finally look at remaking the US industry and, since they were planning on heavily leveraging IT technology, they invited representatives from IT vendors and I was invited to be one of the representatives. One of the major issues they highlighted was that the auto industry was taking 7-8yrs to come out with new models, usually running two new design efforts in parallel, offset 40-48months, to make it look more frequent. Japanese makers, in the shift from entry market products to the high-end market, had cut their new design effort in half to 40months, and by 1990 were in the process of cutting it in half again (allowing them to more quickly adapt to new technology and changing customer preferences). During that period, US auto makers also spun off their parts operations ... which resulted in the parts business no longer being kept consistent with car design, finding that an 8yr-old design had to be redone for currently available parts. However, as was seen in the TARP funding, US makers' status quo still hadn't changed after the turn of the century.

In early 80s, I got HSDT project, T1 and faster computer links, both terrestrial and satellite, and started having custom designs made in Japan. During visits, they liked to show off their latest technology. Part of US electronics was still high-markup commercial/industry items while Japan was rapidly expanding its consumer electronics business. One of the things I noticed was I couldn't get any surface mount technology while it was coming to dominate the Japan market. I also found the electronics in a $300 CDROM player had better technology (optical drivers, forward error correction, etc) than $6,000 (mainframe) computer modems. I started pontificating that the volumes from consumer electronics allowed them to invest more in up-front design (recouped with volumes from the consumer market), which didn't exist in the commercial/industry market. Also 1990, Dept of Commerce was pushing hard for US makers to move into the HDTV market ... as part of driving new innovation and competitiveness. Congress was also amending laws and pushing gov. agencies to commercialize advanced technology (as part of making US more competitive) ... part of (non-commercial) NSFNET morphing into (commercial) Internet.

auto c4 taskforce posts
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

misc. posts mentioning council on competitiveness
https://www.garlic.com/~lynn/2024c.html#89 Inventing The Internet
https://www.garlic.com/~lynn/2023g.html#37 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2016b.html#102 Qbasic
https://www.garlic.com/~lynn/2007t.html#15 Newsweek article--baby boomers and computers

misc. posts mentioning Dept of Commerce and HDTV:
https://www.garlic.com/~lynn/2019e.html#140 Color TV ad 1964
https://www.garlic.com/~lynn/2018f.html#49 PC Personal Computing Market
https://www.garlic.com/~lynn/2017d.html#46 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016g.html#41 President Obama announces semiconductor industry working group to review U.S. competitiveness
https://www.garlic.com/~lynn/2012k.html#87 Cultural attitudes towards failure
https://www.garlic.com/~lynn/2012i.html#69 Is there a connection between your strategic and tactical assertions?
https://www.garlic.com/~lynn/2010o.html#22 60 Minutes News Report:Unemployed for over 99 weeks!
https://www.garlic.com/~lynn/2009p.html#59 MasPar compiler and simulator
https://www.garlic.com/~lynn/2008n.html#4 Michigan industry
https://www.garlic.com/~lynn/2008.html#47 dig. TV
https://www.garlic.com/~lynn/2007q.html#50 US or China?
https://www.garlic.com/~lynn/2007d.html#50 Is computer history taugh now?
https://www.garlic.com/~lynn/2006s.html#63 Microsoft to design its own CPUs - Next Xbox In Development
https://www.garlic.com/~lynn/2006q.html#62 Cray-1 Anniversary Event - September 21st
https://www.garlic.com/~lynn/2006.html#45 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005k.html#25 The 8008
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001.html#73 how old are you guys

--
virtualization experience starting Jan1968, online at home since Mar1970

Packet network dean to retire

From: Lynn Wheeler <lynn@garlic.com>
Subject: Packet network dean to retire
Date: 18 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#86 Packet network dean to retire
https://www.garlic.com/~lynn/2025b.html#87 Packet network dean to retire

In the mid-80s, the IBM communication group was fighting off client/server and distributed computing and trying to block the release of mainframe TCP/IP support. When they lost the battle to block mainframe TCP/IP, they changed their tactic and said that because they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then do RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AdStar

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AdStar
Date: 19 Apr, 2025
Blog: Facebook
IBM AdStar
https://en.wikipedia.org/wiki/AdStar
In 1992 IBM combined their Storage Products businesses comprising eleven sites in eight countries into this division.[1] On its creation, AdStar became the largest information storage business in the world. It had a revenue of $6.11 billion, of which $500 million were sales to other manufacturers (OEM sales), and generated a gross profit of about $440 million (before taxes and restructuring).[2]
... snip ...

Late 80s, a senior disk engineer got a talk scheduled at the internal, annual, world-wide communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing data fleeing datacenters to more distributed-computing friendly platforms, with a drop in disk sales. Several solutions had been developed but were constantly vetoed by the communication group with their corporate strategic ownership of everything that crossed datacenter walls. The disk division software VP's partial countermeasure was investing in distributed computing startups that would use IBM disks ... and he would periodically ask us to stop by his investments to see if we could offer any help. One of his investments was the NCAR supercomputer filesystem that had been spun off as "Mesa Archival" in Boulder.

... it wasn't just datacenter disks, and in 1992, IBM had one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company (take-off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

posts getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

re: "13 baby BLUEs" (like true BLUE customers); I think it was take-off on the AT&T "baby bell" breakup a decade earlier. One of the issues for the breakup were supplier contracts with a specific business unit, that were in use by multiple other business units, after the breakup each company would need its own supplier contracts. One of the 1st things were identify all these supplier contracts.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AdStar

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AdStar
Date: 19 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#90 IBM AdStar

after transferring from science center to san jose research, I got to wander around silicon valley IBM (& non-IBM) datacenters, including disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, pre-scheduled, stand-alone mainframe testing ... mentioning that they had recently tried MVS but it had 15min MTBF (in that environment). I offer to rewrite I/O supervisor, making it bullet proof and never fail, allowing any amount of on-demand concurrent testing, greatly improving productivity.

STL was then bursting at the seams and transferring 300 people (& 3270s) from the IMS group to an offsite bldg. They had tried "remote 3270s" but found the human factors totally unacceptable. They con me into doing channel-extender support so they can place channel-attached 3270 controllers in the offsite bldg, with no perceptible difference in human factors. A side-effect was that those mainframe systems' throughput increased 10-15%. STL was configuring 3270 controllers across all channels shared with DASD controllers. The channel-extender hardware had significantly lower channel busy (for the same amount of 3270 activity) than directly channel-attached 3270 controllers, resulting in increased system (DASD) throughput. There was then some discussion about placing all 3270 controllers on channel-extenders (even for controllers inside STL). Then there is an attempt by the hardware vendor to get IBM to release my support, however there is a group in POK trying to get some serial stuff released and they were concerned that if my stuff was in the field, it would make it harder to release the POK stuff (and the request is vetoed)

... more Adstar ... also I implemented CMSBACK for internal datacenters ... a decade later it was used as the base for the WDSF product with the addition of workstation and PC clients ... which then morphs into ADSM (ADSTAR Distributed Storage Manager) and is rebranded TSM in 1999
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager
https://www.ibm.com/support/pages/ibm-introduces-ibm-spectrum-protect-plus-vmware-and-hyper-v

For CMSBACK, I had modified CMS VMFPLC into VMXPLC, increasing the maximum tape record size and appending the File Status information to the first tape record of each file (instead of writing it as a separate tape record), reducing inter-record gaps (which in some cases could represent more than half of a tape). Early 70s, I had done a CMS page-mapped filesystem (that frequently would have three times the throughput of the standard CMS filesystem), so I carefully aligned VMXPLC buffers on page boundaries.
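
To get a feel for why folding the file-status data into the first tape record (and using bigger tape records) mattered: every physical tape record carries a fixed inter-record gap, so a tiny separate header record costs nearly a full gap of tape per file. A rough back-of-envelope sketch, where the gap size, density and block sizes are assumed round numbers (not measurements of VMFPLC/VMXPLC):

  # Rough tape-usage sketch of the two VMXPLC changes: bigger tape records
  # and folding the file-status info into the first data record.
  # Gap size, density and block sizes are assumed/illustrative numbers.
  GAP_IN = 0.6      # assumed inter-record gap, inches
  BPI    = 1600     # assumed recording density, bytes per inch

  def record_inches(nbytes):
      # one physical record = data plus one inter-record gap
      return nbytes / BPI + GAP_IN

  def file_inches(data_bytes, blk, separate_fst):
      nrecs = -(-data_bytes // blk)          # ceiling division
      inches = nrecs * record_inches(blk)
      if separate_fst:
          inches += record_inches(80)        # small separate file-status record
      return inches

  old = file_inches(4000, 800,  separate_fst=True)   # small records + FST record
  new = file_inches(4000, 4096, separate_fst=False)  # one large record, FST folded in
  print(f"old {old:.2f}in  new {new:.2f}in  saved {100*(old-new)/old:.0f}%")

With these assumed numbers, the gaps account for well over half of the tape consumed by the old layout for a small file, consistent with the observation above.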

1988:

1) Communication group was trying to block release of mainframe TCP/IP support. When that failed, they switched strategy and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then do support for RFC1044 and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times increase in bytes moved per instruction executed).

2) Branch office asks me if I can help LLNL standardize some serial stuff they were working with, which quickly becomes the fibre channel standard ("FCS", including some stuff I was doing in 1980), initially 1gbit transfer, full-duplex, aggregate 200mbyte/sec. Later the POK group finally gets their stuff released (after more than a decade) with ES/9000 as ESCON (when it was already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy-weight protocol that significantly reduces throughput, which is eventually released as FICON. The latest public benchmark I've found is z196 "Peak I/O" that got 2M IOPS using 104 FICON. At the same time there was an FCS for E5-2600 server blades that claimed over a million IOPS (two such FCS having higher throughput than 104 FICON). Also note IBM recommends that the SAPs (system assist processors that do actual I/O) be kept to 70% CPU (which would be more like 1.5M IOPS).

3) Nick Donofrio approves HA/6000, originally for NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix, which had VAXCluster support in the same source base with Unix; I do a distributed lock manager with VAXCluster semantics to ease the ports; IBM Toronto was still a long way from having a simple relational for PS2). Then the S/88 product administrator starts taking us around to their customers and gets me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/as400 and POK/mainframe complain they can't meet the objectives). Work was also underway to port the LLNL supercomputer filesystem (LINCS) to HA/CMP and working with the NCAR spinoff (Mesa Archival) to platform on HA/CMP.
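
For reference, the "VAXCluster semantics" piece amounts in part to the classic six lock modes and compatibility rules the RDBMS vendors were already coding to; a minimal sketch of the mode-compatibility check (illustrative only, not the HA/CMP distributed lock manager code):

  # Classic VAXCluster/VMS-style DLM lock modes and compatibility check
  # (illustrative sketch, not the actual HA/CMP distributed lock manager).
  COMPAT = {   # granted mode -> requested modes that can coexist with it
      "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
      "CR": {"NL", "CR", "CW", "PR", "PW"},
      "CW": {"NL", "CR", "CW"},
      "PR": {"NL", "CR", "PR"},
      "PW": {"NL", "CR"},
      "EX": {"NL"},
  }

  def grantable(requested, currently_granted):
      """Can 'requested' be granted alongside the already-granted modes?"""
      return all(requested in COMPAT[g] for g in currently_granted)

  print(grantable("PR", {"CR", "PR"}))   # True  - shared readers coexist
  print(grantable("EX", {"CR"}))         # False - exclusive has to wait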

trivia: NCAR's filesystem was more like Network Attached Storage ... implemented with 50mbit/sec LANs (each interface could be connected to up to four 50mbit/sec LANs). The supercomputer would send a request to the controller for disk I/O, the controller would set up the I/O operation and return a handle to the requester ... which would then invoke the "handle" (doing "3rd party" I/O directly between disk and supercomputer). The Mesa Archival port to HA/CMP was to support both HIPPI (& related switch capable of 3rd party transfer)
https://en.wikipedia.org/wiki/HIPPI
and similar FCS configuration
https://en.wikipedia.org/wiki/Fibre_Channel
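
A minimal sketch of that request/handle/third-party-transfer flow (the names and structure are my own, for illustration; not the NCAR/Mesa Archival code):

  # Sketch of "3rd party" I/O: the host asks the storage controller to stage
  # an operation, gets back a handle, then the data moves directly between
  # disk and host over the fast path (HIPPI/FCS switch); the controller is
  # only in the control path. All names are hypothetical.
  import uuid

  class StorageController:
      def __init__(self):
          self.pending = {}                # handle -> prepared transfer descriptor

      def request_io(self, file_id, offset, length):
          handle = str(uuid.uuid4())       # set up the disk side, hand back a handle
          self.pending[handle] = ("extent-for", file_id, offset, length)
          return handle

  class FastPath:                          # stands in for the HIPPI/FCS switch
      def transfer(self, descriptor):
          return b"...data moved directly disk<->supercomputer..."

  class Supercomputer:
      def __init__(self, fast_path):
          self.fast_path = fast_path

      def read(self, controller, file_id, offset, length):
          handle = controller.request_io(file_id, offset, length)   # control path
          descriptor = controller.pending.pop(handle)               # invoke the handle
          return self.fast_path.transfer(descriptor)                # data path

  host = Supercomputer(FastPath())
  print(host.read(StorageController(), file_id=42, offset=0, length=4096))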

getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
mainframe TCP/IP and RFC1044
https://www.garlic.com/~lynn/subnetwork.html#1044
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
some archive CMSBACK email
https://www.garlic.com/~lynn/lhwemail.html#cmsback

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AdStar

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AdStar
Date: 19 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#90 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar

Early Jan92, there was a meeting with the Oracle CEO where IBM/AWD executive Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Mid-Jan92, I update FSD on the HA/CMP work with national labs and FSD decides to go with HA/CMP for federal supercomputers. Old email from the FSD president's TA:
Date: Wed, 29 Jan 92 18:05:00
To: wheeler

MEDUSA uber alles...I just got back from IBM Kingston. Please keep me personally updated on both MEDUSA and the view of ENVOY which you have. Your visit to FSD was part of the swing factor...be sure to tell the redhead that I said so. FSD will do its best to solidify the MEDUSA plan in AWD...any advice there?

Regards to both Wheelers...

... snip ... top of post, old email index

Within a day or two, we are told that cluster scale-up is being transferred to Kingston for announce as IBM Supercomputer (technical/scientific *ONLY*) and we aren't allowed to work with anything that has more than four systems (we leave IBM a few months later). Then 17feb1992, Computerworld news ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

other articles
https://www.garlic.com/~lynn/2001n.html#6000clusters1
https://www.garlic.com/~lynn/2001n.html#6000clusters2
https://www.garlic.com/~lynn/2001n.html#6000clusters3

other trivia: over the years there is periodically lots of overlap/similarity between cloud cluster technology and supercomputer cluster technology

Note: the following are industry MIPS benchmarks based on counts of program iterations compared to an industry reference platform (not actual counts of instructions executed); a small sketch after these comparisons shows the calculation. 1993:
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS; 16-system: 2BIPS; 128-system: 16BIPS

During 90s, AIM/Somerset does single-chip 801/RISC with Motorola 88k bus enabling multiprocessor configurations and i86 implements pipelined, on-the-fly, hardware translation of i86 instructions to RISC micro-ops for execution, largely negating RISC throughput advantage.
• single IBM PowerPC 440 hits 1,000MIPS
• single Pentium3 hits 2,054MIPS (twice PowerPC 440)

by comparison, mainframe Dec2000
• z900: 16 processors, 2.5BIPS (156MIPS/CPU)

then 2010, mainframe (z196) versus i86 (E5-2600 server blade)
• E5-2600, two XEON 8core chips, 500BIPS (30BIPS/CPU)
• z196, 80 processors, 50BIPS (625MIPS/CPU)
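
A small sketch of the "industry MIPS" calculation referenced above: rate a machine by its benchmark iteration rate relative to a reference platform with an agreed rating, then scale cluster aggregates by the number of systems (the reference rating and iteration rate are made-up illustrative values; the per-system and cluster numbers are from the 1993 comparison above):

  # "Industry MIPS": benchmark iterations relative to a reference platform,
  # not an actual count of instructions executed.
  REFERENCE_RATED_MIPS    = 1.0      # assumed rating of the reference machine
  REFERENCE_ITERS_PER_SEC = 100.0    # assumed iteration rate on the reference

  def industry_mips(iters_per_sec):
      return REFERENCE_RATED_MIPS * iters_per_sec / REFERENCE_ITERS_PER_SEC

  print(industry_mips(12_600.0))     # a machine doing 126x the reference -> 126 "MIPS"

  # cluster aggregates from the 1993 RS6000/990 numbers above
  per_system = 126
  print(per_system * 16 / 1000, "BIPS for a 16-system cluster")    # ~2 BIPS
  print(per_system * 128 / 1000, "BIPS for a 128-system cluster")  # ~16 BIPS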

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
some archive MEDUSA email
https://www.garlic.com/~lynn/lhwemail.html#medusa

some recent posts mentioning aim/somerset
https://www.garlic.com/~lynn/2025b.html#78 IBM Downturn
https://www.garlic.com/~lynn/2025b.html#41 AIM, Apple, IBM, Motorola
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025b.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025.html#119 Consumer and Commercial Computers
https://www.garlic.com/~lynn/2025.html#86 Big Iron Throughput
https://www.garlic.com/~lynn/2025.html#74 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#37 IBM Mainframe
https://www.garlic.com/~lynn/2025.html#21 Virtual Machine History
https://www.garlic.com/~lynn/2024g.html#104 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#82 IBM S/38
https://www.garlic.com/~lynn/2024g.html#76 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#43 Apollo Computer
https://www.garlic.com/~lynn/2024f.html#121 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#36 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#25 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024f.html#21 Byte ordering
https://www.garlic.com/~lynn/2024f.html#18 The joy of RISC
https://www.garlic.com/~lynn/2024e.html#130 Scalable Computing
https://www.garlic.com/~lynn/2024e.html#121 IBM PC/RT AIX
https://www.garlic.com/~lynn/2024e.html#105 IBM 801/RISC
https://www.garlic.com/~lynn/2024e.html#62 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#40 Instruction Tracing
https://www.garlic.com/~lynn/2024d.html#95 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#14 801/RISC
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS
https://www.garlic.com/~lynn/2024c.html#1 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#93 PC370
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#57 Vintage RISC
https://www.garlic.com/~lynn/2024b.html#55 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#21 HA/CMP
https://www.garlic.com/~lynn/2024.html#98 Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
https://www.garlic.com/~lynn/2024.html#85 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AdStar

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AdStar
Date: 20 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#90 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#92 IBM AdStar

In late 70s and early 80s, I was blamed for online computer conferencing (precursor to online social media) on the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s, about the same time it was forced to convert to sna/vtam instead of tcp/ip). It really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem. There were only 300 or so that directly participated, but claims were that 25,000 were reading (and folklore is that when the corporate executive committee was told, 5of6 wanted to fire me). Some additional in this post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

I was then told that they would no longer be able to make me an IBM Fellow (with most of the corporate executive committee wanting to fire me), but if I were to keep a low profile, they could funnel funding to me (as if I was one), apparently from the 6th member ... and I got the HSDT project; T1 and faster computer links, both terrestrial and satellite ... and battles with the communication group (note in the 60s, IBM had the 2701 telecommunication controller that supported T1, but in the transition to SNA/VTAM in the 70s, issues capped controllers at 56kbit/sec). Part of the funding was being able to show some IBM content (since otherwise, everything was going to be non-IBM). I eventually found the FSD special bid, Series/1 T1 Zirpel cards (for gov. customers with failing 2701s). I went to order a half dozen Series/1s but found there was a year's backlog for orders. This was right after the ROLM purchase and ROLM had made a really large Series/1 order (they were otherwise a Data General shop, and trying to show their new owners they were part of the organization). I found that I knew the ROLM datacenter manager ... who had left IBM some years before. They offered to transfer me some Series/1s if I would help them with some of their problems (one was that they were using 56kbit/sec links to load-test switch software, which was taking 24hrs, and they would really like to upgrade to T1, radically reducing their testing cycle time).

Trivia: late 90s (after having left IBM), I was doing a security transaction chip and dealing with the chip group at Siemens (which had bought ROLM from IBM) and had offices temporarily on the ROLM campus (it would shortly be spun off as Infineon and move into a new bldg complex at the intersection of 101 & 1st; the guy I was dealing with became president of Infineon and rang the bell at NYSE as part of the spinoff). The chip was being fab'ed at a new secure chip fab in Dresden (that had already been certified by both German and US security agencies, but they insisted I also do a secure-audit walk-through). I was on the X9 financial industry standards organization and co-author of some standards, and the chip was part of doing secure financial transactions. Then the TD to the Information Assurance DDI at a US security agency was running a panel in the trusted computing track at Intel IDF and asked me to give a talk on the chip ... the session description has gone 404, but lives on at the wayback machine
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance
secure transaction standard posts
https://www.garlic.com/~lynn/subpubkey.html#privacy
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AdStar

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AdStar
Date: 21 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#90 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#92 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#93 IBM AdStar

Joke at the IDF talk: the guy responsible for TPM was sitting in the front row, so I commented that it was nice to see the TPM chip starting to look a little more like mine ... he quipped back that I didn't have a 200-member committee trying to help me with the design.

secure transaction standard posts
https://www.garlic.com/~lynn/subpubkey.html#privacy

posts mentioning IDF security chip talk
https://www.garlic.com/~lynn/2022f.html#68 Security Chips and Chip Fabs
https://www.garlic.com/~lynn/2021j.html#41 IBM Confidential
https://www.garlic.com/~lynn/2021j.html#21 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021h.html#97 What Is a TPM, and Why Do I Need One for Windows 11?
https://www.garlic.com/~lynn/2021h.html#74 "Safe" Internet Payment Products
https://www.garlic.com/~lynn/2015b.html#72 Do we really?
https://www.garlic.com/~lynn/2014m.html#26 Whole Earth
https://www.garlic.com/~lynn/2014l.html#55 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2014g.html#41 Special characters for Passwords
https://www.garlic.com/~lynn/2013k.html#66 German infosec agency warns against Trusted Computing in Windows 8
https://www.garlic.com/~lynn/2013i.html#77 Insane Insider Threat Program in Context of Morally and Mentally Bankrupt US Intelligence System
https://www.garlic.com/~lynn/2012g.html#53 The secret's out for secure chip design
https://www.garlic.com/~lynn/2011p.html#48 Hello?
https://www.garlic.com/~lynn/2011c.html#59 RISCversus CISC
https://www.garlic.com/~lynn/2010p.html#72 Orientation - does group input (or groups of data) make better decisions than one person can?
https://www.garlic.com/~lynn/2010o.html#50 The Credit Card Criminals Are Getting Crafty
https://www.garlic.com/~lynn/2010d.html#38 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#34 "Unhackable" Infineon Chip Physically Cracked
https://www.garlic.com/~lynn/2010d.html#7 "Unhackable" Infineon Chip Physically Cracked - PCWorld
https://www.garlic.com/~lynn/2009p.html#59 MasPar compiler and simulator
https://www.garlic.com/~lynn/2009m.html#48 Hacker charges also an indictment on PCI, expert says
https://www.garlic.com/~lynn/2009k.html#5 Moving to the Net: Encrypted Execution for User Code on a Hosting Site
https://www.garlic.com/~lynn/2009j.html#58 Price Tag for End-to-End Encryption: $4.8 Billion, Mercator Says

--
virtualization experience starting Jan1968, online at home since Mar1970

MVT to VS2/SVS

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVT to VS2/SVS
Date: 21 Apr, 2025
Blog: Facebook
After the decision to add virtual memory to all 370s, some of us from the Cambridge Science Center would commute to POK for virtual memory meetings. A decade ago a customer asked me if I could track down that decision ... and I found a staff member to the executive making the decision; basically MVT storage management was so bad that region sizes had to be specified four times larger than used, and a typical 1mbyte 370/165 would only run four regions concurrently, insufficient to keep it busy and justified. Running it in 16mbyte virtual memory (sort of like in a 16mbyte CP67 virtual machine) allowed the number of regions to be increased by a factor of four (capped at 15, because the 4-bit storage protect key only gives 16 values, with key 0 reserved for the system) with little or no paging

I would periodically drop by Don, working off-shift on the VS2/SVS prototype (AOS) on a 360/67 ... a little bit of code to create a 16mbyte virtual memory table and some (little used) paging. The biggest problem/effort was EXCP/SVC0 handling channel programs: similar to CP67, it has to create a copy of the channel programs (which were built with virtual addresses), substituting real addresses for virtual ... and he borrows the CCWTRANS routine from CP67 (which does that real-for-virtual substitution) for crafting into EXCP.
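
A very rough sketch of what that channel-program translation involves (structure and names are mine, for illustration; the real CP67 CCWTRANS/EXCP code is 360 assembler and handles far more cases): walk the virtual-address CCW chain, fix each referenced page in storage, and build a shadow chain with real addresses.

  # Sketch of channel-program translation (the CCWTRANS idea): the channel
  # works with real addresses, the caller built its CCWs with virtual
  # addresses, so build a "shadow" copy with real addresses and keep the
  # referenced pages fixed for the duration of the I/O. Hypothetical
  # structures for illustration only.
  from dataclasses import dataclass

  @dataclass
  class CCW:
      command: int        # e.g. read/write/TIC
      address: int        # data address (virtual in the caller's chain)
      count: int

  PAGE = 4096

  def translate_chain(virtual_ccws, virt_to_real, fix_page):
      """Return a shadow CCW chain with real addresses, pinning each page."""
      shadow = []
      for ccw in virtual_ccws:
          page   = ccw.address & ~(PAGE - 1)
          offset = ccw.address &  (PAGE - 1)
          fix_page(page)                       # keep page resident during the I/O
          real = virt_to_real(page) + offset   # substitute real for virtual
          # (a transfer crossing a page boundary would have to be split into
          #  data-chained CCWs, one per page -- omitted here)
          shadow.append(CCW(ccw.command, real, ccw.count))
      return shadow

  fixed = set()
  shadow = translate_chain([CCW(0x02, 0x12345, 80)],
                           virt_to_real=lambda p: p + 0x100000,
                           fix_page=fixed.add)
  print(shadow, fixed)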

Trivia: as an undergraduate in the 60s, the Univ had hired me fulltime, responsible for OS/360. Then before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the creation of Boeing Computing Services (consolidate all dataprocessing into an independent business unit). Boeing Huntsville had a two-processor 360/67, originally for TSS/360 with lots of 2250s for CAD work ... however it ran as a 360/65 with OS/360; they had run into the MVT storage problem and had done simple mods to OS/360 MVT R13 to run in virtual address space mode as a partial countermeasure to the MVT storage problems (an early precursor to VS2/SVS)

Archived post with pieces of email exchange about decision to add virtual memory to all 370s (both of us mentioning Don)
https://www.garlic.com/~lynn/2011d.html#73

trivia: I got into a dustup with the POK Performance Group that had come up with a hack for virtual page replacement (for VS2) that compromised its performance ... to stop the arguing, they eventually said that SVS would hardly do any paging anyway ... so it wouldn't make any difference. Late 70s, well into MVS, they found that they were replacing high-used, shared R/O linkpack pages before low-used, private, application data pages ... and somebody got an award for fixing the implementation to be like it should have been originally. We started the joke that POK had a policy to do some things wrong initially so they could hand out more awards later.

Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
virtual memory, page replacement and paging posts
https://www.garlic.com/~lynn/subtopic.html#clock

posts mentioning 370 virtual memory, Ludlow, mvt prototype for VS2/SVS
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#95 IBM Mainframe Channels
https://www.garlic.com/~lynn/2024g.html#72 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#19 CSC Virtual Machine Work
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#4 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2022h.html#93 IBM 360
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022f.html#41 MVS
https://www.garlic.com/~lynn/2022f.html#7 Vintage Computing
https://www.garlic.com/~lynn/2022e.html#91 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#10 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#59 370 Virtual Memory
https://www.garlic.com/~lynn/2019b.html#53 S/360
https://www.garlic.com/~lynn/2013.html#22 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2012l.html#73 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2011o.html#92 Question regarding PSW correction after translation exceptions on old IBM hardware
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011.html#90 Two terrific writers .. are going to write a book
https://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2004e.html#40 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2002l.html#65 The problem with installable operating systems

--
virtualization experience starting Jan1968, online at home since Mar1970

OSI/GOSIP and TCP/IP

From: Lynn Wheeler <lynn@garlic.com>
Subject: OSI/GOSIP and TCP/IP
Date: 22 Apr, 2025
Blog: Facebook
Back in the days when the gov. had mandated the elimination of internet/tcpip, to be replaced by GOSIP (Interop88 had some number of OSI/GOSIP booths), I was on Chessin's XTP TAB and there were some gov/mil operations involved, so we took it, as HSP, to the ISO-chartered ANSI X3S3.3 (responsible for layer 3&4 standards). Eventually they said ISO required that standards work only be done on protocols that conformed to the OSI model and XTP/HSP didn't; 1) it supported internetworking (which doesn't exist in OSI), 2) bypassed the layer 4/3 interface, and 3) went direct to the LAN MAC interface (which doesn't exist in OSI, sitting somewhere in the middle of layer 3).

There was a joke that (internet) IETF required at least two interoperable implementations as part of standards progress, while ISO didn't even require a standard be implementable

OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."
... snip ...

Co-worker at the IBM Cambridge Scientific Center was responsible for the CP/67-based, scientific center wide-area network (pre-dating SNA/VTAM) that morphs into the corporate internal network (larger than Arpanet/internet from just about the beginning until sometime mid/late 80s, about the time that it was forced to convert to SNA/VTAM).
https://en.wikipedia.org/wiki/Edson_Hendricks
Some of Ed history failing to get it converted to TCP/IP
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Early 80s, I got the HSDT project, T1 and faster computer links (both terrestrial and satellite) as well as conflicts with the communication group (in the 60s, IBM had the 2701 telecommunication controller that supported T1, then the 70s IBM transition to SNA/VTAM and associated issues capped controllers at 56kbit/sec links). Part of HSDT funding was being able to show some IBM content (otherwise it would all be non-IBM, including custom-designed hardware being done in Japan). I eventually found the FSD special-bid Series/1 T1 Zirpel card for gov. customers with failing 2701 controllers. I was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers, then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid).

By 1988, NSFnet was operational and as regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet (late 80s, congress was amending some laws and encouraging national labs and gov. agencies to commercialize advanced technologies as part of making US more competitive; the NSF director departs gov. and joins the Council on Competitiveness, a lobbying group over on K-street(?), and we would periodically drop in).

Interop88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Open Networking with OSI

From: Lynn Wheeler <lynn@garlic.com>
Subject: Open Networking with OSI
Date: 23 Apr, 2025
Blog: Facebook
Open Networking with OSI

90s, I got brought in as consultant to a small client/server startup; two former Oracle people (that I had worked with on cluster scaleup) were there responsible for something called "commerce server" and they wanted to do payment transactions. The startup had also invented this technology called "SSL" they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for everything between webservers and payment networks. I then did a talk "Why Internet Wasn't Business Critical Dataprocessing" (based on the processes, software, diagnostics, documentation, etc, I did for e-commerce) that (Internet RFC editor) Postel sponsored at ISI/USC.

As e-commerce webservers started to ramp up, there was a 6-month period where large e-commerce servers were pegged at 100% CPU, 95% of it spent running the TCP session-close FINWAIT list. The client/server startup brought in a large Sequent DYNIX server ... DYNIX had fixed the problem a couple of years earlier. It took the other UNIX vendors six months to start shipping a fix.
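
The underlying problem was session-close (FIN-WAIT) connections being kept on a simple linear list that got walked over and over; with tens of thousands of short-lived HTTP connections, the walk dominated the CPU. A toy illustration of why the data structure mattered (not the actual vendor code):

  # Toy illustration: cost of a linear FIN-WAIT connection list vs a hashed
  # lookup when short-lived HTTP connections pile up. Illustrative only.
  import random, time

  def lookup_linear(conns, key):
      for c in conns:                  # O(n) walk on every lookup
          if c == key:
              return c

  def lookup_hashed(conns, key):
      return key if key in conns else None   # O(1) average

  N = 50_000                            # connections sitting in FIN-WAIT
  keys = list(range(N))
  linear, hashed = keys, set(keys)

  t0 = time.time()
  for _ in range(2_000):
      lookup_linear(linear, random.choice(keys))
  t1 = time.time()
  for _ in range(2_000):
      lookup_hashed(hashed, random.choice(keys))
  t2 = time.time()
  print(f"linear {t1 - t0:.2f}s   hashed {t2 - t1:.2f}s")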

Back in the days when the gov. had mandated the elimination of internet/tcpip, to be replaced by GOSIP (Interop88 had some number of OSI/GOSIP booths), I was on Chessin's XTP TAB and there were some gov/mil operations involved, so we took it, as HSP, to the ISO-chartered ANSI X3S3.3 (responsible for layer 3&4 standards). Eventually they said ISO required that standards work only be done on protocols that conformed to the OSI model and XTP/HSP didn't; 1) it supported internetworking (which doesn't exist in OSI), 2) bypassed the layer 4/3 interface, and 3) went direct to the LAN MAC interface (which doesn't exist in OSI, sitting somewhere in the middle of layer 3).

There was a joke that (internet) IETF required at least two interoperable implementations as part of standards progress, while ISO didn't even require a standard be implementable

TCP had a minimum 7-packet exchange while XTP defined a reliable transaction with a minimum 3-packet exchange. The issue was that TCP/IP was part of the kernel distribution, requiring physical media (and typically some expertise) for a complete system change/upgrade, while browsers and webservers were self-contained load&go. XTP also defined things like a trailer protocol where the interface hardware could compute the CRC as the packet flowed through and do the append/check ... helping minimize packet-fiddling overhead.
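
A minimal software illustration of the trailer-CRC idea (the interface hardware would do this on the fly as the bits stream through; zlib's CRC-32 here is just a stand-in):

  # Trailer protocol sketch: compute the CRC incrementally as the packet
  # "streams through" and append it at the end; the receiver checks the same
  # way. Software stand-in for what the interface hardware would do.
  import struct, zlib

  def send(chunks):
      crc = 0
      for chunk in chunks:               # CRC updated as data flows through
          yield chunk
          crc = zlib.crc32(chunk, crc)
      yield struct.pack(">I", crc)       # append CRC as a trailer

  def receive(stream):
      data = b"".join(stream)
      payload, trailer = data[:-4], data[-4:]
      ok = zlib.crc32(payload) == struct.unpack(">I", trailer)[0]
      return payload, ok

  print(receive(send([b"hello ", b"world"])))   # (b'hello world', True)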

e-commerce gateways to payment networks posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
ha/cmp cluster scaleup posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
availability posts
https://www.garlic.com/~lynn/submain.html#available

posts mentioning internet wasn't business critical dataprocessing, postel, isi/usc,
https://www.garlic.com/~lynn/2025b.html#41 AIM, Apple, IBM, Motorola
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024g.html#80 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#71 Netscape Ecommerce
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89

--
virtualization experience starting Jan1968, online at home since Mar1970

Heathkit

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Heathkit
Date: 24 Apr, 2025
Blog: Facebook
Univ. was getting an IBM 360/67 (for tss/360) to replace 709/1401. I took a two credit hr intro to computers/fortran and at the end of the semester was hired to write some 360/30 (that was temporarily replacing the 1401) assembler code. The univ. would shut down the datacenter over the weekend and I would have the place dedicated for 48hrs (although 48hrs w/o sleep made monday classes hard). The 360/67 came in but tss/360 was never really ready, so it ran as a 360/65 w/os360 and I was hired fulltime responsible for os/360. The telecommunication controller came with 1052 and 2741 support, but the univ also had some tty33 & tty35 ascii terminals ... and the ascii terminal hardware upgrades for the controller came in heathkit boxes.

long-winded: some of the MIT CTSS/7094 people went to the 5th flr for project mac/MULTICS and others went to the IBM Cambridge Science Center on the 4th flr. CSC thought they would be the organization to provide the IBM answer for project mac/multics ... but a large IBM organization was announced instead. CSC then wanted a 360/50 to modify as part of studying virtual memory, but all the extra 360/50s were going to the FAA ATC, so they had to settle for a 360/40. They do hardware modifications for virtual memory and develop (virtual machine) CP/40 (as part of the virtual memory study). When the 360/67, standard with virtual memory, finally becomes available, CP/40 morphs into CP/67. Lots more history here:
https://www.leeandmelindavarian.com/Melinda#VMHist
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

TSS/360 is way behind schedule ... and most 360/67s are being run as 360/65s with os/360 (although besides CSC, Stanford and UofMich also write their own virtual memory operating systems for the 360/67). CSC comes out to install CP/67 at the univ. (3rd install after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my weekend dedicated window (at the time, the TSS/360 project had 1100 people and the CSC CP67/CMS group had 11 people, including the secretary). Initially I work on rewriting CP67 pathlengths for running OS/360 in a virtual machine. My OS/360 benchmark ran 322secs on the real machine and initially 856secs in a virtual machine (CP67 CPU 534secs); after a couple months I have reduced the CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, scheduler, paging, adding ordered seek queuing (from FIFO) and multi-page transfer channel programs (also from FIFO, optimized for transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a peak of 270).
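
A back-of-envelope sketch of the drum arithmetic: with FIFO, one-page-at-a-time queuing, each transfer eats roughly half a revolution of rotational delay plus the transfer itself; with chained, rotationally ordered requests, the rate approaches pages-per-revolution times revolutions-per-second. The rotation rate and pages-per-revolution below are assumed round numbers, chosen only to show the shape of the calculation:

  # Back-of-envelope: FIFO single-transfer queuing vs chained, rotationally
  # ordered multi-page channel programs on a paging drum.
  # Rotation rate and pages/revolution are assumed illustrative values.
  REV_PER_SEC   = 60      # assumed drum revolutions per second
  PAGES_PER_REV = 4.5     # assumed 4k pages transferable per revolution

  # FIFO, one request per channel program: ~half a revolution of rotational
  # delay plus the transfer itself, per page
  per_page_secs = (0.5 / REV_PER_SEC) + (1.0 / (PAGES_PER_REV * REV_PER_SEC))
  print(f"FIFO single-transfer: ~{1 / per_page_secs:.0f} pages/sec")

  # chained, ordered requests: approach pages/rev * revs/sec
  print(f"chained multi-page : up to ~{PAGES_PER_REV * REV_PER_SEC:.0f} pages/sec")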

When CP67 was first installed, there was still a TSS/360 SE at the univ. We set up a benchmark of simulated interactive users (before I started doing rewrites) doing fortran program edit, compile and execute; (buggy) TSS/360 with four users and CP67/CMS with 35 users; CP67/CMS had higher throughput and much better interactive response (with 35 users) than TSS/360 (w/4 users).

After graduating, I join CSC and one of my hobbies was enhanced production operating systems for internal datacenters. Note: IBM branch office, customer support (technical) SE training had included being part of a large group on-site at the customer location. After the IBM 23june1969 unbundling announcement and starting to charge for (application) software, SE services, maint, etc ... they couldn't figure out how "NOT" to charge for trainee SEs on-site at the customer. Eventually there is a decision to deploy several "HONE" CP67 datacenters across the US for branch office SEs to practice with online guest operating systems running in CP67 virtual machines. CSC also ports APL\360 to CP67/CMS as CMS\APL, rewriting storage management for multi-megabyte, demand-paged workspaces (rather than 16kbyte swapped APL\360 workspaces) and adding APIs for system services like file I/O ... enabling a lot of real-world applications ... and HONE starts using it for delivering lots of online branch office sales&marketing support applications (which come to dominate all HONE activity, and guest operating system support fades away) ... and HONE becomes one of my 1st (and long-time) customers. Later as HONE clones start appearing all over the world, it is by far the largest use of APL in the world.

After the transition from 360 to 370, there is eventually a decision to add virtual memory to all 370s and a decision to do VM/370; some of the CSC people split off and move to the 3rd flr, taking over the IBM Boston Programming Center for the VM/370 Development Group ... in the morph of CP67->VM370, lots of features are dropped or greatly simplified. Early last decade, a customer asks if I could track down the 370 virtual memory decision. I eventually find a staff member to the executive making the decision. Basically (OS/360) MVT storage management is so bad that region sizes have to be specified four times larger than used, limiting a typical 1mbyte 370/165 to four concurrently running regions, insufficient to keep the system busy and justified. Running MVT in a 16mbyte virtual address space (like running MVT in a 16mbyte virtual machine on a 1mbyte 360/67) showed regions could be increased by a factor of four (capped at 15, because of the 4-bit storage protect key) with little or no paging.

Trivia: Starting with VM370R2, I start moving stuff from CP67 (including lots of what I had done as undergraduate) to VM370 for my internal CSC/VM. Then for VM370R3-based CSC/VM, I add multiprocessor support, initially so HONE can add 2nd processor to each of their systems. Note in mid-70s, US HONE consolidated all of their systems in silicon valley (trivia2: when FACEBOOK 1st moved into silicon valley it was into a new bldg built next door to the former US HONE datacenter).

Other trivia: univ student fortran jobs ran under a second on the 709 (tape->tape), but initially with OS/360 on the 360/67 (running as a 360/65), they ran well over a minute. I install HASP, which cuts the time in half. I then start doing carefully crafted STAGE2 SYSGENs to optimize placement of datasets and PDS members on disk (for arm seek and multi-track search), cutting another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install Univ. of Waterloo WATFOR.

Paging system trivia: as an undergraduate, part of the rewrite included a Global LRU page replacement algorithm ... at a time when there were ACM articles about Local LRU. I had transferred to San Jose Research and worked with Jim Gray and Vera Watson on (the original SQL/relational) System/R. Then Jim leaves for Tandem, fall of 1980 (palming some stuff off on me). At a Dec81 ACM SIGOPS meeting, Jim asks me if I can help a Tandem co-worker get their Stanford PhD. It involved Global LRU page replacement, and the forces behind the late-60s ACM Local LRU work were lobbying to block awarding a PhD for anything involving Global LRU. I had lots of data from my undergraduate Global LRU work and from CSC, which ran a 768kbyte 360/67 (104 pageable pages after fixed requirement) with 75-80 users. I also had lots of data from the IBM Grenoble Science Center that had modified CP67 to conform to the 60s ACM Local LRU literature (1mbyte 360/67, 155 pageable pages after fixed requirement). CSC with 75-80 users had better response and throughput (104 pages) than Grenoble running 35 users (similar workloads and 155 pages).
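
For reference, a minimal clock-style sketch of global LRU (one common way to implement it; illustrative only, not the CP67 code): a single system-wide list of resident frames, a sweeping hand, and hardware reference bits deciding who gets replaced.

  # Minimal "clock" global LRU sketch: one circular list of resident frames
  # for the whole system; the hand clears reference bits and replaces the
  # first unreferenced frame it finds.
  class Clock:
      def __init__(self, nframes):
          self.frames = [None] * nframes     # page resident in each frame
          self.refbit = [False] * nframes
          self.hand = 0

      def touch(self, frame):                # page referenced (hardware sets this)
          self.refbit[frame] = True

      def select_victim(self):
          while True:
              if self.refbit[self.hand]:     # recently used: give a second chance
                  self.refbit[self.hand] = False
                  self.hand = (self.hand + 1) % len(self.frames)
              else:                          # not referenced since the last sweep
                  victim = self.hand
                  self.hand = (self.hand + 1) % len(self.frames)
                  return victim

  c = Clock(4)
  for f, page in enumerate(["A", "B", "C", "D"]):
      c.frames[f] = page
  c.touch(0); c.touch(2)
  print("replace frame", c.select_victim())   # frame 1 (page "B")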

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging, working set, page replacement posts
https://www.garlic.com/~lynn/subtopic.html#clock

--
virtualization experience starting Jan1968, online at home since Mar1970

Heathkit

From: Lynn Wheeler <lynn@garlic.com>
Subject: Heathkit
Date: 24 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#98 Heathkit

ascii trivia: CP67 came with 2741 & 1052 terminal support with automagic terminal-type id, dynamically changing the port scanner terminal type. We had the ascii terminals and the controller upgrade for ascii terminal support (arrived in heathkit boxes). I add ascii terminal support to cp67, integrated with the automagic changing of the port scanner terminal type. I then want to have a single dial-up number for all terminals ("hunt group") ... which didn't quite work because the IBM controller hard-wired each port's baud rate. This kicks off a univ. project to do a clone controller: build an IBM channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition that it does port auto-baud (sketch after the links below). This is then upgraded with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (& then Perkin-Elmer) sell it as a clone controller (and four of us get written up as responsible for some part of the IBM clone controller business)
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
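
The auto-baud idea is conceptually simple: time the width of the start bit (or of a known first character) on the incoming line and snap to the nearest standard rate. A hedged sketch of that idea (the timing source and rate table are my assumptions, not the Interdata code):

  # Auto-baud sketch: measure the start-bit width of the first character and
  # pick the closest standard rate. Illustrative only; the real controller
  # did this in Interdata code against the line hardware.
  STANDARD_BAUDS = [110, 134.5, 150, 300, 600, 1200, 2400]

  def detect_baud(start_bit_seconds):
      """start_bit_seconds: measured duration of the start bit on the line."""
      implied = 1.0 / start_bit_seconds            # bits per second implied
      return min(STANDARD_BAUDS, key=lambda b: abs(b - implied))

  print(detect_baud(1 / 109.0))    # -> 110   (e.g. a tty33/tty35)
  print(detect_baud(1 / 136.0))    # -> 134.5 (e.g. a 2741)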

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

some recent posts mentioning CP67, ascii terminal support, Interdata clone controller:
https://www.garlic.com/~lynn/2025b.html#85 An Ars Technica history of the Internet, part 1
https://www.garlic.com/~lynn/2025b.html#66 IBM 3101 Glass Teletype and "Block Mode"
https://www.garlic.com/~lynn/2025b.html#38 IBM Computers in the 60s
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#109 IBM Process Control Minicomputers
https://www.garlic.com/~lynn/2025.html#77 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#99 Terminals
https://www.garlic.com/~lynn/2024g.html#95 IBM Mainframe Channels
https://www.garlic.com/~lynn/2024g.html#1 Origin Of The Autobaud Technique
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#60 IBM 3705
https://www.garlic.com/~lynn/2024f.html#48 IBM Telecommunication Controllers
https://www.garlic.com/~lynn/2024f.html#32 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024e.html#98 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#40 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#26 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Future System, 801/RISC, S/38, HA/CMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Future System, 801/RISC, S/38, HA/CMP
Date: 24 Apr, 2025
Blog: Facebook
Future System had almost every blue-sky thing in computer science ... but little knowledge of how to actually implement it, including single-level store out of IBM tss/360 and MIT multics (I did a CMS page-mapped filesystem for internal datacenters and claimed I learned what not to do from tss/360). One of the last nails in the FS coffin was a study by the IBM Houston Scientific Center that 370/195 applications ported to an FS machine made out of the fastest hardware technology available would have the throughput of a 370/145 (about a factor of 30 times slowdown).

Rochester then did a very simplified FS as the S/38 (and there was lots of hardware technology headroom for the S/38 entry-level market). One of the simplifications was a single-level-store filesystem with scatter allocation across all available disks. The problem didn't show up with single-disk systems, but as things scaled up ... the system had to be shut down for backing up the full filesystem as a single entity. Then with any single disk failure (common at the time), the failed disk had to be replaced, and then a full filesystem (all disks) restore was necessary. It was one of the reasons why S/38 was an early adopter of RAID technology (a small sketch of the scaling exposure follows the FS references). More FS:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
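
A rough sketch of the scaling exposure mentioned above: with data scatter-allocated across every drive, any single drive failure takes the whole single-level-store filesystem down and forces a full (all-disk) restore, so both the time-to-first-failure and the restore time get worse as drives are added. The drive MTBF, capacity and restore-rate numbers below are assumed illustrative values:

  # Rough sketch: scatter allocation across N drives means ANY single drive
  # failure forces a full (all-disk) filesystem restore.
  # MTBF, capacity and restore-rate numbers are assumed/illustrative.
  def fs_mtbf_hours(drive_mtbf_hours, ndrives):
      # assuming independent failures, time to first failure scales as 1/N
      return drive_mtbf_hours / ndrives

  def full_restore_hours(ndrives, gb_per_drive, restore_gb_per_hour):
      return ndrives * gb_per_drive / restore_gb_per_hour

  for n in (1, 4, 16):
      print(f"{n:2d} drives: filesystem MTBF ~{fs_mtbf_hours(20_000, n):7.0f} hrs, "
            f"full restore ~{full_restore_hours(n, 1, 2):4.1f} hrs")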

I'm at SJR and part-time playing disk engineer across the street and one of the engineers files RAID patent in 1977
https://en.wikipedia.org/wiki/RAID#History

Amdahl, in the 60s, had earlier won the battle to make ACS 360-compatible ... and then when it is killed, he leaves (this was before FS; folklore is that ACS/360 was killed because they were afraid it would advance the state of the art too fast and IBM would lose control of the market). The following has some features that don't show up for IBM until the 90s w/ES9000.
https://people.computing.clemson.edu/~mark/acs_end.html

During FS, internal politics was killing off 370 efforts, and the lack of new 370s is claimed to have given the 370 clone makers (including Amdahl) their market foothold (IBM marketing having little other than FUD). When FS finally implodes there is a mad rush to get stuff back into the 370 product pipelines, including quick&dirty 3033&3081 in parallel. 3033 starts out mapping 168 circuit logic to 20% faster chips. 3081 is (really slow) warmed-over FS technology and multiprocessor only. The initial 2-CPU 3081D is slower than Amdahl's single processor and IBM quickly doubles the CPU cache sizes for the 2-CPU 3081K ... about the same aggregate MIPS as a 1-CPU Amdahl (although IBM MVS docs say that MVS 2-CPU support only has 1.2-1.5 times the throughput of 1-CPU ... so an MVS 3081K still has much lower throughput, .6-.75 that of an Amdahl 1-CPU, even with the same aggregate MIPS).
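
Working those numbers through (only the "same aggregate MIPS" statement and the 1.2-1.5 MVS two-processor factor above are used; the Amdahl single-CPU rating is just normalized to 1.0):

  # Effective MVS throughput of a 2-CPU 3081K relative to a 1-CPU Amdahl:
  # same aggregate hardware MIPS, but MVS 2-CPU support only delivers
  # 1.2-1.5x the throughput of a single CPU.
  amdahl_1cpu     = 1.0                    # normalize the Amdahl single CPU
  k3081_aggregate = 1.0                    # "about the same aggregate MIPS"
  k3081_per_cpu   = k3081_aggregate / 2    # two slower processors

  for mvs_mp_factor in (1.2, 1.5):
      effective = k3081_per_cpu * mvs_mp_factor
      print(f"MVS MP factor {mvs_mp_factor}: 3081K throughput "
            f"{effective:.2f} of Amdahl 1-CPU")
  # prints 0.60 and 0.75 -- the .6-.75 figure above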

trivia: IBM was going to transition to 801/RISC for all internal IBM microprocessors ... including the 4331/4341 follow-on (the 4361&4381), the AS/400 follow-on to the S/38, and lots of other things. For various reasons all of these efforts floundered and IBM returned to lots of different CISC microprocessors (and saw some number of RISC engineers leave IBM for other vendors). Also 1988, Nick Donofrio approved HA/6000, originally for the NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when I start doing cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and relational scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres, which had VAXCluster support in the same source base with Unix). The executive we reported to then moves over to head up Somerset, doing the AIM single-chip 801/RISC ... with Motorola bus/cache that supports multiprocessor (and AS/400 finally moves to RISC). The S/88 Product Administrator then starts taking us around to their customers and has me write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain that they can't meet the objectives).

Other trivia: my brother was Apple Regional Marketing Rep (largest physical area CONUS) and when he came to town, I sometimes got to attend business dinners and got to argue MAC design with MAC developers (before it was announced). He also figured out how to dial into the IBM S/38 that ran Apple to track manufacturing and delivery schedules.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
Paged-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Financial Engineering (again)

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Financial Engineering (again)
Date: 24 Apr, 2025
Blog: Facebook
1972, Learson tries (& fails) to block the bureaucrats, careerists and MBAs from destroying Watson culture/legacy, refs pg160-163
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

20yrs later, 1992, IBM has one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.

then IBM as a financial engineering company, Stockman
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall Street momo traders. It was actually a stock buyback contraption on steroids. During the five years ending in fiscal 2011, the company spent a staggering $67 billion repurchasing its own shares, a figure that was equal to 100 percent of its net income.

pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82 billion, or 122 percent, of net income over this five-year period. Likewise, during the last five years IBM spent less on capital investment than its depreciation and amortization charges, and also shrank its constant dollar spending for research and development by nearly 2 percent annually.
... snip ...

(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases have come to a total of over $159 billion since 2000.
... snip ...

(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a little?) shift focus, needing it for the redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AdStar

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AdStar
Date: 25 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#90 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#92 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#93 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#94 IBM AdStar

other trivia: early 80s, I was (also) introduced to John Boyd
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
and would sponsor his briefings at IBM. Then in 89/90, the Commandant of the Marine Corps leverages Boyd for a Corps makeover (at the time, IBM was also desperately in need of a make-over; then 1992, IBM has one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breakup of the company). Boyd passed in 1997; the USAF had pretty much disowned him, it was the Marines at Arlington, and Boyd's effects go to the Marine Corps Library and Gray Research Center in Quantico.
https://grc-usmcu.libguides.com/library

We continued to have Boyd conferences at Marine Corps Univ, sponsored by the former Commandant (passed a year ago).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
John Boyd's Art of War; Why our greatest military theorist only made colonel
http://www.theamericanconservative.com/articles/john-boyds-art-of-war/

I had taken a two-credit-hour intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in 360 assembler for the 360/30. The univ was getting a 360/67 for TSS/360, replacing a 709/1401, and got a 360/30 temporarily pending the 360/67. The univ shut down the datacenter on weekends and I got the whole place to myself (although 48hrs w/o sleep made Monday classes hard). Within a year of the intro class, the 360/67 arrived and I was hired fulltime, responsible for OS/360 (TSS/360 never came to fruition).

Then before I graduate, I was hired into a small group in the Boeing CFO's office to help with consolidating all dataprocessing into an independent business unit. I think Renton was the largest IBM datacenter, with 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I wasn't doing other stuff).

Boyd had lots of stories; one was about being very vocal that the electronics across the trail wouldn't work ... so possibly as punishment, he is put in command of "spook base" (about the same time I'm at Boeing)
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

One of Boyd's biographies has "spook base" as a $2.5B "windfall" for IBM (ten times Renton).

Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

some posts mentioning univ and Boeing:
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downturn, Downfall, Breakup

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downturn, Downfall, Breakup
Date: 26 Apr, 2025
Blog: Facebook
Note AMEX and KKR were in competition for the private-equity, reverse-IPO(/LBO) buyout of RJR, and KKR wins. Barbarians at the Gate
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away the president of AMEX to help.

Then IBM has one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup and uses some of the same techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

Then IBM as a financial engineering company ... Stockman
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall Street momo traders. It was actually a stock buyback contraption on steroids. During the five years ending in fiscal 2011, the company spent a staggering $67 billion repurchasing its own shares, a figure that was equal to 100 percent of its net income.

pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82 billion, or 122 percent, of net income over this five-year period. Likewise, during the last five years IBM spent less on capital investment than its depreciation and amortization charges, and also shrank its constant dollar spending for research and development by nearly 2 percent annually.
... snip ...

(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases have come to a total of over $159 billion since 2000.
... snip ...

(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a little?) shift focus, needing it for the redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
pension plan posts
https://www.garlic.com/~lynn/submisc.html#pensions
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

from facebook:
Louis V Gerstner Jr IBM CEO from 1991 - 2002 changed CEO compensation to 10% Cash and 90% IBM Stock. Tax deferred like an IRA, not taxed until sold. As a shareholder, reading the annual report $2 million wage and comparing Palmisano's reported annual compensation of $20 million. The $18 million came from Stock. Gerstner implemented Stock Buybacks that increased EPS without Revenue. In 1991 there were 1.8 Billion shares of IBM stock, last I checked there is 990 million shares of IBM stock outstanding. Over 900 million shares were "bought back" to boost Vapor Profit and EPS with no increase in the business. All 10 manufacturing R&D plants closed and 150 marketing and sales Branches Offices closed. Gerstner retired at age 61 with $189 million Cash and awarded himself $600 million IBM Stock joining Trump on the Forbes 400 Richest Americans.
... snip ...

former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM S/88

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM S/88
Date: 26 Apr, 2025
Blog: Facebook
Boxes from Stratus rebranded as S/88
https://en.wikipedia.org/wiki/Stratus_Technologies

trivia: 1988, Nick Donofrio approves HA/6000, originally for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix, which had VAXCluster support in the same source base with Unix; I do a distributed lock manager with VAXCluster semantics to ease their ports).
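
side note: a minimal sketch (in python, illustrative only, not the HA/CMP code) of the lock-mode compatibility check a distributed lock manager with VAXCluster/VMS semantics has to make; the six modes and the compatibility matrix follow the published VMS DLM description, while the grant logic and examples here are just assumptions for illustration.

MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]   # null ... exclusive

# True where a requested mode can be granted alongside an already-granted mode
COMPAT = {
    "NL": dict(NL=True,  CR=True,  CW=True,  PR=True,  PW=True,  EX=True),
    "CR": dict(NL=True,  CR=True,  CW=True,  PR=True,  PW=True,  EX=False),
    "CW": dict(NL=True,  CR=True,  CW=True,  PR=False, PW=False, EX=False),
    "PR": dict(NL=True,  CR=True,  CW=False, PR=True,  PW=False, EX=False),
    "PW": dict(NL=True,  CR=True,  CW=False, PR=False, PW=False, EX=False),
    "EX": dict(NL=True,  CR=False, CW=False, PR=False, PW=False, EX=False),
}

def can_grant(requested, held_modes):
    # grant only if compatible with every mode already held on the resource,
    # otherwise the request queues (blocking holders get notified to release)
    return all(COMPAT[requested][h] for h in held_modes)

print(can_grant("EX", ["PR", "PR"]))   # False: readers on two nodes block an exclusive update
print(can_grant("CR", ["PW"]))         # True: concurrent read alongside protected write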

Then the S/88 product administrator starts taking us around to their customers and gets me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain they can't meet the objectives). At the time, Stratus required downtime to update software ... and at 5-nines (about five minutes of downtime a year), even a 30min software upgrade would blow several years' worth of the unavailability budget. I also coined the terms disaster survivability and geographic survivability when out marketing (redundant systems not only locally but also remotely).

I had worked with Jim Gray and Vera Watson on the original SQL/relational (System/R) after transferring to SJR ... and then Jim leaves IBM for Tandem in the fall of 1980. At Tandem, Jim did some studies showing that as hardware was becoming more reliable, human mistakes and environmental factors (disasters, floods, earthquakes, etc) were increasingly becoming the major source of outages.
https://www.garlic.com/~lynn/grayft84.pdf
'85 paper
https://pages.cs.wisc.edu/~remzi/Classes/739/Fall2018/Papers/gray85-easy.pdf
https://web.archive.org/web/20080724051051/http://www.cs.berkeley.edu/~yelick/294-f00/papers/Gray85.txt

Early Jan1992, in a cluster scale-up meeting with the Oracle CEO, IBM/AWD executive Hester tells Ellison we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then at the end of Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
availability posts
https://www.garlic.com/~lynn/submain.html#available
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM S/88

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM S/88
Date: 27 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#104 IBM S/88

during HA/CMP we studied numerous system failures/outages ... one was the NYSE datacenter, a carefully chosen bldg that had water, power, and telco feeding in from multiple different sources on multiple sides ... the whole datacenter was shut down when a transformer in the basement exploded (contaminating the bldg with PCBs), and everything had to be evacuated and shut down.

we were also brought into bellcore to discuss the 1-800 systems, which took in 1-800 numbers and converted them to the real numbers ... where we could show more "nines" of availability than S/88. The whole thing was put on hold when congress passed legislation that required 1-800 numbers to be portable ... and they needed to reset and redesign the implementation.

... before transferring to IBM San Jose Research on the west coast, after graduating, I had joined the IBM Cambridge Science Center for much of the 70s. The Stratus history refers to a lot of Multics influence. Some of the MIT CTSS/7094 people had gone to the Multics project on the 5th flr and others had gone to the IBM science center on the 4th flr ... there was some influence between the two organizations, so close in the same bldg, and many had previously been in the same organization.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
availability posts
https://www.garlic.com/~lynn/submain.html#available
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23Jun1969 Unbundling and HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23Jun1969 Unbundling and HONE
Date: 29 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#67 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#68 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#71 IBM 23Jun1969 Unbundling and HONE

As an undergraduate, I had been hired fulltime, responsible for os/360. Then before I graduate, I'm hired fulltime into the Boeing CFO's office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room.

I was introduced to John Boyd in the early 80s and would sponsor his briefings at IBM. He had lots of stories, including being very vocal that the electronics across the trail wouldn't work ... and possibly as punishment he was put in command of "spook base" (he claimed that it had the largest air-conditioned bldg in that part of the world), about the same time I'm at Boeing.
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
One of Boyd's biographies has "spook base" as a $2.5B "windfall" for IBM (ten times Renton).

89/90, the Commandant of the Marine Corps leverages Boyd for a make-over of the Corps ... at a time when IBM was desperately in need of a make-over, and in 1992, IBM has one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

Boyd passed in 1997; the USAF had pretty much disowned him, it was the Marines at Arlington, and his effects go to the Marine Corps center at Quantico. We continued to have Boyd-themed conferences at Marine Corps Univ, sponsored by the former commandant (who passed spring 2024). Some more Boyd:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23Jun1969 Unbundling and HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23Jun1969 Unbundling and HONE
Date: 29 Apr, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#67 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#68 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#71 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#106 IBM 23Jun1969 Unbundling and HONE

... a small vm/4341 cluster was much less expensive than a 3033, had higher throughput, used much less energy and air conditioning, and had a much smaller footprint. Folklore is that POK was so threatened that the head of POK got corporate to cut the allocation of a critical 4341 manufacturing component in half (making a POK executive's presentation at a large sales/marketing gathering, that nearly all DEC VAX machines sold should have been 4341s, somewhat dubious).

Aggravating the situation was that, after the Future System implosion, the head of POK had convinced corporate to kill the vm/370 product.

posts referencing folklore where POK gets allocation of 4341 critical manufacturing component cut
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021.html#53 Amdahl Computers
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2015f.html#80 Miniskirts and mainframes
https://www.garlic.com/~lynn/2010b.html#87 "The Naked Mainframe" (Forbes Security Article)

--
virtualization experience starting Jan1968, online at home since Mar1970

System Throughput and Availability

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: System Throughput and Availability
Date: 30 Apr, 2025
Blog: Facebook
Recent observations have been that memory access (cache miss) latency, measured in count of processor cycles, is comparable to 60s disk accesses measured in count of 60s processor cycles (memory is the new disk). The equivalent of 60s multiprogramming/multitasking has been out-of-order execution, branch prediction, speculative execution, and hyperthreading. Shortly after joining IBM, I was asked to help with hyperthreading the 370/195 (which already had an out-of-order execution pipeline, but no branch prediction or speculative execution, so conditional branches drained the pipeline and most code only ran at half rated speed). Going to simulated 2-CPU execution (two instruction streams, each running at half rate) could keep the execution units busy.

Amdahl had won the battle to make ACS 360-compatible ... then when ACS/360 was killed (folklore: concern it would advance the state of the art too fast and IBM would lose control of the market), Amdahl leaves IBM. The following account of the ACS/360 "end" has some detail on 2-CPU simulation hyperthreading (and some ACS/360 features that show up more than 20yrs later with ES/9000).
https://people.computing.clemson.edu/~mark/acs_end.html

Then when the decision was made to add virtual memory to all 370s, it was decided to stop all efforts on the 370/195 (too hard to add virtual memory to the 370/195). However, the virtual memory decision was based on MVT storage management being so bad that region sizes had to be specified four times larger than used ... as a result, a typical 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the system busy and justified. Going to MVT running in a 16mbyte virtual memory address space (similar to running MVT in a CP67 16mbyte virtual machine), aka VS2/SVS, allowed the number of regions to be increased by a factor of four (capped at 15, since the 4-bit storage protect key allows 16 keys with key 0 reserved for the system) with little or no paging. The other downside for a 2-CPU 370/195 was that IBM docs had MVT-through-MVS 2-CPU support at only 1.2-1.5 times the throughput of a single processor (not twice).

This century, IBM docs claimed that at least half the per-processor throughput improvement going from z10 to z196 was due to the introduction of memory-latency countermeasures (aka out-of-order execution, etc) that had been in some other platforms for decades.

We were doing the RS/6000 (RIOS) HA/CMP product
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
and in the 90s, the executive we reported to went over to head up Somerset/AIM (Apple, IBM, Motorola) to do the single-chip 801/RISC Power/PC (with Motorola 88K RISC cache&bus enabling multiprocessor implementations). Also in the 90s, the i86 vendors did pipelined, hardware translation of i86 instructions to RISC micro-ops for actual execution (largely negating the RISC throughput advantage). Industry benchmark is the number of program iterations compared to a reference platform:
1999 (single-core chips):
AIM PowerPC 440: 1,000MIPS
Intel Pentium3: 2,054MIPS (twice PowerPC 440)

2000:
z900: 16 processors, 2.5BIPS (156MIPS/CPU)

2008:
z10: 64 processors, 30BIPS (469MIPS/CPU)

2010:
z196: 80 processors, 50BIPS (625MIPS/CPU)
E5-2600 server blade: 2 XEON chips, 500BIPS (30BIPS/core)


trivia: After the implosion of Future System, I got dragged into helping with a 16-CPU 370 multiprocessor and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system (MVS) had (effective) 16-CPU support (POK doesn't ship a 16-CPU system until after the turn of the century). The head of POK then invites some of us to never visit POK again and directs the 3033 processor engineers, "heads down" and "no distractions".

HA/CMP trivia: I had coined the terms disaster survivability and geographic survivability when out marketing HA/CMP. Then the IBM S/88 product administrator starts taking us around to their customers and also had me write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain that they couldn't meet the objectives). 1988, Nick Donofrio approves HA/6000, originally for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres, which had VAXCluster support in the same source base w/UNIX ... I do a distributed lock manager with VAXCluster semantics to ease the port). The four RDBMS vendors also had lots of input on how to speed up VAXCluster "fall-over".

After transferring to SJR, I worked with Jim Gray and Vera Watson on the original SQL/relational, System/R. Then fall 1980, Jim departs for Tandem (and palms off some stuff on me). At Tandem Jim did studies of system availability:
https://www.garlic.com/~lynn/grayft84.pdf
'85 paper
https://pages.cs.wisc.edu/~remzi/Classes/739/Fall2018/Papers/gray85-easy.pdf
https://web.archive.org/web/20080724051051/http://www.cs.berkeley.edu/~yelick/294-f00/papers/Gray85.txt

Early Jan1992, we have a meeting with the Oracle CEO where IBM/AWD executive Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Late Jan1992, we are told that cluster scale-up is being transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and that we aren't allowed to work on anything with more than four systems (we leave IBM a few months later).

VM/370 Trivia: At SJR, I also get to wander around IBM (& non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, pre-scheduled, stand-alone mainframe testing and mentioned that they had recently tried MVS, but it had 15min MTBF (in that environment). I offer to rewrite the I/O supervisor to be bullet-proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity.

When I originally joined IBM at the science center, one of my hobbies was enhanced operating systems for internal datacenters. CSC had originally implemented virtual memory hardware on a 360/40 and developed CP/40. CP/40 morphs into CP/67 when the 360/67, standard with virtual memory, becomes available. Then with the decision to add virtual memory to all 370s, part of CSC splits off and takes over the IBM Boston Programming Center on the 3rd flr for the VM370 development group. In the morph of CP67->VM370 they simplify and/or drop a lot of stuff (including CP67 multiprocessor support). Then for a VM370R2-base, I start adding lots of stuff back in for my internal CSC/VM. Then for a VM370R3-base, I put multiprocessor support back in, originally for the internal, online sales&marketing support HONE systems, so they could add a 2nd processor to each of their systems. With highly optimized pathlengths and some cache-affinity hacks, HONE got 2-CPU systems with twice the throughput of 1-CPU (better than the MVT-thru-MVS 1.2-1.5 times throughput).

SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
availability posts
https://www.garlic.com/~lynn/submain.html#available
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

System Throughput and Availability

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: System Throughput and Availability
Date: 01 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability

I/O trivia: also 1988, an IBM branch office asks me if I can help LLNL standardize some serial stuff they are working with, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980; initially 1gbit transfer, full-duplex, aggregate 200mbytes/sec). Then POK finally gets their serial stuff released with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Later POK becomes involved with FCS and defines a heavy-weight protocol that is eventually released as FICON. The latest public benchmark I've found is z196 "Peak I/O" getting 2M IOPS using 104 "FICON". About the same time an FCS is announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also note, IBM docs recommend that SAPs (system assist processors that do actual I/O) be kept to 70% CPU (or about 1.4M IOPS). Also no CKD DASD has been made for decades, all being simulated on industry-standard fixed-block devices.
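
back-of-envelope arithmetic from the figures above (python, illustrative only; applying the 70% SAP guideline to the published peak is my reading of how the cap works out):

peak_iops   = 2_000_000     # z196 "Peak I/O" benchmark, 104 FICON
ficon_count = 104
fcs_iops    = 1_000_000     # single FCS claimed for E5-2600 server blades
sap_cap     = 0.70          # recommended SAP utilization ceiling

print(peak_iops / ficon_count)     # ~19,230 IOPS per FICON
print(peak_iops * sap_cap)         # ~1.4M IOPS with SAPs held to 70%
print(2 * fcs_iops > peak_iops)    # True: two FCS out-run 104 FICON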

FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

--
virtualization experience starting Jan1968, online at home since Mar1970

System Throughput and Availability

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: System Throughput and Availability
Date: 03 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#109 System Throughput and Availability

ESCON
https://en.wikipedia.org/wiki/ESCON
ESCON
https://www.ibm.com/docs/en/zos-basic-skills?topic=channels-enterprise-system-connectivity-escon-channel
Reminder: Half-duplex for ESCON is effectively a request-response format. Bi-directional communications are possible, but the synchronization of the ESCON I/O protocol limits communications to half-duplex.
... snip ...

... even though 200mbits/sec, half-duplex meant bandwidth lost to turn-around latencies. IBM/AWD for RS/6000 had tweaked ESCON's 200mbits to 220mbits and made it full-duplex for "SLA" (serial link adaptor) ... so more like 40+mbytes/sec aggregate (rather than 17mbytes/sec). The "SLA" downside was that it was not interoperable with anything else, just other RS/6000s.

Before ESCON was released (when it was already obsolete), in 1988, an IBM branch office asked if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes "FCS".

Besides the following, a number of IBM documents can be found with a web search.

FICON ... overview
https://en.wikipedia.org/wiki/FICON
IBM System/Fibre Channel
https://www.wikiwand.com/en/articles/IBM_System/Fibre_Channel
Fibre Channel
https://www.wikiwand.com/en/articles/Fibre_Channel
FICON is a protocol that transports ESCON commands, used by IBM mainframe computers, over Fibre Channel. Fibre Channel can be used to transport data from storage systems that use solid-state flash memory storage medium by transporting NVMe protocol commands.
... snip ...

Evolution of the System z Channel
https://web.archive.org/web/20170829213251/https://share.confex.com/share/117/webprogram/Handout/Session9931/9934pdhj%20v0.pdf

The above mentions zHPF, a little more similar to what I had done in 1980 and also to the original native FCS; early documents claimed something like a 30% throughput improvement ... pg39 claims an increase in 4k IOs/sec for z196 from 20,000/sec to 52,000/sec and then 92,000/sec.
https://web.archive.org/web/20160611154808/https://share.confex.com/share/116/webprogram/Handout/Session8759/zHPF.pdf

aka z196 "Peak I/O" benchmark of 2M IOPS with 104 FICON is about 20,000 IOPS per FICON ... but apparently running SAPs full out (instead of limiting to 70% CPU).

FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

--
virtualization experience starting Jan1968, online at home since Mar1970

System Throughput and Availability II

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: System Throughput and Availability II
Date: 04 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#109 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#110 System Throughput and Availability

1980, STL (since renamed SVL) was bursting at the seams and moving 300 people (and 3270s) from the IMS group to an offsite bldg. They tried "remote 3270s" and found the human factors completely unacceptable. They con me into doing channel-extender support so they can place channel-attached 3270 controllers in the offsite bldg, with no perceptible difference in human factors. A side-effect was that those mainframe systems' throughput increased 10-15%. STL was configuring 3270 controllers across all channels shared with DASD controllers. The channel-extender hardware had significantly lower channel busy (for the same amount of 3270 activity) than directly channel-attached 3270 controllers, resulting in increased system (DASD) throughput. There was then some discussion about placing all 3270 controllers on channel-extenders (even for controllers inside STL). Then there is an attempt by the hardware vendor to get IBM to release my support; however, a group in POK was trying to get some serial stuff released and was concerned that if my stuff was in the field, it would make it harder to release the POK stuff (and the request is vetoed).

There was a later, similar problem with the 3090 and 3880 controllers. While 3880 controllers supported "data streaming" channels capable of 3mbyte/sec transfer, they had replaced the 3830's horizontal microprocessor with an inexpensive, slow vertical microprocessor ... so for everything else the 3880 had much higher channel busy. The 3090 had originally configured the number of channels to meet target system throughput (assuming the 3880 was the same as the 3830 but supporting "data streaming"). When they found out how much worse the 3880 actually was, they were forced to significantly increase the number of channels to meet target throughput. The increase in the number of channels required an extra TCM, and the 3090 people semi-facetiously joked they would bill the 3880 organization for the increase in 3090 manufacturing costs. Eventually sales/marketing respins the large increase in the number of 3090 channels as the 3090 being a wonderful I/O machine.

1988, an IBM branch office asks me if I can help LLNL (national lab) get some serial stuff they are working with standardized, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec transfer, full-duplex, aggregate 200mbytes/sec. Then POK finally gets their stuff released (when it is already obsolete) with ES/9000 as ESCON (17mbytes/sec). Then POK becomes involved in "FCS" and defines a heavy-weight protocol that significantly reduces the throughput, which is eventually released as FICON.

The latest public benchmark I've found is z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time an FCS is announced for E5-2600 server blades claiming over a million IOPS (two such FCS with higher throughput than 104 FICON). Note IBM docs recommended that SAPs (system assist processors that do actual I/O) be kept to 70% CPU (which would be more like 1.4M IOPS). Also no CKD DASD have been made for decades, all being simulated on industry-standard fixed-block devices.

trivia: the z196 max configuration was 80 processors benchmarked at 50BIPS (625MIPS/CPU) and went for $30M, while the IBM base list price for an E5-2600 server blade was $1815, benchmarked at 500BIPS (ten times z196; note benchmarks are the number of program iterations compared to an industry reference platform). Large cloud megadatacenters had been claiming for at least a decade that they assemble their own blades at 1/3rd the cost of brand-name blades. Then the industry press had articles that the server chip makers were shipping half their product directly to large cloud megadatacenters, and IBM sells off its server business.

ESCON
https://en.wikipedia.org/wiki/ESCON
ESCON
https://www.ibm.com/docs/en/zos-basic-skills?topic=channels-enterprise-system-connectivity-escon-channel
Reminder: Half-duplex for ESCON is effectively a request-response format. Bi-directional communications are possible, but the synchronization of the ESCON I/O protocol limits communications to half-duplex.
... snip ...

... even though 200mbits/sec, half-duplex meant bandwidth lost to turn-around latencies. IBM/AWD for RS/6000 had tweaked ESCON's 200mbits to 220mbits and made it full-duplex for "SLA" (serial link adaptor) ... so more like 40+mbytes/sec aggregate (rather than 17mbytes/sec). The "SLA" downside was that it was not interoperable with anything else, just other RS/6000s.

besides following, a number of IBM documents found with web search.

FICON ... overview
https://en.wikipedia.org/wiki/FICON
IBM System/Fibre Channel
https://www.wikiwand.com/en/articles/IBM_System/Fibre_Channel
Fibre Channel
https://www.wikiwand.com/en/articles/Fibre_Channel
FICON is a protocol that transports ESCON commands, used by IBM mainframe computers, over Fibre Channel. Fibre Channel can be used to transport data from storage systems that use solid-state flash memory storage medium by transporting NVMe protocol commands.
... snip ...

Evolution of the System z Channel
https://web.archive.org/web/20170829213251/https://share.confex.com/share/117/webprogram/Handout/Session9931/9934pdhj%20v0.pdf

The above mentions zHPF, a little more similar to what I had done in 1980 and also to the original native FCS; early documents claimed something like a 30% throughput improvement ... pg39 claims an increase in 4k IOs/sec for z196 from 20,000/sec to 52,000/sec and then 92,000/sec.
https://web.archive.org/web/20160611154808/https://share.confex.com/share/116/webprogram/Handout/Session8759/zHPF.pdf

aka z196 "Peak I/O" benchmark of 2M IOPS with 104 FICON is about 20,000 IOPS/FICON ... but apparently running SAPs full out (instead of limiting to 70% CPU).

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

posts mentioning 3090 with high channel busy with 3880 disk controller
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2012m.html#2 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee

--
virtualization experience starting Jan1968, online at home since Mar1970

System Throughput and Availability II

From: Lynn Wheeler <lynn@garlic.com>
Subject: System Throughput and Availability II
Date: 04 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#109 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#110 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#111 System Throughput and Availability II

trivia: When I transfer from CSC out to San Jose Research, I got to wander around IBM (and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, prescheduled, stand-alone mainframe testing and mentioned that they had recently tried MVS, but it had 15min MTBF (in that environment), requiring manual re-ipl. I offer to rewrite the I/O supervisor to be bullet-proof and never fail, allowing any amount of on-demand, concurrent testing, significantly improving productivity. Bldg15 gets the newest engineering systems for testing and got the first engineering 3033 (the first outside POK 3033 processor engineering). Testing only took a percent or two of CPU, so we scrounge a 3830 and a 3330 string and set up our own private online service. A new thin-film disk head "air-bearing" design simulation was getting a couple turn-arounds a month on SJR's 370/195 ... and so we set it up on the bldg15 3033 ... and they were able to get multiple turn-arounds/day. Initially used for 3370 FBA, and then 3380 CKD.
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

The original 3380 drives had 20 track spacings between each data track. The spacing was then cut in half to double the number of tracks (cylinders) and then cut again to triple the number of tracks (cylinders). The "father" of 801/RISC comes up with a disk "wide head" that transfers data in parallel on 16 closely spaced tracks (disk formatted with 16 data tracks and a servo track, the head tracking the two servo tracks on each side of the set of 16 data tracks). The IBM 3090 problem was that the disk would have had a 50mbyte/sec transfer rate (and 3090 channels only handled 3mbyte/sec).
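
the ~50mbyte/sec figure is roughly the per-track "data streaming" rate quoted earlier times the 16 parallel tracks; a quick sketch of the arithmetic (python, illustrative only):

tracks_in_parallel = 16
per_track_mbyte_sec = 3            # 3380-era "data streaming" transfer rate
print(tracks_in_parallel * per_track_mbyte_sec)   # 48, i.e. ~50mbyte/sec,
                                                  # vs the 3mbyte/sec a 3090 channel handled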

getting to play disk engineer in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

posts mentioning disk "wide head" & 50mbyte/sec
https://www.garlic.com/~lynn/2024g.html#57 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2024g.html#3 IBM CKD DASD
https://www.garlic.com/~lynn/2024f.html#5 IBM (Empty) Suits
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024d.html#96 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#72 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024b.html#110 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#67 Vintage IBM 3380s
https://www.garlic.com/~lynn/2019b.html#75 IBM downturn

--
virtualization experience starting Jan1968, online at home since Mar1970

CERN WWW, Browsers and Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: CERN WWW, Browsers and Internet
Date: 06 May, 2025
Blog: Facebook
1st webserver in the US at (CERN sister institution) Stanford SLAC on their VM370 system:
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

trivia: some of the MIT CTSS/7094 people went to the 5th flr for Multics, others went to the IBM cambridge science center on the 4th floor, modified a 360/40 with virtual memory and did CP/40, which morphs into CP/67 when the 360/67, standard with virtual memory, becomes available ... CSC also invented GML (letters after the inventors' last names) in 1969 (after a decade it morphs into ISO standard SGML and after another decade morphs into HTML at CERN). In the early 70s, after the decision to add virtual memory to all 370s, some of CSC splits off and takes over the IBM Boston Programming Center for the VM370 development group.

for much of the 70s & 80s, SLAC hosted the monthly "BAYBUNCH" meetings.

IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

other trivia

... 1981, I got the HSDT project, T1 and faster computer links, and lots of conflict with the corporate communication product group (note: in the 60s, IBM had the 2701 telecommunication controller that had T1 support; then the move to SNA/VTAM and associated issues capped controllers at 56kbits/sec). HSDT was working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen, and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

SLAC server
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
1993 February: A new X browser called Mosaic is released by The National Center for Supercomputing Applications (NCSA). It has many of the features of MidasWWW and the support of a large organization. With the availability and widespread adoption of Mosaic, Web use starts to gain momentum...
... snip ...

A major recipient of the NSF "New Technologies Program" was NCSA,
https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
and then more funding
https://en.wikipedia.org/wiki/High_Performance_Computing_Act_of_1991
doing
https://en.wikipedia.org/wiki/NCSA_Mosaic

1988, we got the IBM HA/6000 product development&marketing and I rename it HA/CMP when I start doing scientific/technical cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) ... planning on 128-system clusters by ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer and we were told we can't work on anything with more than four processors ... and we leave IBM a few months later.

Some of the NCSA people move to silicon valley and form MOSAIC Corp ... NCSA complains about the use of "MOSAIC" and they change the name to NETSCAPE (getting rights for the name from a silicon valley router vendor). Two of the former Oracle employees (that we had worked with on HA/CMP cluster scale-up) are there, responsible for something they called "commerce server", and they want to do payment transactions on the server. I'm brought in as a consultant responsible for everything between webservers and the financial industry payment networks. It is now frequently called "electronic commerce".

Jan1996, at MSDC at Moscone center, there were "Internet" banners everywhere ... but the constant refrain in every session was "protect your investment" ... aka automatic execution of visual basic in data files (including email).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
payment network web gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

ROLM, HSDT

From: Lynn Wheeler <lynn@garlic.com>
Subject: ROLM, HSDT
Date: 06 May, 2025
Blog: Facebook
Early 80s, I got the HSDT project; T1 and faster computer links, both terrestrial and satellite ... and battles with the communication group (note: in the 60s, IBM had the 2701 telecommunication controller that supported T1, but in the transition to SNA/VTAM in the 70s, issues capped controllers at 56kbit/sec). Part of the funding was contingent on being able to show some IBM content (since otherwise everything was going to be non-IBM). I eventually found the FSD special-bid Series/1 T1 Zirpel cards (for gov. customers with failing 2701s). I went to order a half dozen Series/1s but found there was a year's backlog for S/1 orders. This was right after the ROLM purchase, and ROLM had made a really large Series/1 order (they were otherwise a Data General shop, and trying to show their new owners they were part of the organization). I found that I knew the ROLM datacenter manager ... who had left IBM some years before. They offered to transfer me some Series/1s if I would help them with some of their problems (one was that they were using 56kbit/sec links to load test switch software, which could take 24hrs, and they would really like to upgrade to T1, radically reducing their testing process cycle time).

Mid-80s, the communication group made a board presentation claiming that customers wouldn't be interested in T1 until sometime in the mid-90s. The numbers were based on 37x5 "fat pipe" installs, multiple parallel 56kbit/sec links treated as a single link, counted for 2, 3, 4, etc parallel links, dropping to zero by seven. What the communication group didn't know (or didn't want to tell the board) was that the telco tariff for a T1 was about the same as for five or six 56kbit links ... customers just moved to non-IBM hardware and software for full T1 (a trivial survey by HSDT found 200 full T1 installations).
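
rough capacity comparison behind the tariff point (python, illustrative only; 1544kbit/sec is the standard T1 line rate, not a number from the post):

t1_kbits  = 1544
fat_pipes = [5 * 56, 6 * 56]                         # five or six parallel 56kbit links
print(fat_pipes)                                     # [280, 336] kbit/sec for ~same tariff as one T1
print([round(t1_kbits / k, 1) for k in fat_pipes])   # roughly 4.6-5.5 times the capacity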

The communication group eventually came out with the 3737: a boatload of memory and Motorola 68k processors emulating a CTCA-connected host VTAM. The 3737's VTAM emulation would immediately ACK the (real host) VTAM (trying to keep traffic flowing) and then do the T1 transmission to the remote 3737 in the background (aka even short-haul terrestrial T1 exhausted the standard VTAM window-pacing limit ... trivia: HSDT had early on transitioned to dynamic adaptive rate-based pacing, and could even handle >T1 double-hop satellite between the west coast and England/Europe; up/down over the US, up/down over the Atlantic).
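
why a fixed pacing window starves a long-latency link: throughput is bounded by (data outstanding)/(round-trip time). A sketch (python, illustrative only; the RTT, window count, and RU size are assumptions, not VTAM or HSDT parameters):

t1_bytes_sec = 1_544_000 / 8      # ~193kbytes/sec T1 payload ceiling
rtt_sec      = 1.1                # rough double-hop geosync satellite round trip
print(t1_bytes_sec * rtt_sec)     # ~212kbytes must be in flight to keep a T1 full

window_count = 7                  # assumed fixed pacing window
ru_bytes     = 4096               # assumed request-unit size
print(window_count * ru_bytes / rtt_sec)   # ~26kbytes/sec, a small fraction of T1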

I left IBM in the early 90s and was doing work in the financial industry, as rep to financial industry standards bodies and co-author of some secure protocols. I then design a secure chip to support secure transactions, which was going to be done by the new secure Siemens fab in Dresden. Siemens had also acquired ROLM, and the guy I was dealing with had offices on the ROLM campus. Siemens then spins off its chip group as Infineon, with the guy I was working with named president (rings the bell at NYSE), and it moves into a new office complex at the intersection of 101&1st.

The TD to the DDI at a gov. agency was running a panel in the trusted computing track at Intel IDF and asked me to give a talk on the chip ... gone 404, but lives on at the wayback machine
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance

some posts mentioning Infineon and secure chip
https://www.garlic.com/~lynn/2025b.html#93 IBM AdStar
https://www.garlic.com/~lynn/2022f.html#68 Security Chips and Chip Fabs
https://www.garlic.com/~lynn/2022b.html#103 AADS Chip Strawman
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#41 IBM Confidential
https://www.garlic.com/~lynn/2018b.html#11 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017d.html#10 Encryp-xit: Europe will go all in for crypto backdoors in June

other posts mentioning HSDT, ROLM, T1
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#79 Early Email
https://www.garlic.com/~lynn/2024e.html#34 VMNETMAP
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023c.html#35 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2018f.html#110 IBM Token-RIng
https://www.garlic.com/~lynn/2018b.html#9 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2018b.html#8 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2016h.html#26 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016d.html#27 Old IBM Mainframe Systems
https://www.garlic.com/~lynn/2015e.html#83 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2014f.html#24 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013j.html#43 8080 BASIC
https://www.garlic.com/~lynn/2013j.html#37 8080 BASIC
https://www.garlic.com/~lynn/2013g.html#71 DEC and the Bell System?
https://www.garlic.com/~lynn/2009j.html#4 IBM's Revenge on Sun
https://www.garlic.com/~lynn/2003j.html#76 1950s AT&T/IBM lack of collaboration?

--
virtualization experience starting Jan1968, online at home since Mar1970

SHARE, MVT, MVS, TSO

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: SHARE, MVT, MVS, TSO
Date: 06 May, 2025
Blog: Facebook
I was at SHARE when boney fingers was 1st performed
http://www.mxg.com/thebuttonman/boney.asp

Paging system trivia: as an undergraduate, part of my rewrite included a global LRU page replacement algorithm ... at a time when the ACM articles were about local LRU. I had transferred to San Jose Research and worked with Jim Gray and Vera Watson on the (original SQL/relational) System/R. Then Jim leaves for Tandem in the fall of 1980 (palming some stuff off on me). At the Dec81 ACM SIGOPS meeting, Jim asks me if I can help a Tandem co-worker get their Stanford PhD. It involved global LRU page replacement, and the forces behind the late-60s ACM local LRU work were lobbying to block granting a PhD for anything involving global LRU. I had lots of data from my undergraduate global LRU work and from CSC, which ran a 768kbyte 360/67 (104 pageable pages after fixed requirement) with 75-80 users. I also had lots of data from the IBM Grenoble Science Center, which had modified CP67 to conform to the 60s ACM local LRU literature (1mbyte 360/67, 155 pageable pages after fixed requirement). CSC with 75-80 users had better response and throughput (104 pages) than Grenoble running 35 users (similar workloads and 155 pages).
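
a minimal sketch of the idea (python, illustrative only, not the CP67 code): a clock/reference-bit approximation of LRU run over the single global pool of pageable frames, rather than over per-task partitions as in the local LRU literature.

class Frame:
    def __init__(self, page_id):
        self.page_id = page_id
        self.referenced = False        # reference bit, reset as the hand sweeps past

class GlobalClock:
    def __init__(self, frames):
        self.frames = frames           # one global list covering all users' pages
        self.hand = 0

    def touch(self, idx):              # page referenced by any task
        self.frames[idx].referenced = True

    def select_victim(self):
        # sweep the global pool, giving recently referenced pages a second chance
        while True:
            f = self.frames[self.hand]
            self.hand = (self.hand + 1) % len(self.frames)
            if f.referenced:
                f.referenced = False
            else:
                return f.page_id       # not referenced since the last sweep: replace it

pool = GlobalClock([Frame(i) for i in range(104)])   # e.g. 104 pageable pages
pool.touch(0); pool.touch(7)
print(pool.select_victim())            # first unreferenced frame past the hand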

Early last decade, I was asked to track down the decision to add virtual memory to all 370s ... and tracked down a staff member of the executive who made the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used. As a result, a typical 1mbyte 360/165 was limited to four concurrent regions, insufficient to keep the system busy and justified. Moving MVT into a 16mbyte virtual address space (similar to running MVT in a CP67 virtual machine) allowed increasing the number of concurrently running regions by a factor of four (capped at 15, since the 4-bit storage protect key allows 16 values with key 0 reserved for the system) with little or no paging (VS2/SVS). As systems got larger, it went to a separate address space for every region as VS2/MVS, to get around the 15-region cap. Recent posts mentioning Ludlow doing the initial VS2/SVS implementation on a 360/67:
https://www.garlic.com/~lynn/2025b.html#95 MVT to VS2/SVS
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#72 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#19 CSC Virtual Machine Work
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas

MVS/TSO trivia: late 70s, SJR got a 370/168 for MVS and a 370/158 for VM/370, with several strings of 3330s, all on two-channel-switch 3830 controllers connecting to both systems ... but strings & controllers were labeled MVS or VM/370, with strict rules that MVS never use a VM/370 controller/string. One morning, an MVS 3330 was placed on a 3330 string and within a few minutes operations was getting irate phone calls from all over the bldg about what had happened to response. Analysis showed the problem was that an MVS 3330 had been placed on a VM/370 3330 string (OS/360 filesystem's extensive use of multi-track search locks up the controller and all drives on that controller), and there were demands that the offending MVS 3330 be moved. Operations said they would have to wait until offshift. Then a single-pack VS1 system (highly optimized for VM370 and hand-shaking) was put up on an MVS string and brought up on the loaded 370/158 VM370 ... and was able to bring the MVS 168 to a crawl ... alleviating a lot of the problems for VM370 users (operations almost immediately agreed to move the offending MVS 3330).

Trivia: one of my hobbies after joining IBM was highly optimized operating systems for internal datacenters. In the early 80s, there were increasing studies showing that quarter-second response improved productivity. The 3272/3277 had .086sec hardware response. Then the 3274/3278 was introduced with lots of 3278 electronics moved back into the 3274 controller, cutting 3278 manufacturing costs but significantly driving up coax protocol chatter ... increasing hardware response to .3sec-.5sec depending on the amount of data (making quarter-second impossible). Letters to the 3278 product administrator complaining about interactive computing got a response that the 3278 wasn't intended for interactive computing but for data entry (sort of an electronic keypunch). The 3272/3277 required .164sec system response (for a human to see quarter-second). Fortunately I had numerous IBM systems in silicon valley with 90th-percentile .11sec system response (I don't believe any TSO users ever noticed, since they rarely saw even one-second system response). Later, IBM/PC 3277 emulation cards had 4-5 times the upload/download throughput of 3278 emulation cards.
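
The arithmetic behind those numbers, just restating the figures above (the only assumption is that perceived response is the simple sum of system response and terminal hardware response):

# Back-of-envelope for the quarter-second response numbers above:
# perceived response = system response + terminal hardware response.

target = 0.25                           # quarter-second perceived response goal

hw_3277 = 0.086                         # 3272/3277 hardware response (sec)
print(round(target - hw_3277, 3))       # 0.164 -> max system response with 3277

sys_resp = 0.11                         # 90th percentile system response cited
print(round(sys_resp + hw_3277, 3))     # 0.196 -> under the 0.25 target

hw_3278 = 0.3                           # best case 3274/3278 hardware response
print(round(sys_resp + hw_3278, 3))     # 0.41 -> quarter second unachievable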

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM internal system posts
https://www.garlic.com/~lynn/submisc.html#cscvm
dynamic adaptive scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replace algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
Original sql/relational System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

system response, 3272/3277 response, 3274/3278 response
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025.html#127 3270 Controllers and Terminals
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024f.html#12 3270 Terminals
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023f.html#78 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023e.html#0 3270
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2022h.html#96 IBM 3270
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2018d.html#32 Walt Doherty - RIP
https://www.garlic.com/~lynn/2017e.html#26 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2014h.html#106 TSO Test does not support 65-bit debugging?
https://www.garlic.com/~lynn/2014g.html#23 Three Reasons the Mainframe is in Trouble

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downturn, Downfall, Breakup

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downturn, Downfall, Breakup
Date: 07 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#103 IBM Downturn, Downfall, Breakup
https://www.garlic.com/~lynn/2025b.html#64 IBM Downturn, Downfall, Breakup
https://www.garlic.com/~lynn/2025b.html#58 IBM Downturn, Downfall, Breakup
https://www.garlic.com/~lynn/2025b.html#57 IBM Downturn, Downfall, Breakup

... after we left IBM, we were getting email from existing employees complaining that (470?) senior executives weren't paying attention to running the business, but were totally focused on moving expenses from the following year into the current year. We asked our contact in the bowels of Armonk about it. He said the current year was in the red and senior executives wouldn't get a bonus ... however, if they could nudge the following year even the smallest amount into the black, the way the senior executive bonus plan was written, they would get a bonus more than twice as large as any previous bonus (which might be construed as a reward for taking the company into the red).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

past posts mentioning shift expenses between years to juice executive bonus
https://www.garlic.com/~lynn/2023c.html#84 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#16 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
https://www.garlic.com/~lynn/2022f.html#84 Demolition of Iconic IBM Country Club Complex "Imminent"
https://www.garlic.com/~lynn/2022d.html#54 Another IBM Down Fall thread
https://www.garlic.com/~lynn/2022c.html#64 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#47 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#102 Online Computer Conferencing
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2021k.html#117 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2021j.html#113 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#100 Who Says Elephants Can't Dance?
https://www.garlic.com/~lynn/2021j.html#96 IBM 3278
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021j.html#70 IBM Wild Ducks
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021i.html#79 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#64 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021g.html#20 Big Blue's big email blues signal terminal decline - unless it learns to migrate itself
https://www.garlic.com/~lynn/2021f.html#55 3380 disk capacity
https://www.garlic.com/~lynn/2021f.html#47 Martial Arts "OODA-loop"
https://www.garlic.com/~lynn/2021f.html#32 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021f.html#24 IBM Remains Big Tech's Disaster
https://www.garlic.com/~lynn/2021d.html#68 How Gerstner Rebuilt IBM
https://www.garlic.com/~lynn/2021c.html#49 IBM CEO
https://www.garlic.com/~lynn/2021b.html#97 IBM Glory days
https://www.garlic.com/~lynn/2021.html#39 IBM Tech
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs
https://www.garlic.com/~lynn/2017g.html#105 Why IBM Should -- and Shouldn't -- Break Itself Up
https://www.garlic.com/~lynn/2017.html#62 Big Shrink to "Hire" 25,000 in the US, as Layoffs Pile Up
https://www.garlic.com/~lynn/2014m.html#143 LEO
https://www.garlic.com/~lynn/2008s.html#49 Executive pay: time for a trim?

--
virtualization experience starting Jan1968, online at home since Mar1970

SHARE, MVT, MVS, TSO

From: Lynn Wheeler <lynn@garlic.com>
Subject: SHARE, MVT, MVS, TSO
Date: 07 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#115 SHARE, MVT, MVS, TSO

As an undergraduate, I took a two-credit-hr intro to fortran/computers and at the end of the semester was hired to re-implement 1401 MPIO in 360 assembler for the 360/30. The univ. was getting a 360/67 for tss/360 replacing the 709/1401 ... with a 360/30 temporarily replacing the 1401 pending availability of the 360/67. The univ. shutdown the datacenter on weekends and I had the place dedicated (although 48hrs w/o sleep made Monday classes hard). I was given a bunch of hardware & software manuals, learned 360 assembler, hardware, etc, and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc, and within a few weeks had a 2000-card assembler program. The 360/67 arrives within a year of my taking the intro class and I was hired fulltime, responsible for os/360 (tss/360 never came to production, so it ran as a 360/65 with os/360). Then before I graduate, I am hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit. I think the Renton datacenter was the largest in the world, with 360/65s arriving faster than they could be installed and boxes constantly staged in the hallways around the machine room.

Boeing Huntsville had gotten a 2-CPU 360/67 with a bunch of 2250s for CAD/CAM, but ran it as two 360/65s. They had run into the MVT storage management problem and had modified MVT13 to run in (360/67) virtual address mode ... no actual paging, but fiddling addresses as a partial countermeasure to the problem (sort of a simple precursor to VS2/SVS).

post with some email exchange about MVT storage problem and decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73

posts mentioning Boeing Huntsville MVT13
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#8 IBM OS/360 MFT HASP
https://www.garlic.com/~lynn/2024g.html#72 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024f.html#90 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024b.html#3 Bypassing VM
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#81 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#52 IBM Vintage 1130
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2010k.html#11 TSO region size
https://www.garlic.com/~lynn/2010c.html#4 Processes' memory
https://www.garlic.com/~lynn/2010b.html#61 Source code for s/360 [PUBLIC]
https://www.garlic.com/~lynn/2009r.html#43 Boeings New Dreamliner Ready For Maiden Voyage
https://www.garlic.com/~lynn/2007m.html#60 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
https://www.garlic.com/~lynn/2001m.html#55 TSS/360

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 168 And Other History

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 168 And Other History
Date: 08 May, 2025
Blog: Facebook
Low/mid-range 370s ran vertical microcode microprocessors for 370 emulation ... avg. 10 CISC instructions to emulate a 370 instruction. The high-end (158&168) were horizontal microcode processors ... which could overlap multiple things per machine cycle, and as a result were measured in avg. machine cycles per 370 instruction. 168-1 microcode took an avg of 2.1 cycles per 370 instruction. For the 168-3, microcode optimization reduced it to an avg. of 1.6 cycles per 370 instruction (and also doubled the processor cache size).
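
Taking those cited cycles-per-instruction figures at face value, the relative instruction rate at a fixed machine cycle time is just the inverse ratio of the CPIs (a back-of-envelope sketch; the absolute cycle time isn't given here, and it cancels out of the ratios anyway):

# Ratio of the avg cycles-per-instruction figures cited above; at a fixed
# machine cycle time the instruction rate scales as the inverse of CPI.

cpi = {"168-1": 2.1, "168-3": 1.6, "3033 (approx)": 1.0}

base = cpi["168-1"]
for model, c in cpi.items():
    print(f"{model}: {base / c:.2f}x the 168-1 instruction rate per cycle")
# 168-1: 1.00x, 168-3: 1.31x, 3033: 2.10x (microcode effect only, before
# counting the 3033's 20% faster chip technology)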

During the FS period, which was going to completely replace 370, 370 efforts were being killed off. Then when FS imploded (one of the final nails in the FS coffin was a study by the IBM Houston Science Center showing that if 370/195 applications were redone for an FS machine made out of the fastest available hardware technology, they would have the throughput of a 370/145, about a 30-times slowdown), there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. more info:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

3033 started off remapping 168 logic to 20% faster chips, and then they also further optimized the 168 microcode, improving to approx an avg of only one machine cycle per 370 instruction (with the result that many of the 3033 microcode hacks, involving straight translation from 370 to microcode, showed little or no actual improvement).

Also after the Future System implosion, I got roped into helping with a 16-CPU 370 multiprocessor and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite-son operating system ("MVS") had (effective) 16-CPU multiprocessor support. At the time, MVS documentation said its 2-CPU multiprocessor support had only 1.2-1.5 times the throughput of a single CPU; i.e. a 2-CPU 3081 running MVS had .6-.75 the throughput of an (Amdahl) single-CPU MVS machine with the same MIP rate (because of the significant MVS multiprocessor overhead; note: POK doesn't ship a 16-CPU multiprocessor until after the turn of the century). The head of POK then invites some of us to never visit POK again and directs the 3033 processor engineers "heads down and no distractions".

trivia: in the morph of CP67->VM370, they simplified and/or dropped lots of stuff (including multiprocessor support). For VM370R2, I start adding things back in for my CSC/VM internal system. For the VM370R3-based version, I add multiprocessor support back in, originally for the online sales&marketing support HONE systems, so they could add a 2nd CPU to each system (getting twice the throughput of the single-CPU systems). Part of the issue was that the head of POK had also convinced corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission, but had to reconstitute a development group from scratch). Once the 3033 was out the door, the processor engineers start on trout/3090 (and I could sneak back into POK).

Also after the FS implosion, Endicott cons me into helping with the 138/148 ECPS microcode assist (later also on 4300s), mapping 6kbytes of kernel code into microcode with approx 10:1 performance improvement ... old archived post with the initial analysis I did:
https://www.garlic.com/~lynn/94.html#21
Endicott then tried to get corporate to approve pre-installing VM370 on every 138/148 shipped, but with POK in the process of killing the VM370 product, it was vetoed.
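
The flavor of that kind of analysis, as a hypothetical sketch (the routine names, sizes and CPU fractions below are made up for illustration, not from the actual study): measure where kernel CPU time goes, then pick the highest benefit-per-byte paths that fit in the available 6kbytes of microcode space.

# Hypothetical sketch of the kind of selection the ECPS analysis implies
# (illustration only; names and numbers are invented, not from the 1975 study).

BUDGET = 6 * 1024   # bytes of kernel code that can be moved into microcode

# (routine name, size in bytes, fraction of kernel CPU time) -- hypothetical
profile = [
    ("dispatch",   1200, 0.17),
    ("free/fret",   900, 0.12),
    ("untrans",    1500, 0.11),
    ("pagefault",  2000, 0.10),
    ("ccw-scan",   2600, 0.08),
    ("misc",       4000, 0.07),
]

chosen, used, covered = [], 0, 0.0
for name, size, cpu in sorted(profile, key=lambda r: r[2] / r[1], reverse=True):
    if used + size <= BUDGET:
        chosen.append(name)
        used += size
        covered += cpu

print(chosen, used, round(covered, 2))
# ['dispatch', 'free/fret', 'untrans', 'pagefault'] 5600 0.5
# i.e. greedily pick the highest benefit-per-byte routines that fit in 6KB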

After transferring from CSC to San Jose Research, in the early 80s I get permission to give talks at mainframe user group meetings on how ECPS was done. At some of them, Amdahl people corner me for more details. They said that they had done "MACROCODE" (370-like instructions that run in microcode mode, as fast as and much simpler to implement than native horizontal microcode) and were in the process of implementing a microcode hypervisor ("multiple domain", a virtual machine subset capable of running MVS & MVS/XA concurrently on the same machine, and MVSes w/o the multiprocessor overhead; POK doesn't respond with LPAR & PR/SM until nearly a decade later).

somewhat related: in the 90s, i86 was enhanced to do on-the-fly, pipelined translation of i86 instructions to RISC micro-ops for actual execution (largely negating RISC machine throughput advantages). Somerset/AIM had also done a single-chip POWER ... 1999 comparison (industry benchmark: number of program iterations compared to a reference platform):
AIM PowerPC 440: 1,000MIPS
i86 Pentium3: 2,054MIPS

and POK 16-CPU Dec2000:
z900 16 processors: 2.5BIPS, 156MIPS/CPU

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

ECPS, Amdahl, macrocode, hypervisor,etc posts
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025.html#19 Virtual Machine History
https://www.garlic.com/~lynn/2024g.html#38 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#30 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024b.html#26 HA/CMP
https://www.garlic.com/~lynn/2024.html#63 VM Microcode Assist
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#114 Copyright Software
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#102 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021j.html#4 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#31 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2019b.html#78 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018e.html#30 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2017i.html#43 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017e.html#46 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017b.html#37 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2014j.html#100 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2013n.html#46 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013l.html#27 World's worst programming environment?
https://www.garlic.com/~lynn/2013f.html#68 Linear search vs. Binary search
https://www.garlic.com/~lynn/2010m.html#74 z millicode: where does it reside?
https://www.garlic.com/~lynn/2008j.html#26 Op codes removed from z/10
https://www.garlic.com/~lynn/2007k.html#74 Non-Standard Mainframe Language?
https://www.garlic.com/~lynn/2006m.html#39 Using different storage key's
https://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode
https://www.garlic.com/~lynn/2005u.html#48 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#43 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor
https://www.garlic.com/~lynn/2005p.html#14 Multicores
https://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
https://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"

--
virtualization experience starting Jan1968, online at home since Mar1970

Too Much Bombing, Not Enough Brains

From: Lynn Wheeler <lynn@garlic.com>
Subject: Too Much Bombing, Not Enough Brains
Date: 08 May, 2025
Blog: Facebook
Too Much Bombing, Not Enough Brains; The Real Evil Empire May Surprise You
https://tomdispatch.com/the-real-evil-empire-may-surprise-you/
https://www.counterpunch.org/2025/05/09/the-real-evil-empire-may-surprise-you/

There has been lots of discussions in military groups about "Command Culture"
https://www.amazon.com/Command-Culture-Education-1901-1940-Consequences-ebook/dp/B009K7VYLI/

... comparing 1901-1940 military schools in Germany & the US. US schools had lots of bullying & hazing (as part of enforcing conformity) and strict (memorized) school solutions (representative of the whole US education system) ... in contrast, German schools brought in prominent officers and students were encouraged to argue (as part of promoting leadership). One example: Marshall was so injured in a hazing event at military academy that he almost had to drop out.

loc2341-44:
Help, however, was on the way in the person of George C. Marshall, who was determined to set right the wrongs he had experienced when a woefully unprofessional U.S. officer corps went to war in Europe, causing an unparalleled number of American casualties in only nineteen months of war. In his view--and it can be stated now that he made a historically correct assessment--the inadequacy of many American officers came from their advanced ages, inflexibility of mind, and lack of modern and practical training.
... snip ...

Command Culture talk at 1st div museum
https://www.youtube.com/watch?v=m7unu0fLYvc

It somewhat glosses over genocide by the US army in the "Indian Wars" ... then in WW2, the Army Air Corps had claimed that strategic bombing could win the war with Germany w/o even having to invade Europe. In part because of those claims, 1/3rd of total US WW2 military spending went to strategic bombing ... and it turned out to be almost impossible to hit a target from 5-6 miles up (even with Norden bombsights), which possibly served as motivation for fire bombing civilian cities ... McNamara was on LeMay's staff planning the fire bombing of German cities and then Japanese cities (difficult to miss a whole city). After hostilities, McNamara leaves for the auto industry, but comes back as SECDEF for Vietnam, where Laos becomes the most bombed country in the world (more tonnage than Germany & Japan combined). McNamara later wrote that LeMay told him if the US had lost WW2, they would be the ones on trial for war crimes.

The European Campaign
https://ssi.armywarcollege.edu/SSI-Media/Recent-Publications/Article/3941220/the-european-campaign-its-origins-and-conduct/
loc2582-85:
The bomber preparation of Omaha Beach was a total failure, and German defenses on Omaha Beach were intact as American troops came ashore. At Utah Beach, the bombers were a little more effective because the IXth Bomber Command was using B-26 medium bombers. Wisely, in preparation for supporting the invasion, maintenance crews removed Norden bombsights from the bombers and installed the more effective low-level altitude sights.
... snip ...

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

posts mentioning "secret war in laos"
https://www.garlic.com/~lynn/2021c.html#80 The Victims of Agent Orange the U.S. Has Never Acknowledged
https://www.garlic.com/~lynn/2019e.html#98 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2016h.html#32 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#21 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016d.html#90 "Computer & Automation" later issues--anti-establishment thrust
https://www.garlic.com/~lynn/2016d.html#82 "Computer & Automation" later issues--anti-establishment thrust
https://www.garlic.com/~lynn/2016d.html#30 AM radio Qbasic
https://www.garlic.com/~lynn/2016d.html#8 What Does School Really Teach Children
https://www.garlic.com/~lynn/2015f.html#77 1973--TI 8 digit electric calculator--$99.95

posts mentioning "the marshall plan"
https://www.garlic.com/~lynn/2021d.html#61 Google: US-EU tech trade is 'fraying' and we need a new council to save it
https://www.garlic.com/~lynn/2021d.html#59 WW2 Strategic Bombing
https://www.garlic.com/~lynn/2021d.html#29 The Shape of Things to Come: Why the Pentagon Must Embrace Soft Power to Compete with China
https://www.garlic.com/~lynn/2021d.html#21 History Has Never Deterred the U.S. Military
https://www.garlic.com/~lynn/2018c.html#82 The Redacted Testimony That Fully Explains Why General MacArthur Was Fired

posts "the european campaign"
https://www.garlic.com/~lynn/2024c.html#108 D-Day
https://www.garlic.com/~lynn/2022f.html#102 The Wehrmacht's Last Stand: The German Campaigns of 1944-1945
https://www.garlic.com/~lynn/2021k.html#25 Twelve O'clock High at IBM Training
https://www.garlic.com/~lynn/2021k.html#2 Who Knew ?
https://www.garlic.com/~lynn/2021i.html#60 How Did America's Sherman Tank Win against Superior German Tanks in World War II?
https://www.garlic.com/~lynn/2021d.html#59 WW2 Strategic Bombing
https://www.garlic.com/~lynn/2019e.html#79 Collins radio and Braniff Airways 1945
https://www.garlic.com/~lynn/2019d.html#92 The War Was Won Before Hiroshima--And the Generals Who Dropped the Bomb Knew It
https://www.garlic.com/~lynn/2019d.html#45 Sand and Steel
https://www.garlic.com/~lynn/2019c.html#69 The Forever War Is So Normalized That Opposing It Is "Isolationism"
https://www.garlic.com/~lynn/2019c.html#26 D-Day And The Myth That The U.S. Defeated The Nazis
https://www.garlic.com/~lynn/2018e.html#70 meanwhile in eastern Asia^WEurope, was tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018d.html#101 The Persistent Myth of U.S. Precision Bombing
https://www.garlic.com/~lynn/2018c.html#66 off topic 1952 B-52 ad
https://www.garlic.com/~lynn/2018c.html#45 Counterinsurgency Lessons from Malaya and Vietnam: Learning to Eat Soup with a Knife
https://www.garlic.com/~lynn/2018c.html#22 Historical Perspectives of the Operational Art
https://www.garlic.com/~lynn/2018b.html#89 The US destroyed Tokyo 73 years ago in the deadliest air raid in history
https://www.garlic.com/~lynn/2018.html#48 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2017j.html#47 America's Over-Hyped Strategic Bombing Experiment
https://www.garlic.com/~lynn/2017j.html#21 Norden bombsight
https://www.garlic.com/~lynn/2017h.html#34 Disregard post (another screwup; absolutely nothing to do with computers whatsoever!)
https://www.garlic.com/~lynn/2017h.html#3 Dunkirk
https://www.garlic.com/~lynn/2017g.html#99 The Real Reason You Should See Dunkirk: Hitler Lost World War II There
https://www.garlic.com/~lynn/2017g.html#53 Dunkirk
https://www.garlic.com/~lynn/2016h.html#80 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#34 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#33 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016g.html#63 America's Over-Hyped Strategic Bombing Experiment
https://www.garlic.com/~lynn/2016g.html#24 US Air Power
https://www.garlic.com/~lynn/2016f.html#64 Strategic Bombing
https://www.garlic.com/~lynn/2016f.html#56 "One Nation Under God: How Corporate America Invented Christian America"
https://www.garlic.com/~lynn/2016e.html#117 E.R. Burroughs
https://www.garlic.com/~lynn/2016d.html#88 "Computer & Automation" later issues--anti-establishment thrust
https://www.garlic.com/~lynn/2015h.html#120 For those who like to regress to their youth? :-)
https://www.garlic.com/~lynn/2015d.html#13 Fully Restored WWII Fighter Plane Up for Auction
https://www.garlic.com/~lynn/2015c.html#64 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015c.html#62 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015c.html#61 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015c.html#60 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015b.html#85 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015b.html#84 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015b.html#70 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015b.html#69 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015b.html#68 Why do we have wars?
https://www.garlic.com/~lynn/2015b.html#53 IBM Data Processing Center and Pi
https://www.garlic.com/~lynn/2015b.html#52 IBM Data Processing Center and Pi

--
virtualization experience starting Jan1968, online at home since Mar1970

HSDT, SNA, VTAM, NCP

From: Lynn Wheeler <lynn@garlic.com>
Subject: HSDT, SNA, VTAM, NCP
Date: 08 May, 2025
Blog: Facebook
My wife was co-author of AWP39 in the same time-frame as SNA/NCP ... but had to qualify the title with "peer-to-peer", i.e. "Peer-to-Peer Networking" (because SNA had misused "network"). Later, Bob Evans asked my wife to review 8100/DPPX ... shortly afterwards, the 8100 was decommitted.

Early 80s, I got HSDT, T1 and faster computer links (both terrestrial and satellite) and battles with the communication group (in the 60s, IBM had the 2701 supporting T1, but with IBM's transition to SNA in the 70s, controllers were capped at 56kbit). Part of my funding was dependent on showing some IBM content (otherwise it would all be non-IBM). I eventually found the FSD S/1 Zirpel T1 card (special bid for gov customers that had failing 2701s).

Friday (15Feb1985), before a business trip to the other side of the Pacific to see some custom hardware being built for HSDT, I got email from Raleigh announcing a new online forum about computer links, with definitions:
low-speed: 9.6kbits/sec
medium-speed: 19.2kbits/sec
high-speed: 56kbits/sec
very high-speed: 1.5mbits/sec

Monday morning, on a conference room wall on the other side of the Pacific:
low-speed: <20mbits/sec
medium-speed: 100mbits/sec
high-speed: 200mbits-300mbits/sec
very high-speed: >600mbits/sec

trivia: Late 70s and early 80s, I had been blamed for online computer conferencing (precursor to modern social media) on the internal network. It really took off in the spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem (only about 300 directly participated, but claims were that 25,000 were reading; folklore is that when the corporate executive committee was told, five of six wanted to fire me).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some recent posts mentioning AWP39 & 8100
https://www.garlic.com/~lynn/2025.html#54 Multics vs Unix
https://www.garlic.com/~lynn/2024d.html#69 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#30 ACP/TPF
https://www.garlic.com/~lynn/2024.html#84 SNA/VTAM
https://www.garlic.com/~lynn/2023f.html#40 Rise and Fall of IBM
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2021h.html#90 IBM Internal network
https://www.garlic.com/~lynn/2018b.html#13 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017e.html#62 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016d.html#48 PL/I advertising
https://www.garlic.com/~lynn/2015h.html#99 Systems thinking--still in short supply
https://www.garlic.com/~lynn/2013n.html#19 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013g.html#44 What Makes code storage management so cool?
https://www.garlic.com/~lynn/2012o.html#52 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012m.html#24 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design
https://www.garlic.com/~lynn/2012i.html#25 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011n.html#2 Soups
https://www.garlic.com/~lynn/2011l.html#26 computer bootlaces
https://www.garlic.com/~lynn/2010q.html#73 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2009q.html#83 Small Server Mob Advantage
https://www.garlic.com/~lynn/2009i.html#26 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009e.html#56 When did "client server" become part of the language?

--
virtualization experience starting Jan1968, online at home since Mar1970

MVT to VS2/SVS

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVT to VS2/SVS
Date: 09 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#95 MVT to VS2/SVS

other trivia: At the univ., student fortran jobs ran in less than a second on the 709 (tape->tape; the 1401, then 360/30, MPIO was the 709's reader/printer/punch front end). With the initial move to the 360/67 (run as a 360/65) with os/360, student fortran ran well over a minute. I install HASP, cutting the time in half. Then I start doing customized STAGE2 SYSGENs, carefully ordering datasets and PDS members to optimize arm seek and multi-track search, cutting the time by another 2/3rds to 12.9sec.

PTFs replacing PDS members were a downside, destroying the careful ordering ... as student fortran jobs started creeping back up towards 20secs, I would do a partial re-SYSGEN (to restore the ordering) and get back to 12.9secs. Student Fortran never got better than the 709 until I install Univ. of Waterloo WATFOR.

Then CSC comes out to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly play with it during my 48hr weekend dedicated time. Initially I concentrate on reducing pathlengths for running OS/360 in a virtual machine. A test stream ran 322secs on the real machine and initially 856secs in a virtual machine (CP67 CPU 534secs); after a couple months I have reduced the CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, scheduler, and paging, adding ordered seek queuing (replacing FIFO) and multi-page transfer channel programs (replacing FIFO single 4k transfers) optimized for transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a peak of 270.
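
A minimal sketch of the general ordered-seek (elevator/SCAN) idea versus FIFO, a generic illustration only (not the CP67 code); the cylinder numbers below are arbitrary:

# Minimal sketch of ordered seek queueing (elevator/SCAN) versus FIFO.
# Requests are served in cylinder order in the direction the arm is already
# moving, instead of in arrival order, cutting total arm travel.

def fifo_travel(start, requests):
    travel, pos = 0, start
    for cyl in requests:               # service strictly in arrival order
        travel += abs(cyl - pos)
        pos = cyl
    return travel

def elevator_travel(start, requests):
    travel, pos = 0, start
    up = sorted(c for c in requests if c >= start)              # sweep outward
    down = sorted((c for c in requests if c < start), reverse=True)
    for cyl in up + down:              # then reverse direction once
        travel += abs(cyl - pos)
        pos = cyl
    return travel

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # pending cylinder requests
print(fifo_travel(53, queue))       # 640 cylinders of arm movement
print(elevator_travel(53, queue))   # 299 cylinders of arm movement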

The univ. library then got an ONR grant for an online catalog and used part of the money for a 2321; it was also selected as betatest site for the original CICS product, and CICS support was added to my tasks. Initially CICS wouldn't come up; it turned out CICS had some undocumented, hard-coded BDAM options and the univ had created datasets with a different set of options ... so CICS was failing on initial file open.

CICS and WATFOR were similar in trying to minimize use of OS/360 system services (extremely heavy-weight and expensive), doing everything possible at startup ... and then implementing their own light-weight services while running.

HASP, ASP, JES2, JES3 posts
https://www.garlic.com/~lynn/submain.html#hasp
CSC (& CP/67) posts
https://www.garlic.com/~lynn/subtopic.html#545tech
BDAM &/or CICS posts
https://www.garlic.com/~lynn/submain.html#cics

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370/CMS and MVS/TSO

From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370/CMS and MVS/TSO
Date: 09 May, 2025
Blog: Facebook
recent MVS song reference
https://www.garlic.com/~lynn/2025b.html#115 SHARE, MVT, MVS, TSO

Early in MVS, CERN did a study comparing VM370/CMS and MVS/TSO and presented the analysis in a paper given at SHARE. Inside IBM, copies of the SHARE paper were stamped "IBM Confidential - Restricted" (2nd highest security classification, available on a need-to-know basis only).

EDGAR was the 1st CMS 3270 fullscreen editor and started a religious war with the other CMS editors ... do the up/down commands logically move the file up/down across the screen ("up" moving the display towards the end of the file), or move the 3270 "window" up/down across the file ("up" moving the display towards the beginning of the file)?
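
A toy sketch of the two conventions (hypothetical, not EDGAR itself), treating the display as a fixed-height window positioned at line "top" of the file:

# Toy illustration of the two "up" conventions (not EDGAR itself):
# a file of nlines viewed through a fixed-size screen window starting at 'top'.

def up_moves_file(top, amount, nlines, height):
    # "up" moves the FILE up across the screen: the display shows lines
    # closer to the END of the file.
    return min(top + amount, max(nlines - height, 0))

def up_moves_window(top, amount, nlines, height):
    # "up" moves the WINDOW up across the file: the display shows lines
    # closer to the BEGINNING of the file.
    return max(top - amount, 0)

print(up_moves_file(100, 20, nlines=1000, height=24))    # 120
print(up_moves_window(100, 20, nlines=1000, height=24))  # 80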

EDGAR and edit religious wars
https://www.garlic.com/~lynn/2011k.html#7 History of user model for scrolling
https://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
https://www.garlic.com/~lynn/2001m.html#22 When did full-screen come to VM/370?
https://www.garlic.com/~lynn/2001k.html#44 3270 protocol

posts mentioning the CERN SHARE paper
https://www.garlic.com/~lynn/2024b.html#108 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#107 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#90 IBM User Group Share
https://www.garlic.com/~lynn/2024b.html#65 MVT/SVS/MVS/MVS.XA
https://www.garlic.com/~lynn/2024.html#109 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023e.html#66 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2022h.html#69 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022h.html#39 IBM Teddy Bear
https://www.garlic.com/~lynn/2022g.html#56 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#101 IBM 4300, VS1, VM370
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2020.html#28 50 years online at home
https://www.garlic.com/~lynn/2015.html#87 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2014b.html#105 Happy 50th Birthday to the IBM Cambridge Scientific Center
https://www.garlic.com/~lynn/2010q.html#34 VMSHARE Archives

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370/CMS and MVS/TSO

From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370/CMS and MVS/TSO
Date: 09 May, 2025
Blog: Facebook
re
https://www.garlic.com/~lynn/2025b.html#115 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#117 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#122 VM370/CMS and MVS/TSO

"wheeler scheduler", yes, long-winded ... as undergraduate in 60s, I did lots of CP67/CMS changes including dynamic adaptive resource management (wheeler/fairshare scheduler) ... which got shipped in standard CP67. 23jun1969, IBM unbundling announcement, starting to charge for (application) software (but made case kernel software still free), SE services, maint. etc. after graduating joined IBM science center and one of my hobbies was enhanced production operating systems for internal datacenters (there internal, online sales and marketing support HONE systems were early & longtime customer, eventually HONE clones show up all over the world and requirement that customer system orders be 1st run through HONE configurators). Decision to add virtual memory to all 370s, some of IBM science center splits off and takes over the IBM Boston Programming Center on 3rd floor for VM370 development group. In the morph from CP67->VM370 lots of stuff was dropped (including much of stuff I had done as undergraduate and multiprocessing support) and/or greatly simplified.

I had done an automated benchmarking system for CP67 with lots of support for different workloads and configurations. I start by moving automated benchmarking from CP67 to VM370R2 (to get a baseline before moving the rest of the stuff) ... however, VM370 consistently crashed w/o completing, and I next needed to move the CP67 kernel serialization and integrity stuff to VM370 (in order to get reliable baseline performance) before moving lots of other stuff (like the wheeler scheduler) for my internal CSC/VM (and SHARE had a resolution that my internal CSC/VM scheduler be released to customers). Then for the VM370R3-based CSC/VM I start by adding in multiprocessor support, initially for US HONE so they could add a 2nd processor to each of their systems (which started getting twice the throughput of the single-processor configuration).

This was during the Future System period (I continued to work on CP67 and VM370 all during FS, including periodically ridiculing what they were doing), which was going to completely replace 370 (internal politics was killing off 370 efforts, and the lack of new 370s during the period is credited with giving the clone 370 makers their market foothold). Then when Future System implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the 3033&3081 in parallel. Other FS details:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

Possibly because of the rise of the 370 clone makers, there was a decision to start charging for kernel software (starting with incremental add-ons and transitioning to everything in the 80s) ... and my internal CSC/VM scheduler was selected as an early guinea pig (and I include a bunch of other stuff, including the kernel re-org for multiprocessor support, but not the actual support itself). A corporate performance expert reviewed the package and said he wouldn't sign off on its release because it didn't have any manual tuning knobs (which he considered state-of-the-art, especially since MVS SRM had a large array of tuning parameters). I tried to explain dynamic adaptive, but it fell on deaf ears. I then add a few "SRM" parameters (to get release sign-off), fully documented with formulas and source, but ridiculing MVS SRM: from Operations Research, the SRM parameters have fewer degrees of freedom than the dynamic adaptive values in the formulas (the non-SRM pieces are packaged as "(DMK)STP", taken from TV commercials of the period). Before release, I run 2000 automated benchmarks that take 3 months elapsed, with a wide range of workloads and configurations (to validate dynamic adaptive operation) ... the "Resource Manager" eventually ships with the VM370R3PLC9 base.
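
For flavor, a minimal sketch of the dynamic adaptive / fair-share idea (a generic illustration under my own assumptions, not the actual scheduler): dispatch priorities fall out of measured consumption against a share target, recomputed continuously, rather than from manually set tuning knobs.

# Minimal sketch of dynamic adaptive / fair-share prioritization (generic
# illustration only, not the CP67/VM370 "wheeler scheduler"): each user's
# dispatch priority is derived from measured recent CPU consumption relative
# to a fair-share target, so the system adapts continuously instead of
# relying on manually set tuning knobs.

def fair_share_priorities(recent_cpu, shares):
    """recent_cpu: measured CPU fraction per user over the last interval.
    shares: relative entitlement per user.
    Returns users ordered best-priority-first (lower ratio = more deserving)."""
    ratio = {u: recent_cpu.get(u, 0.0) / shares.get(u, 1.0) for u in shares}
    return sorted(shares, key=lambda u: ratio[u])

measured = {"userA": 0.40, "userB": 0.10, "userC": 0.25}   # last interval
entitle  = {"userA": 1.0,  "userB": 1.0,  "userC": 2.0}    # relative shares
print(fair_share_priorities(measured, entitle))
# ['userB', 'userC', 'userA'] -> userB has used least of its share, runs first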

This ran into a little problem. IBM wanted to ship multiprocessor support in VM370R4, but a kernel software charging rule was that hardware support is (still) free (and can't require charged-for software as a prereq) ... and VM370R4 multiprocessor support was dependent on the kernel reorganization in the (charged-for) "Resource Manager". The eventual decision was to move around 90% of the code in the "Resource Manager" into the free VM370R4 base, as part of being able to release multiprocessor support.

While all this was going on, the head of POK was lobbying corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission, but had to reconstitute a development group from scratch).

dynamic adaptive resource management, fairshare posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
23jun1969 unbundling announcement posts
https://www.garlic.com/~lynn/submain.html#unbundle
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67l, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

MOSAIC

From: Lynn Wheeler <lynn@garlic.com>
Subject: MOSAIC
Date: 10 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025b.html#113 CERN WWW, Browsers and Internet

... 1981, I got the HSDT project, T1 and faster computer links, and lots of conflict with the corporate communication products group (note: in the 60s, IBM had the 2701 telecommunication controller that supported T1; then with the move to SNA/VTAM and associated issues, controllers were capped at 56kbits/sec). HSDT was working with the NSF director and was supposed to get $20M to interconnect the NSF Supercomputer centers. Then congress cuts the budget, some other things happen, and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

SLAC server
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
1993 February: A new X browser called Mosaic is released by The National Center for Supercomputing Applications (NCSA). It has many of the features of MidasWWW and the support of a large organization. With the availability and widespread adoption of Mosaic, Web use starts to gain momentum...

Major recipient of NSF "New Technologies Program" was NCSA,
https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
and then more funding
https://en.wikipedia.org/wiki/High_Performance_Computing_Act_of_1991
doing
https://en.wikipedia.org/wiki/NCSA_Mosaic

1988, we got the IBM HA/6000 product (development & marketing) and I rename it HA/CMP when we start doing scientific/technical cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) ... planning on 128-system clusters by ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer and we are told we can't work on anything with more than four processors ... and we leave IBM a few months later.

Some of the NCSA people move to silicon valley and form MOSAIC Corp ... NCSA complains about the use of "MOSAIC" and they change the name to NETSCAPE (getting the rights to the name from a silicon valley router vendor). Two of the former Oracle employees (that we had worked with on HA/CMP cluster scale-up) are there, responsible for something they called "commerce server", and they want to do payment transactions on the server; the startup had also invented technology they call "SSL" that they want to use. I'm brought in as consultant responsible for everything between webservers and the financial industry payment networks. It is now frequently called "electronic commerce".

Jan1996, at MSDC in Moscone center, there were "Internet" banners everywhere ... but the constant refrain in every session was "protect your investment" ... aka automatic execution of visual basic in data files (including email).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
payment network web gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

some other "High_Performance_Computing_Act_of_1991" posts
https://www.garlic.com/~lynn/2023g.html#67 Waiting for the reference to Algores creation documents/where to find- what to ask for
https://www.garlic.com/~lynn/2023g.html#25 Vintage Cray
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2022h.html#12 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#3 AL Gore Invented The Internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM z17

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM z17
Date: 10 May, 2025
Blog: Facebook

z900, 16 cores, 2.5BIPS (156MIPS/core), Dec2000
z990, 32 cores, 9BIPS, (281MIPS/core), 2003
z9, 54 cores, 18BIPS (333MIPS/core), July2005
z10, 64 cores, 30BIPS (469MIPS/core), Feb2008
z196, 80 cores, 50BIPS (625MIPS/core), Jul2010
EC12, 101 cores, 75BIPS (743MIPS/core), Aug2012
z13, 140 cores, 100BIPS (710MIPS/core), Jan2015
z14, 170 cores, 150BIPS (862MIPS/core), Aug2017
z15, 190 cores, 190BIPS (1000MIPS/core), Sep2019
z16, 200 cores, 222BIPS (1111MIPS/core), Sep2022
z17, 208 cores, 260BIPS* (1250MIPS/core), Jun2025

... earlier mainframe numbers are from actual industry benchmarks (number of program iterations compared to a reference platform); more recent numbers are inferred from IBM pubs giving throughput compared to the previous generation; *"z17 18% over z16" (and z17 core/single-thread 1.12 times z16). Note: a 2010-era E5-2600 server blade (two 8-core XEON chips) was 500BIPS (30BIPS/core) on the same industry benchmark. Also 1999, Pentium3 (single-core chip): 2BIPS.

The IBM z17: A New Era of Mainframe Innovation
https://hyperframeresearch.com/2025/04/11/unleashing-the-power-of-ibm-z17-a-technical-deep-dive-for-the-modern-enterprise/
The IBM z17, powered by the Telum II processor and complemented by the Spyre AI Accelerator, represents a significant leap forward for the IBM Z family. Unveiled at the Hotchips 2024 Conference, the Telum II processor is a marvel of modern engineering, boasting 43 billion transistors across 24 miles of wire, fabricated using Samsung's 5nm High-Performance Process (HPP). With eight 5.5 GHz cores per processor and a robust 36 MB L2 cache (a 40% increase over the z16), the Telum II delivers exceptional single-thread performance, up 11% compared to its predecessor, while supporting up to 2,500 MIPS per processor.

• Customer Cores: Up to 208, providing 15-20% capacity growth over the z16.
• Single-Thread Performance: 11% improvement over z16, driven by the Telum II's advanced architecture.


... snip ...

... "2,500 MIPS/processor" is that aggregate per 8-core chip?, the industry benchmark (using inferred numbers from previous IBM systems that ran actual benchmark) is 1250MIPS/core and aggregate 10BIPS/chip

and bunch of AMD threadripper numbers
https://gamersnexus.net/cpus/amds-cheap-threadripper-hedt-cpu-7960x-24-core-cpu-review-benchmarks
In compression, the 7980X sets the ceiling at 393K MIPS, followed closely by the 7970X at 352K MIPS, and then the 7960X at 288K MIPS. The 7970X leads the 7960X by about 22%, with the 7960X leading the 7950X non-3D by 50% (which is near the 14900K). That's a big gap between this lower-end HEDT part and the best desktop parts.
... snip ...

... aka 7980X chip (64-core) at 393BIPS for a "compression" benchmark (6BIPS/core)

disclaimer: while there are common IOPS benchmarks for both mainframes and i86 systems ... you have to go back more than a decade to see the same CPU benchmark run on both mainframes and i86, and while I've followed IBM pubs for statements about throughput compared to the previous generation, I don't have similar numbers for i86. The industry MIPS benchmark wasn't an actual instruction count, but derived from the number of program iterations compared to the same reference platform.

recent posts with CPU "benchmark" numbers up through z16
https://www.garlic.com/~lynn/2025.html#119 Consumer and Commercial Computers
https://www.garlic.com/~lynn/2025.html#74 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#21 Virtual Machine History
https://www.garlic.com/~lynn/2024e.html#130 Scalable Computing
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#97 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#85 Vintage DASD
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe
https://www.garlic.com/~lynn/2023d.html#47 AI Scale-up

recent posts with IOPS numbers
https://www.garlic.com/~lynn/2025b.html#111 System Throughput and Availability II
https://www.garlic.com/~lynn/2025b.html#110 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#109 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#78 IBM Downturn
https://www.garlic.com/~lynn/2025b.html#65 Supercomputer Datacenters
https://www.garlic.com/~lynn/2025b.html#48 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#18 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025.html#117 Consumer and Commercial Computers
https://www.garlic.com/~lynn/2025.html#86 Big Iron Throughput
https://www.garlic.com/~lynn/2025.html#81 IBM Bus&TAG Cables
https://www.garlic.com/~lynn/2025.html#80 IBM Bus&TAG Cables
https://www.garlic.com/~lynn/2025.html#78 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#76 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#37 IBM Mainframe
https://www.garlic.com/~lynn/2025.html#28 IBM 3090
https://www.garlic.com/~lynn/2025.html#24 IBM Mainframe Comparison
https://www.garlic.com/~lynn/2025.html#18 Thin-film Disk Heads
https://www.garlic.com/~lynn/2025.html#17 On-demand Supercomputer

--
virtualization experience starting Jan1968, online at home since Mar1970

