List of Archived Posts

2024 Newsgroup Postings (01/01 - 02/21)

Recent Private Equity News
How IBM Stumbled onto RISC
How the 'Visionaries' of Silicon Valley Mean Profits Are Prioritised over True Technological Progress
How IBM Stumbled onto RISC
IBM/PC History
How IBM Stumbled onto RISC
More IBM Downfall
How IBM Stumbled onto RISC
Niklaus Wirth 15feb1934 - 1jan2024
How IBM Stumbled onto RISC
GOP Rep. Says Quiet Part Out Loud About Rejecting Border Deal
How IBM Stumbled onto RISC
THE RISE OF UNIX. THE SEEDS OF ITS FALL
THE RISE OF UNIX. THE SEEDS OF ITS FALL
THE RISE OF UNIX. THE SEEDS OF ITS FALL
THE RISE OF UNIX. THE SEEDS OF ITS FALL
Billionaires Are Hoarding Trillions in Untaxed Wealth
IBM Embraces Virtual Memory -- Finally
IBM Embraces Virtual Memory -- Finally
Huge Number of Migrants Highlights Border Crisis
How IBM Stumbled onto RISC
1975: VM/370 and CMS Demo
1975: VM/370 and CMS Demo
The Greatest Capitalist Who Ever Lived
Tomasulo at IBM
1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
HASP, ASP, JES2, JES3
IBM Disks and Drums
IBM Disks and Drums
IBM Disks and Drums
MIT Area Computing
IBM Jargon: FOILS
RS/6000 Mainframe
RS/6000 Mainframe
RS/6000 Mainframe
RS/6000 Mainframe
RS/6000 Mainframe
RS/6000 Mainframe
Card Sequence Numbers
UNIX, MULTICS, CTSS, CSC, CP67
RS/6000 Mainframe
Los Gatos Lab, Calma, 3277GA
Univ, Boeing Renton and "Spook Base"
RS/6000 Mainframe
Hospitals owned by private equity are harming patients
RS/6000 Mainframe
3330, 3340, 3350, 3380
VAX MIPS whatever they were, indirection in old architectures
Card Sequence Numbers
Slow MVS/TSO
VAX MIPS whatever they were, indirection in old architectures
RS/6000 Mainframe
Happy Birthday John Boyd!
RS/6000 Mainframe
EARN 40th Anniversary Conference
Did Stock Buybacks Knock the Bolts Out of Boeing?
EARN 40th Anniversary Conference
Sales of US-Made Guns and Weapons, Including US Army-Issued Ones, Are Under Spotlight in Mexico Again
RUNOFF, SCRIPT, GML, SGML, HTML
IOS3270 Green Card and DUMPRX
VM Microcode Assist
VM Microcode Assist
VM Microcode Assist
IBM 4300s
IBM Mainframes and Education Infrastructure
Further Discussion of Boeing's Slow Motion Liquidation: Rational Given Probable Airline Industry Shrinkage?
VM Microcode Assist
IBM 3270
NIH National Library Of Medicine
IBM AIX
IBM AIX
IBM AIX
UNIX, MULTICS, CTSS, CSC, CP67
UNIX, MULTICS, CTSS, CSC, CP67
Slow MVS/TSO
IBM 3270
Boeing's Shift from Engineering Excellence to Profit-Driven Culture: Tracing the Impact of the McDonnell Douglas Merger on the 737 Max Crisis
Mainframe Performance Optimization
Benchmarks
RS/6000 Mainframe
Benchmarks
Benchmarks
SNA/VTAM
SNA/VTAM
RS/6000 Mainframe
RS/6000 Mainframe
IBM 360
IBM 360
IBM 360
IBM, Unix, editors
IBM, Unix, editors
IBM, Unix, editors
IBM, Unix, editors
MVS SRM
IBM AIX
IBM, Unix, editors
IBM, Unix, editors
Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
A Look at Private Equity's Medicare Advantage Grifting
Multicians
Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
EBCDIC Card Punch Format
Multicians
Multicians
IBM, Unix, editors
IBM, Unix, editors
Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
IBM, Unix, editors
IBM User Group SHARE
IBM User Group SHARE
IBM User Group SHARE
IBM User Group SHARE
Cobol
BAL
Boeing is a wake-up call
IBM's Unbundling
IBM Downfall
Transfer SJR to YKT
Transfer SJR to YKT
The Greatest Capitalist Who Ever Lived
IBM VM/370 and VM/XA
Assembler language and code optimization

Recent Private Equity News

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Recent Private Equity News
Date: 01 Jan, 2024
Blog: Facebook
The Pentagon Road to Venture Capital
https://www.nytimes.com/2023/12/30/us/politics/the-pentagon-road-to-venture-capital.html
'This Can't Go On for Much Longer.' Private Equity's Deal Lament
https://www.wsj.com/finance/investing/this-cant-go-on-for-much-longer-private-equitys-deal-lament-493a4bbb
Hospital Adverse Events Rise Post Private Equity Acquisition
https://www.medscape.com/viewarticle/hospital-adverse-events-rise-after-private-equity-2023a1000wya

Originally, gov. corporate charters were for serving the people and society ... when their purpose was done, they were dissolved and replaced by new ones. The problem was that they could become interested only in serving themselves and maintaining the status quo.

False Profits: Reviving the Corporation's Public Purpose
https://www.uclalawreview.org/false-profits-reviving-the-corporations-public-purpose/
I. Origins of the Corporation. Although the corporate structure dates back as far as the Greek and Roman Empires, characteristics of the modern corporation began to appear in England in the mid-thirteenth century.[4] "Merchant guilds" were loose organizations of merchants "governed through a council somewhat akin to a board of directors," and organized to "achieve a common purpose"[5] that was public in nature. Indeed, merchant guilds registered with the state and were approved only if they were "serving national purposes."[6]

... snip ...

... however there has been significant pressure to give corporate charters to entities operating in self-interest ... followed by extending constitutional "people" rights to corporations. The supreme court was scammed into extending 14th amendment rights to corporations (with faux claims that this was what the original authors had intended).
https://www.amazon.com/We-Corporations-American-Businesses-Rights-ebook/dp/B01M64LRDJ/
pgxiv/loc74-78:
Between 1868, when the amendment was ratified, and 1912, when a scholar set out to identify every Fourteenth Amendment case heard by the Supreme Court, the justices decided 28 cases dealing with the rights of African Americans--and an astonishing 312 cases dealing with the rights of corporations.

... snip ...

The Price of Inequality: How Today's Divided Society Endangers Our Future
https://www.amazon.com/Price-Inequality-Divided-Society-Endangers-ebook/dp/B007MKCQ30/
pg35/loc1169-73:
In business school we teach students how to recognize, and create, barriers to competition -- including barriers to entry -- that help ensure that profits won't be eroded. Indeed, as we shall shortly see, some of the most important innovations in business in the last three decades have centered not on making the economy more efficient but on how better to ensure monopoly power or how better to circumvent government regulations intended to align social returns and private rewards

... snip ...

How Economists Turned Corporations into Predators
https://www.nakedcapitalism.com/2017/10/economists-turned-corporations-predators.html
Since the 1980s, business schools have touted "agency theory," a controversial set of ideas meant to explain how corporations best operate. Proponents say that you run a business with the goal of channeling money to shareholders instead of, say, creating great products or making any efforts at socially responsible actions such as taking account of climate change.

... snip ...

A Short History Of Corporations
https://newint.org/features/2002/07/05/history
After Independence, American corporations, like the British companies before them, were chartered to perform specific public functions - digging canals, building bridges. Their charters lasted between 10 and 40 years, often requiring the termination of the corporation on completion of a specific task, setting limits on commercial interests and prohibiting any corporate participation in the political process.

... snip ...

... a residual of that is current law that funds/payments from government contracts can't be used for lobbying. After the turn of the century there was a huge upswing in private equity buying up beltway bandits and government contractors; PE owners then transfer every cent possible to their own pockets, which can be used to hire prominent politicians to lobby congress (including "contributions") to give contracts to their owned companies (resulting in a huge increase in gov. outsourcing to private companies) ... it can snowball since gov. agencies aren't allowed to lobby (contributing to claims that congress is the most corrupt institution on earth)
http://www.motherjones.com/politics/2007/10/barbarians-capitol-private-equity-public-enemy/
"Lou Gerstner, former ceo of ibm, now heads the Carlyle Group, a Washington-based global private equity firm whose 2006 revenues of $87 billion were just a few billion below ibm's. Carlyle has boasted George H.W. Bush, George W. Bush, and former Secretary of State James Baker III on its employee roster."

... snip ...

... also promoting the Success of Failure culture (especially in the military/intelligence-industrial complex)
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
military(/intelligence/congressional/)-industrial complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
success of failure complex posts
https://www.garlic.com/~lynn/submisc.html#success.of.failure
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pushing money up to PE owners where it then can be used for lobbying might be considered a form of "money laundering"
https://www.garlic.com/~lynn/submisc.html#money.laundering

--
virtualization experience starting Jan1968, online at home since Mar1970

How IBM Stumbled onto RISC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: How IBM Stumbled onto RISC
Date: 02 Jan, 2024
Blog: Facebook
How IBM Stumbled onto RISC
https://hackaday.com/2023/12/31/how-ibm-stumbled-onto-risc/

At an ad-tech conference, we presented the 16-way SMP 370 and the 801 group presented 801/RISC. Then in 1980, there was an effort to change from a wide variety of CISC microprocessors to 801/risc (the 16-bit Iliad chip) for the next low/mid-range 370s, controllers, S/38 follow-on, etc ... in part to converge on a single microprocessor programming language, PL.8 (from a different microprogramming language for each custom microprocessor). For various reasons the efforts floundered ... and several risc chip engineers left for risc efforts at other vendors. The survivor was ROMP, originally for the displaywriter follow-on; when that was canceled, there was a decision to retarget for the unix workstation market, and they got the company that had done the AT&T unix port to IBM/PC as PC/IX to do one for ROMP ... becomes PC/RT and AIX (although they had to find something to do with some 200 PL.8 programmers)

Los Gatos lab was working on the 1st 32bit "Blue Iliad" ... a large, hot chip that never came to production (I had an 84/85 proposal to cram a large number of them into a single rack, with multiple racks).

The ROMP follow-on was the six-chip 32bit RIOS for RS/6000. We were then doing HA/CMP ... it originally started out as HA/6000 for NYTimes to port their newspaper system (ATEX) from VAXCluster to RS/6000; I rename it HA/CMP when I start doing technical/scientific cluster scale-up (i.e. cram as many RS/6000s as possible into racks) with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix). The executive we reported to then went over to head up Somerset (single-chip 801) for AIM (Apple, IBM, Motorola), eventually with 64-bit also; AS/400 finally does move to RISC (Apple moves from Motorola to POWER RISC, then Intel, then back to another risc, ARM).

Early Jan1992, we had a meeting with the Oracle CEO, and AWD/Hester told Oracle that we would have 16-way clusters by mid92 and 128-way clusters by YE92. During Jan1992 I was bringing FSD up to date on what was going on with the national labs. At the end of Jan, FSD told the IBM Kingston supercomputer group they were going with HA/CMP. Within possibly hrs, cluster scale-up is transferred to IBM Kingston for announce as IBM supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than four processors.

"Iliad" was 16-bit 801/risc ... that was going to be used for things like microprocessor for 4361&4381 (follow-on for 4331/4341). Los Gatos lab was working on 32bit "Blue Iliad" (1st 32bit 801/risc) ... that was really large & hot chip that never came to production.

I had part of a wing at Los Gatos ... offices and labs ... used for HSDT (T1 and faster computer links) ... and also eventually installed a custom-designed TDMA sat. system with three nodes: 4.5M dishes at Los Gatos and Yorktown and a 7M dish at Austin. Los Gatos had done LSM ... the first hardware chip logic design system. Then Endicott did the larger EVE ... and there was one in disk engineering. Claim is that being able to do fast turn-around between Austin and hardware verification (over the sat. network) helped bring in the RIOS (RS/6000) chipset a year early.

It was funny ... I had been transferred from SJR to YKT for various misdeeds (including responsible for online computer conferencing, precursor to modern social media) ... but continued to live on west coast ... commuting to YKT a couple times a month along w/offices in SJR (later Almaden) and Los Gatos.

One of the primary people that worked on Iliad was out on the west coast as the 801/risc effort was imploding. He then gave notice that he was leaving for HP labs (and spent his last two IBM weeks in Los Gatos on "Blue Iliad"). I then was getting worried email from the east coast asking whether I was going to join him (at the time, the head of HP Labs had previously been head of YKT CS). He had previously also ported part of 370/xa access registers to 3033 for dual-address mode ... trying to mitigate the pending 3033 MVS CSA disaster of taking over all of the application 16mbyte virtual address space. He was also later one of the major Itanium architects.

Spring of 1975 (as Future System was imploding), I was con'ed by Endicott into working on ECPS for Virgil/Tully (138/148, also used later for 4331/4341). They wanted native microcode for high-use parts of VM370. They said they had 6kbytes of available microcode space and that 370 operating system instructions would mostly translate nearly byte-for-byte (and about the same number of instructions). I was to identify the highest-executed paths for translating to microcode (with approx 10:1 performance increase). Following is an archived post with the initial analysis (the 6k bytes of highest-executed kernel 370 instructions accounted for approx. 80% of kernel execution):
https://www.garlic.com/~lynn/94.html#21
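
For illustration only (not the original analysis, and with made-up profile data), the selection problem amounts to: sort kernel paths by execution frequency and greedily take the hottest ones until the 6kbyte microcode budget is used up, tracking what fraction of kernel execution gets covered ... a minimal C sketch:

/* minimal sketch of hot-path selection for microcode translation;
 * all names and profile counts below are hypothetical illustration,
 * not the original CP67/ECPS data */
#include <stdio.h>
#include <stdlib.h>

struct path { const char *name; long count; int bytes; };

static int by_count(const void *a, const void *b) {
    long d = ((const struct path *)b)->count - ((const struct path *)a)->count;
    return (d > 0) - (d < 0);
}

int main(void) {
    struct path p[] = {
        {"dispatch", 900000, 1200}, {"freelist", 700000,  800},
        {"ccwtrans", 500000, 2000}, {"pageio",   300000, 1500},
        {"vtimer",   250000,  600}, {"spool",    650000, 3000},
    };
    int n = sizeof p / sizeof p[0], budget = 6144, used = 0;
    long total = 0, covered = 0;
    for (int i = 0; i < n; i++) total += p[i].count;
    qsort(p, n, sizeof p[0], by_count);
    for (int i = 0; i < n; i++) {
        if (used + p[i].bytes > budget) continue;   /* doesn't fit */
        used += p[i].bytes;
        covered += p[i].count;
        printf("take %-8s %5d bytes\n", p[i].name, p[i].bytes);
    }
    printf("budget used %d/%d bytes, coverage %.0f%% of kernel execution\n",
           used, budget, 100.0 * covered / total);
    return 0;
}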

Very late in the 70s, I got contacted by some OLE(?) people. In part for the ECPS work ... but also, shortly after joining IBM at the science center, I had written a PLI program that analyzed assembler program listings ... making an abstract representation of the instructions executed as well as the execution code paths ... and then tried to generate a Pascal-like representation. Some highly optimized CP/67 modules could have conditional branches that translated to if/then/else nesting 15 levels deep.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

How the 'Visionaries' of Silicon Valley Mean Profits Are Prioritised over True Technological Progress

From: Lynn Wheeler <lynn@garlic.com>
Subject: How the 'Visionaries' of Silicon Valley Mean Profits Are Prioritised over True Technological Progress
Date: 02 Jan, 2024
Blog: Facebook
recent post on corporations:
https://www.garlic.com/~lynn/2024.html#0 Recent Private Equity News

How the 'Visionaries' of Silicon Valley Mean Profits Are Prioritised over True Technological Progress
https://www.nakedcapitalism.com/2024/01/how-the-visionaries-of-silicon-valley-mean-profits-are-prioritised-over-true-technological-progress.html

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

--
virtualization experience starting Jan1968, online at home since Mar1970

How IBM Stumbled onto RISC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: How IBM Stumbled onto RISC
Date: 02 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC

related to Cadence .... after the IBM downfall and one of the largest losses in the history of US companies ... IBM was being reorged into "13 baby blues" in preparation for breakup of the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking for us to help with the breakup of the company. Before we get started, the board brings in the former AMEX president as CEO ... who (somewhat) reverses the breakup. However there was lots of real estate and products being sold off or unloaded, including lots of VLSI tools going to major chip tool vendor in silicon valley.

Los Gatos used Metaware's TWS for language work and had done a Pascal for VLSI tools (later shipped as VS/Pascal). And a bunch of Los Gatos stuff was unloaded. However, the standard tools all ran on the SUN platform. I got hired to port a 50,000-statement Pascal (physical layout) tool to SUN. In retrospect it would probably have been easier to rewrite it in C. I don't think SUN Pascal had ever been used for anything other than educational purposes. Further complicating things, SUN had outsourced Pascal support to an operation on the opposite side of the world (literally rocket science, I have a billcap from "Space City"). IBM Los Gatos included possibly a couple hundred acres of undeveloped land ... and eventually was sold off (and plowed under) for housing development.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
AMEX president & IBM CEO
https://www.garlic.com/~lynn/submisc.html#gerstner

some recent posts mentioning Steve Chen
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#22 Vintage Cray
https://www.garlic.com/~lynn/2023g.html#16 370/125 VM/370
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023d.html#34 IBM Mainframe Emulation
https://www.garlic.com/~lynn/2023.html#42 IBM AIX

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM/PC History

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM/PC History
Date: 02 Jan, 2024
Blog: Facebook
Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on IBM CP67/CMS at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

The complete history of the IBM PC, part one: The deal of the century
https://arstechnica.com/gadgets/2017/06/ibm-pc-history-part-1/
The complete history of the IBM PC, part two: The DOS empire strikes
https://arstechnica.com/gadgets/2017/07/ibm-pc-history-part-2/
The oldest-known version of MS-DOS's predecessor has been discovered and uploaded
https://arstechnica.com/gadgets/2024/01/the-oldest-known-version-of-ms-doss-predecessor-has-been-discovered-and-uploaded/

CP/67
https://en.wikipedia.org/wiki/CP-67
Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Total share: 30 years of personal computer market share figures
http://arstechnica.com/features/2005/12/total-share/
https://arstechnica.com/features/2005/12/total-share/2/
http://arstechnica.com/features/2005/12/total-share/3/
http://arstechnica.com/features/2005/12/total-share/4/
http://arstechnica.com/features/2005/12/total-share/5
https://arstechnica.com/features/2005/12/total-share/6/
https://arstechnica.com/features/2005/12/total-share/7/
https://arstechnica.com/features/2005/12/total-share/8/
https://arstechnica.com/features/2005/12/total-share/9/
https://arstechnica.com/features/2005/12/total-share/10/

recent posts mentioning Kildall
https://www.garlic.com/~lynn/2023g.html#75 The Rise and Fall of the 'IBM Way'. What the tech pioneer can, and can't, teach us
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#35 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#27 Another IBM Downturn
https://www.garlic.com/~lynn/2023f.html#106 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#103 Microcode Development and Writing to Floppies
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#47 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023d.html#62 Online Before The Cloud
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023.html#99 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#30 IBM Change

--
virtualization experience starting Jan1968, online at home since Mar1970

How IBM Stumbled onto RISC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: How IBM Stumbled onto RISC
Date: 03 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#3 How IBM Stumbled onto RISC

The new Almaden research bldg was heavily provisioned with CAT4, presumably for 16mbit T/R ... but it was found that 10mbit ethernet had much higher aggregate LAN throughput and lower latency. Also, the 10mbit ethernet cards had much higher per-card throughput (8.5mbit, @$69/card) than the 16mbit token-ring cards (@$800/card). Almaden air conditioning had been provisioned for normal bldg operation ... unfortunately, turning off the large number of RS/6000s at end of day and back on at start of day oscillated the heat generation more than the air conditioning was designed to handle.

Note AWD built their own adapter cards for the PC/RT 16bit "AT-bus" ... but were told that they weren't allowed to do their own microchannel cards for RS/6000, and were forced to use the PS2 microchannel cards that had all been heavily performance-kneecapped by the communication group (fiercely fighting off client/server and distributed computing). Example: the PC/RT 4mbit token-ring card had higher card throughput than the PS2 microchannel 16mbit token-ring card (in theory a PC/RT 4mbit token-ring server would have higher throughput than an RS/6000 16mbit token-ring server).

My wife had been asked to co-author a response to a gov. request for a large, campus-like, super-secure network ... where she introduced 3-tier architecture. We were then out making customer executive presentations on 3-tier, ethernet, TCP/IP, high-performance routers, etc ... and taking all sorts of barbs in the back from misinformation generated by the communication group.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
communication group fighting off client/server and distributed computing posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

--
virtualization experience starting Jan1968, online at home since Mar1970

More IBM Downfall

From: Lynn Wheeler <lynn@garlic.com>
Subject: More IBM Downfall
Date: 03 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#103 More IBM Downfall
https://www.garlic.com/~lynn/2023g.html#104 More IBM Downfall

Note: The IBM communication group was heavily fighting the release of mainframe TCP/IP support ... then possibly some influential customers got that forced. The communication group then changed tactics and said that since they had corporate strategic responsibility for everything that crossed the datacenter walls, it had to be released through them. What was released got aggregate 44kbytes/sec throughput using nearly a whole 3090 processor. I then did "fixes" to support RFC1044, and in some tuning tests at Cray Research between a Cray and an IBM 4341 got sustained channel throughput using only a modest amount of the 4341 processor (something like 500 times improvement in bytes moved per instruction executed). Later the communication group hired a silicon valley contractor to implement TCP/IP support directly in VTAM. What he initially demo'ed had TCP running much faster than LU6.2. He was then told that everybody knows a proper TCP/IP implementation is much slower than LU6.2, and they would only be paying for a "proper" implementation.
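
As a rough illustration of the "bytes moved per instruction executed" metric: divide throughput by the instruction rate consumed moving it. The MIPS and utilization numbers below are hypothetical placeholders (chosen to land near the ~500x figure), not measurements:

/* hypothetical illustration of bytes-per-instruction; the MIPS and
 * utilization figures are made-up placeholders, not measured data */
#include <stdio.h>

int main(void) {
    /* as-released product: 44 kbytes/sec consuming ~all of a processor
     * assumed (hypothetically) to run ~10 MIPS */
    double base_bpi = 44e3 / (1.00 * 10e6);
    /* RFC1044 path: ~1 mbyte/sec sustained channel rate on a 4341
     * assumed (hypothetically) ~1 MIPS at ~45% utilization */
    double rfc_bpi = 1e6 / (0.45 * 1e6);
    printf("base %.4f vs rfc1044 %.2f bytes/instr -> ~%.0fx\n",
           base_bpi, rfc_bpi, rfc_bpi / base_bpi);
    return 0;
}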

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

How IBM Stumbled onto RISC

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: How IBM Stumbled onto RISC
Date: 03 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#3 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC

Note: The IBM communication group had been heavily fighting the release of mainframe TCP/IP support ... then possibly some influential customers got that forced. The communication group then changed tactics and said that since they had corporate strategic responsibility for everything that crossed the datacenter walls, it had to be released through them. What was released got aggregate 44kbytes/sec throughput using nearly a whole 3090 processor. I then did "fixes" to support RFC1044, and in some tuning tests at Cray Research between a Cray and an IBM 4341 got sustained channel throughput using only a modest amount of the 4341 processor (something like 500 times improvement in bytes moved per instruction executed). Later the communication group hired a silicon valley contractor to implement TCP/IP support directly in VTAM. What he initially demo'ed had TCP running much faster than LU6.2. He was then told that everybody knows a proper TCP/IP implementation is much slower than LU6.2, and they would only be paying for a "proper" implementation.

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
communication group fighting off client/server & distributed computing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

Note: 1990 standard unix clocked 5k instructions (and 5 buffer copies) total pathlength for TCP ... while VTAM LU6.2 clocked around 160k instruction pathlength (and 15 buffer copies; copying the larger buffers could take more cache misses and processor cycles than the instructions themselves). Pathlengths here derive from MIPS rates: count benchmark iterations compared to a 370/158, then take elapsed time for a large number of operations, convert elapsed time to number of instructions (based on the 158 benchmark standard), and divide by the number of operations. However, RS/6000 was severely limited by the communication group's performance-kneecapping of the PS2 microchannel cards.
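
Spelled out, that MIPS/pathlength method works like the following (all numbers hypothetical, picked so the result lands on a 160k-style pathlength):

/* minimal sketch of deriving pathlength from elapsed time and a MIPS
 * rate calibrated against the 370/158 benchmark standard; every number
 * here is a hypothetical placeholder */
#include <stdio.h>

int main(void) {
    double iter_ratio = 10.0;   /* runs reference benchmark 10x a 158 */
    double mips = iter_ratio * 1.0;      /* 158 taken as ~1 MIPS      */
    double elapsed = 8.0;                /* secs for the timed run    */
    long   ops = 500;                    /* operations in that run    */
    double pathlength = (elapsed * mips * 1e6) / ops;
    printf("approx %.0f instructions per operation\n", pathlength);
    return 0;
}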

An RS/6000 engineer did take ESCON and make several tweaks ... about 10% faster, full-duplex, etc.; which made it non-interoperable with everybody else. Finally I was able to convince a high-end router vendor to add an RS/6000 serial interface. It already had FDDI (100mbit lan), full-duplex T1 (1.5mbits ... 3mbits aggregate) & T3 (45mbits ... 90mbits aggregate) telco interfaces, was capable of more than a dozen ethernet interfaces, and had multiple different mainframe (including IBM & HIPPI) channel interfaces ... which gave an RS/6000 reasonable bandwidth into a networked environment. AIX still had somewhat longer TCP pathlength than standard unix, but not really bad. With the serial attachment to the high-speed router (and RFC1044 for mainframe), it made RS/6000 a reasonable player in 3-tier operation.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Note: In 1988, I was asked by the branch office to help LLNL standardize some serial stuff they were playing with, which quickly became the fibre channel standard (FCS, including some stuff I had done in 1980; initially 1gbit, full-duplex, aggregate 200mbyte/sec). The RS/6000 engineer had wanted to start work on an 800mbit serial implementation, but I was able to con him into joining the FCS committee and getting FCS cards for RS/6000 (and an FCS interface for the high-speed router).

trivia: some POK engineers become involved in FCS and define a heavy-weight protocol that drastically reduces native FCS throughput, which is eventually released as FICON. The latest public benchmark I can find is the "Peak I/O" benchmark for z196 that gets 2M IOPS using 104 FICON (running over 104 FCS). About the same time, an FCS is announced for server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Aggravating FICON is 1) the requirement for CKD disks, which haven't been made for decades ... having to be simulated on industry-standard fixed-block disks, and 2) recommendations keeping SAPs (system assist processors that do actual I/O) to no more than 70% CPU (around 1.5M IOPS).
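
The per-link arithmetic, using only the benchmark figures quoted above:

/* per-link IOPS from the quoted figures: z196 "Peak I/O" 2M IOPS over
 * 104 FICON, vs a single server-blade FCS claiming over a million IOPS */
#include <stdio.h>

int main(void) {
    double ficon_total = 2.0e6, ficon_links = 104.0;
    double fcs_per_link = 1.0e6;
    double ficon_per_link = ficon_total / ficon_links;   /* ~19k */
    printf("FICON ~%.0f IOPS/link, native FCS ~%.0f IOPS/link (~%.0fx)\n",
           ficon_per_link, fcs_per_link, fcs_per_link / ficon_per_link);
    return 0;
}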

FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Niklaus Wirth 15feb1934 - 1jan2024

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Niklaus Wirth 15feb1934 - 1jan2024
Date: 03 Jan, 2024
Blog: Facebook
Niklaus Wirth 15feb1934 - 1jan2024
https://en.wikipedia.org/wiki/Niklaus_Wirth

The IBM Los Gatos VLSI tool lab made a lot of use of MetaWare's TWS
http://www.edm2.com/index.php/MetaWare

... including implementing a PASCAL that was used for internal VLSI tools ... and eventually morphs into the VS/PASCAL product (which was also used to implement the mainframe TCP/IP product). trivia: "Pickens" had been at IBM doing a lot of the TWS&Pascal work, before leaving to join MetaWare

For the fun of it ... I re-implemented the VM370 assembler, kernel-based spool file system in Los Gatos Pascal, running in a virtual address space, with much higher throughput, more function, and better integrity.

The IBM communication group was fiercely fighting off client/server, distributed computing, and mainframe TCP/IP. Then possibly some influential customers got it approved ... and the communication group switched tactics: since they had corporate strategic responsibility for everything that crosses the datacenter walls, it had to be shipped through them. What ships gets aggregate 44kbytes/sec using nearly a full 3090 processor. I then do the fixes for RFC1044, and in some tuning tests at Cray Research between a Cray and an IBM 4341 get sustained channel throughput using only modest 4341 CPU (possibly a 500 times increase in throughput of bytes moved per instruction executed).

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

During the IBM troubles of the early 90s (one of the largest losses in the history of US companies, and being reorged in preparation for breaking up the company), lots of stuff was being sold off and/or offloaded, including lots of VLSI tools going to a major silicon valley VLSI tools vendor. Since the major VLSI tools industry platform was SUN, all the IBM tools had to be ported to SUN. I (having already left IBM) get hired to port a 50,000-statement PASCAL application (physical layout), and in retrospect it would have been easier to rewrite it in C (I'm not sure that sun pascal had been used for any real/major project).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner

Some posts mentioning Los Gatos lab and Metaware TWS
https://www.garlic.com/~lynn/2024.html#3 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#68 Assembler & non-Assembler For System Programming
https://www.garlic.com/~lynn/2023c.html#98 Fortran
https://www.garlic.com/~lynn/2022h.html#40 Mainframe Development Language
https://www.garlic.com/~lynn/2022g.html#6 "In Defense of ALGOL"
https://www.garlic.com/~lynn/2022f.html#13 COBOL and tricks
https://www.garlic.com/~lynn/2022d.html#82 ROMP
https://www.garlic.com/~lynn/2021j.html#23 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#45 not a 360 either, was Design a better 16 or 32 bit processor
https://www.garlic.com/~lynn/2021d.html#5 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#95 What's Fortran?!?!
https://www.garlic.com/~lynn/2021.html#37 IBM HA/CMP Product
https://www.garlic.com/~lynn/2018e.html#63 EBCDIC Bad History
https://www.garlic.com/~lynn/2017j.html#18 The Windows 95 chime was created on a Mac
https://www.garlic.com/~lynn/2017f.html#94 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#24 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016c.html#62 Which Books Can You Recommend For Learning Computer Programming?
https://www.garlic.com/~lynn/2015g.html#52 [Poll] Computing favorities
https://www.garlic.com/~lynn/2013m.html#36 Quote on Slashdot.org
https://www.garlic.com/~lynn/2013l.html#59 Teletypewriter Model 33
https://www.garlic.com/~lynn/2011m.html#32 computer bootlaces
https://www.garlic.com/~lynn/2010n.html#54 PL/I vs. Pascal
https://www.garlic.com/~lynn/2009o.html#11 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009l.html#36 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2008j.html#77 CLIs and GUIs
https://www.garlic.com/~lynn/2007j.html#14 Newbie question on table design
https://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004f.html#42 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2002q.html#19 Beyond 8+3

other posts mentioning VM370 Spool File System in Pascal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022e.html#8 VM Workship ... VM/370 50th birthday
https://www.garlic.com/~lynn/2022.html#85 HSDT SFS (spool file rewrite)
https://www.garlic.com/~lynn/2021j.html#26 Programming Languages in IBM
https://www.garlic.com/~lynn/2021g.html#37 IBM Programming Projects
https://www.garlic.com/~lynn/2021b.html#58 HSDT SFS (spool file rewrite)
https://www.garlic.com/~lynn/2013n.html#91 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2010k.html#26 Was VM ever used as an exokernel?
https://www.garlic.com/~lynn/2009h.html#63 Operating Systems for Virtual Machines
https://www.garlic.com/~lynn/2008g.html#22 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2007c.html#21 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)

--
virtualization experience starting Jan1968, online at home since Mar1970

How IBM Stumbled onto RISC

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: How IBM Stumbled onto RISC
Date: 04 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#3 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#7 How IBM Stumbled onto RISC

John liked to drink .... After being transferred to YKT (for various offenses, including being blamed for online computer conferencing on the internal network late 70s/early 80s), I continued to live in san jose with offices in SJR and LSG ... I would have to commute to YKT a couple times a month ... work in San Jose on Monday, take the redeye to Kennedy Monday night and drive straight to the office early Tuesday ... and sometimes after work go drinking with John and not check into the motel until one or two am weds ... I wouldn't be into the office early weds (returning friday afternoon to san jose).

The other person with Cray was Thornton:
The Control Data 6600 computer, regarded by many as the world's first supercomputer, was designed by Seymour Cray and James Thornton, introduced in 1964, and featured an instruction issue rate of 10 MHz, with overlapped, out-of-order instruction execution in multiple functional units and interleaved memory banks.

https://archive.computerhistory.org/resources/text/CDC/cdc.6600.thornton.design_of_a_computer_the_control_data_6600.1970.102630394.pdf
Foreword (Seymour Cray):
The reader can rest assured that the material presented is accurate and from the best authority as Mr Thornton was personally responsible for most of the detailed design of the Control Data model 6600 system.

... snip ...

Seymour left CDC for Cray Research, and Thornton left (with a few co-workers) to form Network Systems ... I worked with them on and off from 1980 through the early 90s

an old post mentioning thornton & network systems
https://www.garlic.com/~lynn/2014c.html#80 11 Years to Catch Up with Seymour

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

GOP Rep. Says Quiet Part Out Loud About Rejecting Border Deal

From: Lynn Wheeler <lynn@garlic.com>
Subject: GOP Rep. Says Quiet Part Out Loud About Rejecting Border Deal
Date: 04 Jan, 2024
Blog: Facebook
GOP Rep. Says Quiet Part Out Loud About Rejecting Border Deal. Republicans' desire for hardline immigration policies is clashing with their commitment to deny the Biden administration any sort of victory
https://www.rollingstone.com/politics/politics-news/troy-nehls-opposing-border-deal-hurt-biden-1234940056/

... something similar during Obama's 1st term: the GOP leader in the Senate said their number one priority was preventing an Obama 2nd term (giving rise to the joke that they would make sure Obama loses, even if they had to lay waste to the country).

Also, Trump and the birther movement contributed to his selection as the GOP candidate.

How Donald Trump Perpetuated the 'Birther' Movement for Years.
https://abcnews.go.com/Politics/donald-trump-perpetuated-birther-movement-years/story?id=42138176
A look at the trajectory of Donald Trump's role in the birther movement.
Donald Trump Clung to 'Birther' Lie for Years, and Still Isn't Apologetic
https://www.nytimes.com/2016/09/17/us/politics/donald-trump-obama-birther.html
14 of Trump's most outrageous 'birther' claims - half from after 2011
https://www.cnn.com/2016/09/09/politics/donald-trump-birther/index.html

references to Gingrich's weaponized politics and the need to beat the other party regardless of the cost to the country
https://www.garlic.com/~lynn/2022.html#115 Newt Gingrich started us on the road to ruin. Now, he's back to finish the job
https://www.garlic.com/~lynn/2021f.html#39 'Bipartisanship' Is Dead in Washington
https://www.garlic.com/~lynn/2021d.html#4 The GOP's Fake Controversy Over Colin Kahl Is Just the Beginning
https://www.garlic.com/~lynn/2021c.html#93 How 'Owning the Libs' Became the GOP's Core Belief
https://www.garlic.com/~lynn/2021.html#29 How the Republican Party Went Feral. Democracy is now threatened by malevolent tribalism
https://www.garlic.com/~lynn/2019c.html#21 Mitch McConnell has done far more to destroy democratic norms than Donald Trump
https://www.garlic.com/~lynn/2019b.html#45 What is ALEC? 'The most effective organization' for conservatives, says Newt Gingrich
https://www.garlic.com/~lynn/2019.html#41 Family of Secrets

--
virtualization experience starting Jan1968, online at home since Mar1970

How IBM Stumbled onto RISC

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: How IBM Stumbled onto RISC
Date: 04 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#3 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#7 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#9 How IBM Stumbled onto RISC

CDC7600 in (facebook public) "At the Controls" group
https://www.facebook.com/groups/779220482206901/permalink/6670816423047248/
RISC-based architecture could deliver a peak performance of 36 MFLOPS (Floating Point Operations Per Second) using 3,360 electronic modules, each built from individual transistors and interconnected with more than 190 km of internal wiring consuming a total of 95 kW of power cooled by a liquid-freon refrigerant. In benchmark tests it was shown to be faster than the IBM System/360 Model 195.

... snip ...

and IBM

Date: 04/23/81 09:57:42
To: wheeler

your ramblings concerning the corp(se?) showed up in my reader yesterday. like all good net people, i passed them along to 3 other people. like rabbits interesting things seem to multiply on the net. many of us here in pok experience the sort of feelings your mail seems so burdened by: the company, from our point of view, is out of control. i think the word will reach higher only when the almighty $$$ impact starts to hit. but maybe it never will. its hard to imagine one stuffed company president saying to another (our) stuffed company president i think i'll buy from those inovative freaks down the street. '(i am not defending the mess that surrounds us, just trying to understand why only some of us seem to see it).

bob tomasulo and dave anderson, the two poeple responsible for the model 91 and the (incredible but killed) hawk project, just left pok for the new stc computer company. management reaction: when dave told them he was thinking of leaving they said 'ok. 'one word. 'ok. ' they tried to keep bob by telling him he shouldn't go (the reward system in pok could be a subject of long correspondence). when he left, the management position was 'he wasn't doing anything anyway. '

in some sense true. but we haven't built an interesting high-speed machine in 10 years. look at the 85/165/168/3033/trout. all the same machine with treaks here and there. and the hordes continue to sweep in with faster and faster machines. true, endicott plans to bring the low/middle into the current high-end arena, but then where is the high-end product development?


... snip ... top of post, old email index

3033&3081 were kicked off in parallel after the Future System implosion; 3033 started out remapping 168 logic to 20% faster chips ... 3081 was leftover from FS:
http://www.jfsowa.com/computer/memo125.htm

Once the 3033 was out the door, the 3033 processor engineers start on "trout" (aka 3090). The email above references (me) being blamed for online computer conferencing on the IBM internal network ... it really took off spring of 1981 after I distributed a report about a visit to Jim Gray at Tandem.

Also could claim some of the resistance to non-360 was the debacle of FS (totally different, and going to completely replace 370) inside the company (during FS, internal politics was killing off 370 projects); with the FS implosion, there was a mad rush to try and get stuff back into the 370 product pipelines.

Tomasulo's algorithm
https://en.wikipedia.org/wiki/Tomasulo%27s_algorithm

and then there is the late 60s ACS/360 ... folklore is that it was killed because executives felt it would advance the state of the art too fast and IBM would lose control of the market; Amdahl leaves shortly after. The following page lists ACS/360 features that show up in the 90s with ES/9000 ... and mentions that Amdahl's ACS/360 had won out over the non-360 ACS proposals
https://people.computing.clemson.edu/~mark/acs_end.html

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

a few posts mentioning Tomasulo:
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022h.html#2 360/91
https://www.garlic.com/~lynn/2022e.html#61 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2021b.html#23 IBM Recruiting
https://www.garlic.com/~lynn/2019c.html#45 IBM 9020
https://www.garlic.com/~lynn/2016e.html#60 Honeywell 200
https://www.garlic.com/~lynn/2016d.html#53 PL/I advertising
https://www.garlic.com/~lynn/2016d.html#52 PL/I advertising
https://www.garlic.com/~lynn/2015c.html#91 Critique of System/360, 1967
https://www.garlic.com/~lynn/2008g.html#38 Sad news of Bob Tomasulo's passing
https://www.garlic.com/~lynn/2003j.html#18 why doesn't processor reordering instructions affect most
https://www.garlic.com/~lynn/2000b.html#15 How many Megaflops and when?

--
virtualization experience starting Jan1968, online at home since Mar1970

THE RISE OF UNIX. THE SEEDS OF ITS FALL

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: THE RISE OF UNIX. THE SEEDS OF ITS FALL
Date: 04 Jan, 2024
Blog: Facebook
THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.youtube.com/watch?v=HADp3emVABg

The above was recently posted in other Facebook groups ... one of the issues was that both clouds and supercomputers needed freely available source (linux) for their large-scale, rapidly evolving, expanding, and changing cluster operations.

The Interdata 8/32 and 7/32 were among the first non-DEC machines to run UNIX
https://en.wikipedia.org/wiki/UNIX/32V
https://en.wikipedia.org/wiki/Interdata_7/32_and_8/32
https://en.wikipedia.org/wiki/Interdata
Interdata bought by Perkin-Elmer
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

Note some of the MIT 7094/CTSS people went to the 5th flr to do MULTICS. Others went to the IBM science center on the 4th flr and did virtual machines (CP40 on a 360/40 modified with virtual memory hardware, which morphs into CP/67, precursor to VM370, when the 360/67 with standard virtual memory becomes available), the corporate network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s), and GML (invented in 1969). CTSS RUNOFF had been reimplemented for CP67/CMS as SCRIPT, and then GML tag processing was added to SCRIPT; after a decade GML morphs into ISO standard SGML, and after another decade morphs into HTML at CERN.
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

trivia1: the first webserver in the US was Stanford SLAC (CERN sister installation) on their VM370 (follow-on to science center CP/67) system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

other Interdata trivia: I was an undergraduate and had been hired fulltime by the univ., responsible for OS/360 on the 360/67 (originally obtained for TSS/360). The univ. shut down the datacenter on weekends, and I would have the place dedicated to myself for 48hrs (although it sometimes made monday classes difficult). Jan1968, CSC came out and installed CP67, which I mostly played with during my weekend dedicated time. CP67 had automagic terminal identification (used the controller SAD CCW to switch port scanner type). The univ had some number of TTY terminals, and I added ASCII terminal support (including automagic terminal id). I then wanted to have a single dial-in number ("hunt group") for all terminal types ... but it didn't quite work; IBM had taken a shortcut, and while it allowed the port scanner type to be switched, line speed was hardwired for each port. The univ then started a clone controller project: build a 360 channel-attach board for an Interdata/3 programmed to emulate an IBM telecommunication controller, with the addition that it did automatic line speed. It was enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for handling ports. Interdata, and then Perkin-Elmer, sold it as an IBM clone controller.

Early 80s, I wondered if UNIX scheduling&dispatching had originated with CTSS by way of MULTICS ... because it looked similar to the CP67 initially installed in the 60s back at the univ. ... which I had completely rewritten. The first six months I had CP67, I was rewriting pathlengths to improve OS/360 running in a virtual machine (stand-alone 322secs, initial virtual machine 856secs with 534secs CP67 CPU; by June I had CP67 down to 113secs from 534secs). Then over a few months, I worked on CP67 time to run 35 simulated interactive users ... dispatching/scheduling overhead would grow non-linearly with the number of users (10% of processing with 35 users), which I radically cut to almost nothing. I then implemented dynamic adaptive resource management (& scheduling) w/o measurably increasing overhead.
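
Not the actual CP67 code, but the flavor of dynamic adaptive fair-share scheduling can be sketched: compute a dispatch deadline for each user from recent CPU consumption relative to its resource share, so selecting who runs next is a cheap ordered comparison rather than overhead that grows non-linearly with the number of users. A minimal, hypothetical sketch:

/* minimal sketch (NOT the actual CP67 implementation) of deadline-style
 * fair-share scheduling: users consuming more than their share get
 * later deadlines, i.e. deferred dispatch; all workload numbers are
 * hypothetical */
#include <stdio.h>

struct user { const char *id; double share, recent_cpu; };

/* smaller deadline == dispatched sooner */
static double deadline(const struct user *u, double now) {
    return now + u->recent_cpu / u->share;
}

int main(void) {
    struct user users[] = {
        {"interactive", 1.0, 0.05},   /* light recent CPU -> runs first */
        {"batch",       1.0, 0.90},   /* heavy recent CPU -> deferred   */
        {"favored",     2.0, 0.90},   /* same CPU, 2x share -> earlier  */
    };
    double now = 0.0;
    for (int i = 0; i < 3; i++)
        printf("%-12s deadline %.3f\n", users[i].id,
               deadline(&users[i], now));
    return 0;
}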

360 plug compatible telecommunication controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management
https://www.garlic.com/~lynn/subtopic.html#fairshare
GML, SGML, HTML, etc
https://www.garlic.com/~lynn/submain.html#sgml

--
virtualization experience starting Jan1968, online at home since Mar1970

THE RISE OF UNIX. THE SEEDS OF ITS FALL

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: THE RISE OF UNIX. THE SEEDS OF ITS FALL
Date: 04 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL

lots of unix work-alikes ... UCB did BSD (IBM makes it available as AOS on PC/RT), UCLA did LOCUS (IBM makes it available as AIX/370 & AIX/386), CMU did MACH (mach eventually used by NeXT and then Apple), and Linus did linux. GNU & LINUX are fully open source
https://en.wikipedia.org/wiki/Linux

some posts mentioning BSD, LOCUS, MACH, & LINUX
https://www.garlic.com/~lynn/2017i.html#45 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017d.html#41 What are mainframes
https://www.garlic.com/~lynn/2015g.html#83 Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?
https://www.garlic.com/~lynn/2014j.html#12 The SDS 92, its place in history?
https://www.garlic.com/~lynn/2014f.html#75 Is end of mainframe near ?
https://www.garlic.com/~lynn/2013m.html#65 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2012b.html#45 Has anyone successfully migrated off mainframes?

trivia: stanford people had approached the IBM Palo Alto Science Center about vending their workstation project. For the review, PASC invited several internal personal computing and workstation efforts to sit in ... afterwards, all the IBM efforts claimed that they were doing something better than what the stanford people presented ... and IBM declined the offer. The stanford people then formed their own company.

--
virtualization experience starting Jan1968, online at home since Mar1970

THE RISE OF UNIX. THE SEEDS OF ITS FALL

From: Lynn Wheeler <lynn@garlic.com>
Subject: THE RISE OF UNIX. THE SEEDS OF ITS FALL
Date: 05 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2024.html#13 THE RISE OF UNIX. THE SEEDS OF ITS FALL

comment on Wirth's passing
https://www.garlic.com/~lynn/2024.html#8 Niklaus Wirth 15feb1934 - 1jan2024
mentions IBM's original mainframe TCP/IP done in pascal

group post includes an account of a senior disk engineer claiming the communication group was going to be responsible for the demise of the disk division, and some of the disk division countermeasures, including funding POSIX (USS) in MVS
https://www.garlic.com/~lynn/2023g.html#77
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner

communication group fighting off client/server and distributed computing posts (and will be responsible for demise of disk division)
https://www.garlic.com/~lynn/subnetwork.html#terminal

this
https://en.wikipedia.org/wiki/Linux
mentions that if the legal restrictions on UNIX had been lifted before he started work on Linux ... he might not have done it. Linux now (in number of copies) has by far more than any other operating system.

--
virtualization experience starting Jan1968, online at home since Mar1970

THE RISE OF UNIX. THE SEEDS OF ITS FALL

From: Lynn Wheeler <lynn@garlic.com>
Subject: THE RISE OF UNIX. THE SEEDS OF ITS FALL
Date: 05 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2024.html#13 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2024.html#14 THE RISE OF UNIX. THE SEEDS OF ITS FALL

This post mentions that the ad-tech conference in the 70s (where we presented 16-processor tightly-coupled SMP, and the 801 group presented RISC) was possibly the last for a while (as the ad-tech groups were being thrown into the breach after the Future System implosion and the mad rush trying to get stuff back into the 370 product pipeline)
https://www.garlic.com/~lynn/2024.html#1

I believe the next ad-tech conference (in the company) was one I hosted at SJR 4-5Mar1982 (archived post)
https://www.garlic.com/~lynn/96.html#4a

Included was the TSS/370 UNIX for Bell Labs (SSUP, a stripped-down TSS/370 kernel with UNIX system services layered on top) and science center work doing something similar with VM370 for UCB BSD (including VM370 doing address-space forking).

Note PASC was working with both UCB BSD and UCLA Locus ... and then the UCB BSD work was redirected from VM370 mainframe to PC/RT as "AOS" ... and the PASC UCLA Locus work becomes AIX/370 and AIX/386.

Note both IBM mainframe UNIX and Amdahl UTS ran on VM370. Issue was that CEs demanded EREP as part of providing hardware maintenance and support. The effort to add mainframe EREP to UNIX would have been several times larger than the straight port of UNIX to mainframe.

trivia: PASC's UCB BSD work needed a 370 C-compiler. Pickens at the LSG lab had been working on a C front-end to their 370 Pascal compiler ... and had just left IBM for Metaware ... so I talked PASC into contracting with Metaware for a 370 C-compiler, and then one for ROMP for PC/RT AOS.

https://www.garlic.com/~lynn/2024.html#8 Niklaus Wirth 15feb1934 - 1jan2024

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Billionaires Are Hoarding Trillions in Untaxed Wealth

From: Lynn Wheeler <lynn@garlic.com>
Subject: Billionaires Are Hoarding Trillions in Untaxed Wealth
Date: 05 Jan, 2024
Blog: Facebook
Billionaires Are Hoarding Trillions in Untaxed Wealth. They Want the Supreme Court to Keep It That Way. A new report from Americans for Tax Fairness found that America's richest families accumulated $8.5 trillion in untaxed capital gains in 2022
https://www.rollingstone.com/politics/politics-news/america-ultra-wealthy-trillions-untaxed-profits-2022-1234940717/

... part of the jokes about congress being the most corrupt institution on earth

tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Embraces Virtual Memory -- Finally

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Embraces Virtual Memory -- Finally
Date: 05 Jan, 2024
Blog: Linkedin
A decade ago, I was asked to track down the decision and found a staff member who had reported to the executive making the decision (Evans). Basically MVT storage management was so bad that regions had to be specified four times larger than used; as a result a typical 1mbyte 370/165 only ran four concurrently executing regions, insufficient to keep the processor busy and justified. Going to MVT in a 16mbyte virtual address space (initially VS2/SVS) allowed the number of concurrently executing regions to be increased by a factor of four with little or no paging.
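
a rough back-of-envelope of that arithmetic (a sketch: the region working size below is a made-up illustration, only the 4x overspecification and the 1mbyte/16mbyte figures come from the above):

    # toy arithmetic: concurrent MVT regions, real vs virtual memory
    real_kb = 1024          # 370/165 with 1mbyte real storage
    virt_kb = 16 * 1024     # VS2/SVS 16mbyte virtual address space
    used_kb = 64            # hypothetical actual region working size
    spec_kb = 4 * used_kb   # regions had to be specified 4x larger than used

    mvt_regions = real_kb // spec_kb        # real storage carved by specified size -> 4
    svs_regions = min(virt_kb // spec_kb,   # address space no longer the constraint
                      real_kb // used_kb)   # real storage only holds pages actually used
    print(mvt_regions, svs_regions)         # 4 -> 16: the factor-of-four increase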

archived post with pieces of the email exchange
https://www.garlic.com/~lynn/2011d.html#73

This had been shown by Simpson (of HASP fame) with his work on virtual memory MFT, as well as by customers running MVT in a CP/67 16mbyte virtual machine. Boeing Huntsville had also modified MVT13 to run in a virtual address space on 360/67 (w/o CP67) as part of dealing with MVT storage management.

The 2301 was sort of like a 2303 drum, but read/wrote four heads in parallel, with 1/4 the number of tracks, each four times larger, and four times the transfer rate.

I was undergraduate but was hired full-time responsible for os/360 running on the 360/67 (as 360/65, originally intended for tss/360). Univ. shutdown the datacenter on weekends and I had the place dedicated (although 48hrs w/o sleep made monday classes hard). Jan1968, CSC came out to install CP67 (precursor to VM370), 3rd installation after CSC itself and MIT Lincoln Labs ... and mostly I played with it in my weekend window.

Original CP67 did FIFO I/O and single page transfers per I/O. 2301 would peak about 70 pages/sec. I did ordered seek for disk and multiple page transfers per I/O, for the same "arm" position, optimized for transfers/revolution. In the case of the 2301, got it to peak around 270 pages/sec.

note: CSC borrowed the TSS/360 2301 format ... squeezing nine 4k pages onto a pair of tracks. The 1st track had four 4k pages plus the start of a 5th 4k page which spanned to the start of the 2nd track, followed by an additional four 4k pages. At 60 track revolutions per second, that is 30/sec for a pair of tracks ... 30*9 = 270
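
spelling out that peak-rate arithmetic (a sketch; it just restates the figures above):

    # 2301 peak paging rate with chained, rotation-ordered transfers
    rev_per_sec = 60             # 2301 rotation rate
    pages_per_track_pair = 9     # TSS/360 format: nine 4k pages per two tracks
    track_pairs_per_sec = rev_per_sec / 2
    print(track_pairs_per_sec * pages_per_track_pair)   # 30 * 9 = 270 pages/sec

    # contrast: FIFO single 4k transfer per I/O is roughly one page per
    # revolution once queued (plus channel program start/stop overhead),
    # hence the observed ~70 pages/sec peak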

I started out optimizing pathlengths for running OS/360 in a virtual machine. A 322sec stand-alone benchmark ran 856secs in a virtual machine, 534secs of it CP67 CPU. Took me a couple months to get the CP67 CPU down to 113secs.
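
the overhead arithmetic implied by those numbers:

    standalone = 322    # secs, benchmark run bare on the hardware
    virtual = 856       # secs, same benchmark under original CP67
    cp67_before = virtual - standalone   # 534 secs of virtualization overhead
    cp67_after = 113                     # after the pathlength rewrites
    print(1 - cp67_after / cp67_before)  # ~0.79 ... roughly 79% of the overhead removed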

I then worked on redoing dispatch/scheduling (dynamic adaptive resource management) and demand page, page replacement algorithms for running multiple concurrent CMS users.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock

a couple recent posts mentioning univ, undergraduate, os/360, student fortran, watfor, boeing cfo, renton, etc
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Embraces Virtual Memory -- Finally

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Embraces Virtual Memory -- Finally
Date: 06 Jan, 2024
Blog: Linkedin
re:
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally

part of recent reply in a facebook post
https://www.garlic.com/~lynn/2023g.html#26 Vintage 370/158

As undergraduate I had modified CP67 to do chained paging I/O optimized for maximum transfers per revolution (originally it did FIFO single transfers per I/O). The standard CCW sequence was read/write 4k, tic, search, read/write 4k ... channel commands all being serially fetched from processor memory and processed while the disk was rotating. That was OK if consecutive records were on the same track; however if they were on a different track (but same cylinder), it became read/write 4k, tic, seek head, search, read/write 4k. To handle the extra time to fetch/process the seek head, the actual format had a short dummy block between 4k blocks. The problem with the 3330 track was that three 4k blocks could be formatted with short dummy blocks ... but not dummy blocks long enough to handle the slow 158 integrated channel processing ... the start of the next 4k block would rotate past the disk head while the seek head CCW was still being handled.
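
a sketch of the timing constraint, using published 3330 figures (3600rpm, ~806kbytes/sec data rate); everything else just restates the paragraph above:

    DATA_RATE = 806_000   # 3330 bytes/sec passing under the head (3600rpm)

    def flyby_us(nbytes):
        """microseconds a block of nbytes takes to rotate past the head"""
        return nbytes / DATA_RATE * 1e6

    print(flyby_us(110))  # ~136us: the spec'd dummy block for seek-head handling
    print(flyby_us(101))  # ~125us: largest dummy that fits with three 4k blocks
    # a channel that can't fetch+process the seek head CCW inside that window
    # misses the next block and eats a full (~16.7ms) revolution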

part 2/2

I wrote a VM370/CMS program to format a test 3330 cylinder with the maximum dummy block size possible (between 4k data blocks) and then run an I/O channel program trying to transfer consecutive data blocks from different tracks in a single revolution. It would then repeat, reformatting with smaller and smaller dummy block sizes (to find the smallest dummy block that could be used).

I got it run on a number of IBM and customer configurations: different IBM 370s and various customer IBM & non-IBM 370s with IBM (3830) and non-IBM controllers and disks. The official 3330 spec called for a 110-byte dummy block to handle seek head and read/write of the next consecutive data block in the same rotation ... however 3330 track size only allowed 101-byte dummy blocks with three 4k data blocks. 145, 4341, 168 external channels, etc, would perform the switch 100% of the time (with 101-byte dummy blocks). 158 and all 303x could only do it 20%-30% of the time (70%-80% of the time they would miss and take a full rotation), and for similar reasons 3081 channels didn't do it 100% of the time. Some customers reported back that non-IBM 370s with non-IBM controllers & disks would do it 100% of the time with 50-byte dummy blocks.
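
a runnable toy model of that sweep (the per-channel CCW-processing delays below are illustrative assumptions, not measurements; the real program reformatted a test cylinder and issued actual channel programs):

    DATA_RATE = 806_000   # 3330 bytes/sec

    def smallest_dummy(ccw_delay_us):
        """smallest dummy block whose fly-by time covers seek-head CCW handling"""
        for nbytes in range(1, 111):
            if nbytes / DATA_RATE * 1e6 >= ccw_delay_us:
                return nbytes
        return None   # can't make the head switch in the same revolution

    # delays made up to reproduce the reported behavior: fast external
    # channels fit inside the 101 bytes that a three-4k-block track allows,
    # slow integrated channels need more than that and miss
    for name, delay_us in (("fast external channel", 60),
                           ("slow integrated channel", 135),
                           ("non-IBM controller+disk", 55)):
        print(name, smallest_dummy(delay_us))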

posts getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

other posts mentioning the dummy record "problem" with 3330 and slow channel program CCW processing
https://www.garlic.com/~lynn/2022e.html#53 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2014k.html#26 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2013e.html#61 32760?
https://www.garlic.com/~lynn/2011.html#65 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2006t.html#19 old vm370 mitre benchmark
https://www.garlic.com/~lynn/2004d.html#66 System/360 40 years old today
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore

--
virtualization experience starting Jan1968, online at home since Mar1970

Huge Number of Migrants Highlights Border Crisis

From: Lynn Wheeler <lynn@garlic.com>
Subject: Huge Number of Migrants Highlights Border Crisis
Date: 06 Jan, 2024
Blog: Facebook
Huge Number of Migrants Highlights Border Crisis
https://www.voanews.com/a/huge-number-of-migrants-highlights-border-crisis/7424665.html

turn of century, large corporations were bringing in huge numbers of illegal immigrants (to work at slave wages) and funding the chamber of commerce to lobby(/contributions/etc) the administration and congress to look the other way (the issue was so divisive that some local chambers were severing ties with the national organization). The Chamber of Commerce and the Corporate Capture of American Life
https://www.corporatecrimereporter.com/news/200/alyssa-katz-on-the-influence-machine-the-chamber-of-commerce-and-the-corporate-capture-of-american-life/
https://www.amazon.com/Influence-Machine-Commerce-Corporate-American/dp/0812993284/

second half of the 90s, First Data (in 1992 had been spun off from AMEX in the largest IPO up until that time) was in competition with First Financial to buy Western Union; First Data drops out because Western Union's numbers weren't that good. Then before the end of the century, First Data and First Financial merge (and First Data has to spin off Moneygram). Then by 2005 (with administration and congress looking the other way), Western Union revenue (fees from immigrants sending money home) explodes to the point that it was as much as the whole rest (half the total) of First Data. Then First Data spins off Western Union (a contributing factor was the Mexican president inviting First Data executives to Mexico to be thrown in jail for the egregious amount of money being made off immigrants sending money home).

Later in the decade, with the economic mess imploding, the Too Big To Fail (which had done over $27T, 2001-2008, in the securitized loan/mortgage bond market) were found to be money laundering for terrorist organizations and drug cartels. They were only getting their hands slapped (huge fines were a small fraction of what they were making in the illegal activity, aka just viewed as a cost of running illegal operations) ... some conjecture that the government was concerned that it had already leaned over backwards to keep them in business (also some articles that Too Big To Fail money laundering was responsible for drug cartels being able to acquire military-grade hardware and the violence on both sides of the border). Frequently, the gov. used "deferred prosecution" if they promised to never do it again ... promises the gov. always seemed to overlook when Too Big To Fail offenders repeated.
https://en.wikipedia.org/wiki/Deferred_prosecution

note that the military-industrial complex also made money off militarization of the drug cartels ... because then US law enforcement agencies, police and sheriffs also had to be militarized.

money laundering posts
https://www.garlic.com/~lynn/submisc.html#money.laundering
Too Big To Fail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess

a few posts mentioning: money laundering, too big to fail, economic mess, deferred prosecution, and drug cartels
https://www.garlic.com/~lynn/2017b.html#39 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#45 Western Union Admits Anti-Money Laundering and Consumer Fraud Violations, Forfeits $586 Million in Settlement with Justice Department and Federal Trade Commission
https://www.garlic.com/~lynn/2016e.html#109 Why Aren't Any Bankers in Prison for Causing the Financial Crisis?
https://www.garlic.com/~lynn/2016c.html#29 Qbasic
https://www.garlic.com/~lynn/2016b.html#73 Qbasic
https://www.garlic.com/~lynn/2016.html#10 25 Years: How the Web began
https://www.garlic.com/~lynn/2015h.html#65 Economic Mess
https://www.garlic.com/~lynn/2015h.html#44 rationality
https://www.garlic.com/~lynn/2015f.html#37 LIBOR: History's Largest Financial Crime that the WSJ and NYT Would Like You to Forget
https://www.garlic.com/~lynn/2015f.html#36 Eric Holder, Wall Street Double Agent, Comes in From the Cold
https://www.garlic.com/~lynn/2015e.html#44 1973--TI 8 digit electric calculator--$99.95

--
virtualization experience starting Jan1968, online at home since Mar1970

How IBM Stumbled onto RISC

From: Lynn Wheeler <lynn@garlic.com>
Subject: How IBM Stumbled onto RISC
Date: 07 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#3 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#7 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#9 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#11 How IBM Stumbled onto RISC

... 23jun1969 unbundling announce included starting to charge for software, SE services, maintenance, etc ... but managed to make the case that kernel software was still free. Then in the early 70s, the Future System effort was totally different than 370 and was going to completely replace it
http://www.jfsowa.com/computer/memo125.htm

internal politics during FS was shutting down 370 efforts and the lack of new 370s during the period is credited with giving the clone 370 makers (including Amdahl) their market foothold. Then with the FS implosion, there was a mad rush to get stuff back into the 370 product pipeline, including kicking off the quick&dirty 3033&3081 efforts in parallel. Also with the rise of the clone makers, there was a decision to transition to charging for kernel software, starting with new kernel add-on software. One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters ... and some of my stuff was chosen as guinea pig ... and I had to spend time with planners and lawyers on kernel software charging policies.

Note the 308x were only going to be multiprocessor, with no single processor version. However, the initial delivery was the 3081D, whose aggregate of two processors was less than an Amdahl single processor. Turns out the ACP/TPF airline operating system also didn't have multiprocessor support and there was concern that the whole airline industry would migrate to Amdahl.

They double the processor cache size and release the 3081K, whose aggregate of two processors was about the same as the Amdahl single processor, but MVS throughput was much less, since MVS documentation claimed a two-processor system was only 1.2-1.5 times a single processor (because of the multiprocessor overhead). Then they come out with the 3083 (a 3081 with one of the processors removed and the 10% processor slowdown removed).

I had done multiprocessor support and was getting around twice single-processor throughput, which was some trick ... since they slowed each processor down by 10% to help handle cross-cache protocol signaling ... so the two-processor hardware was only 2*.9=1.8 times a single processor. However, cache affinity tricks plus my very low multiprocessor overhead offset the processor cycle slow-down.
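
spelling out that arithmetic (just restating the figures above):

    hw_two_proc = 2 * 0.9   # each processor slowed 10% for cross-cache signaling
    mvs_claim = (1.2, 1.5)  # MVS docs: two-processor throughput vs one processor
    measured = 2.0          # with cache-affinity dispatching + short MP pathlengths
    print(measured / hw_two_proc)   # ~1.11: software beating raw hardware by ~11%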

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundling
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

1975: VM/370 and CMS Demo

From: Lynn Wheeler <lynn@garlic.com>
Subject: 1975: VM/370 and CMS Demo
Date: 08 Jan, 2024
Blog: Facebook
1975: VM/370 and CMS Demo
https://www.youtube.com/watch?v=Mo2q7d5dJgg&si=nwE167_nmfU7odqTa

Melinda's history
https://www.leeandmelindavarian.com/Melinda#VMHist
IBM VM 50th anniversary
https://www.vm.ibm.com/history/50th/
VM History and Heritage
https://www.vm.ibm.com/history/index.html

Some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS (which also spawns UNIX). Others went to the 4th flr to do virtual machines, the internal network, GML, etc (with the datacenter on the 2nd flr). Initially they do CP/40 on a 360/40 with hardware modifications implementing virtual memory, which morphs into CP/67 when the 360/67, standard with virtual memory, becomes available. When the decision is made to add virtual memory to all 370s, some of the science center splits off for VM370 development, taking over the IBM Boston Programming Center on the 3rd flr. When they outgrow that space, they move out to the former/empty (IBM) SBC bldg at Burlington Mall (on rt.128).

When I joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters. In the morph from CP67->VM370 they drop and/or simplify a lot of features. In 1974, I start migrating lots of stuff to VM370. I had an automated benchmarking system and when I initially get it running on VM370, VM370 would consistently crash ... so one of the first features to move to VM370 was the CP67 kernel serialization functions ... enabling reliable benchmark numbers as I migrated CP67 features to VM370. Late 74, I had a VM370R2-base CSC/VM ready for distribution (my long-time customers included the world-wide, online, sales&marketing support HONE systems). Some old archived email
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

Note IBM had the "Future System" project the first half of 70s, totally different than 370 and going to completely replace it, more information
http://www.jfsowa.com/computer/memo125.htm

During FS, internal politics was shutting down 370 efforts (the lack of new 370 products during the period is credited with giving clone(/Amdahl) 370 makers their market foothold). When FS implodes, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. The head of POK (high-end mainframe) also convinces corporate to kill the VM370 product, shutdown the Burlington Mall development group and move all the people to POK for MVS/XA (or supposedly MVS/XA wouldn't be able to ship on time). They weren't planning on telling the people (to minimize the numbers that might escape), however it leaked early ... and several escaped (including to the new DEC VMS organization; joke was that the head of IBM POK was a major contributor to VMS). There then was a hunt for the leak source; nobody gave them up (fortunately for me). Endicott (mid-range mainframe) eventually manages to save the VM370 product mission, but had to recreate a development organization from scratch.

decade ago was asked to track down decision to add virtual memory to all 370s, archived post with pieces of the email exchange
https://www.garlic.com/~lynn/2011d.html#73

csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
auto benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
hone posts
https://www.garlic.com/~lynn/subtopic.html#hone
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
also internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internet
and GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

some posts mentioning Boston Programming Center on the 3rd flr, 545 tech sq, along with Jean Sammet and/or Nat Rochester:
https://www.garlic.com/~lynn/2021j.html#77 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2018f.html#72 Jean Sammet — Designer of COBOL – A Computer of One's Own – Medium
https://www.garlic.com/~lynn/2014f.html#4 Another Golden Anniversary - Dartmouth BASIC
https://www.garlic.com/~lynn/2013l.html#28 World's worst programming environment?
https://www.garlic.com/~lynn/2013h.html#35 Some Things Never Die
https://www.garlic.com/~lynn/2013c.html#10 OT: CPL on LCM systems [was Re: COBOL will outlive us all]
https://www.garlic.com/~lynn/2010e.html#14 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2007l.html#58 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2006s.html#1 Info on Compiler System 1 (Univac, Navy)?
https://www.garlic.com/~lynn/2006m.html#28 Mainframe Limericks

--
virtualization experience starting Jan1968, online at home since Mar1970

1975: VM/370 and CMS Demo

From: Lynn Wheeler <lynn@garlic.com>
Subject: 1975: VM/370 and CMS Demo
Date: 09 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#21 1975: VM/370 and CMS Demo

after I transfer to west coast (san jose research), CSC/VM becomes SJR/VM
https://www.garlic.com/~lynn/2006u.html#email800429
https://www.garlic.com/~lynn/2006u.html#email800501

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

in an archived post (originally posted to both usenet alt.folklore.computers and the ibm-main mainframe email discussion group)
https://www.garlic.com/~lynn/2006u.html#26
that includes discussion of CMS 3270 full-screen editors and other topics.

Note the email references the latest SJR/VM already in production in bldgs 14&15 (disk engineering and product test; at least 145, 4341, 3033). When I 1st transfer to SJR, I get to wander around IBM and non-IBM datacenters in silicon valley, including 14&15 across the street. At the time they were running 7x24, prescheduled, stand-alone testing and mentioned that they had recently tried MVS, but it had a 15min MTBF (requiring re-ipl) in that environment. I offer to rewrite the I/O supervisor making it bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. I write a couple (internal only) research reports on the work and happen to mention the MVS MTBF ... bringing the wrath of the MVS group down on my head.

later, with the 3380 about to ship, field support had a regression test of 57 3380 hardware errors likely to occur ... MVS was failing in all 57 cases (requiring re-ipl) and in 2/3rds of the cases there was no indication of what caused the failure.
https://www.garlic.com/~lynn/2007.html#email801015

posts referencing getting to play disk engineer in 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

some recent posts mentioning MVS MTBF
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023g.html#105 VM Mascot
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#115 IBM RAS
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#97 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#72 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#106 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#55 IBM 3031, 3032, 3033
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards

--
virtualization experience starting Jan1968, online at home since Mar1970

The Greatest Capitalist Who Ever Lived

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Greatest Capitalist Who Ever Lived
Date: 09 Jan, 2024
Blog: Facebook
The Greatest Capitalist Who Ever Lived
https://www.theatlantic.com/magazine/archive/2024/01/ibm-greatest-capitalist-tom-watson/676147/
https://www.amazon.com/Greatest-Capitalist-Who-Ever-Lived-ebook/dp/B0BTZ257NJ/

"Greatest Capitalist" reference in recent post
https://www.garlic.com/~lynn/2023g.html#75 The Rise and Fall of the 'IBM Way'. What the tech pioneer can, and can't, teach us

my tome from 2022 about Learson trying (and failed) to block the bureaucrats, careerists and MBAs from destroying the watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

As undergraduate, the univ hired me fulltime responsible for OS/360 (did lots of work on both OS/360 and CP/67). Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing in an independent business unit). I think the Renton datacenter was possibly the largest in the world, a couple hundred million in IBM gear (somebody's recent comment was that Boeing was ordering 360/65s like other companies order keypunches).

One of Boyd's stories was about being very critical that the electronics across the trail wouldn't work. Possibly as punishment he is put in command of "spook base" (about the same time I'm at Boeing); he would say it had the largest air conditioned bldg in that part of the world. One of his biographies has "spook base" as a $2.5B "windfall" for IBM (ten times Renton; would have helped with the billions that went down the drain with Future System). Some "spook base" refs:
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White
and a FS ref
http://www.jfsowa.com/computer/memo125.htm
also see the 70s FS reference ("Computer Wars: The Post-IBM World") in the IBM Wild Ducks post

One Boyd conference, (former commandant) Gray wanders in after lunch ... and talks for two hrs; he wasn't on the agenda and totally throws the schedule off (but nobody was going to complain). I'm sitting in the far back corner with laptop on the table. When he finishes, he makes a bee line straight for me; as he approaches, all I can think is which Marines that I've offended have set me up (trivia: Gray and I were probably the only two people in that room who knew Boyd).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner
Boyd post and web URLs
https://www.garlic.com/~lynn/subboyd.html
Future System post
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Tomasulo at IBM

From: Lynn Wheeler <lynn@garlic.com>
Subject: Tomasulo at IBM
Date: 10 Jan, 2024
Blog: Facebook
Tomasulo at IBM

Date: 04/23/81 09:57:42
To: wheeler

your ramblings concerning the corp(se?) showed up in my reader yesterday. like all good net people, i passed them along to 3 other people. like rabbits interesting things seem to multiply on the net. many of us here in pok experience the sort of feelings your mail seems so burdened by: the company, from our point of view, is out of control. i think the word will reach higher only when the almighty $$$ impact starts to hit. but maybe it never will. its hard to imagine one stuffed company president saying to another (our) stuffed company president i think i'll buy from those inovative freaks down the street. '(i am not defending the mess that surrounds us, just trying to understand why only some of us seem to see it).

bob tomasulo and dave anderson, the two poeple responsible for the model 91 and the (incredible but killed) hawk project, just left pok for the new stc computer company. management reaction: when dave told them he was thinking of leaving they said 'ok. 'one word. 'ok. ' they tried to keep bob by telling him he shouldn't go (the reward system in pok could be a subject of long correspondence). when he left, the management position was 'he wasn't doing anything anyway. '

in some sense true. but we haven't built an interesting high-speed machine in 10 years. look at the 85/165/168/3033/trout. all the same machine with treaks here and there. and the hordes continue to sweep in with faster and faster machines. true, endicott plans to bring the low/middle into the current high-end arena, but then where is the high-end product development?


... snip ... top of post, old email index

3033&3081 were kicked off in parallel after the Future System implosion; 3033 started out remapping 168 logic to 20% faster chips ... 3081 was some leftover from FS:
http://www.jfsowa.com/computer/memo125.htm

once the 3033 was out the door, the 3033 processor engineers start on "trout" (aka 3090). The email has a reference to (me) being blamed for online computer conferencing on the IBM internal network ... it really took off spring of 1981 after I distributed a report about a visit to Jim Gray at Tandem. Also could claim some of the resistance to non-360 was the debacle of FS (totally different and going to completely replace 370) inside the company (including internal politics during FS killing off 370 projects); with the FS implosion, there was a mad rush to try and get stuff back into the 370 product pipelines.

Tomasulo's algorithm
https://en.wikipedia.org/wiki/Tomasulo%27s_algorithm

... side-track, Learson trying to save IBM watson culture/legacy from the bureaucrats, careerists, and MBAs
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

... well

Date: 05/12/81 13:46:19
To: wheeler
RE: Competiton for resources in IBM

Before Bob Tomasulo left to go to work for STC, he told me many stories about IBM. Around 1970, there was a project called HAWK. It was to be a 30 MIPS uniprocessor. Bob was finishing up on the 360/91, and wanted to go work on HAWK as his next project. He was told by management that "there were already too many good people working over there, and they really couldn't afford to let another good person work on it"! They forced him to work on another project that was more "deserving" of his talents. Bob never forgave them for that.

I guess IBM feels that resources are to be spread thinly, and that no single project can have lots of talent on it. Any project that has lots of talent will be raided sooner or later.


... snip ... top of post, old email index

also Amdahl's ACS/360 ... folklore is that executives killed it because they were afraid it would advance the state-of-the-art too fast and IBM would lose control of the market. Amdahl leaves shortly after. The following mentions some ACS/360 features that show up in the 90s with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html

trivia: not long after joining IBM, I get con'ed into working with the 370/195 group on multiprocessor support ... basically the hyperthreading mentioned in the ACS reference. Issue was the 195 didn't have branch prediction or speculative execution, as a result conditional branches drained the pipeline ... so the 195 typically ran at around half rated throughput. Two threads could keep the machine running at full throughput. However, then there was the decision to add virtual memory to all 370s ... it was deemed too hard to get virtual memory working on the 195, and further 370/195 work was dropped.

Decade ago, I was asked to track down the decision for 370 virtual memory ... basically MVT storage management was so bad that region sizes had to be specified four times larger than actually used; as a result a typical 1mbyte 370/165 only ran four concurrent regions, insufficient to keep it busy and justified. Going to 16mbyte virtual memory allowed concurrent regions to be increased by a factor of four (with little or no paging), sort of like running MVT in a CP67 16mbyte virtual machine. Archived post with pieces of the email exchange
https://www.garlic.com/~lynn/2011d.html#73

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some recent posts mentioning adding virtual memory to 370
https://www.garlic.com/~lynn/2023g.html#86 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#81 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#96 Conferences
https://www.garlic.com/~lynn/2023f.html#90 Vintage IBM HASP
https://www.garlic.com/~lynn/2023f.html#88 Vintage IBM 709
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#40 Rise and Fall of IBM
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023f.html#24 Video terminals

some recent posts mentioning Amdahl and ACS/360
https://www.garlic.com/~lynn/2024.html#11 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#103 More IBM Downfall
https://www.garlic.com/~lynn/2023g.html#44 Amdahl CPUs
https://www.garlic.com/~lynn/2023g.html#23 Vintage 3081 and Water Cooling
https://www.garlic.com/~lynn/2023g.html#11 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#80 Vintage Mainframe 3081D
https://www.garlic.com/~lynn/2023f.html#72 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#69 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023d.html#94 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023d.html#63 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989

other recent posts mentioning 370/195 multithreading
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2022h.html#32 do some Americans write their 1's in this way ?
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#12 Computer Server Market
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#60 370/195
https://www.garlic.com/~lynn/2022.html#31 370/195

--
virtualization experience starting Jan1968, online at home since Mar1970

1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)

From: Lynn Wheeler <lynn@garlic.com>
Subject: 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
Date: 11 Jan, 2024
Blog: Facebook
1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.youtube.com/watch?si=v6qeLPTpozCgr3Ui&v=npgvV_-Nh60&feature=youtu.be

look at bitsavers functional characteristics with instruction timings
http://bitsavers.org/pdf/ibm/360/functional_characteristics/

also 360/30 had single byte memory access

360/65 (and higher) had double word (interleaved) memory access and single (32bit) and double (64bit) precision math

https://en.wikipedia.org/wiki/Extended_precision
The IBM System/360 supports a 32-bit "short" floating-point format and a 64-bit "long" floating-point format.[4] The 360/85 and follow-on System/370 add support for a 128-bit "extended" format.[5] These formats are still supported in the current design, where they are now called the "hexadecimal floating-point" (HFP) formats.

... snip ...

from:
https://people.computing.clemson.edu/~mark/acs_end.html
As the quote above indicates, the ACS-1 design was very much an out-of-the-ordinary design for IBM in the latter part of the 1960s. In his book, Data Processing Technology and Economics, Montgomery Phister, Jr., reports that as of 1968:

Of the 26,000 IBM computer systems in use, 16,000 were S/360 models (that is, over 60%). [Fig. 1.311.2]

Of the general-purpose systems having the largest fraction of total installed value, the IBM S/360 Model 30 was ranked first with 12% (rising to 17% in 1969). The S/360 Model 40 was ranked second with 11% (rising to almost 15% in 1970). [Figs. 2.10.4 and 2.10.5]

Of the number of operations per second in use, the IBM S/360 Model 65 ranked first with 23%. The Univac 1108 ranked second with slightly over 14%, and the CDC 6600 ranked third with 10%. [Figs. 2.10.6 and 2.10.7]


... snip ...

As undergraduate in the 60s, the univ had hired me full time responsible for OS/360 (360/67 originally for tss/360, but run as 360/65). Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit, including offering services to non-Boeing entities). I think the Renton datacenter was the largest in the world, a couple hundred million in IBM gear; 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (somebody recently commented that Boeing was ordering 360/65s like other companies ordered keypunches).

some recent posts mentioning working in Boeing CFO office
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#42 IBM Koolaid
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#31 Mainframe Datacenter
https://www.garlic.com/~lynn/2023g.html#28 IBM FSD
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023c.html#86 IBM Commission and Quota
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#15 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#58 Almost IBM class student
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices

--
virtualization experience starting Jan1968, online at home since Mar1970

1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)

From: Lynn Wheeler <lynn@garlic.com>
Subject: 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
Date: 11 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)

turn of century, was brought into a large datacenter to look at performance ... some 40+ max-configured systems ... constantly being upgraded, none older than 18 months ... all running a 450K-statement cobol app ... doing some 50% of issuing credit card processing in the US ... found a 14% improvement. Another large datacenter was doing some 70% of merchant credit card processing in the US (had a large clone telecommunication Interdata/Perkin-Elmer controller handling POS dialup terminals).

trivia: as undergraduate in the 60s, the univ had hired me fulltime responsible for OS/360 (360/67 originally acquired for tss/360 ... but being run as 360/65, see above). Univ. shutdown the datacenter for the weekend and I would have the datacenter dedicated (although 48hrs w/o sleep made monday classes hard). Then CSC came out and installed CP/67 (3rd install after CSC and MIT Lincoln Labs), which I mostly played with during my weekend window. CSC had implemented dynamic terminal type identification (1052 & 2741, using the SAD CCW to switch the terminal-type port scanner for a line).

Univ. had TTY/ASCII terminals and I integrated TTY support with the dynamic terminal type identification. I then wanted to do a single dial-in number for all terminals. Didn't quite work: while IBM allowed the terminal-type port scanner to be switched for each line, line speed was hardwired. This kicks off a univ. project to do our own clone controller: build a channel interface board for an Interdata/3 programmed to emulate an IBM controller, but also doing dynamic line speed. Later it is upgraded with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata started selling it as a clone controller and four of us get written up for (some part of) the IBM clone controller business. Perkin-Elmer then buys Interdata and continues selling the controller (and a descendant of those boxes was handling the credit card dialup POS terminals).
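
a toy sketch of dynamic line-speed recognition of the sort the clone controller did (entirely illustrative ... the hypothetical guess_baud below just times the start bit of the first character; the actual Interdata firmware isn't preserved here):

    # toy: deduce line speed by timing the start bit of a known first character
    def guess_baud(start_bit_us, candidates=(110, 134.5, 300, 1200)):
        for baud in candidates:
            bit_us = 1e6 / baud
            if abs(start_bit_us - bit_us) < 0.1 * bit_us:
                return baud
        return None

    print(guess_baud(9091))   # ~110 baud (TTY/ASCII)
    print(guess_baud(7435))   # ~134.5 baud (1052/2741)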

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone 360 controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
Interdata
https://en.wikipedia.org/wiki/Interdata
Interdata bought by Perkin-Elmer
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
and then spun off as Concurrent
https://en.wikipedia.org/wiki/Concurrent_Computer_Corporation

some recent posts mentioning 450k Cobol statement application handling issuing credit card
https://www.garlic.com/~lynn/2023g.html#87 Mainframe Performance Analysis
https://www.garlic.com/~lynn/2023g.html#50 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023c.html#99 Account Transaction Update
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#54 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs

--
virtualization experience starting Jan1968, online at home since Mar1970

HASP, ASP, JES2, JES3

From: Lynn Wheeler <lynn@garlic.com>
Subject: HASP, ASP, JES2, JES3
Date: 12 Jan, 2024
Blog: Facebook
some amount of HASP/JES in archived post about decision to move to virtual memory for all 370s
https://www.garlic.com/~lynn/2011d.html#73

Simpson had been made fellow, and he and Crabtree were up at SPD Harrison (1000 Westchester?) doing "RASP", a virtual memory MFT with a single-level-store-like filesystem (similar to TSS/360 and what was proposed for Future System). Simpson then leaves for Amdahl (Dallas) and Crabtree goes down to Gburg to head up the JES group.

My (future) wife reported to Crabby for awhile; after FS was killed, she was catcher for ASP/JES3 and co-author of the JESUS specification (JES Unified System: all the features of the two systems that the respective customers couldn't live w/o; for various reasons, never done), before being con'ed into going to POK responsible for loosely-coupled architecture. She didn't remain long: 1) frequent battles with the communication group trying to force her to use VTAM for loosely-coupled operation and 2) little uptake (until much later with SYSPLEX and Parallel SYSPLEX), except for IMS hot-standby (she has a story about asking Vern Watts who he would ask for permission; he replied nobody, he would just tell them when it was all done).

posts mentioning POK loosely-coupled (peer-coupled shared data) architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata
HASP, ASP, JES2, JES3, NJI/NJE posts
https://www.garlic.com/~lynn/submain.html#hasp
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

IBM filed legal action against Amdahl and court experts checked the Amdahl "RASP" for code identical with the IBM "RASP" (even though IBM wasn't going to use any of Simpson's MFT "RASP" code), finding only a couple trivial matches. I would see several of the silicon valley Amdahl people at the monthly computer group meetings hosted by Stanford SLAC and they would periodically talk about Amdahl activities ... it seemed that the GOLD people ("Au", Amdahl Unix, which becomes UTS) were sometimes in conflict with Simpson for resources.

Trivia: I had done a page-mapped filesystem for (CP67 precursor to VM370) CMS and (during FS) would claim I learned what not to do from TSS/360 (which nobody in FS seemed to understand). However, the demise of FS appeared to give all paged filesystems a bad reputation around IBM. The 360 operating systems kept the CCW and channel-program paradigm for virtual memory. Mentioned periodically dropping by to see Ludlow ... who was doing the initial VS2/SVS ... very similar to running MVT in a CP67 16mbyte virtual machine; he needed to make a copy of channel programs passed to EXCP/SVC0, substituting real addresses for virtual, and hacked a copy of CP67's CCWTRANS into EXCP.

cms page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

trivia: early 80s, I redid the VM370 kernel spool file system in Pascal running in a virtual address space. RSCS/VNET used the VM370 spool file system, but it had a synchronous 4k-block interface and tended to be limited to 4-8 4k blocks/sec. I had the HSDT project with T1 and faster computer links (both terrestrial and satellite). For RSCS driving even a single full-duplex T1, I needed 300kbytes/sec (not 30kbytes/sec) ... an asynchronous interface with contiguous allocation and multi-block read and write transfers.
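
the bandwidth arithmetic behind those figures (T1 = 1.544mbits/sec; treating the 300kbytes/sec target as the full-duplex aggregate less protocol overhead is an assumption):

    t1_bps = 1_544_000           # T1 line rate, bits/sec
    one_way = t1_bps // 8        # ~193 kbytes/sec each direction
    full_duplex = 2 * one_way    # ~386 kbytes/sec aggregate to keep a T1 busy
    old_spool = 8 * 4096         # synchronous interface peak: 8 x 4k/sec ~ 32 kbytes/sec
    print(one_way, full_duplex, old_spool)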

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Posts mentioning pascal, virtual memory, vm370 spool file system
https://www.garlic.com/~lynn/2023g.html#68 Assembler & non-Assembler For System Programming
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022.html#85 HSDT SFS (spool file rewrite)
https://www.garlic.com/~lynn/2017e.html#24 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2011e.html#29 Multiple Virtual Memory
https://www.garlic.com/~lynn/2010k.html#26 Was VM ever used as an exokernel?
https://www.garlic.com/~lynn/2009h.html#63 Operating Systems for Virtual Machines
https://www.garlic.com/~lynn/2008g.html#22 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2007c.html#21 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)

recent posts mentioning decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2024.html#21 1975: VM/370 and CMS Demo
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023g.html#81 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#96 Conferences
https://www.garlic.com/~lynn/2023f.html#90 Vintage IBM HASP
https://www.garlic.com/~lynn/2023f.html#89 Vintage IBM 709
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#40 Rise and Fall of IBM
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023f.html#24 Video terminals
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#49 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#43 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#4 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#113 VM370
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#71 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#103 2023 IBM Poughkeepsie, NY
https://www.garlic.com/~lynn/2023b.html#44 IBM 370
https://www.garlic.com/~lynn/2023b.html#41 Sunset IBM JES3
https://www.garlic.com/~lynn/2023b.html#24 IBM HASP (& 2780 terminal)
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#50 370 Virtual Memory Decision
https://www.garlic.com/~lynn/2023.html#34 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Disks and Drums

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Disks and Drums
Date: 12 Jan, 2024
Blog: Facebook
Some of the MIT CTSS/7094 people went to the 5th flr to do Multics (which also spawns UNIX) ... others went to the IBM science center on the 4th flr and did virtual machines, the internal network, lots of apps, and invented GML in 1969 (after a decade it morphs into ISO standard SGML and after another decade into HTML at CERN). CSC was expecting to get the virtual memory mission ... but didn't ... lots more in Melinda's history
https://www.leeandmelindavarian.com/Melinda#VMHist

CSC was expecting to get a 360/50 to add virtual memory (but all spare 50s were going to the FAA ATC program) and had to settle for a 360/40, doing (virtual machine) CP40/CMS. When the 360/67, standard with virtual memory, becomes available, CP40 morphs into CP67. I was at one of the univs sold a 360/67 for tss/360 that ran it as a 360/65 with os/360 ... I was undergraduate, but hired fulltime responsible for OS/360.

Then CSC comes out to install CP67 (3rd install after CSC itself and MIT Lincoln Labs) and I mostly played with it in my dedicated 48hr weekend window ... over the next few months rewrote code to improve OS/360 running in a virtual machine; a test ran 322secs stand-alone and initially 856secs in a virtual machine (CP67 CPU 534secs); got CP67 CPU down to 113secs.

Then started work on multi-user, interactive CMS ... I/O, paging, dispatching, scheduling, etc. Primary paging was on a fixed-head 2301 drum (sort of like a 2303 drum, but transferring four heads in parallel, with 1/4 the "tracks", each track four times larger). CP67 I/O was FIFO and all paging I/O was single 4k transfers. I redid FIFO as ordered seek for disks and multiple chained transfers for drum & disk page I/Os (for the same head position) ... improving peak 2301 paging from about 70/sec to 270/sec (queued requests chained to maximize transfers per revolution). Dispatching/scheduling was possibly from CTSS (overhead somewhat proportional to total number of users, active or not; by 35 users it was running 10% of the cpu), with no real page thrashing control and very rudimentary page replacement. I redid dispatching/scheduling with minimal pathlength dynamic adaptive resource management that included page thrashing control and a significantly better page replacement algorithm.

dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement, working set, page thrashing, etc, posts
https://www.garlic.com/~lynn/subtopic.html#clock

trivia: CP67 was then publicly announced at the SHARE user group meeting in Houston, and CSC scheduled a CP67/CMS week class a couple months later at the Beverly Hills Hilton. I arrive on Sunday expecting to take my 1st (ever/only) IBM class, but am asked to teach the CP67 class. Turns out the people that were going to teach it had given notice on Friday to go to NCSS (which would do VP/CSS) ... I never did get to attend an IBM class.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

With the move from 360 (& 2301) to 370, we got the 2305 fixed-head paging disk, which had three times the capacity and slightly faster transfer. Then in the early 80s, inside IBM, we got a vendor's electronic paging device as the "1655"; it could be configured to emulate a 2305 1.5mbyte/sec "CKD" disk (needed for MVS, which didn't have FBA support) or run as a FBA 3mbyte/sec disk.

DASD, CKD, FBA, multi-track search, etc, posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Disks and Drums

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Disks and Drums
Date: 13 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#28 IBM Disks and Drums

trivia: the 2305 introduced "multiple exposures" ... multiple device addresses, each capable of having a channel program active ... with hardware capable of selecting the channel program doing the next data transfer (based on rotation). The 3350 disk could have the "fixed head" feature ... a limited number of cylinders with fixed head per track (like the 2305), but most cylinders serviced by the moveable arm.

I wanted to introduce multiple exposures for the 3350FH ... multiple addresses capable of different channel programs, enabling data transfer from the fixed-head area overlapped with disk arm motion (under a different channel program). However there was "VULCAN" ... a POK electronic disk program for paging, and they were afraid that 3350FH multiple exposures (for paging) could impact the forecast for VULCAN paging ... and got it vetoed. Later VULCAN was told that IBM was selling every memory chip it was making for processor memory at higher markup ... and they were canceled ... but by then it was too late to resurrect 3350FH multiple exposures. Then came the "1655s" from a vendor for internal datacenter paging.
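
A toy model of why a second exposure would have helped (python; the 25ms/5ms timings are made-up round numbers for illustration, not device specs): with a single device address, a fixed-head-area transfer queues behind an in-progress arm operation; a second address lets it overlap.

  SEEK_MS, FH_XFER_MS = 25.0, 5.0        # hypothetical arm seek and fixed-head transfer

  def one_exposure():
      # single device address: the fixed-head transfer waits behind the seek
      return SEEK_MS + FH_XFER_MS        # 30ms elapsed

  def two_exposures():
      # second address runs the fixed-head channel program during the seek
      return max(SEEK_MS, FH_XFER_MS)    # 25ms elapsed, transfer fully hidden

  print(one_exposure(), two_exposures())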

posts getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

past posts mentioning VULCAN, 3350FH, multiple exposure
https://www.garlic.com/~lynn/2023f.html#49 IBM 3350FH, Vulcan, 1655
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2021k.html#97 IBM Disks
https://www.garlic.com/~lynn/2021j.html#65 IBM DASD
https://www.garlic.com/~lynn/2021f.html#75 Mainframe disks
https://www.garlic.com/~lynn/2017k.html#44 Can anyone remember "drum" storage?
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#36 National Telephone Day
https://www.garlic.com/~lynn/2017d.html#65 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017b.html#57 3350 disks
https://www.garlic.com/~lynn/2014b.html#63 Mac at 30: A love/hate relationship from the support front
https://www.garlic.com/~lynn/2013c.html#74 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2009k.html#61 Z/VM support for FBA devices was Re: z/OS support of HMC's 3270 emulation?
https://www.garlic.com/~lynn/2006s.html#59 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#45 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2004d.html#73 DASD Architecture of the future
https://www.garlic.com/~lynn/2000d.html#53 IBM 650 (was: Re: IBM--old computer manuals)
https://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Disks and Drums

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Disks and Drums
Date: 13 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#28 IBM Disks and Drums
https://www.garlic.com/~lynn/2024.html#29 IBM Disks and Drums

Later the 3090 has expanded store ... another kind of electronic paging; folklore is that the packaging of the additional memory would exceed spec for processor timing ... so the additional memory was put at the end of a high-speed bus that would transfer 4kbytes (to/from) with high-speed (synchronous) instructions ... which also helped bypass the huge MVS pathlength for performing (asynchronous) I/O operations. On later systems, when packaging allowed connecting all memory to the processor, there were articles about using PRSM/LPAR to specify some processor memory as "expanded store", because it improved the operation of page replacement algorithms (however, that could be more than negated by changes/improvements in the page replacement algorithms, with all memory specified as processor memory).
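
A rough illustrative model of the synchronous/asynchronous tradeoff (python; the instruction counts and MIPS rate are hypothetical round numbers, not IBM figures): the asynchronous path pays the full I/O-supervisor pathlength per page, the synchronous move doesn't.

  ASYNC_PATHLENGTH = 5000   # hypothetical instructions: schedule I/O, SIO, interrupt, redispatch
  SYNC_PATHLENGTH = 100     # hypothetical instructions: synchronous 4k page-move
  MIPS = 30e6               # hypothetical processor instruction rate

  for name, instructions in (("async page I/O", ASYNC_PATHLENGTH),
                             ("sync expanded-store move", SYNC_PATHLENGTH)):
      print("%s: %.1f usec CPU per 4k page" % (name, instructions / MIPS * 1e6))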

demand paging, page I/O, page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock

past posts mentioning 3090 "expanded store"
https://www.garlic.com/~lynn/2021k.html#110 Network Systems
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2018e.html#71 PDP 11/40 system manual
https://www.garlic.com/~lynn/2017k.html#11 thrashing, was Re: A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2016b.html#23 IBM's 3033; "The Big One": IBM's 3033
https://www.garlic.com/~lynn/2014j.html#56 R.I.P. PDP-10?
https://www.garlic.com/~lynn/2013m.html#72 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2013h.html#3 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2011p.html#122 Deja Cloud?
https://www.garlic.com/~lynn/2011p.html#39 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011f.html#69 how to get a command result without writing it to a file
https://www.garlic.com/~lynn/2010n.html#39 Central vs. expanded storage
https://www.garlic.com/~lynn/2010g.html#43 Interesting presentation
https://www.garlic.com/~lynn/2010f.html#13 What was the historical price of a P/390?
https://www.garlic.com/~lynn/2010e.html#35 Why does Intel favor thin rectangular CPUs?
https://www.garlic.com/~lynn/2010d.html#69 LPARs: More or Less?
https://www.garlic.com/~lynn/2010.html#86 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2008i.html#10 Different Implementations of VLIW
https://www.garlic.com/~lynn/2008f.html#6 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2007s.html#9 Poster of computer hardware events?
https://www.garlic.com/~lynn/2007c.html#23 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006k.html#57 virtual memory
https://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005.html#17 Amusing acronym
https://www.garlic.com/~lynn/2003p.html#46 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2003p.html#41 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?

--
virtualization experience starting Jan1968, online at home since Mar1970

MIT Area Computing

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: MIT Area Computing
Date: 14 Jan, 2024
Blog: Facebook
Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and MULTICS
https://en.wikipedia.org/wiki/Multics
others went to the 4th flr and IBM Cambridge Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

and did virtual machines (originally CP40/CMS on 360/40 with hardware mods for virtual memory, morphs into CP67/CMS when 360/67 became available standard with virtual memory, precursor to VM370), the IBM internal network, invented GML in 1969 (decade later morphs into ISO Standard SGML, after another decade morphs into HTML at CERN), lots of other stuff. More history
https://www.leeandmelindavarian.com/Melinda#VMHist

Person responsible for the internal network ported PDP1 space war
https://www.computerhistory.org/pdp-1/08ec3f1cf55d5bffeb31ff6e3741058a/
https://en.wikipedia.org/wiki/Spacewar%21
to CSC's 1130M4 (included 2250)
https://en.wikipedia.org/wiki/IBM_2250
i.e. had 1130 as controller
http://www.ibm1130.net/functional/DisplayUnit.html

One of the inventors of GML
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

... then the science center "wide area network" morphs into the corporate network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM "missed") references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

technology also used for the corporate sponsored univ. BITNET
https://en.wikipedia.org/wiki/BITNET

trivia: I was an undergraduate and the univ hired me fulltime responsible for OS/360 (the 360/67 was originally for TSS/360, but was being run as a 360/65). Then CSC came out to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs). I mostly got to play with it during my 48hr weekend dedicated time (the univ. shut down the datacenter on weekends). CSC had 1052&2741 support, but the univ. had some number of TTY/ASCII terminals, so I added TTY/ASCII support ... which CSC picked up and distributed with standard CP67 (as well as lots of my other stuff). I had done a hack using one-byte values for TTY line input/output lengths. Tale of MIT Urban Lab CP/67 (in the tech sq bldg across the quad from 545): somebody down at Harvard got an ascii device with 1200(?) char line length ... they modified the field for the max. length ... but didn't adjust my one-byte hack ... crashing the system 27 times in a single day.
https://www.multicians.org/thvv/360-67.html
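
A sketch of the one-byte hack failure mode (python; my reconstruction for illustration, names hypothetical): once a device's line length exceeds 255, the stored value silently truncates and allocation/transfer lengths disagree.

  def set_max_length(line_table, line, length):
      line_table[line] = length & 0xFF    # the one-byte field: fine up to 255

  line_table = {}
  set_max_length(line_table, 0, 1200)     # new long-line ascii device
  print(line_table[0])                    # 176, not 1200: buffer allocation and
                                          # transfer lengths now disagree, so the
                                          # longer transfers overrun their buffers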

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc
https://www.garlic.com/~lynn/submain.html#sgml
posts mentioning BITNET
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Jargon: FOILS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Jargon: FOILS
Date: 14 Jan, 2024
Blog: Facebook
from IBM Jargon:
foil - n. Viewgraph, transparency, viewfoil - a thin sheet or leaf of transparent plastic material used for overhead projection of illustrations (visual aids). Only the term Foil is widely used in IBM. It is the most popular of the three presentation media (slides, foils, and flipcharts) except at Corporate HQ, where even in the 1980s flipcharts are favoured. In Poughkeepsie, social status is gained by owning one of the new, very compact, and very expensive foil projectors that make it easier to hold meetings almost anywhere and at any time. The origins of this word have been obscured by the use of lower case. The original usage was FOIL which, of course, was an acronym. Further research has discovered that the acronym originally stood for Foil Over Incandescent Light. This therefore seems to be IBM's first attempt at a recursive language.

... snip ...

Overhead projector
https://en.wikipedia.org/wiki/Overhead_projector
The use of transparent sheets for overhead projection, called viewfoils or viewgraphs, was largely developed in the United States. Overhead projectors were introduced into U.S. military training during World War II as early as 1940 and were quickly being taken up by tertiary educators,[14] and within the decade they were being used in corporations.[15] After the war they were used at schools like the U.S. Military Academy.[13] The journal Higher Education of April 1952 noted;

... snip ...

Transparency (projection)
https://en.wikipedia.org/wiki/Transparency_(projection)

Nick Donofrio stopped by and my wife showed him five hand drawn charts for the project, originally HA/6000 for the NYTimes to move their newspaper system (ATEX) off VAXcluster to RS/6000. I renamed it HA/CMP when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) ... 16-way/systems by mid92 and 128-way/systems by ye92 ... mainframe was complaining it would be far ahead of them (cluster scale-up gets transferred for announce as IBM supercomputer and we were told we couldn't work on anything with more than four processors).

My wife and I did a long EU marketing trip, each of us giving 3-4 HA/CMP marketing presentations a day before moving on to the next European city ... finally ending in Cannes ... practicing our Oracle World marketing presentations.

We got 6670 printers (basically a copier3 with mainframe connection) in IBM San Jose Research in the late 70s for placing in departmental areas ... with colored paper in the alternate paper drawer used for printing separator pages. Since the separator page was nearly all blank, we used a very early jargon file ... and a couple other files ... for selecting random quotations to print on the separator page.

trivia: SJR also did the "SHERPA" implementation ... aka an all-points-addressable 6670 (used more bandwidth) that could also be used for scanning and input ... and got Boulder to accept it for the product.

a couple posts mentioning foils, 6670 separator page and ibm jargon
https://www.garlic.com/~lynn/2021b.html#37 HA/CMP Marketing
https://www.garlic.com/~lynn/2012e.html#77 Just for a laugh... How to spot an old IBMer

I was blamed for online computer conferencing on the internal network in the late 70s and early 80s (precursor to social media); it really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem ... came to be called "Tandem Memos"; only about 300 participated, but claims are upwards of 25,000 were reading (folklore is when the corporate executive committee was told, 5of6 wanted to fire me). The original IBM Jargon entry for "Tandem Memos" ended with "If you have not seen the memos, try reading the November 1981 Datamation summary", which isn't there in later editions:
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products.

... snip ...

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

Practicing marketing presentation at Cannes for Oracle world
https://www.garlic.com/~lynn/lhwcannes.jpg

lhw at Cannes

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 15 Jan, 2024
Blog: Facebook
I've made a few posts about our (early 90s) HA/CMP product (started out HA/6000), recent
https://www.garlic.com/~lynn/2024.html#32 IBM Jargon: Foils
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#29 Another IBM Downturn
https://www.garlic.com/~lynn/2023g.html#22 Vintage Cray
https://www.garlic.com/~lynn/2023g.html#17 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#13 Vintage Future System

Trivia: after first transferring to SJR, I got roped into working on System/R, the original SQL/relational DBMS; we were able to do tech transfer to Endicott ("under the radar", while the company was pre-occupied with EAGLE, the next great DBMS) for SQL/DS; then when EAGLE implodes, there was a request for how fast System/R could be ported to MVS (eventually released as DB2). In any case, HA/CMP cluster scale-up nodes were interconnected with very fast mesh links and implemented a distributed RDBMS cache consistency protocol (somewhat analogous to a processor cache protocol; a toy sketch follows the figures below) and FCS shared disk farm operation (with high-performance FCS non-blocking switches). Traditional mainframe started to complain that if we continued, we would be years ahead of them. End of Jan92, HA/CMP cluster scale-up is transferred for announce as IBM supercomputer (for "technical/scientific" *ONLY*) and we are told we couldn't work on anything with more than four processors (we leave IBM a few months later). Early 90s:
eight processor ES/9000-982 : 408MIPS
RS6000/990 : 126MIPS; 16-way: 2016MIPS, 128-way: 16,128MIPS (16BIPS)
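
A very simplified sketch of the distributed cache consistency idea mentioned above (python; my illustration of a generic invalidate-before-write scheme, not the actual HA/CMP protocol): before a node writes a block, cached copies on the other nodes are invalidated, so a later read re-fetches from the shared disk farm.

  class Node:
      def __init__(self, name, cluster):
          self.name, self.cluster, self.cache = name, cluster, {}
          cluster.append(self)

      def read(self, block, disk):
          if block not in self.cache:        # miss: fetch from the shared disk farm
              self.cache[block] = disk[block]
          return self.cache[block]

      def write(self, block, value, disk):
          for peer in self.cluster:          # invalidate all other cached copies
              if peer is not self:
                  peer.cache.pop(block, None)
          self.cache[block] = disk[block] = value

  cluster, disk = [], {"b1": 0}
  a, b = Node("a", cluster), Node("b", cluster)
  b.read("b1", disk)
  a.write("b1", 42, disk)                    # b's stale copy is invalidated
  print(b.read("b1", disk))                  # 42: re-fetched after invalidation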


system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

More trivia: an IBM branch had asked me if I could help LLNL (national lab) get some serial stuff they were playing with "standardized", which quickly becomes the fibre channel standard ("FCS", including some stuff I had done in 1980; initially 1gbit, full-duplex, 200mbyte/sec aggregate). Then in the early 90s, POK ships ESCON with ES/9000 when it is already obsolete (17mbytes/sec). Then some POK engineers become involved in FCS and define a heavyweight protocol that significantly cuts the native throughput, eventually released as FICON. The most recent public benchmark I've found is the z196 "Peak I/O" that gets 2M IOPS using 104 FICON running over 104 FCS. About the same time, a FCS is announced for E5-2600 blades claiming over a million IOPS, two such FCS having higher throughput than 104 FICON (running over FCS). Also note IBM pubs claim SAPs (system assist processors that actually do I/O) should be kept to no more than 70% CPU, making it more like 1.5M IOPS. A max configured z196 benchmarked at 50BIPS; z196-era E5-2600 blades benchmarked at 500BIPS (same benchmark, no. of program iterations compared to a 370/158 assumed to be 1MIPS); the BIPS spread between the latest z16 (200+ BIPS) and the latest blades (multiple TIPS) has increased.
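
The arithmetic behind that comparison (python, using only the figures quoted above):

  peak_ficon_iops = 2000000            # z196 "Peak I/O": 104 FICON over 104 FCS
  ficon_count = 104
  print(peak_ficon_iops / ficon_count) # ~19.2k IOPS per FICON channel

  native_fcs_iops = 1000000            # claimed for a single E5-2600-era FCS
  print(2 * native_fcs_iops)           # two native FCS match/exceed all 104 FICON

  print(peak_ficon_iops * 0.70)        # 1.4M: the 70%-SAP guideline puts usable
                                       # IOPS well below the 2M peak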

FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

trivia: the IBM S/88 Product Administrator started taking us around to her S/88 customers ... we were able to show higher "nines" than S/88. She also got me to write a (RS/6000) HA/CMP section for the corporate continuous availability strategy document ... but it got pulled when both Rochester (AS/400) and POK (traditional mainframe) complained.

Note: AWD did their own cards for the PC/RT (PC/AT bus) including 4mbit T/R. The RS/6000 follow-on had microchannel, and AWD was told they couldn't do their own cards but had to use the (heavily performance kneecapped) PS2 cards. The PS2 16mbit T/R card had lower throughput than the PC/RT 4mbit card (the joke was a PC/RT 4mbit T/R server would have higher throughput than a RS/6000 16mbit T/R server). The new Almaden research bldg was heavily provisioned with CAT4, assuming 16mbit T/R, but they found 10mbit ethernet had higher aggregate LAN throughput and lower latency ... and $69 10mbit ethernet cards had higher throughput than $800 16mbit T/R cards.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
communication group battling client/server and distributed computing posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 15 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe

Mid-80s, I got con'ed into trying to turn a baby bell S/1-developed application into a type-1 IBM product (with migration to RS/6000). They had done a S1 combination NCP/VTAM implementation that owned the resources, with a lot better function, availability, performance and price/performance ... with a channel interface that emulated "cross-domain" to host VTAMs. Various participants were familiar with how the communication group operated and did their best to wall off what they could do. What happened then to kill the project can only be described as truth is stranger than fiction. Part of the presentation I made at the fall86 SNA ARB meeting in Raleigh (in this archived post)
https://www.garlic.com/~lynn/99.html#67
part of baby bell presentation at spring '86 (S1) COMMON user group
https://www.garlic.com/~lynn/99.html#70

IMS was really interested: while their "hot standby" could fall over in a few minutes, it could take well over an hour for VTAM to get all the sessions back up. The S1 implementation could create & maintain "shadow sessions" on the IMS hot standby system (analogous to IMS maintaining the hot standby IMS).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

trivia: in the late 70s and early 80s (nearly 50yrs ago), I was blamed for online computer conferencing (precursor to social media) on the internal network ... it really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem (see "Tandem Memos" in IBM Jargon); only about 300 participated but claims are upwards of 25,000 were reading. Folklore is when the corporate executive committee was told, 5of6 wanted to fire me. Some results: taskforces to analyze the phenomena, official forum software, officially sanctioned and moderated forums. A researcher was paid to sit in the back of my office for nine months studying how I communicated, took notes on telephone and face-to-face conversations, got copies of all my incoming and outgoing email and logs of all instant messages. Results were IBM reports, conference talks & papers, books and a Stanford PhD (joint with language and computer AI; Winograd was advisor on the computer side).

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 15 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe

When I transferred to San Jose Research, I got to wander around IBM and non-IBM datacenters in silicon valley, including bldgs 14&15 across the street. At the time they were running 7x24, prescheduled, stand-alone testing and mentioned that they had recently tried MVS, but it had 15min MTBF (requiring re-ipl) in that environment. I offered to rewrite the I/O supervisor making it bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. I wrote a couple (internal only) research reports on the work and happened to mention the MVS MTBF ... bringing the wrath of the MVS group down on my head. Bldg15 gets the 1st engineering 3033 (outside the 3033 processor group) and since testing took only a percent or two of CPU, we scrounge a 3830 controller and string of 3330 drives for our own private online service. Then bldg15 gets the 1st engineering 4341 and somebody in the branch hears about it and cons me into doing a benchmark (jan1979) for a national lab that was looking at getting 70 4341s for a compute farm, sort of the leading edge of the coming cluster supercomputing tsunami.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

Late 80s, Nick Donofrio comes through and my wife shows him five hand drawn charts to do HA/6000, initially for the NYTimes to port their newspaper system (ATEX) from DEC VAXCluster to RS/6000, and he approves the project. I then rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres). Early Jan1992, we have a meeting with the Oracle CEO and AWD/Hester tells him that we would have 16-way/systems by mid92 and 128-way/systems by ye92. During Jan92, I'm bringing FSD up to speed on HA/CMP with national labs. End of Jan92, FSD tells the Kingston supercomputer group they are going with HA/CMP (instead of the one Kingston had been working on). Possibly within hours, cluster scale-up is transferred to IBM Kingston for announce as IBM supercomputer (for technical/scientific *ONLY*) and we were told we can't work on anything with more than four processors (we leave IBM a few months later).

Mixed in with all of this, mainframe DB2 had been complaining that if we were allowed to proceed, we would be at least 5yrs ahead of them. Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
cluster supercomputer for technical/scientific only
https://www.garlic.com/~lynn/2001n.html#6000clusters1
more news 11may1992, IBM "caught" by surprise
https://www.garlic.com/~lynn/2001n.html#6000clusters2

Early 90s:
eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
RS6000/990 : 126MIPS; 16-way: 2016MIPS, 128-way: 16,128MIPS (16BIPS)


trivia: along the way, the IBM S/88 Product Administrator started taking us around to her S/88 customers ... we were able to show higher "nines" than S/88. She also got me to write a (RS/6000) HA/CMP section for the corporate continuous availability strategy document ... but it got pulled when both Rochester (AS/400) and POK (traditional mainframe) complained.

continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

The RIOS (six chip) chipset didn't support multiprocessor cache coherency, so scale-up was cluster. The executive we reported to (when doing HA/CMP) then went over to head up Somerset (i.e. AIM: apple, ibm, motorola) single-chip processor, which I somewhat characterize as adding motorola 88k risc multiprocessor cache coherency ... then could have large scalable clusters of multiprocessor systems (rather than of single-processor systems)

trivia: 1988, an IBM branch office asks me to help LLNL get some serial stuff they are playing with standardized, which quickly becomes the fibre-channel standard (including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate. Then in 1990, POK gets some of their serial stuff announced with ES/9000 as ESCON, when it is already obsolete (17mbyte/sec). For high-end HA/CMP, scale-up was planning on using FCS for large disk farms (with high-performance large non-blocking FCS switches). Also had a high-speed protocol similar to multiprocessor CPU cache coordination for handling large cluster scalable RDBMS cache coordination. We had also been working w/Hursley on 9333/Harrier serial, able to do full-duplex (concurrent transfer in both directions) 80mbit/sec, increasing to 160mbit/sec full-duplex, and planning on it becoming interoperable with FCS. After we leave, it becomes SSA instead.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Then some POK engineers become involved with FCS and define a heavyweight protocol that radically reduces the native throughput, which eventually ships as FICON. The most recent public benchmark I can find is the z196 "Peak I/O" that gets 2M IOPS using 104 FICON (running over 104 FCS). About the same time, a FCS is announced for E5-2600 server blades claiming over a million IOPS (two getting higher throughput than 104 FICON). Note "z" pubs say keep SAPs (system assist processors that actually do the I/O) at/below 70% CPU, which would make it around 1.5M IOPS (less than the 2M "Peak I/O"). A max configured z196 benchmarked at 50BIPS while an E5-2600 blade running the same benchmark (number of program iterations compared to 370/158, not actual instruction count) was 500BIPS (ten times z196).

FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 15 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe

After the Future System implosion in the mid-70s, there was a mad rush to get stuff back into the 370 product pipeline (including kicking off the quick&dirty 3033&3081 in parallel). I get asked to help with a 16-way 370 multiprocessor that everybody thought was really great and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Then somebody tells the head of POK it could be decades before POK's favorite son operating system (MVS) has effective 16-way support. Some of us are then asked to never visit POK again and the 3033 processor engineers are instructed to stop being distracted. POK doesn't ship a 16-way machine until after the turn of the century: z900, 16 processors, 2.5BIPS aggregate (156MIPS/processor).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

other trivia: my wife had been in the gburg JES group, one of the catchers for ASP/JES3 and co-author of the JESUS specification (all of the features that the respective JES2 & JES3 customers couldn't live w/o), which for whatever reason never got done. She was then con'ed into going to POK to be in charge of loosely-coupled architecture (POK's term for cluster). She didn't remain long because of 1) ongoing battles with the communication group trying to force her to use VTAM for loosely-coupled operation and 2) little uptake (until much later with SYSPLEX and Parallel SYSPLEX) except for IMS hot-standby (she has a story about asking Vern Watts who he was going to ask for permission; he replies nobody, he would tell them when it was all done).

peer-coupled shared data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 16 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#36 RS/6000 Mainframe

... about the same time (1988) that I was asked to help LLNL (national lab) get some serial stuff standardized ... which quickly becomes the fibre-channel standard (FCS, including some stuff I had done in 1980)

FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

... I was also asked to participate in SLAC's "SCI"
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface

it defined a memory bus with 64 positions. Convex (bought by HP) did 64 two-processor (snake/risc) shared-cache boards for a 128-way shared-memory multiprocessor. Data General and Sequent did 64 four-processor (initially i486) shared-cache boards for 256-way shared-memory multiprocessors ... I did some consulting for Steve Chen when he was CTO at Sequent ... before Sequent was bought and shut down by IBM (note IBM earlier had been funding Chen when he founded Chen Supercomputing)
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
IBM, Chen & non-cluster, traditional supercomputer
https://techmonitor.ai/technology/ibm_bounces_steve_chen_supercomputer_systems

It wasn't just POK; the communication group was also fiercely fighting off client/server and distributed computing and had gotten the PS2 microchannel cards severely performance kneecapped. AWD had done their own cards (PC/AT bus) for the PC/RT, including a 4mbit token-ring card. For the RS/6000 with microchannel bus, AWD was told they couldn't do their own cards but had to use PS2 cards. An example: the PS2 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card (the joke was that a PC/RT 4mbit t/r server would have higher throughput than a rs/6000 16mbit t/r server).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

Late 80s, a senior disk engineer got a talk scheduled at the world-wide, annual, internal communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing to more distributed-computing-friendly platforms. The disk division had come up with solutions that were constantly being vetoed by the communication group (with their corporate responsibility for everything that crossed datacenter walls). The GPD/Adstar VP of software, as a partial work-around, was investing in distributed computing startups that would use IBM disks, and would periodically ask us to drop by his investments. He also funded the unix/posix implementation in MVS.

communication group battling client/server and distributed computing posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

The communication group stranglehold on datacenters wasn't just affecting disks, and a couple years later IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM, but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner

some posts mentioning SCI, Sequent, and Chen
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#22 Vintage Cray
https://www.garlic.com/~lynn/2023g.html#16 370/125 VM/370
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2022g.html#91 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#29 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022.html#118 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2021i.html#16 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing
https://www.garlic.com/~lynn/2019d.html#81 Where do byte orders come from, Nova vs PDP-11
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#32 Cluster Systems
https://www.garlic.com/~lynn/2018d.html#57 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018b.html#53 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
https://www.garlic.com/~lynn/2015g.html#74 100 boxes of computer books on the wall
https://www.garlic.com/~lynn/2015g.html#72 100 boxes of computer books on the wall
https://www.garlic.com/~lynn/2014m.html#140 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#50 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013h.html#6 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2012p.html#13 AMC proposes 1980s computer TV series Halt & Catch Fire
https://www.garlic.com/~lynn/2011f.html#85 SV: USS vs USS
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2010f.html#48 Nonlinear systems and nonlocal supercomputing
https://www.garlic.com/~lynn/2009s.html#59 Problem with XP scheduler?
https://www.garlic.com/~lynn/2009s.html#5 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2009e.html#7 IBM in Talks to Buy Sun
https://www.garlic.com/~lynn/2009.html#5 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2006y.html#38 Wanted: info on old Unisys boxen
https://www.garlic.com/~lynn/2006q.html#9 Is no one reading the article?
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2002h.html#42 Looking for Software/Documentation for an Opus 32032 Card

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 16 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#36 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe

TCP/IP tome

One of the inventors of GML (GML invented at science center in 1969, decade later becomes ISO standard SGML, after another decade morphs into HTML at CERN)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

GML, SGML, HTML, etc
https://www.garlic.com/~lynn/submain.html#sgml
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

... Edson, a co-worker at the science center, was responsible for the "wide area network" that morphs into the corporate network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

Ed and I transfer out to SJR in 1977

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

technology also used for the corporate sponsored univ. BITNET
https://en.wikipedia.org/wiki/BITNET

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet/earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

Starting in the early 80s, I also had the HSDT project, T1 and faster links, both terrestrial and satellite ... lots of infighting with the communication group (who were limited to 56kbits/sec links). Put up an early T1 satellite link between the IBM Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston, which had a bunch of Floating Point Systems boxes (the latest ones had 40mbyte/sec disk arrays).
https://en.wikipedia.org/wiki/Floating_Point_Systems

We were also working with the NSF director and were supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen, and eventually an RFP is released (in part based on what we already had running). From the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
NSFNET related email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

The communication group had also been fighting off release of (traditional) mainframe TCP/IP support; then possibly some influential customers got that changed ... and the communication group changed their tactic: since they had corporate strategic responsibility for everything that crossed the datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did the enhancements for RFC1044 support and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained channel throughput using only a modest amount of the 4341 processor (around 500 times improvement in bytes moved per instruction executed).

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

The first webserver in the US was on the Stanford SLAC VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

Not long after cluster scale-up was transferred and we were told we can't work on anything with more than four processors, we left IBM, and I get brought in as a consultant at a small client/server startup.

Two former Oracle people (that we had worked with on HA/CMP scale-up, and who were in the Ellison meeting when AWD/Hester told them about 16-way & 128-way) are there responsible for the "commerce server" and they wanted to do payment transactions on the server; the startup had also invented this technology they call "SSL" that they want to use; the result is now frequently called "electronic commerce". I had responsibility for everything between webservers and the financial payment networks.

Early deployment saw explosive uptake, and the HTTP/HTTPS use of TCP sessions had a huge problem with the linear scan of the FINWAIT2 list (thousands of pending TCP session closes) ... webservers were spending 95% of CPU scanning the FINWAIT2 list. Netscape eventually installs a large multiprocessor SEQUENT server, which had already solved the problem in DYNIX ... it was another six months before other vendors shipped a fix.
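
A sketch of the underlying data-structure problem (python; my illustration, the DYNIX fix details aren't spelled out here): pending FINWAIT2 closes kept in an unordered list force an O(n) scan per event; indexing by connection makes it O(1).

  pending_list = []     # original approach: unordered list, scanned linearly
  pending_hash = {}     # fix: index by connection 4-tuple

  def close_done_list(conn):
      # O(n) for every arriving packet/timer event, with n in the thousands
      # on a busy HTTP server (the "95% of CPU" scan)
      for i, c in enumerate(pending_list):
          if c == conn:
              del pending_list[i]
              return

  def close_done_hash(conn):
      pending_hash.pop(conn, None)    # O(1) per event

  conn = ("10.0.0.1", 80, "10.0.0.2", 12345)
  pending_list.append(conn); pending_hash[conn] = True
  close_done_list(conn); close_done_hash(conn)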

SSL, electronic commerce posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

I put together a talk on "Why Internet Isn't Business Critical Dataprocessing" about all the stuff that I had to do for "electronic commerce", and Postel
https://web.archive.org/web/20240612205634/https://www.postel.org/jon-postel/
(Internet RFC standards editor) sponsored the talk.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

posts mentioning "Why Internet Isn't Business Critical Dataprocessing" talk
https://www.garlic.com/~lynn/2023g.html#37 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#94 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022f.html#46 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#105 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021j.html#10 System Availability
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019b.html#100 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017j.html#42 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017j.html#31 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2017e.html#90 Ransomware on Mainframe application ?
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017e.html#11 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017d.html#92 Old hardware
https://www.garlic.com/~lynn/2016d.html#17 Cybercrime
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2012k.html#19 SnOODAn: Boyd, Snowden, and Resilience
https://www.garlic.com/~lynn/2009o.html#62 TV Big Bang 10/12/09
https://www.garlic.com/~lynn/2007m.html#36 Future of System/360 architecture?
https://www.garlic.com/~lynn/2007g.html#38 Can SSL sessions be compromised?
https://www.garlic.com/~lynn/2006v.html#2 New attacks on the financial PIN processing
https://www.garlic.com/~lynn/2006k.html#9 Arpa address
https://www.garlic.com/~lynn/2005d.html#42 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2002e.html#18 Opinion on smartcard security requested
https://www.garlic.com/~lynn/aepay3.htm#votec (my) long winded observations regarding X9.59 & XML, encryption and certificates

--
virtualization experience starting Jan1968, online at home since Mar1970

Card Sequence Numbers

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Card Sequence Numbers
Date: 16 Jan, 2024
Blog: Facebook
when CP67/CMS was initially installed at the univ (3rd install after Cambridge itself and MIT Lincoln Labs), CP67 source was still maintained & assembled on OS/360 ... however it wasn't long before the source was moved to the CMS filesystem and the CMS update command was used for source change/maintenance. The insert, replace, and delete syntax was all based on the sequence numbers in "card" images ... by default, original source had sequence numbers incremented by tens, leaving room for inserts/adds ... however, if an insert exceeded 9 statements, some existing statements had to be "replaced" (putting them back in with new sequence numbers); in either case sequence numbers for new inserted/replaced statements had to be manually typed. At the univ. I was making so many source changes that I invented the "$" convention, preprocessing for the update command that generated the sequence numbers for new source statements. Later, as part of adding multi-level source file updates, the "$" convention was picked up as part of standard CMS.
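
A minimal sketch of the mechanism (python; my reconstruction for illustration, not actual CMS UPDATE syntax): each card carries a sequence number, an update deletes/replaces a range and/or inserts after a number, and the "$" idea is that sequence numbers for new statements are generated rather than typed. Multi-level update is then just repeatedly applying updates to the output of the previous application.

  def apply_update(source, insert_after, new_cards, delete_from=None, delete_to=None):
      # source: list of (seq, text) in ascending seq order;
      # replace = delete a seq range, then insert the replacement cards
      kept = [(s, t) for (s, t) in source
              if delete_from is None or not (delete_from <= s <= delete_to)]
      # the "$" idea: sequence numbers for new statements are generated, not typed
      new = [(insert_after + i + 1, t) for i, t in enumerate(new_cards)]
      return sorted(kept + new)

  src = [(10, "LA 1,0"), (20, "SR 2,2"), (30, "BR 14")]
  upd1 = apply_update(src, 20, ["AR 2,1"])                # insert one card after seq 20
  upd2 = apply_update(upd1, 20, ["ALR 2,1"], 21, 21)      # replace the seq 21 card
  print(upd2)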

Melinda ... and her history efforts:
https://www.leeandmelindavarian.com/Melinda#VMHist

in the mid-80s, Melinda sent me email asking if I had a copy of the original CMS multi-level source update implementation ... which was an exec implementation repeatedly applying the updates: first applying an update to the original source, creating a temporary file, and then repeatedly applying further updates to the series of temporary files. I had a huge archive of files from the 60s & 70s, replicated on multiple tapes in the IBM Almaden Research tape library ... and was able to pull off the original implementation. Melinda was fortunate since a few weeks later, Almaden had an operational problem where random tapes were being mounted as scratch ... and I lost my complete 60s & 70s archive.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Melinda email
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850908
in this archived post
https://www.garlic.com/~lynn/2006w.html#42

other posts mentioning Melinda's request for original multi-level source update
https://www.garlic.com/~lynn/2023e.html#28 Copyright Software
https://www.garlic.com/~lynn/2021k.html#51 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2014e.html#35 System/360 celebration set for ten cities; 1964 pricing for oneweek
https://www.garlic.com/~lynn/2011c.html#3 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2006w.html#48 vmshare

--
virtualization experience starting Jan1968, online at home since Mar1970

UNIX, MULTICS, CTSS, CSC, CP67

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: UNIX, MULTICS, CTSS, CSC, CP67
Date: 16 Jan, 2024
Blog: Facebook
Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and MULTICS
https://en.wikipedia.org/wiki/Multics
others went to the 4th flr and IBM Cambridge Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
and did virtual machines (initially CP40 for a modified 360/40 with virtual memory, which morphs into CP67 when the 360/67, standard with virtual memory, becomes available; precursor to VM370), the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s), invented GML in 1969 (a decade later it morphs into ISO standard SGML and after another decade morphs into HTML at CERN), misc. time-sharing apps, etc. More history:
https://www.leeandmelindavarian.com/Melinda#VMHist
other history
https://en.wikipedia.org/wiki/CP-67
https://en.wikipedia.org/wiki/CP/CMS
https://en.wikipedia.org/wiki/Time-sharing

I was an undergraduate and the univ. hired me fulltime responsible for OS/360 (the 360/67 originally for TSS/360, but run as a 360/65); the univ shut down the datacenter on weekends and I would have it dedicated for 48hrs, although Monday classes could be difficult. CSC comes out to install CP/67 (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly played with it in my weekend window ... where I rewrote lots of the code. Six months later, CP/67 had been announced and made publicly available, and CSC announced a week class at the Beverly Hills Hilton. When I arrived Sunday to attend, I was asked to teach CP67; turns out the people that were supposed to teach it had resigned on Friday to join one of the two 60s CSC-spinoff CP/67 online interactive commercial services.

In the early 80s, when I saw (AT&T) Unix source, I wondered how much had been inherited from CTSS (by way of MULTICS) ... some of it looked similar to the original CP67 release 1 (that might have also come from CTSS) that I completely rewrote as an undergraduate back in the 60s.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Interdata trivia: original CP67 had 1052 & 2741 terminal support with automatic terminal type identification, switching the IBM controller's terminal-type port scanner to the appropriate type for each line. The univ had some number of TTY/ASCII terminals, so I added ASCII terminal support integrated with the automatic terminal type identification (a toy sketch of the identification logic follows the links below). I then wanted a single dial-up number for all terminals ... it didn't quite work: while the IBM controller could change the terminal-type port scanner for each line, line speed had been hard wired. The univ. then kicks off a clone controller project: build a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition that it could do automatic line speed. It was later updated with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. There was an article about four of us being responsible for (some part of) the IBM clone controller business ... Interdata selling boxes as IBM clone controllers
https://en.wikipedia.org/wiki/Interdata
and then Perkin-Elmer
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
and then spun off as Concurrent
https://en.wikipedia.org/wiki/Concurrent_Computer_Corporation

clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 17 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#36 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe

the new almaden research bldg was heavily provisioned with CAT4, supposedly for 16mbit token-ring, but they found that not only was the $69 10mbit ethernet card capable of higher throughput (8.5mbit) than the $800 (performance kneecapped) 16mbit token-ring microchannel card, but a 10mbit ethernet LAN had higher aggregate throughput and lower latency than a 16mbit token-ring LAN.

the performance-kneecapped 16mbit token-ring cards were targeted at having 300 PCs, all doing host 3270 terminal emulation, sharing the same LAN bandwidth (effectively 300 sharing a couple mbits, say 7kbits/PC). They could instead get a high-speed (TCP/IP) router with channel interface and more than a dozen 10mbit ethernet LANs (effectively each LAN around 8.5mbits) and spread the 300 R6Ks across the LANs (around 15 R6K/LAN sharing 8.5mbits, 600kbits/R6K) ... also the high-end router and 300 $69 enet cards were much less expensive than 300 $800 t/r cards.
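
a minimal back-of-envelope sketch (python) of the arithmetic above; the 2mbit effective token-ring figure and the 15-per-LAN split (i.e. 20 LANs) are assumptions read off the post's numbers, not measured values:

stations = 300

# all 300 sharing one 16mbit token-ring; assume only a couple mbit
# effective for 3270-emulation style traffic
tr_effective_mbit = 2.0
print(tr_effective_mbit * 1000 / stations)   # ~6.7 kbits/station ("say 7kbits")

# 300 machines spread across 10mbit ethernet LANs, ~8.5mbit effective each
per_lan = 15                                 # 15/LAN -> 20 LANs
print(8.5 * 1000 / per_lan)                  # ~567 kbits/machine (~600kbits)

# card costs
print(300 * 800, 300 * 69)                   # $240,000 t/r vs $20,700 enet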

ALM found another problem with lots of RS/6000s, one in nearly every office ... they generated so much heat that the bldg air conditioning couldn't adjust to the change in heat generation when they were all turned off at the end of the day and then all turned back on the next morning (it could handle a steady-state amount of heat, it was the fluctuation that was the problem).

... other trivia: 1988 ACM SIGCOMM had detailed analysis of ethernet (including a test of 30 stations sharing a 10mbit ethernet, all with a low-level device driver loop constantly transmitting minimum size packets; effective LAN throughput drops off from 8.5mbit to 8mbit), about the same time the Dallas E&S center came out with a publication comparing 16mbit t/r to ethernet. I conjectured the ethernet numbers Dallas used were from the 3mbit prototype, before the 10mbit standard with listen-before-transmit.

communication group fighting off client/server and distributed computing posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

posts mentioning 1988 acm sigcomm article:
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023b.html#53 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#19 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#84 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#80 Channel I/O
https://www.garlic.com/~lynn/2022b.html#67 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021b.html#45 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#17 IBM Kneecapping products
https://www.garlic.com/~lynn/2019.html#74 21 random but totally appropriate ways to celebrate the World Wide Web's 30th birthday
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring
https://www.garlic.com/~lynn/2017k.html#18 THE IBM PC THAT BROKE IBM
https://www.garlic.com/~lynn/2017d.html#29 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2017d.html#28 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2015h.html#108 25 Years: How the Web began
https://www.garlic.com/~lynn/2015d.html#41 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014m.html#128 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013m.html#30 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013m.html#18 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013b.html#32 Ethernet at 40: Its daddy reveals its turbulent youth
https://www.garlic.com/~lynn/2012g.html#39 Van Jacobson Denies Averting 1980s Internet Meltdown
https://www.garlic.com/~lynn/2004e.html#17 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2000f.html#39 Ethernet efficiency (was Re: Ms employees begging for food)

--
virtualization experience starting Jan1968, online at home since Mar1970

Los Gatos Lab, Calma, 3277GA

From: Lynn Wheeler <lynn@garlic.com>
Subject: Los Gatos Lab, Calma, 3277GA
Date: 17 Jan, 2024
Blog: Facebook
los gatos lab let me have part of a wing; they had some number of 3277GAs with Tektronix tubes
https://en.wikipedia.org/wiki/IBM_3270#3277
... but also a couple of large ge calma workstations
https://en.m.wikipedia.org/wiki/Calma

3278s came out around the time there were a lot of publications about the productivity of quarter-second response. The 3272/3277 had .086sec hardware response ... so .16sec system response gave quarter second. For the 3278 they moved lots of the electronics back into the 3274 controller (eliminating doing things like the 3277GA) ... driving up coax protocol traffic and latency, so hardware response was .3sec-.5sec ... making it impossible to achieve quarter second. Memos to the 3278 product administrator were met with the response that the 3278 wasn't for interactive computing but data entry (aka electronic keypunch). MVS users never noticed since their system response was rarely even one second (the SHARE "turkey" mascot for MVS). Later, ibm/pc 3277 emulation cards had 4-5 times the transfer throughput of 3278 emulation cards.
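
a minimal sketch (python) of the quarter-second arithmetic above:

target = 0.25                 # quarter-second perceived response

print(target - 0.086)         # 3272/3277: leaves ~.16sec for system response

for hw in (0.3, 0.5):         # 3274/3278 hardware response range
    print(target - hw)        # negative: no system response can recover it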

some posts mentioning Calma
https://www.garlic.com/~lynn/2023c.html#75 IBM Los Gatos Lab
https://www.garlic.com/~lynn/2022.html#63 Calma, 3277GA, 2250-4
https://www.garlic.com/~lynn/2016g.html#68 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016g.html#53 IBM Sales & Marketing
https://www.garlic.com/~lynn/2010c.html#91 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2009.html#37 Graphics on a Text-Only Display
https://www.garlic.com/~lynn/2007m.html#58 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2006q.html#16 what's the difference between LF(Line Fee) and NL (New line) ?
https://www.garlic.com/~lynn/2005u.html#6 Fast action games on System/360+?
https://www.garlic.com/~lynn/2005r.html#24 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)

--
virtualization experience starting Jan1968, online at home since Mar1970

Univ, Boeing Renton and "Spook Base"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Univ, Boeing Renton and "Spook Base"
Date: 17 Jan, 2024
Blog: Facebook
I took a two credit hr intro to fortran/computers and at the end of the semester got a job rewriting 1401 MPIO in assembler for the 360/30. The univ. had been sold a 360/67 for tss/360 to replace the 709 (tape->tape) / 1401 (unit record front-end, manually moving tapes between 1401 and 709 drives) ... and got a 360/30 temporarily replacing the 1401 pending arrival of the 360/67. The univ. shutdown the datacenter on the weekend and I would have the place dedicated (although 48hrs w/o sleep made monday classes hard). I was given a lot of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. Within a few weeks I had a 2000 card assembler program; assembling under os/360 on the 360/30 took 30mins (and it ipl'ed using the BPS loader). I then added an assembler option for generating a version running under os/360 with system services, which took 60mins to assemble (5-6mins/DCB macro).

Early on, I learned the 1st thing to do coming in sat. morning was clean all the tape drives, clean the 1403, disassemble the 2540 reader/punch, clean and reassemble. Sometimes when I came in, production had finished early and they had powered everything off ... and I would find the 360/30 wouldn't power on. With manuals & trial&error, I figured out to put all controllers in CE-mode, power-on the 360/30 and controllers individually, and then return the controllers to normal mode.

Within a year of taking the intro class, the 360/67 arrived (replacing the 360/30, and 2311s with 2314s) and I was hired fulltime responsible for OS/360 (tss/360 never came to production, so it ran as a 360/65). Student fortran ran under a second on the 709; initially on OS/360 (360/65) it ran over a minute. Installed HASP and cut the time in half. Then started redoing stage2 sysgen (beginning with MFT/R11) to 1) run in the production job stream with hasp and 2) reorganize steps/statements to better place datasets and PDS members for improved ordered seek and multi-track search ... cuts another 2/3rds to 12.9secs ... never got better than the 709 until I installed Univ. of Waterloo WATFOR.
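
a rough sketch (python) of the chain; the starting point ("over a minute") is approximate, so this lands near, not exactly on, the measured 12.9secs:

t = 65.0           # student fortran on os/360 ("over a minute", approx)
t = t / 2          # HASP cuts elapsed time in half
t = t * (1 / 3)    # sysgen dataset/PDS placement cuts another 2/3rds
print(t)           # ~10.8sec vs the measured 12.9secs; 709 was still <1sec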

Before I graduate, I'm hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit, including offering services to non-Boeing entities). I thought the Renton datacenter was possibly the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room (a recent comment was that Boeing was ordering 360/65s like other companies ordered keypunches).

In the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. He had a story about being very vocal that the electronics across the trail wouldn't work ... and then being put in command of "spook base" (about the same time I'm at Boeing) ... his biographies claim "spook base" was a $2.5B windfall for IBM (something like ten times Renton) ... some "spook base" refs:
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

He was posted to Eglin where he was stealing computer time, designing the YF16 (that becomes the F16) ... and internal USAF politics wanted to prosecute him. Some more in "The Mad Major"
https://www.usni.org/magazines/proceedings/1997/july/genghis-john
To make matters even worse, Boyd had no right to design airplanes--he worked at Eglin Air Force Base, Florida, where rednecks tested bombs designed by others, whereas the airplane designers worked at Wright-Patterson Air Force Base in Dayton Ohio, the home of the Wright brothers and the mecca for aeronautical engineering. For a man like Boyd, there was only one thing to do. He concocted a daring plan to steal thousands of hours of computer time by making it appear that the computer was being used for something else.

... snip ...

... misc:
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/
https://web.archive.org/web/20141108103851/http://www.dnipogo.org/fcs/boyd_thesis.htm
https://thetacticalprofessor.net/2018/04/27/updated-version-of-boyds-aerial-attack-study/
some more:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Boyd Posts & URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner

some recent posts mentioning Univ, Boeing Renton, and "Spook Base"
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 18 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#36 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe

RIOS (RS6000) chip-set didn't have provision for multiprocessor cache consistency ... as a result, doing HA/CMP scale-up was purely cluster (then HA/CMP cluster scale-up was transferred for announce as IBM supercomputer for technical/scientific *ONLY*, and we were told we couldn't work on anything with more than four processors)
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

... 1993 era:
eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
RS6000/990 : 126MIPS; 16-way: 2016MIPS, 128-way: 16,128MIPS


we heard that about IBM prices from lots of companies, national labs also; they started doing racks of server PCs (essentially the same as what large cloud operators were doing) with the 100mbit ethernet that appeared in 1995

Also 1995, 3yrs after we left, HA/CMP Product Administrator claimed that HA/CMP had IBM's 2nd highest software revenue (after VTAM) that year.

The executive we reported to doing HA/CMP goes over to head up SOMERSET (AIM: Apple, IBM, Motorola) doing single-chip 801 ... originally Power/PC ... and I've commented it appears to adapt Motorola's 88k/RISC multiprocessor cache consistency ... so there could be POWERPC SMP multiprocessor systems ... as well as clusters of SMP multiprocessor systems.
https://en.wikipedia.org/wiki/AIM_alliance
https://en.wikipedia.org/wiki/IBM_Power_microprocessors
https://en.wikipedia.org/wiki/Motorola_88000
In the early 1990s Motorola joined the AIM effort to create a new RISC architecture based on the IBM POWER architecture. They worked a few features of the 88000 (such as a compatible bus interface[10]) into the new PowerPC architecture to offer their customer base some sort of upgrade path. At that point the 88000 was dumped as soon as possible

... snip ...

https://en.wikipedia.org/wiki/PowerPC
https://en.wikipedia.org/wiki/IBM_Power_microprocessors#PowerPC
After two years of development, the resulting PowerPC ISA was introduced in 1993. A modified version of the RSC architecture, PowerPC added single-precision floating point instructions and general register-to-register multiply and divide instructions, and removed some POWER features. It also added a 64-bit version of the ISA and support for SMP.

... snip ...

after turn of century (Dec2000), POK mainframe ships z900, 16 processors, 2.5BIPS (156MIPS/proc)

but in 1999 IBM PowerPC 440 Hits 1,000MIPS (>six times faster processor)
https://www.cecs.uci.edu/~papers/mpr/MPR/19991025/131403.pdf
also 1999, Intel Pentium III hits 2,054MIPS (13times z900 processor)

2003, Pentium4 hits 9,726MIPS

also 2003, z990 32 processor 9BIPS (281MIPS/proc) and single P4 faster than z990 32 processor SMP
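
a quick sketch (python) of the processor comparisons above:

z900 = 2500 / 16        # z900: 2.5BIPS / 16 processors = ~156MIPS/processor
print(1000 / z900)      # PowerPC 440 (1999): ~6.4x the z900 processor
print(2054 / z900)      # Pentium III (1999): ~13x the z900 processor
print(9726 / 9000)      # Pentium4 (2003) vs the entire 32-processor z990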

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

Hospitals owned by private equity are harming patients

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Hospitals owned by private equity are harming patients
Date: 18 Jan, 2024
Blog: Facebook
Hospitals owned by private equity are harming patients, reports find. Hospital ratings dive and medical errors rise when private equity firms are in charge.
https://arstechnica.com/health/2024/01/hospitals-slash-staff-services-quality-of-care-when-private-equity-takes-over/

... note after the turn of the century, PE was buying up beltway bandits and gov. contractors and hiring prominent politicians to lobby congress to outsource gov. to their companies (laws were in place that blocked the companies from lobbying directly, but this was a way of skirting those laws). They cut corners to skim as much money as possible; for example, outsourcing of security clearances found companies doing the paperwork, but not actually doing the background checks.

Then last decade they started moving into medical practices, hospitals, rest homes, etc.

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 19 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#36 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe

Note: Not traditional POK mainframe or RS/6000, but IBM was also selling (IBM branded) server blades with processing and I/O throughput matching or bettering their other "mainframes": 2003 Pentium 4, 9,726MIPS (9.7BIPS) vs 2003 z990, 32 processors, 9BIPS

Note RISC had lots of throughput advantages over CISC ... by the turn of the century, i86 chip makers had a hardware layer that translated i86 instructions into RISC micro-ops, starting to match traditional RISC throughput (each instruction taking more cycles to complete, but many more instructions being processed concurrently, significantly increasing throughput)
https://www.anandtech.com/show/1998/3
The most common x86 instructions are translated into a single micro-op by the 3 simple decoders. The complex decoder is responsible for the instructions that produce up to 4 micro-ops. The really long and complex x86 instructions are handled by a microcode sequencer. This way of handling the complex most CISC-y instructions has been adopted by all modern x86 CPU designs, including the P6, Athlon (XP and 64), and Pentium 4.

... snip ...

... along with out-of-order execution, branch prediction, speculative execution, etc. A lot of this was the observation that memory (& cache miss) latency, when measured in count of CPU cycles, was comparable to 60s disk latency when measured in count of 60s CPU cycles (memory is the new disk). Other refs:
https://news.ycombinator.com/item?id=12353489
https://stackoverflow.com/questions/5806589/why-does-intel-hide-internal-risc-core-in-their-processors
https://electronics.stackexchange.com/questions/188268/difference-between-micro-operations-in-risc-and-cisc-processors
https://en.wikipedia.org/wiki/Intel_Microcode
https://ieeexplore.ieee.org/abstract/document/1281676
https://link.springer.com/chapter/10.1007/978-3-540-93799-9_4
http://sunnyeves.blogspot.com/2009/07/intel-x86-processors-cisc-or-risc-or.html
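
purely conceptual sketch (python) of the translation idea described above: a register-memory CISC instruction decoded into RISC-like load/op/store micro-ops; the instruction and micro-op names here are illustrative, not any vendor's actual encoding:

def decode(inst):
    # a register-memory CISC op becomes load/op/store micro-ops that can
    # be scheduled out-of-order alongside micro-ops from other instructions
    if inst == "ADD [mem], reg":
        return ["LOAD tmp, [mem]", "ADD tmp, tmp, reg", "STORE [mem], tmp"]
    return [inst]    # simple register-register ops map 1:1

print(decode("ADD [mem], reg"))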


2010, max configured z196, 80 processor, 50BIPS,
      625MIPS/processor, $30M, $600K/BIPS

2010, E5-2600 server blade, two 8core chips, 16processor, 500BIPS,
      31BIPS/processor, IBM base list price $1815, $3.63/BIPS

Since turn of century, large cloud vendors claim that they assemble their own server blades for 1/3rd the cost of brand name blades ... or $1.21/BIPS. Shortly after press articles about server chip vendors were shipping half their product directly to cloud megadatacenters, IBM sells off its server business. Trivia: large cloud operations have a dozen or more megadatacenters around the world, each with 500,000 or more systems (possibly equivalent to 5M POK mainframes) with massive automation (megadatacenter getting by with 70-80 staff).
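
the price/performance arithmetic above as a python sketch:

print(30_000_000 / 50)    # z196: $30M / 50BIPS = $600,000/BIPS
print(1815 / 500)         # E5-2600 blade: $1815 / 500BIPS = $3.63/BIPS
print(1815 / 3 / 500)     # self-assembled at 1/3rd the cost: ~$1.21/BIPS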

Not long later, there were articles that IBM (traditional POK) mainframe hardware revenue had dropped to a few percent of total, but total mainframe group revenue was 25% of total IBM revenue (and 40% of profit), mostly software and services.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

some recent posts mentioning E5-2600 server blades
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#107 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#2 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#96 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#43 AI Scale-up
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#98 Mainframe Cloud
https://www.garlic.com/~lynn/2022f.html#12 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#72 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022c.html#12 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022c.html#7 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#125 Google Cloud
https://www.garlic.com/~lynn/2022b.html#63 Mainframes
https://www.garlic.com/~lynn/2022b.html#57 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022.html#96 370/195
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O

POK mainframe system MIPS this century; for the more recent systems I had to extrapolate based on public statements comparing each newer system to the previous:


z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
• pubs say z15 1.25 times z14 (1.25*150BIPS or 190BIPS)
z16, 200 processors, 222BIPS* (1111MIPS/proc), Sep2022
• pubs say z16 1.17 times z15 (1.17*190BIPS or 222BIPS)

z196/jul2010, 50BIPS, 625MIPS/processor
z16/sep2022, 222BIPS, 1111MIPS/processor

12yrs, z196->z16,
   222/50=4.4times total system BIPS;
   1111/625=1.8times per processor MIPS.
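
sketch (python) of the extrapolation and the z196->z16 ratios:

z15 = 1.25 * 150          # pubs: z15 = 1.25 x z14 (150BIPS) -> ~190BIPS
z16 = 1.17 * 190          # pubs: z16 = 1.17 x z15 -> ~222BIPS
print(z15, z16)
print(222 / 50)           # z196->z16, 12yrs: ~4.4x total system BIPS
print(1111 / 625)         # ... but only ~1.8x per-processor MIPS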

--
virtualization experience starting Jan1968, online at home since Mar1970

3330, 3340, 3350, 3380

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3330, 3340, 3350, 3380
Date: 19 Jan, 2024
Blog: Facebook
3344 ... actually multiple 3340s emulated on hardware that is basically a 3350 physical drive
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_3344
3350 product information (3350 native, 3330 1&2 emulation, 3344)
https://ed-thelen.org/comp-hist/IBM-ProdAnn/3350.pdf

TYMSHARE provided their CMS-based online computer conferencing (precursor to social media) free to (user group) SHARE in AUG1976 ... from archives, 3380 stiction/sticktion
http://vm.marist.edu/~vmshare/browse.cgi?fn=3380&ft=PROB#104

getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk

some recent posts mention 3350fh
https://www.garlic.com/~lynn/2024.html#29 IBM Disks and Drums
https://www.garlic.com/~lynn/2023g.html#84 Vintage DASD
https://www.garlic.com/~lynn/2023f.html#49 IBM 3350FH, Vulcan, 1655
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2021k.html#97 IBM Disks
https://www.garlic.com/~lynn/2021j.html#65 IBM DASD
https://www.garlic.com/~lynn/2021f.html#75 Mainframe disks

--
virtualization experience starting Jan1968, online at home since Mar1970

VAX MIPS whatever they were, indirection in old architectures

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: VAX MIPS whatever they were, indirection in old architectures
Newsgroups: comp.arch
Date: Fri, 19 Jan 2024 16:17:32 -1000
EricP <ThatWouldBeTelling@thevillage.com> writes:
For single precision the 780 is slightly faster for "coded BLAS" and the 158 is about 50% faster for compiled code.

trivia: jan1979, I was asked to run the cdc6600 rain benchmark on an (engineering) 4341 for a national lab that was looking at getting 70 4341s for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami); the engineering 4341 was clocked about 10% slower than what later shipped to customers. I also ran it on a 158-3 and a 3031. A 370/158 ran both the 370 microcode and the integrated channel microcode; a 3031 was two 158 engines, one with just the 370 microcode and a 2nd with just the integrated channel microcode.


cdc6600: 35.77secs
158:     45.64secs
3031:    37.03secs
4341:    36.21secs

... the 158 integrated channel microcode was using lots of processing cycles, even when no i/o was going on.
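
rough estimate (python) of the 158 cycles eaten by the integrated channel microcode, assuming the 3031 (dedicated 370 engine, separate channel engine) is otherwise the same processor:

t158, t3031 = 45.64, 37.03
print((t158 - t3031) / t158)   # ~0.19: nearly 20% of the 158 cycles went
                               # to integrated channel microcode overhead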

a few past posts mention national lab rain benchmark for 4341
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2017i.html#62 64 bit addressing into the future
https://www.garlic.com/~lynn/2016h.html#44 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead

--
virtualization experience starting Jan1968, online at home since Mar1970

Card Sequence Numbers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Card Sequence Numbers
Date: 20 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#39 Card Sequence Numbers

people came out from the science center to install CP67 (3rd installation after cambridge itself and MIT Lincoln Labs; precursor to vm370) and I mostly played with it during my weekend time. At that time, CP67 source was kept and assembled on OS/360, txt decks punched, arranged in a card tray behind the BPS loader, and the whole tray of cards IPL'ed; it would invoke CPINIT, which would write the storage image to disk for (system) IPL. note: as individual modules were assembled, I would take each assembled TXT deck, do a diagonal stripe across the top of the deck with a colored marker and write the module name, before placing it in the tray, in order. Then later, when individual modules were changed and re-assembled ... it was easy to identify/find the module cards/deck (in the card tray) to be replaced.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

a few other posts mentioning cp67 card ipl
https://www.garlic.com/~lynn/2023g.html#83 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2012l.html#98 PDP-10 system calls, was 1132 printer history

--
virtualization experience starting Jan1968, online at home since Mar1970

Slow MVS/TSO

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Slow MVS/TSO
Date: 20 Jan, 2024
Blog: Facebook
slow MVS&TSO ... also the SHARE MVS group selects "turkey" as its mascot, and the theme song "MVS" (sung at the SHARE HASP sing-along)
http://www.mxg.com/thebuttonman/boney.asp
also revealed that customers weren't installing MVS as expected/desired and so IBM added $4k sales bonus

note: about the same time, CERN did a report to SHARE comparing MVS/TSO with VM370/CMS ... copies (outside IBM) were freely available, but inside IBM, copies were stamped "IBM Confidential - Restricted" (second highest security classification), available on a need-to-know basis only.

A decade ago, I was asked to track down the decision to add virtual memory to all 370s ... basically MVT storage management was so bad that region sizes had to be specified four times larger than used; as a result, a standard 1mbyte 370/165 would only run four concurrent regions, insufficient to keep the system busy and justified ... laying MVT out in a 16mbyte virtual memory (similar to running MVT in a CP67 16mbyte virtual machine) allowed the number of concurrent regions to be increased by a factor of four with little or no paging. Old archived post with pieces of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73
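
a minimal sketch (python) of the region arithmetic; the absolute sizes are illustrative, the 4x over-specification is from the post:

overspec = 4             # regions specified ~4x larger than actually used
regions_real = 4         # 1mbyte 370/165 allocating by *specified* size
# in 16mbyte virtual memory, real storage only backs pages actually
# touched, so concurrency can rise by the over-specification factor
print(regions_real * overspec)   # 16 concurrent regions, little/no paging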

then a copy of an IBM confidential 370 virtual memory document leaked to the industry press (well before announcement) ... which kicked off a search for the source of the leak. One of the outcomes was that all company copier machines had a serial number added ... that would appear on all pages copied (identifying the machine). The $4K sales bonus was during the "Future System" period ... which was completely different from 370 and was going to completely replace 370 (internal politics during FS was killing off 370 efforts ... the lack of new 370s is credited with giving the clone/Amdahl 370 makers their market foothold; jokes that IBM sales had to resort to serious FUD). For FS documents, they went to a specially modified VM370 with FS softcopy documents allowed read/only on designated 3270 terminals ... some FS refs:
http://www.jfsowa.com/computer/memo125.htm

when FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 in parallel. 370/XA specification documents were "IBM Confidential - Registered" (highest security) ... each copy printed had a serial number on each page, would be registered to a specific person, required to be kept in double-lock cabinets and subject to surprise audits (and referred to as '811' for their Nov1978 publication date).

The head of POK also managed to convince corporate to kill the VM370 product (possibly in part because of the CERN report), shutdown the development group, and move all the people to POK for MVS/XA (or supposedly MVS/XA wouldn't ship on time). Eventually Endicott manages to save the VM370 product mission, but had to reconstitute a development group from scratch. Old post about Learson trying to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy (destruction then significantly accelerated by FS)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Note MVS had a design problem, inheriting the pervasive pointer-passing API from OS/360. To address it, an 8mbyte image of the MVS kernel occupied every application 16mbyte address space. Then subsystems were placed in their own separate 16mbyte virtual address spaces. To support the pointer-passing API (between applications and subsystems), a one mbyte segment was mapped into every virtual address space, CSA or "Common Segment Area", for passing data back&forth between applications and subsystems. However, the requirement for CSA space was somewhat proportional to the number of concurrently running applications and subsystems ... quickly becoming the "Common System Area" and by 3033 was regularly 5-6mbytes (leaving only 2-3mbytes for applications) and threatening to become 8mbytes (8mbytes for kernel and 8mbytes for CSA, leaving nothing for applications) ... desperately needing the transition to MVS/XA and 31-bit addressing.
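
the CSA squeeze as a python sketch (24-bit addressing, sizes from the paragraph above):

addr_space, kernel = 16, 8           # mbytes
for csa in (1, 5, 6, 8):             # CSA growth over time
    print(csa, addr_space - kernel - csa)
# 1mbyte CSA leaves 7mbytes for an application; 5-6 (3033 era) leaves
# only 2-3; 8 would leave 0 ... hence MVS/XA and 31-bit addressing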

Then in the early 80s, POK was facing a problem similar to SVS->MVS, but in this case MVS->MVS/XA. Amdahl was having better success because it had a (microcode) HYPERVISOR (virtual machine) able to concurrently run MVS and MVS/XA on the same machine. POK had the VMTOOL, which was for MVS/XA development only and never intended for release to customers. In order to respond to the lack of customer MVS->MVS/XA migration (and the Amdahl success), VMTOOL (which was dependent on the really slow SIE facility; the lack of 3081 microcode space required SIE to be paged in&out) is released as VM/MA (migration aid) and then VM/SF (system facility). The trout (aka 3090) group, seeing the VM requirement, designed a high-performance SIE implementation ... old archived post
https://www.garlic.com/~lynn/2006j.html#27
with part of email exchange
https://www.garlic.com/~lynn/2006j.html#email810630

... but it still wasn't until nearly the end of the decade that 3090 was able to ship PR/SM & LPAR in response to Amdahl's HYPERVISOR. Logical Partition
https://en.wikipedia.org/wiki/Logical_partition

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner

a few posts mentioning "CSA", mvs/xa, hypervisor, vm/ma, vm/sf
https://www.garlic.com/~lynn/2023g.html#100 VM Mascot
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System

--
virtualization experience starting Jan1968, online at home since Mar1970

VAX MIPS whatever they were, indirection in old architectures

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: VAX MIPS whatever they were, indirection in old architectures
Newsgroups: comp.arch
Date: Sun, 21 Jan 2024 16:37:40 -1000
Michael S <already5chosen@yahoo.com> writes:
Did I read it right? Brand new mid-range IBM mainframe barely matched 15 y.o. CDC machine that was 10 years out of production ? That sounds quite embarrassing.

re:
https://www.garlic.com/~lynn/2024.html#48 VAX MIPS whatever they were, indirection in old architectures

national lab was looking at getting 70 4341s because of price/performance ... sort of the leading edge of the coming cluster scale-up supercomputing tsunami.

a decade later, I had a project, originally HA/6000, for the NYTimes to move their newspaper system (ATEX) off (DEC) VaxCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres). Early Jan1992, in a meeting, the Oracle CEO is told 16-way cluster mid-92 and 128-way cluster ye-92. However, end of Jan1992, cluster scale-up is transferred for announce as IBM supercomputer (for technical/scientific *ONLY*, possibly because of the commercial cluster scale-up "threat") and we are told we can't work on anything with more than four processors (we leave IBM a few months later). A couple weeks later, IBM (cluster) supercomputer group in the press (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

First half 80s, IBM 4300s sold into the same mid-range market as VAX and in about the same numbers for single and small unit orders ... big difference was large companies ordering hundreds of 4300s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami).

old archived post with vax sales, sliced and diced by model, year, us/non-us
https://www.garlic.com/~lynn/2002f.html#0

2nd half of the 80s, the mid-range market was moving to workstations and large PC servers ... affecting both VAX and 4300s

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 22 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#36 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe

POK mainframe system MIPS this century; for the more recent systems I had to extrapolate based on public statements comparing each newer system to the previous:
z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
• pubs say z15 1.25 times z14 (1.25*150BIPS or 190BIPS)
z16, 200 processors, 222BIPS* (1111MIPS/proc), Sep2022
• pubs say z16 1.17 times z15 (1.17*190BIPS or 222BIPS)

z196/jul2010, 50BIPS, 625MIPS/processor
z16/sep2022, 222BIPS, 1111MIPS/processor

12yrs, z196->z16,
   222/50=4.4times total system BIPS;
   1111/625=1.8times per processor MIPS.


RIOS/RS6000 didn't have cache consistency for tightly-coupled, shared-memory multiprocessor (so HA/CMP scale-up was purely cluster). The executive we reported to doing HA/CMP went over to head up Somerset (AIM: Apple, IBM, Motorola) doing single-chip 801 ... originally Power/PC ... and I've commented it appears to adapt Motorola's 88k/RISC multiprocessor cache consistency ... so there could be POWERPC SMP multiprocessor systems ... as well as clusters of SMP multiprocessor systems.
https://en.wikipedia.org/wiki/AIM_alliance
https://en.wikipedia.org/wiki/IBM_Power_microprocessors
https://en.wikipedia.org/wiki/Motorola_88000
In the early 1990s Motorola joined the AIM effort to create a new RISC architecture based on the IBM POWER architecture. They worked a few features of the 88000 (such as a compatible bus interface[10]) into the new PowerPC architecture to offer their customer base some sort of upgrade path. At that point the 88000 was dumped as soon as possible

https://en.wikipedia.org/wiki/PowerPC
https://en.wikipedia.org/wiki/IBM_Power_microprocessors#PowerPC
After two years of development, the resulting PowerPC ISA was introduced in 1993. A modified version of the RSC architecture, PowerPC added single-precision floating point instructions and general register-to-register multiply and divide instructions, and removed some POWER features. It also added a 64-bit version of the ISA and support for SMP.
... snip...

1996 article noting IBM hadn't chosen to offer shared-memory, multiprocessor PowerPC servers ... possibly for the same reason that HA/CMP cluster scale-up was transferred for announce as IBM supercomputer for technical/scientific *ONLY* ... it would "threaten" the POK mainframe commercial market.
https://www.hpcwire.com/1996/02/02/bull-offers-a-powerpc-smp-server-the-escala/

In 1999, the IBM PowerPC 440 hits 1,000MIPS (a >six times faster processor than the z900's, and 20yrs before the z15's 1BIPS processor)
https://www.cecs.uci.edu/~papers/mpr/MPR/19991025/131403.pdf

Note RISC had lots of throughput advantages over CISC ... by the turn of the century, i86 chip makers had a hardware layer that translated i86 instructions into RISC micro-ops for execution, starting to match traditional RISC throughput (each i86 instruction taking more cycles to complete, but many more instructions being processed concurrently, significantly increasing throughput)
https://www.anandtech.com/show/1998/3
The most common x86 instructions are translated into a single micro-op by the 3 simple decoders. The complex decoder is responsible for the instructions that produce up to 4 micro-ops. The really long and complex x86 instructions are handled by a microcode sequencer. This way of handling the complex most CISC-y instructions has been adopted by all modern x86 CPU designs, including the P6, Athlon (XP and 64), and Pentium 4.
... snip...

... along with out-of-order execution, branch prediction, speculative execution, etc. A lot of this was the observation that memory (& cache miss) latency, when measured in count of CPU cycles, was comparable to 60s disk latency when measured in count of 60s CPU cycles (memory is the new disk) ... also i86 processor performance was increasing much faster than IBM "z" processors.

1999, Intel Pentium III hits 2,054MIPS (13times z900 processor & twice PowerPC 440)
2003, Pentium4 hits 9,726MIPS, single P4 processor faster than z990 32 processor SMP

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

Happy Birthday John Boyd!

From: Lynn Wheeler <lynn@garlic.com>
Subject: Happy Birthday John Boyd!
Date: 23 Jan, 2024
Blog: Facebook
Happy Birthday John Boyd!
https://thewhirlofreorientation.substack.com/p/happy-birthday-john-boyd

Boyd quote from the dedication of Boyd Hall, United States Air Force Weapons School, Nellis Air Force Base, Nevada. 17 Sept 1999:
"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997

... snip ...

... I thot Boyd Hall was a little strange; by the time he passed in 1997, the USAF had pretty much disowned him; it was the Marines at Arlington, and his effects went to the Gray library & research center in Quantico. Recent IBM related Boyd reference:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Boyd related posts & URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 24 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#36 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe

trivia: 1988, the branch office asked me to see about helping LLNL (national lab) standardize some serial stuff they were playing with, which quickly becomes the fibre-channel standard (FCS, including some stuff I had done in 1980; initially 1gbit, full-duplex, aggregate 200mbyte/sec). We were planning on using it for some HA/CMP scale-up, interconnecting large processor clusters with large disk farms. Then some POK engineers become involved and define a heavy-weight protocol for FCS that radically reduces throughput, eventually released as FICON. The latest public numbers I have found are the z196 "Peak I/O" benchmark getting 2M IOPS using 104 FICONs (running over 104 FCS). About the same time, a FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS with higher throughput than 104 FICONs). Also, a max-configured z196 benchmarked at 50BIPS while an E5-2600 server blade benchmarked at 500BIPS. Note POK publications recommended that SAPs (system assist processors that actually do I/O) be held to 70% CPU, which would have limited the benchmark to 1.5M IOPS (not 2M IOPS) ... also no CKD DASD have been made for decades, all being simulated on industry standard fixed-block disks.
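
the FICON vs FCS arithmetic above as a python sketch:

peak_iops, ficons = 2_000_000, 104
per_ficon = peak_iops / ficons
print(per_ficon)                 # ~19,230 IOPS per FICON
print(1_000_000 / per_ficon)     # one FCS claiming 1M IOPS ~ 52 FICONs
print(0.70 * peak_iops)          # 70% SAP cap -> roughly the 1.5M
                                 # (not 2M) IOPS figure above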

About same time (1988) was also asked if I could participate in (SLAC/Gustavson) SCI standards activity
https://www.slac.stanford.edu/pubs/slacpubs/5000/slac-pub-5184.pdf
supporting shared-memory scalable multiprocessor. After leaving IBM, I was doing some "electronic commerce" work with NETSCAPE; then they got a large Sequent multiprocessor, and the Sequent people said they were doing most of the multiprocessor scale-up work, past two processors, on NT (this was before the Sequent SCI NUMA-Q 256 processor; I was also doing some consulting work for Steve Chen, then Sequent CTO ... and before IBM bought them and shut them down).

There was windows NT work supporting non-i86 processors, including DEC's (risc) alpha and AIM's (apple, ibm, motorola risc) power/pc. RS/6000s and later Power/PCs ... came in desktop, desk-side and server rack configurations. The servers easily outran the largest POK mainframes.

FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
SMP, shared memory multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
"electronic commerce" gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
CKD DASD, FBA, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

EARN 40th Anniversary Conference

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: EARN 40th Anniversary Conference
Date: 24 Jan, 2024
Blog: Facebook
EARN 40th Anniversary Conference
https://www.earn2024.net/

Email (below) from a former co-worker at the science center. I have been blamed for online computer conferencing in the late 70s and early 80s on the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s); it really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem (folklore: when the IBM corporate executive committee was told about it, 5of6 wanted to fire me); only about 300 directly participated, but claims were that upwards of 25,000 were reading.

Date: 03/20/84 15:15:41
To: wheeler

Hello Lynn,

I have left LaGaude last September for a 3 years assignement to IBM Europe, where I am starting a network that IBM helps the universities to start.

This network, called EARN (European Academic and Research Network), is, roughly speaking, a network of VM/CMS machines, and it looks like our own VNET. It includes some non IBM machines (many VAX, some CDC, UNIVAC and some IBM compatible mainframes). EARN is a 'brother' of the US network BITNET to which it is connected.

EARN is starting now, and 9 countries will be connected by June. It includes some national networks, such as JANET in U.K., SUNET in Sweden.

I am now trying to find applications which could be of great interest for the EARN users, and I am open to all ideas you may have. Particularly, I am interested in computer conferencing.


... snip ... top of post, old email index, HSDT email

European Academic and Research Network
https://en.wikipedia.org/wiki/European_Academic_and_Research_Network
The History of the EARN Network
https://earn-history.net/technology/the-network/

another former co-worker was responsible for the technology used for the internal network, BITNET, and EARN.
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

Ed and I transfer out to SJR in 1977

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

other trivia; from one of the inventors of GML at the science center in the 60s (precursor to SGML, HTML, XML, etc)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

... then science center "wide area network" morphs into the corporate network (BITNET and EARN).
https://en.wikipedia.org/wiki/BITNET

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (&/or EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML&SGML posts
https://www.garlic.com/~lynn/submain.html#sgml
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

other related refs: Cambridge Scientific Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
History of CP/CMS
https://en.wikipedia.org/wiki/History_of_CP/CMS
CP/CMS
https://en.wikipedia.org/wiki/CP/CMS
IBM System/360 Model 67
https://en.wikipedia.org/wiki/IBM_System/360-67
IBM Generalized Markup Language
https://en.wikipedia.org/wiki/IBM_Generalized_Markup_Language
also Melinda's history documents
https://www.leeandmelindavarian.com/Melinda#VMHist

--
virtualization experience starting Jan1968, online at home since Mar1970

Did Stock Buybacks Knock the Bolts Out of Boeing?

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Did Stock Buybacks Knock the Bolts Out of Boeing?
Date: 24 Jan, 2024
Blog: Facebook
Did Stock Buybacks Knock the Bolts Out of Boeing?
https://lesleopold.substack.com/p/did-stock-buybacks-knock-the-bolts
Since 2013, the Boeing Corporation initiated seven annual stock buybacks. Much of Boeing's stock is owned by large investment firms which demand the company buy back its shares. When Boeing makes repurchases, the price of its stock is jacked up, which is a quick and easy way to move money into the investment firms' purse. Boeing's management also enjoys the boost in price, since nearly all of their executive compensation comes from stock incentives. When the stock goes up via repurchases, they get richer, even though Boeing isn't making any more money.

... snip ...

2016, one of "The Boeing Century" articles was about how the merger with MD had nearly taken down Boeing and may yet still (an infusion of military industrial complex culture into commercial operation)
https://issuu.com/pnwmarketplace/docs/i20160708144953115

The Coming Boeing Bailout?
https://mattstoller.substack.com/p/the-coming-boeing-bailout
Unlike Boeing, McDonnell Douglas was run by financiers rather than engineers. And though Boeing was the buyer, McDonnell Douglas executives somehow took power in what analysts started calling a "reverse takeover." The joke in Seattle was, "McDonnell Douglas bought Boeing with Boeing's money."

... snip ...

Crash Course
https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution
Sorscher had spent the early aughts campaigning to preserve the company's estimable engineering legacy. He had mountains of evidence to support his position, mostly acquired via Boeing's 1997 acquisition of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft plant in Long Beach and a CEO who liked to use what he called the "Hollywood model" for dealing with engineers: Hire them for a few months when project deadlines are nigh, fire them when you need to make numbers. In 2000, Boeing's engineers staged a 40-day strike over the McDonnell deal's fallout; while they won major material concessions from management, they lost the culture war. They also inherited a notoriously dysfunctional product line from the corner-cutting market gurus at McDonnell.

... snip ...

Boeing's travails show what's wrong with modern capitalism. Deregulation means a company once run by engineers is now in the thrall of financiers and its stock remains high even as its planes fall from the sky
https://www.theguardian.com/commentisfree/2019/sep/11/boeing-capitalism-deregulation

stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

a few recent posts mentioning Boeing troubles
https://www.garlic.com/~lynn/2023g.html#104 More IBM Downfall
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021k.html#69 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#40 Boeing Built an Unsafe Plane, and Blamed the Pilots When It Crashed

--
virtualization experience starting Jan1968, online at home since Mar1970

EARN 40th Anniversary Conference

From: Lynn Wheeler <lynn@garlic.com>
Subject: EARN 40th Anniversary Conference
Date: 25 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#55 EARN 40th Anniversary Conference

periodically reposted: early 80s, we had the HSDT project, T1 and faster computer links (lots of battles with the company communication group that was stuck at 56kbits/sec links), also working with the NSF director, and were supposed to get $20M to interconnect the NSF supercomputer centers; then congress cuts the budget, some other things happen, and eventually an RFP is released (in part based on what we already had running). From the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid; RFP awarded 24Nov87). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
NSFNET related email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
company communication group fighting off TCP/IP, client/server, distributed computing posts (trying to preserve their dumb terminal paradigm and install base)
https://www.garlic.com/~lynn/subnetwork.html#terminal

--
virtualization experience starting Jan1968, online at home since Mar1970

Sales of US-Made Guns and Weapons, Including US Army-Issued Ones, Are Under Spotlight in Mexico Again

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Sales of US-Made Guns and Weapons, Including US Army-Issued Ones, Are Under Spotlight in Mexico Again
Date: 26 Jan, 2024
Blog: Facebook
Sales of US-Made Guns and Weapons, Including US Army-Issued Ones, Are Under Spotlight in Mexico Again
https://www.nakedcapitalism.com/2024/01/us-made-weapons-are-once-again-under-the-spotlight-in-mexicos-war-on-drugs.html

... after the economic mess imploded ... with the gov. avoiding prosecution ("deferred prosecution") and the Federal Reserve bailing out the too-big-to-fail ... there were articles about too-big-to-fail institutions being caught money laundering for terrorists and drug cartels ... but they were just subject to more "deferred prosecutions" ... the too-big-to-fail money laundering also enabled the drug cartels to acquire military grade weapons and equipment, responsible for the increase in violence on both sides of the border (in earlier eras the executives would have been in jail and the institutions shut down).

a few articles from the period (some gone 404, but live on at the wayback machine):

Too Big to Jail - How Big Banks Are Turning Mexico Into Colombia
https://web.archive.org/web/20100808141220/http://www.taipanpublishinggroup.com/tpg/taipan-daily/taipan-daily-080410.html
Banks Financing Mexico Gangs Admitted in Wells Fargo Deal
https://www.bloomberg.com/news/articles/2010-06-29/banks-financing-mexico-s-drug-cartels-admitted-in-wells-fargo-s-u-s-deal
Wall Street Is Laundering Drug Money And Getting Away With It
http://www.huffingtonpost.com/zach-carter/megabanks-are-laundering_b_645885.html
Banks Financing Mexico Drug Gangs Admitted in Wells Fargo Deal
https://web.archive.org/web/20100701122035/http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2010/06/28/bloomberg1376-L4QPS90UQVI901-6UNA840IM91QJGPBLBFL79TRP1.DTL

money laundering posts
https://www.garlic.com/~lynn/submisc.html#money.laundering
too big to fail, too big to prosecute, too big to jail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess

some specific posts mentioning too big to jail money laundering
https://www.garlic.com/~lynn/2021h.html#58 Mexico sues US gun-makers over flow of weapons across border
https://www.garlic.com/~lynn/2021h.html#13 'A Kleptocrat's dream': US real estate a safe haven for billions in dirty money, report says
https://www.garlic.com/~lynn/2021g.html#88 Mexico sues US gun-makers over flow of weapons across border
https://www.garlic.com/~lynn/2019c.html#56 It's time we tear up our economics textbooks and start over
https://www.garlic.com/~lynn/2018b.html#45 More Guns Do Not Stop More Crimes, Evidence Shows
https://www.garlic.com/~lynn/2016c.html#29 Qbasic
https://www.garlic.com/~lynn/2015e.html#92 prices, was Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014j.html#81 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014c.html#103 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014c.html#43 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014c.html#28 Royal Pardon For Turing
https://www.garlic.com/~lynn/2013d.html#42 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013.html#4 HSBC's Settlement Leaves Us In A Scary Place
https://www.garlic.com/~lynn/2012p.html#64 IBM Is Changing The Terms Of Its Retirement Plan, Which Is Frustrating Some Employees
https://www.garlic.com/~lynn/2012p.html#48 Search Google, 1960:s-style
https://www.garlic.com/~lynn/2012p.html#30 Search Google, 1960:s-style
https://www.garlic.com/~lynn/2012p.html#24 OCC Confirms that Big Banks are Badly Managed, Lack Adequate Risk Management Controls
https://www.garlic.com/~lynn/2012k.html#37 If all of the American earned dollars hidden in off shore accounts were uncovered and taxed do you think we would be able to close the deficit gap?
https://www.garlic.com/~lynn/2011p.html#96 Republicans Propose Bill to Treat Mexican Drug Cartels as 'Terrorist Insurgency'
https://www.garlic.com/~lynn/2011n.html#49 The men who crashed the world
https://www.garlic.com/~lynn/2011k.html#53 50th anniversary of BASIC, COBOL?
https://www.garlic.com/~lynn/2011f.html#52 Are Americans serious about dealing with money laundering and the drug cartels?
https://www.garlic.com/~lynn/2011.html#50 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2010m.html#24 Little-Noted, Prepaid Rules Would Cover Non-Banks As Wells As Banks

--
virtualization experience starting Jan1968, online at home since Mar1970

RUNOFF, SCRIPT, GML, SGML, HTML

From: Lynn Wheeler <lynn@garlic.com>
Subject: RUNOFF, SCRIPT, GML, SGML, HTML
Date: 26 Jan, 2024
Blog: Facebook
CTSS runoff
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
was adapted to CP67/CMS (precursor to VM370/CMS) as SCRIPT at the science center

after GML was invented in 1969 at the science center, GML tag processing was added to CMS SCRIPT

after a decade, GML morphs into ISO standard SGML ... and after another decade, morphs into HTML at CERN. Trivia: the 1st webserver in the states was on the Stanford SLAC VM370/CMS system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

as mentioned upthread, Goldfarb's website seems to have gone 404 ... but the pages are still available on the wayback machine
https://web.archive.org/web/20230703135757/http://www.sgmlsource.com/history/sgmlhist.htm
https://web.archive.org/web/20231001185033/http://www.sgmlsource.com/history/roots.htm
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Science center documents were done with SCRIPT ... one of the 1st mainstream IBM documents was the 370 architecture "red book" (for its distribution in red 3-ring binders). It had a SCRIPT command line option to either generate the full architecture "red book" or the 370 Principles of Operation subset.
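A hypothetical sketch of how such conditional source can look (the control words and symbol name here are stand-ins, not verified from the actual red book source; DCF-style SCRIPT had set-symbols and conditional control words along these lines):

.* illustrative only -- &doctype chosen at SCRIPT invocation
.se doctype = 'POP'              .* or 'REDBOOK'
.if &doctype eq 'REDBOOK'
.* imbed the additional architecture notes for the full red book
.im archnote
.ei

One source file, two documents, depending on the command line option.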

analysis of early html files at CERN ... tags somewhat taken from the Waterloo SCRIPT GML user's guide
http://infomesh.net/html/history/early/
However, HTML suffered greatly from the lack of standardization, and the dodgy parsing techniques allowed by Mosaic (in 1993). If HTML had been precisely defined as having to have an SGML DTD, it may not have become as popular as fast, but it would have been a lot architecturally stronger.

... snip ...
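The family resemblance is easy to see side by side ... GML starter-set style markup (as processed by CMS/Waterloo SCRIPT) on the left, the corresponding early HTML on the right (an illustrative sketch, not taken from any specific document):

:h1.Chapter Title          <h1>Chapter Title</h1>
:p.A paragraph of text.    <p>A paragraph of text.
:ol.                       <ol>
:li.first item             <li>first item
:eol.                      </ol>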

XHTML ... after another decade (3 decades after gml invented at science center in 1969)
https://en.wikipedia.org/wiki/XHTML

GML/SGML posts
https://www.garlic.com/~lynn/submain.html#sgml

specific posts that reference sgmlsource web pages, infomesh web page and CTSS RUNOFF
https://www.garlic.com/~lynn/2022b.html#111 The Rise of DOS: How Microsoft Got the IBM PC OS Contract
https://www.garlic.com/~lynn/2022.html#3 GML/SGML/HTML/Mosaic
https://www.garlic.com/~lynn/2017i.html#25 progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017f.html#105 The IBM 7094 and CTSS
https://www.garlic.com/~lynn/2017f.html#93 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2016h.html#87 [CM] 40 years of man page history
https://www.garlic.com/~lynn/2015g.html#98 PROFS & GML
https://www.garlic.com/~lynn/2013o.html#21 CTSS DITTO
https://www.garlic.com/~lynn/2013m.html#37 Why is the mainframe so expensive?
https://www.garlic.com/~lynn/2013.html#72 IBM documentation - anybody know the current tool? (from Mislocated Doc thread)
https://www.garlic.com/~lynn/2012.html#64 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2010k.html#61 GML
https://www.garlic.com/~lynn/2010k.html#55 GML
https://www.garlic.com/~lynn/2010k.html#53 Idiotic programming style edicts
https://www.garlic.com/~lynn/2008j.html#86 CLIs and GUIs

--
virtualization experience starting Jan1968, online at home since Mar1970

IOS3270 Green Card and DUMPRX

From: Lynn Wheeler <lynn@garlic.com>
Subject: IOS3270 Green Card and DUMPRX
Date: 27 Jan, 2024
Blog: Facebook
... somebody in the UK did a CMS IOS3270 version ... I've done a q&d conversion to HTML
https://www.garlic.com/~lynn/gcard.html
info
https://www.garlic.com/~lynn/gcard.html#greencard

note: IBM had a mainframe bootstrap diagnostic procedure that started with scoping individual components. With the transition to the 3081 & TCMs, it was no longer possible to scope directly. They then developed a procedure with a "service processor" connected to lots of probes into the TCMs ... and the service processor could be "scoped". For the 3090, they decided to go to a 4331 running a highly modified version of VM370 Release 6 with all the service screens done in IOS3270 ... later upgraded to a pair of (redundant) 4361s.

Trivia: old email from the 3092 (service processor) group about including one of my diagnostic tools
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

In the early days of REX (before it was renamed REXX and released to customers), I wanted to show it wasn't just another pretty scripting language, and decided to rewrite the large assembler dump reader program in REX with ten times the function and ten times the performance (sleight of hand to get interpreted REX faster than assembler), in 3 months elapsed time working half time. I finished early, so also did a library of automated scripts that searched for common failure signatures. I thought it would then be shipped to customers, but it wasn't (even though it was in use by almost every internal datacenter and PSR). I eventually got permission to give talks at user group meetings on how I did the implementation ... and within a few months similar customer versions started showing up.
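The failure-signature library idea is simple to sketch; a minimal stand-in in Python (the real scripts were REX driving the dump-reader primitives; the signature names and dump format here are invented for illustration):

SIGNATURES = [
    # (name, markers that must all appear in the formatted dump) -- invented
    ("storage overlay",  ["ABEND0C4", "FREE", "FRET"]),
    ("lock spin",        ["LOCKWAIT", "SPINLOOP"]),
]

def scan(dump_lines):
    hits = []
    for name, markers in SIGNATURES:
        if all(any(m in line for line in dump_lines) for m in markers):
            hits.append(name)
    return hits

Each automated script amounted to one such signature check, run over the formatted dump before a human ever looked at it.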

DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx

a few recent green card ios3270 posts
https://www.garlic.com/~lynn/2023g.html#66 2540 "Column Binary"
https://www.garlic.com/~lynn/2023f.html#28 IBM Reference Cards
https://www.garlic.com/~lynn/2023.html#67 IBM "Green Card"
https://www.garlic.com/~lynn/2022h.html#101 PSR, IOS3270, 3092, & DUMPRX
https://www.garlic.com/~lynn/2022f.html#69 360/67 & DUMPRX
https://www.garlic.com/~lynn/2022b.html#86 IBM "Green Card"
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#116 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021i.html#29 OoO S/360 descendants
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021e.html#44 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021.html#9 IBM 1403 printer carriage control tape

--
virtualization experience starting Jan1968, online at home since Mar1970

VM Microcode Assist

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM Microcode Assist
Date: 27 Jan, 2024
Blog: Facebook
The 1st VM microcode assist was for the 370/158, where a control register was set indicating whether or not it was running in virtual-machine mode ... certain (virtual machine) supervisor instructions were then executed directly in "VM" mode ... rather than interrupting into the kernel for simulation.

Then, (originally) for the 138/148, I was con'ed into helping Endicott move parts of the VM kernel into microcode ("ECPS"); new instructions were inserted into the kernel to invoke the microcode rather than the 370 code (at startup, if the kernel found itself running on a non-ECPS machine, all those ECPS instructions were no-oped). I was told they had 6k bytes of microcode space, that 370 instructions would translate to microcode at about the same number of bytes, and that I needed to select the highest-executed 370 paths for moving into microcode. Old archived post with the initial analysis (6k bytes of kernel pathlengths accounted for approx. 80% of kernel CPU, and the move to microcode would be approx. a ten times speedup):
https://www.garlic.com/~lynn/94.html#21
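The selection itself is essentially a greedy knapsack: take the kernel paths with the most CPU per byte until the 6k-byte budget is spent. A minimal sketch in Python (the profile numbers are invented; the real analysis is in the archived post above):

# pick highest-use kernel paths to drop into microcode, under a 6k-byte budget
profile = [  # (path, bytes of 370 code, fraction of kernel CPU) -- invented
    ("dispatch",        1200, 0.22),
    ("virtual I/O/CCW", 2000, 0.30),
    ("free/fret",        800, 0.15),
    ("page steal",      1500, 0.10),
    ("spool",           3000, 0.05),
]
budget, used, cpu = 6144, 0, 0.0
for name, nbytes, frac in sorted(profile, key=lambda p: p[2] / p[1], reverse=True):
    if used + nbytes <= budget:
        used, cpu = used + nbytes, cpu + frac
# moved paths run ~10x faster in microcode; overall kernel speedup:
speedup = 1.0 / ((1.0 - cpu) + cpu / 10.0)
print(f"{used} bytes moved, {cpu:.0%} of kernel CPU, kernel speedup {speedup:.2f}x")

With the actual numbers (6k bytes covering ~80% of kernel CPU at ~10x), the same Amdahl's-law arithmetic gives roughly a 3.5x reduction in kernel CPU.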

Then Endicott tried to get corporate permission to ship every 138&148 with VM370 pre-installed (sort of like the current PR/SM & LPAR), but was vetoed. This was in the period when the head of POK (high-end mainframes) was in the process of convincing corporate to kill the VM370 product, shutdown the development group and move all the people to POK for MVS/XA (Endicott eventually managed to save the VM370 product mission, but had to reconstitute a development group from scratch). Endicott did bring ECPS forward for the 4300s.

360, 370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode

posts mentioning ECPS and SIE microcode
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2019b.html#78 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018e.html#30 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2017e.html#48 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017c.html#81 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#80 Great mainframe history(?)
https://www.garlic.com/~lynn/2017b.html#37 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2011p.html#114 Start Interpretive Execution
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2010m.html#74 z millicode: where does it reside?
https://www.garlic.com/~lynn/2007o.html#47 Virtual Storage implementation
https://www.garlic.com/~lynn/2007o.html#42 mainframe performance, was Is a RISC chip more expensive?
https://www.garlic.com/~lynn/2006n.html#44 Any resources on VLIW?
https://www.garlic.com/~lynn/2005u.html#43 POWER6 on zSeries?
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2002p.html#48 Linux paging
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2001b.html#29 z900 and Virtual Machine Theory

--
virtualization experience starting Jan1968, online at home since Mar1970

VM Microcode Assist

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM Microcode Assist
Date: 27 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#61 VM Microcode Assist

Instructions were commonly executed purely one-at-a-time, serially (whether in hardware or microcode) ... it has since moved to lots of concurrent execution; the dynamic i86 translation to RISC micro-ops may increase the elapsed time of each individual i86 instruction, but executing as RISC micro-ops increases the concurrency, greatly increasing throughput. From comments in recent RS6000 posts (including the reference to a 2003 single-processor i86 P4 having the performance of a 2003 max-configured 32-processor z990):
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe

Note RISC had lots of throughput advantages over CISC ... by the turn of the century, i86 chip makers had a hardware layer translating i86 instructions into RISC micro-ops, starting to match traditional RISC throughput (each instruction may take more cycles to complete, but with more instructions processed concurrently, significantly increasing throughput)
https://www.anandtech.com/show/1998/3
The most common x86 instructions are translated into a single micro-op by the 3 simple decoders. The complex decoder is responsible for the instructions that produce up to 4 micro-ops. The really long and complex x86 instructions are handled by a microcode sequencer. This way of handling the complex most CISC-y instructions has been adopted by all modern x86 CPU designs, including the P6, Athlon (XP and 64), and Pentium 4.

... snip ...

... along with out-of-order execution, branch prediction, speculative execution, etc. A lot of this was the observation that memory (& cache miss) latency, measured in count of CPU cycles, had become comparable to 60s disk latency measured in count of 60s CPU cycles (memory is the new disk).
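As a generic illustration of the decode step (not any specific CPU's actual micro-op encoding): a register-register add maps to a single micro-op, while a read-modify-write memory form cracks into a load, an ALU op and a store that the out-of-order core can schedule around other work:

add  eax, ebx     ->  1 micro-op:   add   r.eax, r.ebx
add  [mem], eax   ->  3 micro-ops:  load  t, [mem]
                                    add   t, r.eax
                                    store [mem], t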

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

Millicode in an IBM zSeries processor
https://www.researchgate.net/publication/224103049_Millicode_in_an_IBM_zSeries_processo
Because of the complex architecture of the zSeries® processors, an internal code, called millicode, is used to implement many of the functions provided by these systems. While the hardware can execute many of the logically less complex and high-performance instructions, millicode is required to implement the more complex instructions, as well as to provide additional support functions related primarily to the central processor. This paper is a review of millicode on previous zSeries CMOS systems and also describes enhancements made to the z990 system for processing of the millicode. It specifically discusses the flexibility millicode provides to the z990 system.

... snip ...

360, 370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode

--
virtualization experience starting Jan1968, online at home since Mar1970

VM Microcode Assist

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM Microcode Assist
Date: 27 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#61 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#63 VM Microcode Assist

Low/mid-range IBM processors used vertical microcode (normal-looking instructions), averaging ten native instructions for each 370 instruction emulated ... not all that different from the "hercules" 370 emulator running on intel processors.
https://en.wikipedia.org/wiki/Hercules_(emulator)
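Vertical microcode is structurally the same as a software emulator's fetch-decode-execute loop; a toy Python sketch (two real 370 opcodes, semantics simplified for illustration) makes the roughly ten-native-instructions-per-emulated-instruction ratio plausible:

def step(mem, regs, psw):
    # one emulated 370 instruction: fetch, decode, execute, bump PSW
    op = mem[psw]
    if op == 0x1A:                       # AR r1,r2 -- add register
        r1, r2 = mem[psw + 1] >> 4, mem[psw + 1] & 0xF
        regs[r1] = (regs[r1] + regs[r2]) & 0xFFFFFFFF
        return psw + 2
    if op == 0x07:                       # BCR -- branch on condition (simplified
        return regs[mem[psw + 1] & 0xF]  # here to unconditional branch-to-register)
    raise RuntimeError(f"program check: unhandled opcode {op:#x}")

Every emulated instruction costs a handful of native loads, shifts, masks and stores ... about the claimed 10:1.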

High-end processors had horizontal microcode, which could activate/control multiple hardware functional units in a single instruction ... much more complex and difficult to program, having to keep track of when something started, how long it might take, and when it might end.

I was given permission to give talks on how the 138/148 ECPS was done at the monthly BAYBUNCH user group meetings (hosted by Stanford SLAC) ... lots of attendance by silicon valley people, including Amdahl ... and we would frequently adjourn to local watering holes. The Amdahl people briefed me that they were in the process of implementing HYPERVISOR using "MACROCODE" (370-like instructions running in microcode mode, much simpler and easier to program than horizontal microcode, created originally to quickly respond to a series of trivial 3033 new features constantly being required for MVS) ... and would grill me about ECPS.

360, 370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode

specific posts mentioning amdahl, macrocode, hypervisor, sie, 3090, pr/sm, lpar
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017b.html#37 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2010m.html#74 z millicode: where does it reside?
https://www.garlic.com/~lynn/2007n.html#96 some questions about System z PR/SM
https://www.garlic.com/~lynn/2007b.html#1 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2006p.html#42 old hypervisor email

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4300s

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4300s
Date: 28 Jan, 2024
Blog: Facebook
The 4341 mainframe computer system was introduced by IBM on June 30, 1979.
https://web.archive.org/web/20190105032753/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP4341.html

Bldg15 got an engineering 4341 in Jan1979 for disk testing, and I was con'ed into doing benchmarks for a national lab that was looking at getting 70 for a compute farm (sort of the (b)leading edge of the coming cluster supercomputing tsunami). The national lab benchmark was 60s Fortran "RAIN", which ran in 35.77secs on a CDC6600 and 36.13secs on the engineering 4341 (the engineering 4341 had its processor cycle reduced 10-20% from what would ship in production machines). A cluster of 5 production 4341s had more throughput, lower cost, smaller footprint, and less power and cooling than a 3033. Large companies were also ordering hundreds of vm/4341s at a time for placing out in departmental areas (sort of the (b)leading edge of the coming distributed computing tsunami).

In the 1st half of the 70s, the Future System effort (totally different from 370 and going to completely replace it) was shutting down 370 activity; the lack of new 370s is credited with giving the clone 370 makers their market foothold (and IBM marketing had to really hone their FUD skills). When FS imploded there was a mad rush to get stuff back into the product pipelines, including kicking off the 303x & 3081 efforts in parallel. For the 303x channel director, they took a 370/158 engine with just the integrated channel microcode (no 370 microcode). A 3031 was two 158 engines, one with the 370 microcode and one with the integrated channel microcode. A 3032 was a 168-3 modified to use the 303x channel director for external channels. A 3033 started out as 168-3 logic remapped to 20% faster chips.


Rain benchmarks

60s cdc6600: 35.77secs
158:         45.64secs
3031:        37.03secs
4341:        36.21secs

The 3031 benchmark showed the integrated channel microcode taking lots of the 158 engine's processing, even when not doing any I/O. Trivia: the disk engineers did some tweaks to the 4341 microcode to be able to use it for 3380 3mbyte/sec testing.

For the 3081, some conjecture is that they had to go to TCMs to pack the enormous number of circuits into a reasonably sized machine
http://www.jfsowa.com/computer/memo125.htm
The 370 emulator minus the FS microcode was eventually sold in 1980 as the IBM 3081. The ratio of the amount of circuitry in the 3081 to its performance was significantly worse than other IBM systems of the time; its price/performance ratio wasn't quite so bad because IBM had to cut the price to be competitive. The major competition at the time was from Amdahl Systems -- a company founded by Gene Amdahl, who left IBM shortly before the FS project began, when his plans for the Advanced Computer System (ACS) were killed. The Amdahl machine was indeed superior to the 3081 in price/performance and spectacularly superior in terms of performance compared to the amount of circuitry.

... snip ...

A 3081D processor was supposedly faster than the 3033, but some benchmarks didn't show it that way. They then came out with the 3081K, doubling the processor cache size and claiming it was 40% faster than the 3081D ... although the two-processor 3081K had about the same aggregate processing as an Amdahl single processor (and much less throughput under MVS: MVS was claiming two-processor 370 throughput was only 1.2-1.5 times a single processor of the same machine, so an MVS 3081K might have only 60% of the throughput of the Amdahl single processor).

Amdahl left IBM shortly after his ACS/360 was killed in the 60s. The following mentions some ACS/360 features that show up in the 90s with ES/9000.
https://people.computing.clemson.edu/~mark/acs_end.html

truncated old archived email from somebody in the trout/3090 group (as soon as the 3033 was out the door, the 3033 processor engineers started on the 3090), referencing Endicott starting on the 4381:

Date: 04/23/81 09:57:42
To: wheeler

... in some sense true. but we haven't built an interesting high-speed machine in 10 years. look at the 85/165/168/3033/trout. all the same machine with treaks here and there. and the hordes continue to sweep in with faster and faster machines. true, endicott plans to bring the low/middle into the current high-end arena, but then where is the high-end product development?


... snip ... top of post, old email index

... sort of repeating the theme that IBM lived in the shadow of defeat after the FS implosion.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

posts with above email
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2024.html#11 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022e.html#61 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2021b.html#23 IBM Recruiting
https://www.garlic.com/~lynn/2019c.html#45 IBM 9020

4381 reference
https://en.wikipedia.org/wiki/IBM_4300#IBM_4381
http://bitsavers.org/pdf/ibm/4381/GC20-2021-2_A_Guide_to_the_IBM_4381_Processor_Apr1986.pdf

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
getting to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner

posts mention future system and 4300s tsunamis
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022h.html#48 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#9 Cloud Timesharing
https://www.garlic.com/~lynn/2021h.html#107 3277 graphics
https://www.garlic.com/~lynn/2019.html#63 instruction clock speed
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018f.html#93 ACS360 and FS
https://www.garlic.com/~lynn/2018e.html#30 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2017i.html#62 64 bit addressing into the future
https://www.garlic.com/~lynn/2017d.html#5 IBM's core business
https://www.garlic.com/~lynn/2017c.html#50 Mainframes after Future System
https://www.garlic.com/~lynn/2016h.html#48 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2014l.html#11 360/85
https://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframes and Education Infrastructure

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframes and Education Infrastructure
Date: 28 Jan, 2024
Blog: Facebook
Account of Learson trying to block the bureaucrats, careerists and MBAs from destroying the Watson legacy/culture
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

then the Future System disaster significantly accelerated the rise of the bureaucrats, careerists, and MBAs .... From Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

... more FS:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

Early 1980s, large companies had orders for hundreds of vm/4341s at a time, for placing out in departmental, non-datacenter areas (sort of the leading edge of the coming distributed computing tsunami). MVS looked at the huge increase in deployments, but the only new CKD disk was the (datacenter) 3380, while the new non-datacenter disks were all FBA ... eventually CKD 3375 (CKD emulated on the FBA 3370; the mapping is sketched after the links below) was released for MVS, but it didn't do MVS much good: distributed operations were looking at dozens/scores of vm/4341 systems per support person, while MVS still required dozens of support people per system.

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
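The 3375-on-3370 emulation amounts to a fixed mapping: each emulated CKD track gets a fixed extent of FBA blocks, so a cylinder/head address becomes a block range. A minimal sketch (Python; the geometry constants are assumed for illustration, not the real 3375/3370 numbers):

# hedged sketch of CKD-track-to-FBA-block mapping
BLOCKS_PER_TRACK = 64      # FBA blocks reserved per emulated CKD track (assumed)
TRACKS_PER_CYL   = 12      # tracks per cylinder (assumed)

def track_extent(cyl, head):
    first = (cyl * TRACKS_PER_CYL + head) * BLOCKS_PER_TRACK
    return first, first + BLOCKS_PER_TRACK - 1   # inclusive block range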

A science center co-worker was responsible for the networking technology of the science center CP67 wide-area network, which morphs into the internal corporate network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s) and was also used for the corporate-sponsored BITNET and EARN univ. networks (also larger than the arpanet/internet for a period). GML was also invented at the science center, in 1969
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
IBM internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

BITNET & EARN
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/European_Academic_and_Research_Network
https://earn-history.net/technology/the-network/

late 80s, a senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group had corporate strategic ownership of everything that crossed the datacenter walls and was fighting off distributed computing and client/server, trying to preserve its dumb terminal paradigm and install base. The disk division was seeing data fleeing the datacenter to more distributed-computing-friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions, but they were constantly being vetoed by the communication group.

communication group fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

The communication group datacenter stranglehold wasn't limited to disks, and a couple short years later the company had one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

Further Discussion of Boeing's Slow Motion Liquidation: Rational Given Probable Airline Industry Shrinkage?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Further Discussion of Boeing's Slow Motion Liquidation: Rational Given Probable Airline Industry Shrinkage?
Date: 30 Jan, 2024
Blog: Facebook
Further Discussion of Boeing's Slow Motion Liquidation: Rational Given Probable Airline Industry Shrinkage?
https://www.nakedcapitalism.com/2024/01/further-discussion-of-boeings-slow-motion-liquidation-rational-given-probable-airline-industry-shrinkage.html
In our post yesterday, we discussed how Boeing management was on a path that would result in the liquidation of the company. We did not dwell overmuch on how significant a development that would be, given Boeing's heft as a company (for instance, it is America's biggest exporter) and its critical role as a supplier to civilian airlines and as a defense contractor. Boeing is an apex play in two very large economic ecosystems.

... snip ...

Looting of Boeing Set to Continue, Epitomizing Decline of Late-Stage AngloSphere Capitalism
https://www.nakedcapitalism.com/2024/01/looting-of-boeing-set-to-continue-epitomizing-decline-of-late-stage-anglosphere-capitalism.html
Tolstoy wasn't quite right when he said each unhappy family was unhappy in its own way. Boeing's accelerating rate of customer-profit harming and passenger-inconveniencing product defects represents an unusually pure and unadulterated form of executive looting of a company. However, this pattern of behavior is common in Corporate America in its less extreme form.

... snip ...

recent Boeing troubles post: Did Stock Buybacks Knock the Bolts Out of Boeing?
https://www.garlic.com/~lynn/2024.html#56 Did Stock Buybacks Knock the Bolts Out of Boeing?

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military(/intelligence/congressional/)-industrial complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

a few other recent posts mentioning Boeing
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally

--
virtualization experience starting Jan1968, online at home since Mar1970

VM Microcode Assist

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM Microcode Assist
Date: 30 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#61 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#62 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#63 VM Microcode Assist

From some recent RS/6000 posts (RS/6000 predating POWER/PC's cache coherency and SMP shared-memory support):

eight processor ES/9000-982: 408MIPS (51MIPS/processor)
RS6000/990: 126MIPS (2.5 times the ES/9000 processor)
RS6000/990 16-way cluster: 2,016MIPS; 128-way cluster: 16,128MIPS

Millicode in an IBM zSeries processor
https://www.researchgate.net/publication/224103049_Millicode_in_an_IBM_zSeries_processo

360, 370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode

The executive we reported to when doing our HA/CMP cluster product goes over to head up SOMERSET (the AIM effort: Apple, IBM, Motorola) doing the single-chip 801 ... originally Power/PC ... and I've commented that it appears to adapt Motorola's 88k RISC multiprocessor cache consistency ... so there could be PowerPC SMP multiprocessor systems ... as well as clusters of SMP multiprocessor systems.
https://en.wikipedia.org/wiki/AIM_alliance
https://en.wikipedia.org/wiki/IBM_Power_microprocessors
https://en.wikipedia.org/wiki/Motorola_88000
In the early 1990s Motorola joined the AIM effort to create a new RISC architecture based on the IBM POWER architecture. They worked a few features of the 88000 (such as a compatible bus interface[10]) into the new PowerPC architecture to offer their customer base some sort of upgrade path. At that point the 88000 was dumped as soon as possible

... snip ...

https://en.wikipedia.org/wiki/PowerPC
https://en.wikipedia.org/wiki/IBM_Power_microprocessors#PowerPC
After two years of development, the resulting PowerPC ISA was introduced in 1993. A modified version of the RSC architecture, PowerPC added single-precision floating point instructions and general register-to-register multiply and divide instructions, and removed some POWER features. It also added a 64-bit version of the ISA and support for SMP.

... snip ...

After the turn of the century (Dec2000), POK mainframe ships z900: 16 processors, 2.5BIPS aggregate (156MIPS/processor) ... while in 1999 the IBM PowerPC 440 had already hit 1,000MIPS (more than six times faster per processor), 20yrs before the z15 processor hits 1BIPS
https://www.cecs.uci.edu/~papers/mpr/MPR/19991025/131403.pdf

Note RISC had lots of throughput advantages over CISC ... by the turn of the century, i86 chip makers had a hardware layer translating i86 instructions into RISC micro-ops, starting to match traditional RISC throughput
https://www.anandtech.com/show/1998/3
The most common x86 instructions are translated into a single micro-op by the 3 simple decoders. The complex decoder is responsible for the instructions that produce up to 4 micro-ops. The really long and complex x86 instructions are handled by a microcode sequencer. This way of handling the complex most CISC-y instructions has been adopted by all modern x86 CPU designs, including the P6, Athlon (XP and 64), and Pentium 4.

... snip ...

... along with out-of-order execution, branch prediction, speculative execution, etc. A lot of this was the observation that memory (& cache miss) latency, measured in count of CPU cycles, had become comparable to 60s disk latency measured in count of 60s CPU cycles (memory is the new disk). Other refs:
https://news.ycombinator.com/item?id=12353489
https://stackoverflow.com/questions/5806589/why-does-intel-hide-internal-risc-core-in-their-processors
https://electronics.stackexchange.com/questions/188268/difference-between-micro-operations-in-risc-and-cisc-processors
https://en.wikipedia.org/wiki/Intel_Microcode
https://ieeexplore.ieee.org/abstract/document/1281676
https://link.springer.com/chapter/10.1007/978-3-540-93799-9_4
http://sunnyeves.blogspot.com/2009/07/intel-x86-processors-cisc-or-risc-or.html

and in 1999, Intel Pentium III hits 2,054MIPS (13 times the z900 processor);
then in 2003, Pentium4 hits 9,726MIPS, while the 32-processor z990 aggregate is 9BIPS (281MIPS/proc) ... a single P4 processor faster than the entire z990 32-processor SMP


801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

How IBM Stumbled Onto RISC
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#3 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#7 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#9 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#11 How IBM Stumbled onto RISC

other recent posts mentioning RISC and/or RS/6000
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#54 RS/6000 Mainframe

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3270

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3270
Date: 30 Jan, 2024
Blog: Facebook
When the 3274/3278 came out (about the time of the publications about the productivity of quarter-second response), it was much slower than the 3272/3277 ... they had moved lots of the terminal electronics back into the (shared) 3274 controller (cutting 3278 manufacturing costs), which significantly drove up the coax protocol chatter. The 3272/3277 had .086sec hardware response ... so .161sec system response (or better) still gave the human quarter-second perceived response (.161 + .086 ≈ .25), back in the early 80s when the interactive computing productivity studies were all the rage. The 3274/3278 had .3-.5+ sec hardware response (depending on the data), making quarter-second perceived response impossible. Complaints sent to the 3278 product administrator about it being much worse for interactive computing got back the response that 3278s were intended for "data entry" (i.e. electronic keypunch).

Later, the IBM/PC 3277 emulation card had 3-4 times the upload/download throughput of the 3278 emulation cards. From long ago and far away:

Date: 01/17/86 12:37:14
From: wheeler
To: (losgatos distribution)

I was in YKT this week & visited xxxxx yyyy. He is shipping me two PCCAs now ... since I couldn't remember the address out here ... he is sending them care of zzzzz. The demo they had with PCCA on PCNET was various host connections was quite impressive, both terminal sessions and file transfer. Terminal sessions supported going "both ways" ... PVM from PCDOS over PCNET to AT with PCCA, into 370 PVM and using PVM internal net to log on anywhere. A version of MYTE with NETBIOS support is used on the local PC machine. They claim end-to-end data rate of only 70kbytes per second now ... attributed to bottlenecks associated with NETBIOS programming. They could significantly improve that with bypassing NETBIOS and/or going to faster PC-to-PC interconnect (token ring, ethernet, etc). However, 70kbytes/sec is still significantly better than the 15kbytes/sec that Myte gets using TCA support thru 3274.


... snip ... top of post, old email index

The communication group was fiercely fighting off client/server and distributed computing and also trying to prevent mainframe TCP/IP from being announced. When they lost, they changed tactics and claimed that since they had corporate strategic (stranglehold) responsibility for everything that crossed datacenter walls, TCP/IP had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did the "fixes" to support RFC1044, and in some tuning tests at Cray Research between a 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

AWD had done their own (PC/AT bus) 4mbit T/R card for the PC/RT. Then for the microchannel RS/6000, AWD was forced to use PS2 cards (which had been heavily performance-kneecapped by the communication group); for example, the ($800) PS2 16mbit T/R microchannel card had lower per-card throughput than the PC/RT 4mbit T/R card. By comparison, a $69 10mbit ethernet card could sustain 8.5mbit.

communication group fighting off client/server and distributed computing trying to preserve their dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
RFC1044 support posts
https://www.garlic.com/~lynn/subnetwork.html#1044
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

some posts mentioning myte
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2021i.html#73 IBM MYTE
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2013g.html#17 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2010c.html#24 Processes' memory
https://www.garlic.com/~lynn/2005r.html#17 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2000e.html#56 Why not an IBM zSeries workstation?

--
virtualization experience starting Jan1968, online at home since Mar1970

NIH National Library Of Medicine

From: Lynn Wheeler <lynn@garlic.com>
Subject: NIH National Library Of Medicine
Date: 31 Jan, 2024
Blog: Facebook
early 90s (after leaving IBM), I had the opportunity to spend some time with people at NIH's national library of medicine. At the time they had a mainframe BDAM implementation dating from the late 60s; two of the people that had worked on the original implementation were still around, and we were able to exchange some war stories

... undergraduate at univ in the 60s, I had been hired fulltime responsible for os/360. The univ. library got an ONR grant to do an online catalog (some of it went for a 2321 datacell) and the project was selected for the original CICS product betatest ... so supporting CICS was added to my tasks. First problem: CICS wouldn't come up ... it turned out CICS had some (undocumented) hard-coded BDAM options and the library had built its BDAM datasets with a different set of options.

in the 90s, somebody commented that there were something like 40k professional NLM librarians world-wide. The process was that they would sit down with a doctor or other medical professional for a couple hrs, take down their requirements, and then go off for 2-3 days doing searches ... eventually coming back with some set of results.

NLM had passed the search threshold of an extremely large number of articles back around 1980 and had a severe bimodal keyword search problem: out to five-to-eight keywords there would still be hundreds of thousands of responses ... and then adding one more keyword (to the search) would result in zero responses. The holy grail of large search infrastructures has been to come up with a number of responses greater than zero and less than a hundred.

in the early 80s, NLM got an (apple) interface, Grateful Med. Grateful Med could ask for just the number of responses ... rather than the actual responses ... and kept track of the searches and their response counts. A search session then amounted to a semi-directed random walk ... looking for a query that met the holy grail ... greater than zero responses and less than one hundred.
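That search loop is easy to caricature in code. A hedged Python sketch: the count-only query is the key primitive (supplied by the caller below); the walk itself, the window, and the step choices are invented for illustration:

import random

def grateful_search(candidates, count_hits, lo=1, hi=99, max_tries=50):
    # semi-directed random walk: add/drop keywords until the count-only
    # query lands in the "holy grail" window, 0 < n < 100
    query = set(random.sample(candidates, min(5, len(candidates))))
    for _ in range(max_tries):
        n = count_hits(query)
        if lo <= n <= hi:
            return query, n
        if n > hi:                      # too broad: add a keyword
            unused = [k for k in candidates if k not in query]
            if not unused:
                break
            query.add(random.choice(unused))
        elif query:                     # n == 0, too narrow: drop a keyword
            query.discard(random.choice(sorted(query)))
        else:
            break
    return None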

some past posts
https://www.garlic.com/~lynn/2023d.html#7 Ingenious librarians
https://www.garlic.com/~lynn/2017i.html#4 EasyLink email ad
https://www.garlic.com/~lynn/2017f.html#34 The head of the Census Bureau just quit, and the consequences are huge
https://www.garlic.com/~lynn/2013g.html#87 Old data storage or data base
https://www.garlic.com/~lynn/2009q.html#25 Old datasearches
https://www.garlic.com/~lynn/2009m.html#88 Continous Systems Modelling Package
https://www.garlic.com/~lynn/2006l.html#31 Google Architecture

posts mentioning CICS and/or BDAM
https://www.garlic.com/~lynn/submain.html#bdam

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AIX

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AIX
Date: 31 Jan, 2024
Blog: Facebook
801/RISC ROMP was originally for a Displaywriter followon; when that got canceled, they decided to retarget it to the UNIX workstation market and got the company that had done the AT&T unix port for the IBM/PC (PC/IX) to do one for the "PC/RT", as AIX.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

Palo Alto was working with UCB BSD and UCLA Locus (unix work-alikes) ... originally BSD under VM370, with vm370 modifications so VM370 could generate virtual address spaces for unix forks (Mar1982, I ran an internal advanced technology conference where they presented the VM370 BSD work). They then got retargeted to doing the UCB BSD port for the PC/RT instead. I had been working with the Los Gatos lab (LSG) and Metaware's TWS, which two LSG people had used to implement a 370 Pascal; one of them was working on a C-language front-end but then left for Metaware, and I talked Palo Alto into using Metaware for a 370 C-compiler and then also a ROMP C-compiler used for the PC/RT ... eventually released as AOS. Old archived post about the Mar1982 conference:
https://www.garlic.com/~lynn/96.html#4a

Palo Alto had originally ported Locus to Series/1. UCLA Locus was eventually released as AIX/370 and AIX/386.

Trivia: "SPM" had originally developed for CP67 ... superset of VMCF, IUCV, and SMSG combination. It was ported by POK to VM370 (for many uses, including automated operator) and I included it my internal VM370 distribution starting late 1974. trivia2: the author of REX used SPM for a multi-user client/server 3270 spacewar game (since RSCS included SPM support, even product version shipped to customers, even tho VM370 SPM never shipped to customers) ... so clients didn't have to be on the same machine (almost immediately robot clients appeared beating all human players, to level the playing field, server was changed to increase energy use non-linear as client responses faster than humans).

some early VM370 work
https://www.garlic.com/~lynn/94.html#2
csc/vm email
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

old spm email
https://www.garlic.com/~lynn/2006k.html#email851017
some SPM documentation
https://www.garlic.com/~lynn/2006w.html#16

csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

1980, STL cons me into doing channel-extender support (STL was bursting at the seams and moving 300 people to an offsite bldg, with their dataprocessing staying back in the STL datacenter), so channel-attached 3270 controllers could be installed at the offsite bldg (no perceptible difference in human factors between offsite and in STL). That eventually morphs into the HSDT project, T1 and faster computer links (both terrestrial and satellite). Initially got a T1 satellite link between IBM Kingston (Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab) and San Jose, with a tail-circuit to Los Gatos. Then got a custom-designed TDMA system with 4.5M dishes at Los Gatos and Yorktown and a 7M dish at Austin.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

I could run both mainframe RSCS and TCP/IP over it. RSCS had a real problem: its spool file interface did synchronous, single 4k-page-block transfers, limited to approx 5-8 4k blocks/sec (20-30kbytes/sec) ... I needed more like 300kbytes/sec per T1 link. So I did a pilot VM370 spool file system in Pascal, running in a virtual address space, with an asynchronous interface, contiguous allocation, read-ahead, write-behind, multi-block transfers, etc (a minimal sketch follows).
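A minimal sketch of the write-behind side (Python threads standing in for the Pascal/virtual-address-space implementation; the device interface is invented):

import queue
import threading

class SpoolDevice:                      # stand-in for the spool disk
    def write(self, start_block, blocks):
        pass                            # one multi-block channel program here

device = SpoolDevice()
write_q = queue.Queue()

def writer():
    # drains queued writes as single multi-block transfers
    # to contiguously allocated extents
    while True:
        start_block, blocks = write_q.get()
        device.write(start_block, blocks)
        write_q.task_done()

threading.Thread(target=writer, daemon=True).start()

def spool_write(start_block, blocks):
    # caller queues the 4k blocks and continues immediately,
    # instead of one synchronous 4k-block transfer at a time
    write_q.put((start_block, blocks))

Read-ahead is the mirror image: queue the next expected blocks before they are asked for.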

When the communication group lost the battle to block the release of mainframe TCP/IP support, they insisted it had to be released through them; what shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did the RFC1044 support, and in some tuning tests at Cray Research between an IBM 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed). Separately, the claim was that the HSDT satellite link to the EVE in San Jose helped bring in the RIOS (RS/6000) chips a year early.

RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

Lots of battles with the communication group, whose products were kneecapped at 56kbit/sec links. They did an analysis for the corporate executive committee claiming customers wouldn't be interested in T1 until sometime in the 90s: they had "fat pipe" support, where multiple parallel 56kbit/sec links could be treated as a single logical link, and found customer installations didn't have "fat pipe" configurations past six parallel links. What they didn't know (or didn't want to tell the executive committee) was that the typical telco T1 tariff was about the same as 5 or 6 56kbit links ... past that point, customers just moved to a non-IBM box (and non-IBM software).

HSDT was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers; then congress cut the budget, some other things happened, and eventually an RFP was released (in part based on what we already had running). From 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid; RFP awarded 24Nov87). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet.

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AIX

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AIX
Date: 31 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#70 IBM AIX

1988, an IBM branch office asked if I could help LLNL standardize some serial stuff they were playing with, which quickly becomes the fibre-channel standard (FCS, including some stuff I had done in 1980) ... initially 1gbit/sec, full-duplex, 200mbytes/sec aggregate (POK eventually released their serial stuff with ES/9000, when it was already obsolete, as ESCON, around 17mbytes/sec).

FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

Last product we did at IBM was HA/CMP. Late 80s, Nick Donofrio comes through and my wife shows him five hand-drawn charts for HA/6000, initially for NYTimes to port their newspaper system (ATEX) from DEC VAXCluster to RS/6000, and he approves the project. I then rename it HA/CMP when I start doing technical/scientific cluster scale-up with the national labs and commercial cluster scale-up with the RDBMS vendors (Oracle, Sybase, Informix, Ingres). I did a high-performance distributed lock manager with an API emulating VAXCluster's (sketched below), and also sort of piggy-backed a high-performance distributed RDBMS cache on the DLM. For the high-end I was planning on FCS (and follow-ons); for the mid-range I had been working with Harrier/9333 and wanted it to morph into fractional-speed FCS (but we leave IBM and it morphs into SSA instead).
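The heart of a VAXCluster-style DLM is the lock-mode compatibility matrix; a hedged Python sketch (the six modes are the standard VMS ones, everything else simplified for illustration):

# VMS/VAXCluster-style lock modes: null, concurrent read, concurrent write,
# protected read, protected write, exclusive
COMPAT = {  # requested mode -> set of already-held modes it coexists with
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

def can_grant(held_modes, requested):
    # grant only if compatible with every current holder; otherwise the
    # request queues and completes via callback (the VMS-style AST)
    return all(h in COMPAT[requested] for h in held_modes)

Piggy-backing an RDBMS cache on the DLM can then amount to shipping the current block image along with the lock grant, instead of forcing it through disk first.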

Early Jan1992, we have a meeting with the Oracle CEO and AWD/Hester tells him that we would have 16-way systems by mid92 and 128-way systems by ye92. During Jan92, I'm bringing FSD up to speed about HA/CMP with the national labs. End of Jan92, FSD tells the Kingston supercomputer group they are going with HA/CMP (instead of the one Kingston had been working on). Possibly within hours, cluster scale-up is transferred to Kingston for announce as IBM supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Note, late 80s, a senior disk engineer got a talk scheduled at the world-wide, annual, internal communication group conference, supposedly on 3174 performance, but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales, with data fleeing to more distributed computing friendly platforms. The disk division had come up with solutions that were constantly being vetoed by the communication group (with their corporate responsibility for everything that crossed datacenter walls). The GPD/Adstar VP of software, as a partial work-around, was investing in distributed computing startups that would use IBM disks, and would periodically ask us to drop by his investments. He also funded the unix/posix implementation in MVS (since it didn't "directly" cross datacenter walls, the communication group couldn't directly veto it).

communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

Not long after leaving IBM, we were brought into a small client/server startup. Two former Oracle people (that we had worked with on HA/CMP and who were at the Ellison meetings) were there, responsible for something called "commerce server", and wanted to do payment transactions on the server; the startup had also invented this technology called "SSL" they wanted to use. The result is now frequently called "electronic commerce" ... and I had responsibility for everything between e-commerce webservers and the financial payment networks. I was then doing some stuff with Postel (internet RFC editor)
https://web.archive.org/web/20240612205634/https://www.postel.org/jon-postel/
and he sponsored a talk at ISI & USC; I put together "Why Internet Isn't Business Critical Dataprocessing" based on all the stuff that I had to do for "electronic commerce".

E-commerce gateway to payment networks
https://www.garlic.com/~lynn/subnetwork.html#gateway

Trivia: the communication group stranglehold on datacenters wasn't just disks; within a couple short years, IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone). Long-winded account
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner

other trivia: some POK mainframe engineers become involved with FCS and define a heavyweight protocol for FCS that radically reduces the native throughput, which ships as FICON. The latest published benchmark I can find is z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time, a FCS was announced for E5-2600 blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also, "z" documents recommend not running SAPs (system assist processors that handle actual I/O) over 70% CPU (which would normally mean more like 1.5M IOPS).
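
The per-channel arithmetic falls out of the cited figures (simple division, no new data):

  z196_iops, ficon_chans = 2_000_000, 104
  per_ficon = z196_iops / ficon_chans     # ~19.2K IOPS per FICON
  single_fcs_iops = 1_000_000             # claimed for one FCS on E5-2600 blades
  print(single_fcs_iops / per_ficon)      # one FCS ~ 52 FICON
  print(0.70 * z196_iops)                 # 70% SAP cap -> 1.4M ("more like 1.5M")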

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AIX

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AIX
Date: 31 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2024.html#71 IBM AIX

Date: 01/19/82 15:33:37
To: SJRLVM1 WHEELER
FROM: SPD HQ, WHITE PLAINS (TRANTOR)

Hi there.

-

I have some news you may be interested in.

-

First on VM/Unix*. It looks like a viable Unix* can be built on VM using writeable shared segments. Some primitives will have to be added to CP but Rip thinks its a relatively small amount of work. On the order of 2K Locs.

Second, do you know of anyone who would want to work with a vendor defining and then developing a Unix* implementation to be fit on top of the modified CP? (We, meaning Rip, add the necessary Unix oriented primitives to CP and then we ask a Unix skilled vendor to put a Unix on top.) We need CP expertise to go sit with the vendor's people and explain virtual machines, diagnoses, etc. Also there would probably be a need to represent the vendor's requirements back to IBM. (Rip's smart but no one has all the answers.)

-

Yes, this is a headhunting request. But maybe you know of someone. Please don't just distribute all that information on the network. I'd like the project to stay confidential. How about:

-

"Wanted: CP expertise for a temporary assignment supporting a vendor implementation of an end user system on VM. Would transfer information from IBM to vendor and requirements for CP modifications back from the vendor to IBM. Temporary relocation required. Ranging from 3 months up to 18 months. Does not involve actual VM development. Performance and functional knowledge of CP required."

-

Third, the VM/SI issue was not solved by the investigation of Unix*/VM. The use of writeable shared segments eliminated the need to argue that issue to a conclusion. However, there will be a discussion to a conclusion. Sometime soon. I keep throwing in your name, so don't be surprised if your help is requested.


... snip ... top of post, old email index

Trivia: we had done VM/370 writable shared segments as part of the System/R (original sql/relational) implementation, and it was part of the technology transfer to Endicott for SQL/DS ... however Endicott dropped back to not using any VM/370 mods for shipping SQL/DS ... at the time, the corporation was pre-occupied with the next great DBMS: EAGLE. After EAGLE implodes, there is a request for how fast System/R could be ported to MVS ... which is eventually released as DB2.

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

Date: 05/07/85 07:28:45
TO: SPD HQ, WHITE PLAINS (TRANTOR)
FROM: SJRLVM1 WHEELER

locus is a UNIX add-on that hits the base kernal (although in a relatively high-level way so that it is applicable to lots of different unixs) plus lots of interface code. There is a Locus corp. started by a UCLA prof. that Palo Alto has a joint dev. contract with (fyi, so does at&t).

Palo Alto (& unix corp) has Locus running on top of a PC unix, S/1 unix, & VAX unix. It supports both remote files & remote execution in a transparent manner (high level networking that supports almost all types of connections, Ethernet, bisynch, etc. etc). They have also modified most code generators so that files containing executable code has identifier record that indicates what machine the code was targeted for.

I've alwas figured that if IBM completely blows the software for my RMN* proposal ... I can use Palo Alto's LOCUS on top of Palo Alto's 4.2UNIX. Run 42UNIX+LOCUS on PC/370s, PCs, ATs, cluster machines, and on mainframe 370 (using Palo Alto's IX mods + UNIX mods). Cluster machines could easily be a combination of 370 and 801 processor cards. Majority of software and system support is already there. LOCUS mods. also fit on top of System/V.


... snip ... top of post, old email index

trivia: the "Palo Alto IX mods" were for bsd on vm/370 (before Palo Alto was redirected to do bsd for pc/rt instead). "RMN*" was my proposal for cramming a large number of 370 processor chips & 801 processor chips in racks (predating the later work for HA/CMP). Also "RMN" ... Boeblingen had done "ROMAN", a 370 3-chip set with the performance of a 168-3 ... and I wanted to see how many I could cram in a rack; Los Gatos was working on "Blue Iliad" ... 1st 32bit 801/risc chip (really large and hot, never came to production).

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

some posts mentioning ROMAN and Blue Iliad
https://www.garlic.com/~lynn/2023g.html#17 Vintage Future System
https://www.garlic.com/~lynn/2022c.html#77 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021b.html#50 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021.html#65 Mainframe IPL
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2012l.html#82 zEC12, and previous generations, "why?" type question - GPU computing
https://www.garlic.com/~lynn/2011m.html#24 Supervisory Processors
https://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2010l.html#42 IBM zEnterprise Announced
https://www.garlic.com/~lynn/2006n.html#37 History: How did Forth get its stacks?
https://www.garlic.com/~lynn/2002l.html#27 End of Moore's law and how it can influence job market

Footnote: Austin originally had something like a couple hundred PL.8 programmers for ROMP, follow-on to the displaywriter. When it was killed, the decision was to shift to the unix workstation market. They hire the company that had done AT&T unix "PC/IX" for the IBM/PC to do one for ROMP, which becomes AIX for the PC/RT. For all the PL.8 programmers, they justify the VRM ... an abstract virtual machine facility ... claiming it would greatly simplify implementing AIX, with much shorter elapsed time and much less total VRM+AIX resources than if the company had to implement AIX directly to the bare hardware. Now, a common unix workstation activity was writing device drivers for unsupported hardware ... however for the PC/RT, that needed two new drivers: one for AIX *and* one for the VRM. Also, when Palo Alto was directed to do UCB BSD for the PC/RT (rather than 370), it was directly to ROMP bare hardware (for "AOS"), and it took far fewer resources and less time than either the original VRM or AIX efforts.

--
virtualization experience starting Jan1968, online at home since Mar1970

UNIX, MULTICS, CTSS, CSC, CP67

From: Lynn Wheeler <lynn@garlic.com>
Subject: UNIX, MULTICS, CTSS, CSC, CP67
Date: 01 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#40 UNIX, MULTICS, CTSS, CSC, CP67

709 (replaced by 360/67) ran student fortran in less than a second. When I first start fulltime, student fortran ran over a minute. I install HASP and it cuts the time in half. I then start doing SYSGEN STAGE2 to 1) run in the production jobstream and 2) order datasets and PDS members to optimize arm seek and multi-track search ... cutting student fortran by another 2/3rds to 12.9secs (it never gets better than the 709 until I install Waterloo Watfor). Old post with part of 60s SHARE presentation:

First six months playing w/CP67, I concentrate on pathlengths for running OS/360 in a virtual machine. The OS/360 test stream ran 322secs stand-alone and initially 856secs in a virtual machine, i.e. 534secs of CP67 CPU overhead. After 6 months I have CP67 CPU overhead down to 113secs (856secs total down to 435secs).
https://www.garlic.com/~lynn/94.html#18

I then do ordered disk arm seek, multiple page I/O maximize transfers per revolution for disk and drum, page replacement algorithm, and dynamic adaptive resource management/dispatching ... for multi-task optimization.

CP/67 page replacement algorithm
https://www.garlic.com/~lynn/subtopic.html#clock
CP/67 dynamic adaptive resource management
https://www.garlic.com/~lynn/subtopic.html#fairshare

Before I graduated, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I thought the Renton datacenter was the largest in the world: a couple hundred million in IBM gear, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room (believe it was a former plane assembly bldg). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge it to install a 360/67 for me to play with when I'm not doing other stuff). Then when I graduate, I join the science center.

recent posts mentioning 709, student fortran, watfor, os/360, cp/67, boeing cfo
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#67 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s

--
virtualization experience starting Jan1968, online at home since Mar1970

UNIX, MULTICS, CTSS, CSC, CP67

From: Lynn Wheeler <lynn@garlic.com>
Subject: UNIX, MULTICS, CTSS, CSC, CP67
Date: 02 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#40 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67

When Jim Gray left IBM Research, he left behind a tome called "MIP envy" ... copy here
https://www.garlic.com/~lynn/2007d.html#email800920
slightly later version
http://jimgray.azurewebsites.net/papers/mipenvy.pdf
... which resulted in trip reports of visits to other dataprocessing operations ... examples Bell Labs Holmdel, Murray Hill
https://www.garlic.com/~lynn/2006n.html#56
Xerox SDD
https://www.garlic.com/~lynn/2006t.html#37
other summary of the visits summer 1981
https://www.garlic.com/~lynn/2001l.html#61

"2006n.html#56" is 2006 archived post in (usenet) a.f.c. thread: AT&T Labs vs. Google Labs - R&D History

from IBM Jargon:
MIP envy - n. The term, coined by Jim Gray in 1980, that began the Tandem Memos (q.v.). MIP envy is the coveting of other's facilities - not just the CPU power available to them, but also the languages, editors, debuggers, mail systems and networks. MIP envy is a term every programmer will understand, being another expression of the proverb The grass is always greener on the other side of the fence.

... snip ...

Late 70s to early 80s, I was blamed for online computer conferencing on the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s) ... it really took off in the spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem (not because of the "MIP envy" tome ... possibly some corporate misdirection). Also from IBM Jargon:
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...

earlier post
https://www.garlic.com/~lynn/2022.html#101 Online Computer Conferencing
longer post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Slow MVS/TSO

From: Lynn Wheeler <lynn@garlic.com>
Subject: Slow MVS/TSO
Date: 02 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
a couple other recent references to MVS "turkey"
https://www.garlic.com/~lynn/2023g.html#99 VM Mascot
https://www.garlic.com/~lynn/2022h.html#39 IBM Teddy Bear

Slow MVS "turkey" response isn't just bloated software and pathlengths, but also extensive use of multi-track search. At one point, IBM SJR (San Jose Research) replaced the 370/195 with a 168 for MVS and a 158 for VM/370. There were 3830 controllers and 3330 strings designated for MVS, and 3830 controllers and 3330 strings designated for VM/370 ... even tho all 3830 controllers were twin-tailed, channel-connected to both systems. There was a hard&fast rule to never mount a "MVS" pack on a VM/370 string. Around 10am one day, an operator violated the rule and mounted a MVS 3330 on a "VM/370" string ... and within five minutes operations was getting irate phone calls from all over the bldg about what they had done to VM370 response. It turns out that multi-track search doesn't just lock up the device for the duration; the controller is also locked for the duration of the multi-track operation (blocking VM370 from getting to any other drive connected to the controller).
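
A rough sketch of the lockout time, assuming standard 3330 geometry (19 tracks/cylinder, 3600rpm); illustrative only:

  rpm = 3600
  rev_secs = 60.0 / rpm                   # ~16.7ms per revolution
  tracks_per_cylinder = 19                # 3330
  print(tracks_per_cylinder * rev_secs)   # ~0.32secs device *and* controller busy
  # per full-cylinder search ... every other drive on that controller waits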

Demands that operations move the "MVS" pack to a "MVS" string were met with: operations will do it sometime offshift. It turns out we had a highly optimized VS1 system, also tuned for running under VM370, and we put it up on a "MVS" string. Even though the (highly optimized) VS1 system was running on the heavily loaded 158 vm/370 system, it was still much faster than the 168 MVS and was able to bring MVS to its knees (mostly alleviating the interference MVS was causing for CMS interactive users). At that point, operations agreed to move the MVS pack off the VM370 string to a MVS string (if we would move the VS1 system).

some past posts mentioning the incident
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2019b.html#15 Tandem Memo
https://www.garlic.com/~lynn/2018.html#93 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2011.html#36 CKD DASD

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3270

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3270
Date: 02 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#68 IBM 3270

Related: in 1980, STL (since renamed SVL) was bursting at the seams and moving 300 people from the IMS group to an offsite bldg (approx. 10 miles away) with dataprocessing service back to the STL datacenter. They had tried "remote" 3270 and found the human factors unacceptable. I get con'ed into doing channel-extender support, being able to put 3270 channel-attached controllers at the offsite bldg, and found the human factors no different than in STL. A side-effect was that it improved those mainframe systems' throughput by 10-15%. STL had spread the 3270 controllers across the same channels as disk controllers, and it turns out the 3270 controllers were much slower than the disk controllers ... while the channel-extender boxes were faster than the disk controllers ... so the same 3270 traffic had much less channel-busy interference with disk activity. There were some suggestions to move all 3270 controllers (even those inside STL) to channel-extender boxes, in order to improve the throughput of all systems.
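
A sketch of the effect (all numbers made up for illustration; the point is channel-busy per byte, not the specific values):

  traffic_bytes_sec = 50_000       # assumed 3270 traffic on a channel
  slow_cu_usec_per_byte = 10       # assumed: 3270 controller hand-shaking
  fast_cu_usec_per_byte = 2        # assumed: channel-extender box
  print(traffic_bytes_sec * slow_cu_usec_per_byte / 1e6)   # 0.5 -> 50% channel busy
  print(traffic_bytes_sec * fast_cu_usec_per_byte / 1e6)   # 0.1 -> 10% channel busy
  # the difference is channel time the disk controllers get back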

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing's Shift from Engineering Excellence to Profit-Driven Culture: Tracing the Impact of the McDonnell Douglas Merger on the 737 Max Crisis

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing's Shift from Engineering Excellence to Profit-Driven Culture: Tracing the Impact of the McDonnell Douglas Merger on the 737 Max Crisis
Date: 03 Feb, 2024
Blog: Facebook
Boeing's Shift from Engineering Excellence to Profit-Driven Culture: Tracing the Impact of the McDonnell Douglas Merger on the 737 Max Crisis
https://www.airguide.info/boeings-shift-from-engineering-excellence-to-profit-driven-culture-tracing-the-impact-of-the-mcdonnell-douglas-merger-on-the-737-max-crisis/
Boeing's journey, particularly with its 737 Max, reflects a dramatic shift in the company's core values and operational philosophy, a change significantly influenced by its late-1990s merger with McDonnell Douglas. This pivotal event marked a departure from Boeing's storied commitment to engineering superiority and a safety-first mindset, pivoting towards a business model heavily emphasizing cost efficiency and rapid production, often at the expense of product quality and safety.

... snip ...

a couple other recent Boeing (culture) posts:
https://www.garlic.com/~lynn/2024.html#66 Further Discussion of Boeing's Slow Motion Liquidation: Rational Given Probable Airline Industry Shrinkage?
https://www.garlic.com/~lynn/2024.html#56 Did Stock Buybacks Knock the Bolts Out of Boeing?

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

some recent posts mentioning having been hired as undergraduate into small group in Boeing CFO office to help with the formation of Boeing Computer Services
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#31 Mainframe Datacenter
https://www.garlic.com/~lynn/2023g.html#28 IBM FSD
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#99 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#101 Operating System/360
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023c.html#86 IBM Commission and Quota
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#68 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#66 Economic Mess and IBM Science Center
https://www.garlic.com/~lynn/2023c.html#15 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Performance Optimization

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Performance Optimization
Date: 04 Feb, 2024
Blog: Facebook
After the 23Jun1969 unbundling announcement (starting to charge for software, SE services, maint, etc), they couldn't figure out how not to charge for trainee SEs at customer premises (part of SE training had been in large SE groups at customer accounts). The solution was branch office online access to CP67/CMS datacenters (HONE), where branch SEs could practice their skills with guest operating systems in virtual machines. The Science Center had also ported APL\360 to CMS as CMS\APL, redoing storage management for large demand-paged virtual memories (instead of OS/360 small 16kbyte swapped workspaces) and adding a system services API for things like file operations, enabling lots of real world applications. One of the early users was corporate business planners in Armonk, loading all the sensitive customer information on the Cambridge system (trivia: had to show strong security, since there were staff&students from Boston area institutions also using the system). HONE started providing APL-based sales&marketing support applications ... which came to dominate all HONE activity (guest operating system use fading away).

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, and HONE was a long time customer. Note: in the morph of CP67->VM370, lots of features were dropped and/or greatly simplified. In 1974, I started migrating lots of CP67 to VM370 for internal datacenters (including HONE). In the mid-70s, the US HONE datacenters were consolidated in Silicon Valley (VM370 enhanced to support single-system image, loosely-coupled, shared DASD farm with load-balancing and fall-over, on par with the large ACP/TPF loosely-coupled airline configurations). I then migrated CP67 tightly-coupled multiprocessor support, enabling US HONE to add a 2nd processor (making it the largest in the world, since ACP/TPF didn't have tightly-coupled support until much later). trivia: HONE clones started being installed all over the world (and I was also asked to do the first couple).

One of the co-workers at the science center had done an analytical system model in APL, and it was provided on HONE as the Performance Predictor ... branch people could enter customer workload and configuration information and ask what-if questions about changes to workload or configuration. A modified version was also used by HONE for making the single-system-image load-balancing decisions.
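
The Performance Predictor itself was a proprietary APL model; purely to give the flavor of an analytic "what-if" (a generic M/M/1 approximation, not the Predictor's actual equations):

  def response_time(service_secs, arrivals_per_sec):
      util = arrivals_per_sec * service_secs
      assert util < 1.0, "saturated"
      return service_secs / (1.0 - util)   # M/M/1: R = S/(1-U)

  print(response_time(0.05, 12))   # current configuration: 0.125 secs
  print(response_time(0.05, 16))   # "what if workload grows?": 0.25 secs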

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE &/or APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
23jun1969 unbundling announcement posts
https://www.garlic.com/~lynn/submain.html#unbundle
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

Much later, after leaving IBM, turn of the century, I was brought into a large financial operation (40+ max-configured mainframes @$30M, constantly being upgraded, the number needed to finish financial settlement in the overnight batch window, all running a 450K-statement Cobol application) to look at throughput. They had had a large performance group responsible for decades, but it had gotten somewhat myopically focused. I used a different set of analyses from the 60s&70s science center and found a 14% improvement. At the same time they brought in a performance consultant from Europe, who had acquired a descendent of the Performance Predictor in the early 90s (during the IBM troubles, when lots of real-estate and other resources were being unloaded/divested) and run it through an APL->C converter ... he used it to find another 7% improvement.

performance predictor and optimizing 450k statement cobol application posts
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?
https://www.garlic.com/~lynn/2014b.html#83 CPU time
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing

--
virtualization experience starting Jan1968, online at home since Mar1970

Benchmarks

From: Lynn Wheeler <lynn@garlic.com>
Subject: Benchmarks
Date: 04 Feb, 2024
Blog: Facebook
The industry standard MIPS benchmark has been the number of iterations of a benchmark program compared to the 370/158, which is assumed to be one MIPS (i.e. relative throughput, not an actual instruction count). Comment reply in a post giving mainframe numbers this century:
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
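
In other words, the rating is a throughput ratio; a one-liner (the iteration counts below are made up for illustration):

  def rated_mips(machine_iter_per_sec, ref_158_iter_per_sec):
      return machine_iter_per_sec / ref_158_iter_per_sec   # 370/158 == 1 "MIPS"

  print(rated_mips(6250.0, 10.0))   # 625 "MIPS" (a z196-class per-processor figure)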

more comments/replies, also discussing it with respect to CISC i86 and RISC (same benchmark programs; since it isn't counting actual instructions, it is more like relative processing throughput).
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#62 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe

... other benchmarks are standardized transactions/sec and cost/transaction
http://www.tpc.org/

A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
http://bits.blogs.nytimes.com/2008/05/31/a-tribute-to-jim-gray-sometimes-nice-guys-do-finish-first/
During the 1970s and '80s at I.B.M. and Tandem Computer, he helped lead the creation of modern database and transaction processing technologies that today underlie all electronic commerce and more generally, the organization of digital information. Yet, for all of his impact on the world, Jim was both remarkably low-key and approachable. He was always willing to take time to explain technical concepts and offer independent perspective on various issues in the computer industry

... snip ...

Tribute to Honor Jim Gray
https://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html
Gray is known for his groundbreaking work as a programmer, database expert and Microsoft engineer. Gray's work helped make possible such technologies as the cash machine, ecommerce, online ticketing, and deep databases like Google. In 1998, he received the ACM A.M. Turing Award, the most prestigious honor in computer science. He was appointed an IEEE Fellow in 1982, and also received IEEE Charles Babbage Award.

... snip ...

webcast of the event
https://web.archive.org/web/20080604010939/http://webcast.berkeley.edu/event_details.php?webcastid=23082
https://web.archive.org/web/20080604072804/http://webcast.berkeley.edu/event_details.php?webcastid=23083
https://web.archive.org/web/20080604072809/http://webcast.berkeley.edu/event_details.php?webcastid=23087
https://web.archive.org/web/20080604072815/http://webcast.berkeley.edu/event_details.php?webcastid=23088

Jim Gray Tribute
http://www.theregister.co.uk/2008/06/03/jim_gray_tribute/
"A lot of the core concepts that we take for granted in the database industry - and even more broadly in the computer industry - are concepts that Jim helped to create," Vaskevitch says, "But I really don't think that's his main contribution."

... snip ...

SIGMOD (DBMS) lost at sea, search, tribute
https://web.archive.org/web/20111118062042/http://www.sigmod.org/publications/sigmod-record/0806

The TPC honors Jim Gray for his seminal contributions to the TPC and the field of database benchmarks
http://www.tpc.org/information/who/gray5.asp

disclaimer: I worked with Jim Gray (and Vera Watson) at SJR on System/R (original sql/relational implementation)

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 04 Feb, 2024
Blog: Facebook

https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#54 RS/6000 Mainframe

trivia: in 360s, the single processor 360/67 was basically a 360/65 with virtual memory support. However, the multiprocessor 360/67 had a "channel director" and multi-ported memory ... where there could be concurrent processor and channel I/O access to memory ... and all processors could access all channels.

This is compared to the 360/65 (and 370) multiprocessors, which basically had processor-shared access to memory ... but channels were processor-dedicated (as if they were single processors) ... multiprocessor channels were emulated by having twin-tailed controllers at the same address on processor-specific channels.

posts mentioning 360/67 channel director and multi-ported memory
https://www.garlic.com/~lynn/2023f.html#48 IBM 360/65 and 360/67
https://www.garlic.com/~lynn/2022.html#6 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2021c.html#72 I/O processors, What could cause a comeback for big-endianism very slowly?
https://www.garlic.com/~lynn/2019c.html#44 IBM 9020
https://www.garlic.com/~lynn/2013e.html#11 Relative price of S/370 AP and MP systems
https://www.garlic.com/~lynn/2012o.html#22 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012d.html#65 FAA 9020 - S/360-65 or S/360-67?
https://www.garlic.com/~lynn/2007d.html#62 Cycles per ASM instruction

funny/joke: the 3090 assumed the 3880 dasd controller was the same as the 3830, but capable of 3mbyte/sec transfer. However, the 3880 had a special hardware path for data transfer and a really slow microprocessor for everything else, which really increased all other channel busy. When the 3090 group found out, they realized they would have to significantly increase the number of channels (in order to achieve the targeted system throughput); the extra channels required an extra TCM (and the 3090 group semi-facetiously said they would bill the 3880 group for the increase in 3090 manufacturing cost). Eventually marketing respun the huge increase in channels as the 3090 being a great I/O machine (rather than needed to compensate for the 3880 increase in channel busy).
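
Illustrative arithmetic only (not the real 3090/3880 numbers): if per-I/O channel busy goes up, channel count must go up to hold the same aggregate I/O rate:

  target_ios_per_sec = 10_000                      # assumed design point
  per_channel = lambda busy_ms: 1000.0 / busy_ms   # I/Os per channel per second
  print(target_ios_per_sec / per_channel(1.0))     # 10 channels at 1.0ms busy per I/O
  print(target_ios_per_sec / per_channel(1.3))     # 13 channels if busy grows 30%
  # ... and the extra channels wouldn't fit without an extra TCM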

I periodically mention that in 1988, an IBM branch asks me to help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes the fibre-channel standard ("FCS", including being able to do full-duplex asynchronous operation), initially 1gbit/sec transfer, 200mbyte/sec aggregate. POK eventually gets their stuff shipped in the 90s with ES/9000 as ESCON (when it is already obsolete).

Then some POK engineers become involved in "FCS" and define a heavy-weight protocol that significantly cuts the throughput, which eventually ships as FICON. See the earlier reference above to z196 "peak I/O" benchmarks of 2M IOPS using 104 FICON (running over 104 FCS) ... at a time when FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also, z196 pubs recommend capping SAPs (system assist processors that do the actual I/O) at 70% CPU ... meaning more like 1.5M IOPS.

FICON & FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

3090 need additional TCM because of 3880 channel busy
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#26 Vintage 370/158
https://www.garlic.com/~lynn/2023f.html#62 Why Do Mainframes Still Exist
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#103 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#45 IBM DASD
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#4 3880 DASD Controller
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#66 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021j.html#92 IBM 3278
https://www.garlic.com/~lynn/2021i.html#30 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021c.html#66 ACP/TPF 3083
https://www.garlic.com/~lynn/2021.html#60 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2019.html#79 How many years ago?
https://www.garlic.com/~lynn/2019.html#51 3090/3880 trivia

--
virtualization experience starting Jan1968, online at home since Mar1970

Benchmarks

From: Lynn Wheeler <lynn@garlic.com>
Subject: Benchmarks
Date: 04 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#79 Benchmarks
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-way: 2016MIPS, 128-way: 16,128MIPS


RS6000 RIOS didn't have coherent caches for SMP (so scale-up was purely cluster). Somerset/AIM's Power/PC 1993 announcement includes coherent caches for SMP (so it can do cluster, multiprocessor, and clusters of multiprocessors).

note: by the turn of the century, i86 processors had a hardware layer that translated i86 instructions into RISC micro-ops for execution, largely negating the throughput gap between CISC and RISC.

1999 IBM PowerPC 440 Hits 1,000MIPS (>six times z900 processor and 20yrs before z15 processor hits 1,000MIPS)
https://www.cecs.uci.edu/~papers/mpr/MPR/19991025/131403.pdf
1999, Intel Pentium III hits 2,054MIPS (13times z900 processor)

2003, z990 32 processor 9BIPS (281MIPS/proc)
2003, Pentium4 hits 9,726MIPS, faster than 32 processor z990

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
* pubs say z15 1.25 times z14 (1.25*150BIPS or 190BIPS)
z16, 200 processors, 222BIPS* (1111MIPS/proc), Sep2022
* pubs say z16 1.17 times z15 (1.17*190BIPS or 222BIPS)

z196/jul2010, 50BIPS, 625MIPS/processor
z16/sep2022, 222BIPS, 1111MIPS/processor

12yrs, Z196->Z16, 222/50=4.4times total system BIPS; 1111/625=1.8times per processor MIPS.

2010 E5-2600 server blade benchmarked at 500BIPS, 10 times max configured 2010 z196 and >twice 2022 z16
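
The ratios fall straight out of the numbers above; a quick sanity check:

  print(222 / 50)      # z196 -> z16 total system BIPS: ~4.4x over 12yrs
  print(1111 / 625)    # z196 -> z16 per-processor MIPS: ~1.8x
  print(500 / 50)      # 2010 E5-2600 blade vs max configured 2010 z196: 10x
  print(500 / 222)     # ... and still >2x the 2022 z16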


A large cloud operation can have a dozen or more megadatacenters around the world, each with half a million or more high-end blades ... aggregating a few million TIPS, aka a million million MIPS (TIPS: thousand BIPS, i.e. million MIPS) ... enormous automation, a megadatacenter run with 70-80 staff.
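
Back-of-envelope for the aggregate claim, using the 500BIPS/blade figure from above (half-million blades per site, a dozen sites assumed):

  blades_per_megadatacenter = 500_000
  bips_per_blade = 500
  tips_each = blades_per_megadatacenter * bips_per_blade / 1000   # 250,000 TIPS
  print(12 * tips_each)                                           # ~3M TIPS aggregate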

Historically, i86 server technology was so cheap that lots of organizations would get one or more servers per function; even at less than 5-10% utilization, it was less expensive than trying to optimize for full utilization. Since the 90s, cloud operations have been claiming they assemble their own blade servers at 1/3rd the cost of brand name servers. They've done enormous optimization, and power consumption has become a major expense. For "on demand" purposes they may have 10-20 times the number of servers (for peak, on-demand load) as normally used, but can consolidate processing to as few servers as needed to optimize power use.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

posts mentioning (i86) risc micro-ops implementation
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#62 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#11 Vintage Future System
https://www.garlic.com/~lynn/2022g.html#85 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#82 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022b.html#64 Mainframes
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#45 Mainframe MIPS

--
virtualization experience starting Jan1968, online at home since Mar1970

Benchmarks

From: Lynn Wheeler <lynn@garlic.com>
Subject: Benchmarks
Date: 04 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#79 Benchmarks
https://www.garlic.com/~lynn/2024.html#80 Benchmarks

At Tandem, Jim Gray did a study of what affects availability and found that hardware reliability had gotten to the point that outages were shifting to people mistakes and environmental causes (earthquakes, floods, hurricanes, fires, power outages, etc) ... 40yr-old presentation:
https://www.garlic.com/~lynn/grayft84.pdf
'85 paper
https://pages.cs.wisc.edu/~remzi/Classes/739/Fall2018/Papers/gray85-easy.pdf
https://web.archive.org/web/20080724051051/http://www.cs.berkeley.edu/~yelick/294-f00/papers/Gray85.txt

Late 80s, Nick Donofrio comes through and my wife shows him five hand-drawn charts to do HA/6000, initially for NYTimes to port their newspaper system (ATEX) from DEC VAXCluster to RS/6000, and he approves the project. I then rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres). Early Jan1992, we have a meeting with the Oracle CEO and AWD/Hester tells him that we would have 16-way systems by mid92 and 128-way systems by ye92. I had done a high-performance distributed lock manager with an API emulating VAXCluster, and also sort of piggy-backed a high-performance distributed RDBMS cache on the DLM. For the high-end was planning on using FCS (and follow-ons), and for the mid-range had been working with Harrier/9333, wanting it to morph into fractional FCS (but we leave IBM and it morphs into SSA).

Out marketing, we spent a lot of time talking to customers about how things fail. I had also coined the terms "disaster survivability" and "geographic survivability". The IBM S/88 Product Administrator then starts taking us around to her customers, and also gets me to write a section for the corporate continuous availability document (but it gets pulled when both Rochester/AS400 and POK complain that they couldn't meet the requirements).

During Jan92, I'm bringing FSD up to speed about HA/CMP with the national labs. End of Jan92, FSD tells the Kingston supercomputer group they are going with HA/CMP (instead of the one Kingston had been working on). Possibly within hours, cluster scale-up is transferred to IBM Kingston for announce as IBM supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).

availability posts
https://www.garlic.com/~lynn/submain.html#available
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

SNA/VTAM

From: Lynn Wheeler <lynn@garlic.com>
Subject: SNA/VTAM
Date: 05 Feb, 2024
Blog: Facebook
Early 80s, I got the HSDT project, T1 and faster computer links (both terrestrial and satellite). Lots of battles with the communication group, which was capped at 56kbit links. One of my hardest problems was that corporate required links to be encrypted; I hated what I had to pay for T1 link encryptors, and faster encryptors were really hard to find ... finally an internal IBM project produced link encryptors capable of 3mbyte/sec (not mbit).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

The communication group was fiercely fighting off client/server and distributed computing, and also trying to prevent mainframe TCP/IP from being announced. When they lost, they changed their tactic and claimed that since they had corporate strategic (stranglehold) responsibility for everything that crossed datacenter walls, TCP/IP had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did the "fixes" to support RFC1044 and, in some tuning tests at Cray Research between a 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

The communication group also prepared a study for the executive committee claiming customers weren't interested in T1 links for almost another decade. SNA had "fat pipe" support, multiple parallel 56kbit links treated as a single logical link ... and they found customer installations dropped to zero at more than six links. What they didn't know (or didn't want to divulge) was that the typical telco tariff for T1 was usually about the same as 5 or 6 56kbit links; customers just went to full T1 with non-IBM controllers and software. HSDT did a trivial survey and found 200 customers with full T1 links.

I also got con'ed into taking a baby bell VTAM+NCP implementation done on a Series/1 that had a channel interface and would emulate cross-domain to the host VTAM. I did a fall86 presentation to the SNA ARB in Raleigh about it being significantly better than IBM's. The various parties involved were well aware of things the communication group might do and tried to deploy countermeasures for all of them (what was done to kill the effort could only be described as truth is stranger than fiction). Part of the ARB pitch in this archived post:
https://www.garlic.com/~lynn/99.html#67
part of the "baby bell" presentation at spring COMMON user group meeting:
https://www.garlic.com/~lynn/99.html#70

Late 80s, a senior disk engineer got a talk scheduled at the world-wide, annual, internal communication group conference, supposedly on 3174 performance, but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales, with data fleeing to more distributed computing friendly platforms. The disk division had come up with solutions that were constantly being vetoed by the communication group (with their corporate responsibility for everything that crossed datacenter walls). The GPD/Adstar VP of software, as a partial work-around, was investing in distributed computing startups that would use IBM disks, and would periodically ask us to drop by his investments. He also funded the unix/posix implementation in MVS (since it didn't "directly" cross datacenter walls, the communication group couldn't directly veto it).

The communication group stranglehold on datacenters wasn't just disks; a couple short years later, IBM had one of the largest losses in the history of US corporations and was being reorged into the 13 "baby blues" in preparation for breaking up the company:
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone). Long-winded account
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Early 90s, the communication group hired a Silicon Valley consultant to implement TCP/IP support directly in VTAM. What he demo'ed was TCP running much faster than LU6.2. He was then told that "everybody knows that a proper TCP/IP implementation is much slower than LU6.2", and they would only be paying for a "proper" implementation.

posts mentioning communication group fighting off client/server and distributed computing trying to preserve their dumb terminal paradigm and install base
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

SNA/VTAM

From: Lynn Wheeler <lynn@garlic.com>
Subject: SNA/VTAM
Date: 05 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#83 SNA/VTAM

.. for a time I reported to the same executive as the person responsible for AWP164 (which becomes APPN) ... I would chide him about coming to work on real networking (TCP/IP), because the SNA people would never appreciate his efforts. When APPN was about to announce, the SNA group non-concurred ... eventually the announcement letter was carefully rewritten to not imply any sort of relationship between SNA and APPN.

we would joke that SNA wasn't a "System", wasn't a "Network", and wasn't an "Architecture". In part, it didn't even have an (ISO/OSI) network layer .... it was mostly a terminal control program.

more SNA joke: when SNA was originally published, my wife was co-author of AWP39 .... and because SNA had corrupted the term "network", they had to qualify AWP39 as "peer-to-peer networking" ... even though "networking" nominally implies peer-to-peer.

late 80s, I was a member of (SGI) Greg Chesson's XTP TAB (technical advisory board) ... the communication group tried hard to block my participation. There were several military projects involved and the gov. was pushing for "standardization", so we took it to ANSI X3S3.3 (the ISO-chartered standards group for OSI level 3&4 standards) as "high speed protocol". Eventually they told us that ISO required standards to conform to the OSI model. XTP failed because 1) it supported the internetworking layer (which doesn't exist in the OSI model), 2) it went directly from transport to the LAN MAC interface (bypassing the level 4/3 interface) and 3) it supported the LAN MAC interface (which doesn't exist in the OSI model ... sitting somewhere in the middle of level 3, with both physical layer and network layer characteristics).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
XTP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning awp39 and/or awp164(/appn)
https://www.garlic.com/~lynn/2023g.html#18 Vintage X.25
https://www.garlic.com/~lynn/2023f.html#40 Rise and Fall of IBM
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023b.html#54 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#43 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#34 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#25 IBM "nine-net"
https://www.garlic.com/~lynn/2021h.html#90 IBM Internal network
https://www.garlic.com/~lynn/2019d.html#119 IBM Acronyms
https://www.garlic.com/~lynn/2018e.html#2 Frank Heart Dies at 89
https://www.garlic.com/~lynn/2018e.html#1 Service Bureau Corporation
https://www.garlic.com/~lynn/2018b.html#13 Important US technology companies sold to foreigners

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 05 Feb, 2024
Blog: Facebook

https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#54 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#80 RS/6000 Mainframe

MIPS trivia: mentioned in previously referenced comments/replies ... when we were doing the HA/CMP product, we reported to an executive ... who then went over to head up Somerset ... doing the power/pc chip for AIM ... apple, ibm, motorola
https://en.wikipedia.org/wiki/AIM_alliance
when HA/CMP cluster scale-up was transferred for announce as IBM supercomputer (for technical/scientific only), we were told we couldn't work on anything with more than four processors and left IBM shortly later .... around that time SGI bought "MIPS", and then hired our former executive (and head of Somerset) as president of MIPS
https://en.wikipedia.org/wiki/MIPS_Technologies
... and we would periodically stop in to visit.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

other recent posts mentioning "AIM Alliance"
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2023b.html#28 NEC processors banned for 386 industrial espionage?
https://www.garlic.com/~lynn/2022g.html#85 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#28 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022e.html#11 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#105 Transistors of the 68000
https://www.garlic.com/~lynn/2022c.html#17 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2021k.html#27 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021b.html#28 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#1 How an obscure British PC maker invented ARM and changed the world

--
virtualization experience starting Jan1968, online at home since Mar1970

RS/6000 Mainframe

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RS/6000 Mainframe
Date: 05 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#54 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#80 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#85 RS/6000 Mainframe

More like: IBM's (failed) Future System (in the 1st half of the 70s) was the inspiration for 801/RISC
https://en.wikipedia.org/wiki/IBM_801
... some FS info
http://www.jfsowa.com/computer/memo125.htm
one of the last nails in the FS coffin was the IBM Houston Science Center analysis that a 370/195 application moved to an FS machine made out of the fastest available technology would have the throughput of a 370/145 (about a 30 times slowdown).

During FS, internal politics was shutting down 370 efforts (the lack of new 370s during the period is credited with giving the clone 370 makers their market foothold). When FS imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel.

The head of POK also convinced corporate to kill vm/370, shutdown the vm/370 development group (out in Burlington Mall off rt128), and transfer all the people to POK for MVS/XA. They weren't planning on telling the people until the very last minute, to minimize the numbers that could escape into the Boston area. The shutdown managed to leak early and several managed to escape (the joke was that the head of POK was a major contributor to DEC's infant VAX effort).

The 1st major IBM RISC effort was the 801/RISC Iliad chips, meant to replace the variety of different custom CISC microprocessors in the microprogrammed low&mid-range 370s ... rather than a unique CISC microprocessor with different microprogramming for every low&mid-range 370, converge on common 801/RISC Iliad architecture chips with common microprogramming. For whatever reason those 801/RISC efforts floundered, and some of the IBM 801/RISC chip engineers left for RISC efforts at other vendors.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360
Date: 06 Feb, 2024
Blog: Facebook
I took a two credit hr intro to fortran/computers. The univ had a 709/1401 and IBM pitched 360/67 for tss/360 as replacement. Pending availability of the 360/67, the 1401 was replaced with a 64k 360/30 (that had 1401 emulation) to start getting 360 experience. At the end of the semester, I was hired to rewrite 1401 MPIO (card reader->tape, tape->printer/punch, aka unit record front end for the 709 running tape->tape) for the 360/30, part of getting 360 experience. The univ shut down the datacenter over the weekend and I got the whole place dedicated (but 48hrs w/o sleep made monday classes hard). They gave me a bunch of hardware and software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc ... and within a few weeks had a 2000 card program that ran stand-alone (IPL'ed with the BPS loader). I then modified it with an assembler option that generated either the stand-alone version (took 30mins to assemble) or the OS/360 version with system services macros (took 60mins to assemble, each DCB macro taking 5-6mins).

I quickly learned the 1st thing coming in sat. morning was to clean the tape drives and printers and to disassemble the 2540 reader/punch, clean it, and reassemble. Also sometimes sat. morning, production had finished early and everything was powered off. Sometimes the 360/30 wouldn't power on; from reading manuals and trial&error, the fix was to place all the controllers in CE mode, power-on the 360/30, individually power-on each controller, and then place them all back in normal mode.

Then within a year of the intro class, the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production, so the machine ran in 360/65 mode). My 1st system sysgen was MFT release 9.5. Student fortran had run under a second on the 709, but ran over a minute on the 360/65 (fortgclg). I installed HASP, cutting the time in half. I then started doing modified (MFT11) stage2 sysgens to 1) run stage2 in production w/HASP and 2) order datasets and PDS members to optimize disk arm seek and multi-track search, cutting the time by another 2/3rds to 12.9secs (never got better than the 709 until I installed Univ. of Waterloo WATFOR).

Jan1968, Science Center people came out to install CP/67 (precursor to VM/370) ... which I mostly played with in my weekend dedicated time. I initially concentrated on rewriting pathlengths to improve running OS/360 in a virtual machine. The OS/360 test stream ran 322secs on the real machine and initially 856secs in a virtual machine (CP67 CPU 534secs); by June I had the CP67 CPU down to 113secs (from 534). In June, the Science Center scheduled a one week CP67/CMS class at the Beverly Hills Hilton. I arrive Sunday and am asked to teach CP67 (turns out the science center members that were going to teach had resigned on Friday to go with one of the commercial online services spin-offs of the science center). Old archived post with part of a SHARE presentation on some of the OS/360 and CP/67 optimization
https://www.garlic.com/~lynn/94.html#18

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world: a couple hundred million in IBM gear, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room (I believe a former airplane assembly bldg) ... recent comment was that Boeing ordered 360/65s like other companies ordered keypunches. Lots of politics between the Renton director and the CFO, who only had a 360/30 for payroll up at Boeing Field (although they enlarge it to install a 360/67 for me to play with when I'm not doing other stuff). Boeing had a disaster plan to replicate Renton up at the new 747 plant in Everett (in case Mt. Rainier heats up and the resulting mud slide takes out Renton) ... the claim was it would cost Boeing more to do w/o Renton for a week or two than the cost of replicating the datacenter.

While Renton had mostly 360/65s, they did have one 360/75 ... that was used for classified work. It had a black rope around the 75 area, and when there was classified work running, there were guards at the perimeter and black velvet draped over the CPU panel lights and over the 1403 areas where printed output was visible.

Both the Boeing and IBM teams told the story that on the day the 360 was announced, Boeing walked in with an order that made the salesman the highest paid IBM employee that year ... the claim was this was the motivation for IBM moving from straight commission to "quotas" the following year. Jan the next year, Boeing walked in again with a large order ... resulting in the salesman's quota being "adjusted" ... he leaves IBM shortly later.

While I was at Boeing, they also moved a 360/67 multiprocessor from Boeing Huntsville to Seattle. Huntsville had originally got it for TSS/360 CAD/CAM with a number of 2250s ... but like many 360/67 customers, they ran it as 360/65, configured as two 360/65s. Like the later motivation to add virtual memory to all 370s, they had severe MVT storage management problems ... aggravated by the long-running 2250 applications. Huntsville had modified MVT13 to run in virtual memory mode ... it didn't support paging ... but could use virtual memory to compensate a little for the MVT storage management problems. Later when I graduate, I join the IBM science center (instead of staying at Boeing).

trivia: the original 360 announce had the 360/60 with 2mic memory and the 360/62 and 360/70 with 1mic memory (pg6)
http://bitsavers.org/pdf/ibm/360/systemSummary/A22-6810-0_360sysSummary64.pdf
The 60, 62, & 70 were replaced with the 65 & 75 with 750ns memory. other trivia: 360/75 functional characteristics (pg14, front panel)
http://bitsavers.org/pdf/ibm/360/functional_characteristics/A22-6889-0_360-75_funcChar.pdf

A decade ago, I was asked to track down the decision to add virtual memory to all 370s. I found a staff member that had reported to the executive making the decision. Basically, MVT storage management was so bad that region sizes had to be specified four times larger than used. As a result, a typical one mbyte 370/165 would only run four regions concurrently, insufficient to keep the 165 busy and justified. Going to 16mbyte virtual memory would allow the number of MVT regions to be increased by a factor of four with little or no paging (similar to running MVT in a CP67 16mbyte virtual machine). Old archived post with pieces of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73
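
a back-of-envelope sketch (in Python) of that justification; the per-region size is an illustrative assumption, only the 4x over-specification and the 1mbyte/16mbyte figures come from the account above:

real_kb     = 1024               # typical 370/165: 1mbyte of real storage
region_spec = 256                # kbytes specified per region (illustrative assumption)
region_used = region_spec // 4   # MVT storage management: only ~1/4 actually used

# real-storage MVT: the full specified region size has to fit in real storage
print(real_kb // region_spec)    # -> 4 concurrent regions

# MVT in a 16mbyte virtual address space: the specified size comes out of
# virtual storage; real storage only has to back the pages actually being used
print(real_kb // region_used)    # -> 16 regions, with little or no paging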

Early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. One of his stories was being very vocal that the electronics across the trail wouldn't work, after which he was put in command of "spook base" (about the same time I'm at Boeing) ... one of Boyd's biographies claims that "spook base" was a $2.5B "windfall" for IBM (ten times Renton, including 360/65s analyzing signals from the sensors on the trail). "spook base" refs
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

John Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

2022 post with some Boyd & IBM refs (including Learson trying to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner

some posts mentioning undergraduate work at univ. and boeing
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360
Date: 07 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#87 IBM 360

... my hobby after joining the science center was enhanced production operating systems for internal datacenters, and HONE was a long time customer (branch office online sales&marketing support, initially US ... but eventually HONE clone systems all over the world).

then after transfer to IBM san jose research (and all the US HONE datacenters were consolidated just up the road in Palo Alto ... becoming the largest single-system-image, loosely-coupled complex, with eight multiprocessor systems and load balancing and fall-over; trivia: when FACEBOOK 1st moves into silicon valley, it is into a new bldg built next door to the former US HONE consolidated datacenter) ... I also got to wander around IBM & non-IBM datacenters in silicon valley, including disk engineering (bldg14) and product test (bldg15) across the street. They were doing around the clock, pre-scheduled, stand-alone mainframe testing. They said that they had recently tried MVS ... but it had a 15min mean-time-between-failure (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor to make it bullet proof and never fail, enabling any amount of on-demand, concurrent testing and greatly improving productivity (downside was they started knee-jerk calling me anytime they had a problem and I had to increasingly spend time playing disk hardware engineer).

Bldg 15 then got the 1st engineering 3033 outside POK processor engineering, and since disk testing took only a percent or two of processing, we scrounged up a 3830 controller and a string of 3330s and set up our own private online service ... followed by an engineering 4341 from Endicott.

I wrote an (internal only) research report about the effort and happened to mention the MVS 15min MTBF ... bringing down the wrath of the MVS organization on my head. A couple years later, when the 3380 was about to ship, FE had a test stream of 57 simulated hardware errors that were likely to occur; in all 57 cases, MVS was crashing (requiring manual re-ipl), and in 2/3rds of the cases there was no indication of what caused the failure (and I didn't feel badly at all).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

some posts mentioning 57 simulated 3380 errors crashing MVS
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2018d.html#86 3380 failures
https://www.garlic.com/~lynn/2015f.html#89 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2010n.html#15 Mainframe Slang terms
https://www.garlic.com/~lynn/2007.html#2 "The Elements of Programming Style"

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360
Date: 08 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#88 IBM 360

... also at SJR, worked with Jim Gray (& Vera Watson) on the original SQL/relational implementation, "System/R" ... and while the corporation was pre-occupied with the next great DBMS, "EAGLE", we were able to do tech transfer to Endicott for SQL/DS. Then when "EAGLE" imploded, there was a request for how fast System/R could be ported to MVS, which is eventually released as DB2 (originally for decision support only). Then when Jim leaves for Tandem, he tries to foist several things on me, including DBMS consulting with the IMS group in STL.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

Unrelated: in 1980, STL was bursting at the seams and 300 people from the IMS group were being transferred to an offsite bldg (about half-way between STL & SJR) with dataprocessing back to the STL datacenter. They had tried "remote 3270" but found the human factors unacceptable. I get con'ed into doing channel-extender support so they can install channel-attached 3270 controllers in the offsite bldg, with no perceptible human factors difference between offsite and in STL. A side-effect was that the 168s for the offsite group got a 10-15% improvement in system throughput. The channel-attached 3270 controllers had previously been spread across all the channels shared with disk controllers, and the (relatively) slower 3270 controller channel busy was interfering with disk I/O. The channel-extender boxes were faster than even the disk controllers, drastically reducing the channel busy for the same 3270 traffic (improving disk and system throughput). There was even talk about placing all STL 3270 controllers behind channel-extender boxes just to improve overall system throughput. The hardware vendor then tried to get IBM to release my support, but there was a group in POK playing with some serial stuff and they got that vetoed.
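
a minimal sketch (Python) of the channel busy effect; the traffic and transfer rates are made-up illustrative numbers (assumptions, not the STL measurements), only the direction of the effect comes from the account above:

# same 3270 traffic, very different channel busy
coax_bytes_sec = 50_000       # 3270 traffic on the channel (assumed)
slow_cu_rate   = 60_000       # effective 3270 controller transfer rate (assumed)
fast_box_rate  = 1_500_000    # channel-extender box transfer rate (assumed)

# a slow control unit holds the channel busy for the whole (slow) transfer,
# blocking disk I/O on the same channel; a fast box moves the same bytes
# while holding the channel only a small fraction of the time
print(coax_bytes_sec / slow_cu_rate)   # ~0.83 -> channel busy 83% of the time
print(coax_bytes_sec / fast_box_rate)  # ~0.03 -> channel busy 3% of the time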

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

I had started doing online computer conferencing in the 70s on the IBM internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s), but it really took off the spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem. Only about 300 participated, but claims were that 25,000 were reading. We then print up six copies of 300 pages from the online discussions, package them in six Tandem 3-ring binders and send them to the corporate executive committee (folklore is 5of6 wanted to fire me). From IBMJargon (see recent copy in the Mainframers "files" section, that doesn't have the datamation reference):

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

also see
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
ibm internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

In 1988, the IBM branch office asked if I could help LLNL (national lab) get some serial stuff they were playing with standardized, which quickly becomes the fibre channel standard ("FCS", including some stuff I had done in 1980; initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate). Then the POK serial stuff is finally released in the 90s with ES/9000 as ESCON (when it is already obsolete, 17mbyte/sec). Then some POK engineers become involved with FCS and define a heavy-weight protocol that drastically reduces throughput, which is eventually released as "FICON". The most recent published benchmark is the z196 "Peak I/O" getting 2M IOPS using 104 FICON (over 104 FCS). About the same time, an FCS was released for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Note the IBM pubs say to keep SAPs (system assist processors that actually do the I/O) limited to 70% CPU (which would be 1.5M IOPS).
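
a back-of-envelope (Python) on the per-link numbers quoted above:

z196_iops, ficon_links = 2_000_000, 104   # published z196 "Peak I/O" benchmark
per_ficon = z196_iops / ficon_links       # IOPS per FICON link
per_fcs   = 1_000_000                     # claimed for a single E5-2600 blade FCS
print(per_ficon)                          # ~19231 IOPS per FICON
print(per_fcs / per_ficon)                # ~52x per-link difference
# i.e. two such FCS (~2M IOPS) match the whole 104-FICON benchmark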

FICON & FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM, Unix, editors

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM, Unix, editors
Date: 09 Feb, 2024
Blog: Facebook
Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and Multics.
https://en.wikipedia.org/wiki/Multics
IBM line terminals used with CTSS & Multics
https://www.multicians.org/mga.html#2741

Others went to the IBM science center on the 4th flr
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
to do virtual machines (original CP40/CMS on 360/40 with hardware mods for virtual memory (& some amount of CMS from CTSS), which then morphs into CP67/CMS when 360/67 standard with virtual memory becomes available, precursor to VM370), internal network (technology also used for corporate sponsored univ. BITNET and EARN), invented GML in 1969 (morphs into ISO standard SGML a decade later and after another decade morphs into HTML at CERN) ... bunch of other online things. Lots more from Melinda's history
https://www.leeandmelindavarian.com/Melinda#VMHist

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

A decade ago, I was asked to track down the decision to add virtual memory to all 370s, and eventually found a staff member that had reported to the executive making the decision; basically MVT storage management was so bad that regions had to be specified four times larger than used, restricting a typical 1mbyte 370/165 to four concurrent regions, insufficient to keep the 165 busy and justified. Running MVT in 16mbyte virtual storage increased the number of concurrent regions by a factor of four with little or no paging (similar to running MVT in a CP67 16mbyte virtual machine) ... which also resulted in the decision to do VM370. archived post with pieces of the email exchange
https://www.garlic.com/~lynn/2011d.html#73

1974, CERN presented an analysis to SHARE of MVS/TSO compared to VM370/CMS, which plausibly contributed to the SHARE MVS group selecting a TURKEY as the group's mascot ... and also later to the POK organization convincing corporate to kill the VM370 product. 3270 terminals launched in 1971, enabling "full-screen" editors (the previous "line" editors were for 2741 terminals). The early CMS full-screen editor released to customers was EDGAR; however, there were a number of CMS "internal" full-screen editors.

Overlapping this period was the Future System effort, which was completely different from 370 and was going to replace it (internal politics during the FS period was killing off 370 efforts; the lack of new 370s during this period is credited with giving the 370 clone makers their market foothold). FS details:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

Note Amdahl appears to win the battle for ACS being 360 compatible ... then the folklore is that executives were afraid it would advance the state-of-the-art too fast, IBM would lose control of the market, and so they kill ACS/360; Amdahl leaves shortly later (and before FS), starting his own compatible clone computer company
https://people.computing.clemson.edu/~mark/acs_end.html

Then when FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. The head of POK (high-end mainframe) also convinces corporate to kill the vm370 product, shutdown the development group and transfer all the people to POK for MVS/XA (or supposedly MVS/XA wouldn't be able to ship on time).

Endicott manages to save the VM370 product mission, but had to recreate a development group from scratch. They also con me into helping with the 138/148 microcode assist. In the early 80s, I get permission to give talks on how it was done at the monthly "BAYBUNCH" user group hosted by Stanford SLAC. Several Amdahl people are there and after the meetings they grill me on some of the details. They said that they were using MACROCODE to implement a virtual machine microcode HYPERVISOR (not needing VM370). MACROCODE was 370-like instructions running in microcode mode, originally created to quickly respond to the series of trivial 3033 microcode changes constantly being required to run MVS. In the early 80s, customers weren't migrating to MVS/XA as originally planned, but Amdahl was having some success using HYPERVISOR to run MVS and MVS/XA concurrently. IBM wasn't able to respond until almost a decade later in the late 80s, with PR/SM and LPAR on 3090s.

Endicott, instead of selecting one of the internal full-screen editors for release to customers, had the XEDIT effort. I wrote them a memo asking why they hadn't selected "RED" to use for XEDIT ... it had more feature/function, was much more mature, had more efficient code (almost the same as the original line editor), etc. I got a response back that it was obviously the RED author's fault that he developed it much earlier than XEDIT and that it was much better, so it should be his responsibility to bring XEDIT up to the level of RED. From 6jun1979 email (compare CPU secs to load a large file for editing):


EDIT CMSLIB MACLIB S               2.53/2.81
RED CMSLIB MACLIB S  (NODEF)       2.91/3.12
ZED CMSLIB MACLIB S                5.83/6.52
EDGAR CMSLIB MACLIB S              5.96/6.45
SPF CMSLIB MACLIB S ( WHOLE )      6.66/7.52
XEDIT CMSLIB MACLIB S             14.05/14.88
NED CMSLIB MACLIB S               15.70/16.52

archived post with copy of emails
https://www.garlic.com/~lynn/2006u.html#26 Assembler question

Late 80s, a senior disk engineer got a talk scheduled at the world-wide, annual, internal communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing to more distributed-computing friendly platforms. The disk division had come up with solutions that were constantly being vetoed by the communication group (with their corporate responsibility for everything that crossed datacenter walls, fiercely fighting off client/server and distributed computing, trying to save their dumb terminal paradigm and install base). The GPD/Adstar VP of software, as a partial work-around, was investing in distributed computing startups that would use IBM disks, and would periodically ask us to drop by his investments. He also funded the unix/posix implementation in MVS (since it didn't "directly" cross datacenter walls, the communication group couldn't directly veto it). The communication group datacenter stranglehold wasn't just disks, and a couple short years later IBM has one of the largest losses in US corporate history and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

communication group protecting its install base posts:
https://www.garlic.com/~lynn/subnetwork.html#terminal

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

former AMEX president and IBM CEO reversing breakup posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM, Unix, editors

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM, Unix, editors
Date: 09 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors

Originally, 801/RISC ROMP was going to be used for the displaywriter follow-on. When that was canceled, they decided to pivot to the unix workstation market and got the company that had done PC/IX for the IBM/PC to do one for ROMP, released as AIX for the PC/RT.

Then the RIOS chip-set was done for the RS/6000 (PC/RT follow-on). We were doing HA/CMP cluster scale-up (RIOS didn't have cache consistency for multiprocessor scale-up). Then the executive we reported to went over to head up SOMERSET (single chip power/pc for AIM: Apple, IBM, Motorola ... with Motorola providing some features from its RISC 88k, including the cache protocol for doing multiprocessors) ... 1993.

HA/CMP had started out as HA/6000 for the NYTimes to port their newspaper system (ATEX) from VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres). Then late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors; we leave IBM a few months later.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM, Unix, editors

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM, Unix, editors
Date: 09 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#91 IBM, Unix, editors

Introduced 1971, 3277+3272 ... the channel attached controller had .089sec hardware response ... early 80s pubs were showing improved productivity with quarter second response ... which required .16sec system response (after joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters ... 1st half of the 80s had several systems getting .11sec trivial interactive system response; +.089, gives .199sec).

3278+3274 was then introduced ... moving a large amount of the 3270 electronics back to the controller (reducing manufacturing costs), with the resulting coax protocol chatter greatly increasing hardware response to .3-.5secs (depending on the amount of data). Letters to the 3278 product administrator about it worsening interactive computing got a reply that the 3278 was targeted for "data entry" (not interactive computing). Of course MVS users never noticed, since it was a very rare MVS system that had even one second system response.

Later, a 3277 terminal emulation IBM/PC card had 4-5 times the upload/download throughput of a 3278 terminal emulation card.
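
the worked arithmetic (Python) behind the response figures above:

hw_3277 = 0.089     # 3277+3272 hardware response (secs)
target  = 0.25      # "quarter second" perceived response from early 80s studies

print(target - hw_3277)   # -> 0.161: max system response that still hits .25s
print(0.11 + hw_3277)     # -> 0.199: perceived response with .11s system response
# 3278+3274 hardware response alone was .3-.5secs ... already over the quarter
# second target before the host system does anything at all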

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

some past posts mentioning 3270 interactive computing
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023f.html#78 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023e.html#0 3270
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing
https://www.garlic.com/~lynn/2019e.html#28 XT/370
https://www.garlic.com/~lynn/2019c.html#4 3270 48th Birthday
https://www.garlic.com/~lynn/2017e.html#26 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016f.html#1 Frieden calculator
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#42 Old Computing
https://www.garlic.com/~lynn/2015d.html#33 Remember 3277?
https://www.garlic.com/~lynn/2015.html#38 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011p.html#19 Deja Cloud?
https://www.garlic.com/~lynn/2011g.html#41 My first mainframe experience
https://www.garlic.com/~lynn/2005r.html#28 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM, Unix, editors

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM, Unix, editors
Date: 09 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#91 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#92 IBM, Unix, editors

wikipedia HA/CMP article (mostly covering from 2000 on; we had left in 1992):
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

For commercial, I had done a high-performance distributed lock manager with VAXCluster API semantics ... to simplify RDBMS ports from VAXCluster ... and also used it to layer a high-performance RDBMS cache consistency protocol on top. Got some amount of input from Oracle, Sybase, Informix, and Ingres on how to improve over VAXCluster. Also had some experience, having transferred to SJR in 1977 and worked with Jim Gray (before he left for Tandem) & Vera Watson on the original SQL/relational implementation, System/R. Was able to do tech transfer to Endicott for SQL/DS (while the company was pre-occupied with the next great DBMS, "EAGLE"). When EAGLE imploded, there was a request for how fast System/R could be ported to MVS ... eventually released as DB2. During HA/CMP, the mainframe DB2 group were complaining that if I was allowed to proceed, it would be at least five years ahead of them.
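
a minimal sketch (Python) of VAXCluster-style lock-mode compatibility (the API semantics mentioned above); the names and structure are illustrative, not the actual HA/CMP code, and all the distributed machinery (resource mastering, messaging) is omitted:

# standard DLM lock modes: NL (null), CR (concurrent read), CW (concurrent
# write), PR (protected read), PW (protected write), EX (exclusive)
COMPAT = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

def can_grant(requested, held):
    # grant only if the requested mode is compatible with every lock
    # currently held on the resource
    return all(requested in COMPAT[h] for h in held)

# e.g. RDBMS cache consistency: a protected-read holder blocks a writer
print(can_grant("PW", ["PR"]))   # False -> the writer has to wait
print(can_grant("CR", ["PW"]))   # True  -> unprotected read can proceed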

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

some posts mentioning distributed lock manager, vaxcluster, RDBMS
https://www.garlic.com/~lynn/2023e.html#86 Relational RDBMS
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#63 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#62 IBM DB2
https://www.garlic.com/~lynn/2014k.html#40 How Larry Ellison Became The Fifth Richest Man In The World By Using IBM's Idea
https://www.garlic.com/~lynn/2009b.html#43 "Larrabee" GPU design question

... note, not just mainframe DB2 ... the IBM S/88 Product Administrator started taking us around to their customers ... and also got me to write a section for the corporate continuous availability strategy document. However, it got pulled when both Rochester (AS/400) and POK (370 mainframe) complained (they couldn't meet the requirements)

continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

some HA/CMP posts mentioning work with the s/88 (rebranded Stratus) product administrator
https://www.garlic.com/~lynn/2024.html#82 Benchmarks
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#115 IBM RAS
https://www.garlic.com/~lynn/2023f.html#72 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#38 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2022b.html#55 IBM History
https://www.garlic.com/~lynn/2021d.html#53 IMS Stories
https://www.garlic.com/~lynn/2021.html#3 How an obscure British PC maker invented ARM and changed the world
https://www.garlic.com/~lynn/2008j.html#16 We're losing the battle
https://www.garlic.com/~lynn/2001i.html#49 Withdrawal Announcement 901-218 - No More 'small machines'

--
virtualization experience starting Jan1968, online at home since Mar1970

MVS SRM

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: MVS SRM
Date: 10 Feb, 2024
Blog: Facebook
within a year of taking the 2 credit hr intro to fortran/computers, I was hired fulltime responsible for os/360. The univ. had got a 360/67 for tss/360 to replace the 709/1401 ... but ran it as a 360/65 with os/360. Then some IBMers from the science center came out to install early CP/67 (before announce and availability for customers) and I mostly got to play with it during my dedicated weekend time. I was then invited to the next SHARE meeting for the announce of CP/67. I had been mostly rewriting code to improve OS/360 running in a virtual machine; the test was a job stream that ran 322secs on the bare machine but 856secs in a virtual machine (CP67 CPU 534secs). Within a few months I had the CP67 CPU down from 534secs to 113secs. I then start rewriting I/O, the page replacement algorithm and dispatching/scheduling ... including doing dynamic adaptive resource management. Lots of the stuff was picked up by the science center and distributed in CP67. When I graduate, I join the science center ... where one of my hobbies was enhanced production operating systems for internal datacenters (the branch office online sales&marketing HONE systems were long time customers).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

In the 23jun1969 unbundling announcement, IBM started to charge for SE services, maintenance, software (application software; they managed to make the case that kernel software was still free), etc. After the 370 announce, there was the decision to add virtual memory to all 370s (motivation was that MVT storage management was so bad that regions had to be specified four times larger than used; as a result a typical 1mbyte 370/165 was only able to run four regions concurrently, insufficient to keep the system busy and justified; going to MVT in 16mbyte virtual memory (something like running MVT in a CP67 16mbyte virtual machine) allowed the number of concurrent regions to be increased by a factor of four with little or no paging). The virtual memory decision was also the motivation for the morph of CP67->VM370, which dropped or greatly simplified a lot of CP67.

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundling

Then there was the "Future System" period; FS was completely different from 370 and was supposed to completely replace it. During the FS period, internal politics was killing off 370 efforts; the lack of new 370s during the FS period is credited with giving the 370 clone makers their market foothold. All during the FS period I continued to work on 360 stuff, then migrating lots of CP67 to VM370 for internal datacenters (and periodically ridiculing FS). When FS finally implodes, there is a mad rush to get stuff back into the 370 product pipelines (including kicking off the quick&dirty 3033&3081 efforts in parallel) ... and also (motivated by the rise of the clone 370 makers) the decision to start the transition to charging for kernel software. Trivia: one of the final nails in the FS coffin was a study by the IBM Houston Science Center that if some 370/195 applications were migrated to an FS machine made out of the fastest technology available, they would have the throughput of a 370/145 (a factor of 30 times slowdown). Some FS details
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

My dynamic adaptive resource manager (from my 60s undergraduate days) was selected to be the guinea pig for a "charged-for" kernel add-on (on the way to charging for all kernel software). A corporate guru (steeped in MVS & SRM) did a review and said he wouldn't sign off, because everybody knew that manual tuning knobs were the state of the art (and I didn't have any). MVS SRM had a huge array of random tuning knobs and they were making presentations at SHARE about the effects that different (random?) combinations of SRM values had on different (effectively static) workloads. I tried to explain dynamic adaptive, but it fell on deaf ears. So I put in a few manual tuning knobs (all accessed by a "SRM" command ... part of ridiculing MVS) that were part of an Operations Research joke ... because the dynamic adaptive code had greater degrees of freedom than the manual tuning knobs ... and could compensate for any manually set value.
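
a minimal sketch (Python) of the dynamic adaptive idea (not the actual CP67/VM370 code; the names are made up): the only "knob" is a share target, and priority comes from measured consumption, so the feedback loop dominates whatever the knob is set to:

class User:
    def __init__(self, name, share=1.0):   # 'share' is the single manual knob
        self.name, self.share, self.used = name, share, 0.0

def dispatch(users):
    # dynamic priority: measured consumption normalized by the share target;
    # run whoever is furthest below their fair share right now
    return min(users, key=lambda u: u.used / u.share)

users = [User("A"), User("B", share=2.0)]
for _ in range(6):                 # six time slices
    dispatch(users).used += 1.0    # charge measured consumption, not a setting
print({u.name: u.used for u in users})   # -> {'A': 2.0, 'B': 4.0}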

dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

Then the head of POK managed to convince corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (or supposedly MVS/XA wouldn't be able to ship on time) ... Endicott eventually managed to save the VM370 product mission, but had to recreate a development group from scratch. Note that earlier CERN had presented an analysis at SHARE comparing MVS/TSO with VM370/CMS which plausibly contributed to SHARE MVS group selecting "TURKEY" as mascot (and the POK MVS motivation to kill VM370).

a few posts about bringing down the wrath of the MVS group on my head, when I documented work for disk engineering and happened to mention that when they had tried MVS, it had a 15min mean-time-between-failures (requiring manual re-ipl)
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023d.html#97 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2022g.html#4 3880 DASD Controller
https://www.garlic.com/~lynn/2022f.html#95 VM I/O
https://www.garlic.com/~lynn/2022d.html#11 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#48 IBM 3033 Personal Computing
https://www.garlic.com/~lynn/2022b.html#7 USENET still around
https://www.garlic.com/~lynn/2022.html#35 Error Handling
https://www.garlic.com/~lynn/2021k.html#107 IBM Future System
https://www.garlic.com/~lynn/2021k.html#97 IBM Disks
https://www.garlic.com/~lynn/2021j.html#65 IBM DASD
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021g.html#0 MVS Group Wrath

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AIX

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AIX
Date: 11 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#72 IBM AIX
https://www.garlic.com/~lynn/2024.html#70 IBM AIX

there was an unsuccessful attempt to get IBM to make an offer to the student that did a 370 unix port at the univ ... and Amdahl hired him; during development the Amdahl people referred to it as "gold" (a play on Amdahl Unix ... aka element Au).

Then Palo Alto was working with both UCB on BSD for 370 and UCLA on Locus for 370 ... then Palo Alto got redirected to do the BSD port to PC/RT instead ... but eventually the Locus port(s) ship as AIX/370 (and AIX/386) ... AIX/370 running under vm/370.

Both IBM people and Amdahl people claimed that running under VM/370 was because field engineering/service required full EREP to service the machines, and adding mainframe EREP to Unix would have been many times the effort of just doing the straight-forward port ... running under VM/370 took advantage of its EREP support.

A Stanford group approached Palo Alto about IBM doing a workstation they had developed. Palo Alto brought in some internal groups for the review: ACORN (aka IBM/PC), Datahub (doing a local lan fileserver), and a group working with Apollo Domain workstations. All three groups claimed they were doing something better than the Stanford presentation ... and IBM declined ... so they went away and formed SUN.

related threads
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2024.html#13 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2024.html#14 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2024.html#15 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#40 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#54 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#74 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#80 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#85 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#86 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#91 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#92 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#93 IBM, Unix, editors

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM, Unix, editors

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM, Unix, editors
Date: 11 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#91 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#92 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#93 IBM, Unix, editors

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters. Then after the decision to add virtual memory to all 370s, a group was formed to do VM370 ... and in the morph from CP67->VM370 lots of stuff was simplified or dropped. In 1974, I started moving stuff to VM370 and eventually had a release 2 based "CSC/VM" system for production distribution, later upgraded to a release 3 base. Somehow AT&T Longlines had acquired a copy ... after I had restructured the kernel for multiprocessor support, but before moving the CP67 multiprocessor support to the VM370 base (done originally for the branch office online sales&marketing support HONE systems).

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

Longlines added local features and kept upgrading it to the latest processor lines, also making it available to some other places in AT&T. In the early 80s, the IBM executive account manager for AT&T tracks me down, wanting me to help Longlines move to a system with multiprocessor support. The IBM 3081 was never intended to have a single processor version, and they were afraid all the internal AT&T accounts running my CSC/VM from the mid-70s would move to the latest Amdahl single processor machine (similar to the concern about the Airlines ACP/TPF customers, ACP/TPF also not having multiprocessor support).

The original 3081D was supposed to be faster than the 3033, but for lots of benchmarks a 3081D processor was slower than the 3033 ... then they doubled the cache size for the 3081K (claimed about 40% higher throughput). Amdahl's latest single processor was about the same MIP rate as the aggregate of the two processor 3081K.

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

long winded post that starts with Learson trying to block the bureaucrats, careerists and MBAs from destroying the watson culture/legacy ... but also has an account about what was claimed to be the 1st true blue, commercial IBM customer installing an Amdahl machine:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning CSC/VM, AT&T Longlines and 3081
https://www.garlic.com/~lynn/2023g.html#30 Vintage IBM OS/VU
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2022e.html#97 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022.html#101 Online Computer Conferencing
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021b.html#80 AT&T Long-lines
https://www.garlic.com/~lynn/2019d.html#121 IBM Acronyms
https://www.garlic.com/~lynn/2017d.html#80 Mainframe operating systems?
https://www.garlic.com/~lynn/2017d.html#48 360 announce day
https://www.garlic.com/~lynn/2017.html#20 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2015c.html#27 30 yr old email
https://www.garlic.com/~lynn/2013b.html#37 AT&T Holmdel Computer Center films, 1973 Unix
https://www.garlic.com/~lynn/2012f.html#59 Hard Disk Drive Construction
https://www.garlic.com/~lynn/2011g.html#7 Is the magic and romance killed by Windows (and Linux)?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM, Unix, editors

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM, Unix, editors
Date: 12 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#91 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#92 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#93 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#96 IBM, Unix, editors

AWD had done their own (PC/AT bus) 4mbit T/R card for the PC/RT. Then for the RS/6000 with microchannel, AWD was forced to use PS2 cards (that had been heavily performance-kneecapped by the communication group); an example was the PS2 16mbit T/R microchannel card having lower card throughput than the PC/RT 4mbit T/R card (the joke was that a PC/RT 4mbit t/r server would have higher throughput than an RS/6000 16mbit t/r server). The new Almaden research bldg had been heavily provisioned with CAT4 (assuming 16mbit T/R), but they found that (CAT4) 10mbit ethernet had higher aggregate LAN throughput (than 16mbit t/r) and lower latency. Also, a $69 10mbit ethernet card had much higher throughput than an $800 16mbit T/R card.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

The communication group had also been fighting off the release of mainframe TCP/IP support (part of fiercely fighting off client/server and distributed computing) ... then apparently some influential customers got that changed ... and then they changed their strategy: since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor.

I then did RFC1044 support, and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained (4341) channel throughput using only a modest amount of the 4341 processor (around a 500 times improvement in bytes moved per instruction executed).
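
a back-of-envelope (Python) of the "500 times" claim; the MIP rates, channel rate, and CPU fraction are round-number assumptions for illustration, not measurements (only the 44kbytes/sec figure is from the account above):

base_bytes = 44_000          # shipped product: 44kbytes/sec aggregate
base_mips  = 14.0            # ~one whole 3090 processor, in MIPS (assumed)
new_bytes  = 1_000_000       # ~sustained 4341 channel rate, bytes/sec (assumed)
new_mips   = 0.6             # "modest amount" of a ~1.2 MIPS 4341 (assumed)

instr_per_byte_base = base_mips * 1e6 / base_bytes    # ~318 instructions/byte
instr_per_byte_new  = new_mips  * 1e6 / new_bytes     # ~0.6 instructions/byte
print(instr_per_byte_base / instr_per_byte_new)       # ~530 -> "around 500 times"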

rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
posts mentioning communication group fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm and install base
https://www.garlic.com/~lynn/subnetwork.html#terminal

recent posts that specifically mention $69 10mbit ethernet card
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#76 Another IBM Downturn
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#83 IBM's Near Demise
https://www.garlic.com/~lynn/2023b.html#50 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#34 Online Terminals
https://www.garlic.com/~lynn/2023.html#77 IBM/PC and Microchannel
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#18 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#24 IBM "nine-net"
https://www.garlic.com/~lynn/2022b.html#84 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#65 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021d.html#15 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021b.html#45 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#17 IBM Kneecapping products
https://www.garlic.com/~lynn/2021.html#77 IBM Tokenring

--
virtualization experience starting Jan1968, online at home since Mar1970

Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
Newsgroups: comp.arch
Date: Tue, 13 Feb 2024 10:45:56 -1000
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
They shouldn't. Different people here have expressed different ideas, but each person has expressed more or less definite ideas. The essential element of "No true Scotsman" is that whatever the distinguishing property or quality is supposed to be is never identified, and cannot be, because it is chosen after the fact to make the "prediction" be correct. That's not what's happening in the RISC discussions.

I had the impression from John
https://en.wikipedia.org/wiki/John_Cocke
that 801/risc was to do the opposite of the failed Future System effort
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
but there is also account of some risc work overlapping with FS
https://www.ibm.com/history/risc

In the early 1970s, telephone calls didn't instantly bounce between handheld devices and cell towers. Back then, the connection process required human operators to laboriously plug cords into the holes of a switchboard. Come 1974, a team of IBM researchers led by John Cocke set out in search of ways to automate the process. They envisioned a telephone exchange controller that would connect 300 calls per second (1 million per hour). Hitting that mark would require tripling or even quadrupling the performance of the company's fastest mainframe at the time -- which would require fundamentally reimagining high-performance computing.

... snip ...

End of the 70s, the 801/risc Iliad chip was going to be the microprocessor for running 370 (for low&mid range 370 computers) & other architecture emulators ... the effort floundered and some 801/risc engineers even left IBM for other vendors' risc efforts (like AMD 29k).

The 801/risc ROMP chip was going to be for the displaywriter followon ... but when that was killed, they decided to pivot to the unix workstation market ... and got the company that did PC/IX for the IBM/PC to do a port for ROMP ... announced as AIX for the PC/RT.

Then there was the six chip RIOS chipset for RS/6000 ... we were doing HA/6000, originally for the NYTimes to move their newspaper system (ATEX) off VAXcluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (oracle, sybase, informix, ingres). At the time, 801/risc didn't have cache coherency for multiprocessor scale-up.

The executive we were reporting to then went over to head up Somerset ... single chip for AIM
https://en.wikipedia.org/wiki/AIM_alliance
https://en.wikipedia.org/wiki/IBM_Power_microprocessors
https://en.wikipedia.org/wiki/Motorola_88000
In the early 1990s Motorola joined the AIM effort to create a new RISC architecture based on the IBM POWER architecture. They worked a few features of the 88000 (such as a compatible bus interface[10]) into the new PowerPC architecture to offer their customer base some sort of upgrade path. At that point the 88000 was dumped as soon as possible

... snip ...

https://en.wikipedia.org/wiki/PowerPC
https://en.wikipedia.org/wiki/IBM_Power_microprocessors#PowerPC
After two years of development, the resulting PowerPC ISA was introduced
in 1993. A modified version of the RSC architecture, PowerPC added single-precision floating point instructions and general register-to-register multiply and divide instructions, and removed some POWER features. It also added a 64-bit version of the ISA and support for SMP.

... snip ...

trivia: the telco work postdates ACS/360 ... folklore is IBM killed the effort because they were afraid it would advance the state-of-the-art too fast and they would lose control of the market ... the following page also references features that would show up more than two decades later in the 90s with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html

trivia2: We had early Jan92 meeting with Oracle CEO Ellison and AWD/Hester where Hester tells Ellison HA/CMP would have 16-way clusters by mid92 and 128-way clusters by ye92. Then end of Jan92, the official IBM Kingston supercomputer group pivots and HA/CMP cluster scale-up is transferred to IBM Kingston (for announce as IBM supercomputer for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors. Then Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

A Look at Private Equity's Medicare Advantage Grifting

From: Lynn Wheeler <lynn@garlic.com>
Subject: A Look at Private Equity's Medicare Advantage Grifting
Date: 14 Feb, 2024
Blog: Facebook
A Look at Private Equity's Medicare Advantage Grifting
https://www.nakedcapitalism.com/2024/02/a-look-at-private-equitys-medicare-advantage-grifting.html
This paper is useful since it describes some of the Medicare Advantage abuses, explaining how various rentiers game the program, including outright fraud. After explaining the common types of extractive behavior, and making clear they are serious. Improper payments, according to the CBO, approximately 10% of total payments to Medicare Advantage Organizations, as in insurers that have contracted with Medicare to offer Medicare Advantage plans. And as the study explains, this isn't the only place fraud occurs. Broker schemes to get paid well over their supposedly regulated commissions and mis-selling of plans are also common abuses.

... snip ...

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

some recent specific posts mentioning private equity and health care
https://www.garlic.com/~lynn/2024.html#45 Hospitals owned by private equity are harming patients
https://www.garlic.com/~lynn/2024.html#0 Recent Private Equity News
https://www.garlic.com/~lynn/2023f.html#1 How U.S. Hospitals Undercut Public Health
https://www.garlic.com/~lynn/2023d.html#68 Tax Avoidance
https://www.garlic.com/~lynn/2023b.html#93 Corporate Greed Is a Root Cause of Rail Disasters Around the World
https://www.garlic.com/~lynn/2023.html#23 Health Care in Crisis: Warning! US Capitalism is Lethal
https://www.garlic.com/~lynn/2023.html#8 Ponzi Hospitals and Counterfeit Capitalism
https://www.garlic.com/~lynn/2022h.html#119 Patients for Profit: How Private Equity Hijacked Health Care
https://www.garlic.com/~lynn/2022h.html#53 US Is Focused on Regulating Private Equity Like Never Before
https://www.garlic.com/~lynn/2022g.html#25 Another Private Equity-Style Hospital Raid Kills a Busy Urban Hospital
https://www.garlic.com/~lynn/2022f.html#100 When Private Equity Takes Over a Nursing Home
https://www.garlic.com/~lynn/2022c.html#103 The Private Equity Giant KKR Bought Hundreds Of Homes For People With Disabilities
https://www.garlic.com/~lynn/2021h.html#20 Hospitals Face A Shortage Of Nurses As COVID Cases Soar
https://www.garlic.com/~lynn/2021g.html#64 Private Equity Now Buying Up Primary Care Practices
https://www.garlic.com/~lynn/2021f.html#7 The Rise of Private Equity
https://www.garlic.com/~lynn/2021e.html#48 'Our Lives Don't Matter.' India's Female Community Health Workers Say the Government Is Failing to Protect Them From COVID-19
https://www.garlic.com/~lynn/2021c.html#44 More Evidence That Private Equity Kills: Estimated >20,000 Increase in Nursing Home Deaths
https://www.garlic.com/~lynn/2021c.html#7 More Evidence That Private Equity Kills: Estimated >20,000 Increase in Nursing Home Deaths

--
virtualization experience starting Jan1968, online at home since Mar1970

Multicians

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Multicians
Date: 14 Feb, 2024
Blog: Facebook
Multicians
https://www.multicians.org

Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and Multics.
https://en.wikipedia.org/wiki/Multics
IBM line terminals used with CTSS & Multics
https://www.multicians.org/mga.html#2741

Others went to the IBM science center on the 4th flr
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

to do virtual machines (the original CP40/CMS on a 360/40 with hardware mods for virtual memory, with some amount of CMS from CTSS and CTSS RUNOFF redone as "SCRIPT"; CP40/CMS then morphs into CP67/CMS when the 360/67, standard with virtual memory, becomes available ... precursor to VM370), the internal network (technology also used for the corporate sponsored univ. BITNET and EARN), and invented GML in 1969 (morphs into ISO standard SGML a decade later, and after another decade morphs into HTML at CERN), with GML tag processing then added to SCRIPT ... and a bunch of other online things. Lots more from Melinda's history
https://www.leeandmelindavarian.com/Melinda#VMHist

A science center co-worker was responsible for the networking technology of the science center CP67 wide-area network, which morphs into the internal corporate network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s) and was also used for the corporate sponsored BITNET and EARN univ. networks (BITNET/EARN also larger than arpanet/internet for a period). Some 60s network history from one of the GML inventors:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

Science Center co-worker
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

Ed and I transfer out to SJR in 1977

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

technology also used for the corporate sponsored univ. BITNET (& EARN)
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/European_Academic_and_Research_Network
https://earn-history.net/technology/the-network/

In the early 80s, I had the HSDT effort, T1 and faster computer links (both satellite & terrestrial), was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers; then congress cut the budget, some other things happened and eventually an RFP was released (in part based on what we already had running). From 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid; RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

trivia: 1st webserver in the US was on the Stanford SLAC VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

other trivia: I was an undergraduate and full time univ. employee responsible for OS/360 (on a 360/67 running as a 360/65); the univ. shutdown the datacenter on weekends and I had it dedicated (although 48hrs w/o sleep made monday classes hard). CSC came out and installed CP67 (3rd install after CSC itself and MIT Lincoln Labs) and I mostly played with it on weekends. CP67 had auto terminal recognition for 2741 and 1052, but the univ also had some number of TTY/ASCII terminals, so I added TTY support (integrated with the auto terminal recognition) but with a hack using one byte line lengths ... which CSC picked up and included in the CP67 distribution.

Account of the CP67 system for MIT Urban Lab in tech sq (bldg across from 545) crashing 27 times in a single day: somebody down at harvard got an ascii device with 1200 line length; they increased the max. line length but didn't patch my one-byte hack:
https://www.multicians.org/thvv/360-67.html
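
A minimal sketch of that failure mode (hypothetical Python, not the actual CP67 assembler): a maximum line length held in a one-byte field silently keeps only the low-order 8 bits, so raising the limit to 1200 without widening the field yields a wrong (too small) value for sizing buffers.

MAX_ONE_BYTE = 0xFF

def stored_line_length(requested: int) -> int:
    # simulates storing the length into a one-byte field:
    # only the low-order 8 bits survive
    return requested & MAX_ONE_BYTE

print(stored_line_length(80))    # 80  -- fine for 2741/1052/TTY
print(stored_line_length(1200))  # 176 -- 1200 mod 256; buffers sized from
                                 # this value get overrun -> system crash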

There was a little rivalry between 4th & 5th flrs ... one of their customers was USAFDC in the pentagon ...
https://www.multicians.org/sites.html
https://www.multicians.org/mga.html#AFDSC
https://www.multicians.org/site-afdsc.html

In spring 1979, some USAFDC people wanted to come by to talk about getting 20 4341 VM370 systems. By the time they finally came by six months later, the planned order had grown to 210 4341 VM370 systems. Earlier, in jan1979, I had been con'ed into doing a 6600 benchmark on an internal engineering 4341 (processor clock not running quite full-speed, before shipping to customers) for a national lab that was looking at getting 70 4341s for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). The national lab benchmark had run 35.77secs on the 6600 and 36.21secs on the engineering 4341.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
NSFNET related email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
Newsgroups: comp.arch
Date: Wed, 14 Feb 2024 13:26:05 -1000
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
By coincidence I have recently been reading The Design of Design, by Fred Brooks. In an early chapter he briefly relates (in just a few paragrahs) an experience of doing a review of the Future Systems architecture, and it's clear Brooks was impressed by a lot of what he heard. It's worth reading. But I can't resist giving away the punchline, which appears at the start of the fourth (and last) paragraph:

I knew then that the project was doomed.

In case people are interested, I think the rest of the book is worth reading also, but to be fair there should be a warning that much of what is said is more about management than it is about technical matters. Still I expect y'all will find it interesting (and it does have some things to say about computer architecture).


re:
https://www.garlic.com/~lynn/2024.html#98 Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)

one of the final nails in the FS coffin was a study by the IBM Houston Science Center: if 370/195 apps were redone for an FS machine made out of the fastest available technology, they would have the throughput of a 370/145 (about a factor of 30 drop in throughput).

during the FS period (FS was completely different from 370 and was going to completely replace it), internal politics was killing off 370 efforts ... the lack of new 370 products during the period is credited with giving the clone 370 makers their market foothold. when FS finally implodes, there was a mad rush getting stuff back into the 370 product pipelines.

trivia: I continued to work on 360&370 stuff all through the FS period, even periodically ridiculing what they were doing (drawing analogy with a long running cult film playing down the street in central sq), which wasn't exactly a career enhancing activity ... it was as if nobody bothered to think about how all the "wonderful" stuff might actually be implemented (or even if it was possible).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts specifically mentioning the analogy with a cult film playing down at central sq
https://www.garlic.com/~lynn/2023d.html#12 Ingenious librarians
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#15 IBM Wild Ducks
https://www.garlic.com/~lynn/2021b.html#97 IBM Glory days
https://www.garlic.com/~lynn/2014m.html#155 IBM Continues To Crumble
https://www.garlic.com/~lynn/2012l.html#49 Too true to be funny - 51% of the surveyed Americans think that stormy we
https://www.garlic.com/~lynn/2011j.html#60 Who was the Greatest IBM President and CEO of the last century?
https://www.garlic.com/~lynn/2011j.html#14 Innovation and iconoclasm
https://www.garlic.com/~lynn/2011g.html#6 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011f.html#2 Car models and corporate culture: It's all lies
https://www.garlic.com/~lynn/2011d.html#13 I actually miss working at IBM
https://www.garlic.com/~lynn/2009d.html#66 Future System
https://www.garlic.com/~lynn/2008g.html#54 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2008g.html#53 performance of hardware dynamic scheduling

--
virtualization experience starting Jan1968, online at home since Mar1970

EBCDIC Card Punch Format

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: EBCDIC Card Punch Format
Date: 15 Feb, 2024
Blog: Facebook
nearly 60yrs ago, the univ. had a 709 (tape->tape) with a 1401 (unit record front end), and IBM convinced them to replace the 709/1401 with a 360/67 for tss/360; pending availability of the 360/67, IBM replaced the 1401 with a 360/30 (as part of gaining 360 experience). I had taken a 2 credit hr intro to fortran/computers and at the end of the semester I was hired to rewrite the 1401 MPIO in assembler for the 360/30. The univ. shutdown the datacenter on the weekends and I had the whole place dedicated, although 48hrs w/o sleep made monday classes hard. I was given a bunch of software and hardware manuals and got to design and implement my own monitor, interrupt handlers, device drivers, error recovery, storage management, etc ... and after a few weeks had a 2000 card assembler program (running stand-alone, loaded by IPLing the BPS loader).

Re-assembly required booting OS/360 and assembling the program, which took 30mins elapsed time. I eventually learned to read hex punch holes: fan the TXT deck out to the card with the storage displacement, put it in the 026, copy the card out to the patch location, multi-punch the patch and copy the rest of the card. I then did an assembler option to generate either the stand-alone version or an OS/360 version with system services (which took 60mins to assemble, each DCB macro taking 5-6mins). Sat. morning I had quickly learned the 1st thing was to clean the tape drives and printers, and disassemble the 2540, clean it and re-assemble it. Sometimes sat. morning, production had finished early and everything had been powered off. Sometimes the 360/30 wouldn't power on ... lots of reading manuals and trial&error; found I could place all the controllers in CE-mode, power on the 360/30, individually power-on the controllers and return them to normal mode.

note: 360s originally were supposed to be ASCII machines, but the ASCII unit record gear wasn't ready yet, so they decided to ship with the existing BCD gear and shift later (the PSW had an ASCII/EBCDIC mode bit). However, the 360 system developers had littered the software with an enormous amount of EBCDIC dependencies ... and were never able to practically make the shift.
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
The culprit was T. Vincent Learson. The only thing for his defense is that he had no idea of what he had done. It was when he was an IBM Vice President, prior to tenure as Chairman of the Board, those lofty positions where you believe that, if you order it done, it actually will be done. I've mentioned this fiasco elsewhere.

... snip ...

other
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM
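
The flavor of those dependencies is easy to demonstrate; a small Python illustration (mine, added here, using the standard cp037 EBCDIC codec) of how the same characters get different byte values, so any code comparing, sorting, or doing arithmetic on raw bytes bakes in one character set:

# 'A' is 0x41 in ASCII but 0xC1 in EBCDIC (cp037); digits are 0x3x vs
# 0xFx; even blank differs (0x20 vs 0x40). Range tests and arithmetic
# on raw byte values only work for one encoding.
for ch in "A1 ":
    print(repr(ch), hex(ch.encode("ascii")[0]), hex(ch.encode("cp037")[0]))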

Other Learson trivia: as Chairman, trying (and failing) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

20yrs later IBM has one of the largest losses in history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)

one of the 100 videos from the 100th year event a few years ago was about "wild ducks" ... but it was a customer "wild duck" ... apparently IBM employee wild ducks had been expunged.

gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent posts mentioning the EBCDIC "goof"
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023f.html#53 IBM Vintage ASCII 360
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#94 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023e.html#24 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023.html#80 ASCII/TTY Terminal Support
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards

recent posts mentioning rewriting 1401 MPIO for 360/30
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#66 2540 "Column Binary"
https://www.garlic.com/~lynn/2023g.html#53 Vintage 2321, Data Cell
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#36 Timeslice, Scheduling, Interdata
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#90 Vintage IBM HASP
https://www.garlic.com/~lynn/2023f.html#88 Vintage IBM 709
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#29 Univ. Maryland 7094
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023f.html#14 Video terminals
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#99 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#64 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#60 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#28 Punch Cards
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#58 Almost IBM class student
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#5 1403 printer
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?

--
virtualization experience starting Jan1968, online at home since Mar1970

Multicians

From: Lynn Wheeler <lynn@garlic.com>
Subject: Multicians
Date: 15 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#100 Multicians

Note both Multics and IBM TSS/360 had single level store designs ... sort of adopted by the future system effort ... the FS failure also contributed to giving single-level-store a bad reputation inside IBM ... more FS:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
one of the last nails in the FS coffin was analysis by the IBM Houston Science Center showing that redoing 370/195 applications for an FS machine made out of the fastest hardware technology available would give the throughput of a 370/145 (something like a 30 times slowdown).

At the science center (on the 4th flr, below Multics on the 5th flr), I continued to work on 360/370 all during the FS period, even periodically ridiculing FS, drawing analogy with the long running cult film down at central sq (which wasn't a career enhancing activity). I had also done a single-level-store-like implementation for the CP67/CMS filesystem, later moved to VM370/CMS (I would claim I learned how not to do some things from the TSS/360 experience) ... available on internal systems ... but it got lots of resistance after the FS implosion.

Chandersekaran sent out a request (copying you) asking for somebody to teach a CP internals class, which found its way to me ... my reply (from long ago ... nearly 40yrs ago ... and far away):

Date: 11/14/85 09:33:21
From: wheeler

re: cp internals class;

I'm not sure about 3 days solid ... and/or how useful it might be all at once ... but I might be able to do a couple of half days here and there when I'm in washington for other reasons. I'm there (Alexandria) next tues, weds, & some of thursday.

I expect ... when the NSF joint study for the super computer center network gets signed ... i'll be down there more.

BTW, I'm looking for a IBM 370 processor in the wash. DC area running VM where I might be able to get a couple of userids and install some hardware to connect to a satellite earth station & drive PVM & RSCS networking. It would connect into the internal IBM pilot ... and possibly also the NSF supercomputer pilot.


... snip ... top of post, old email index, NSFNET email

as per above, internal IBM politics shut down the effort with NSF & the supercomputer centers.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
page mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
NSFNET related email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Multicians

From: Lynn Wheeler <lynn@garlic.com>
Subject: Multicians
Date: 15 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2024.html#103 Multicians

... s/38 did a simplified version of the (FS) single level store ... also there was plenty of performance headroom between s/38 market throughput requirements and the available technology

one of the simplifications was putting all disks into a single filesystem, with the result that a file could be scatter-allocated across the available disks and any backup/restore had to be of the whole filesystem (no matter how many disks). single disk failure was common ... requiring the physical disk be replaced and then the whole filesystem restored (scale-up could mean the whole system doing nothing but backup/restore for extended periods of time).

a s/38 with multiple disks could spend a day or two restoring the filesystem after a single disk failure. an engineer over in bldg14, where I periodically played disk engineer, was responsible for the 1977 RAID patent
https://en.wikipedia.org/wiki/RAID#History
and since single disk failures were so traumatic for larger s/38 systems, s/38 was an early adopter.
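
A back-of-envelope Python sketch (illustrative failure rates, not s/38 field data) of why scatter allocation made single-disk failure so traumatic: losing any one of n disks loses the whole filesystem, and the chance of that grows quickly with n.

def p_filesystem_loss(n_disks: int, p_disk: float) -> float:
    # probability at least one of n independent disks fails
    # = 1 - P(all n survive)
    return 1.0 - (1.0 - p_disk) ** n_disks

for n in (1, 4, 8, 16):
    print(n, "disks:", round(p_filesystem_loss(n, 0.05), 3))
# 1 disk: 0.05 ... 16 disks: ~0.56 -- and every loss meant restoring the
# whole filesystem, hence s/38 being an early RAID adopter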

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
posts mentioning getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

some posts mentioning s/38 single level store, single disk failures, and raid
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#61 The Most Important Computer You've Never Heard Of
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away
https://www.garlic.com/~lynn/2022.html#41 370/195
https://www.garlic.com/~lynn/2021k.html#43 Transaction Memory
https://www.garlic.com/~lynn/2019c.html#44 IBM 9020
https://www.garlic.com/~lynn/2019c.html#32 IBM Future System
https://www.garlic.com/~lynn/2019c.html#2 S/38, AS/400
https://www.garlic.com/~lynn/2019b.html#52 S/360
https://www.garlic.com/~lynn/2018f.html#118 The Post-IBM World
https://www.garlic.com/~lynn/2018f.html#37 The rise and fall of IBM
https://www.garlic.com/~lynn/2018e.html#95 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017k.html#43 Low end IBM System/360 (-30) and other machines
https://www.garlic.com/~lynn/2017j.html#34 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017g.html#66 Is AMD Dooomed? A Silly Suggestion!
https://www.garlic.com/~lynn/2016e.html#115 IBM History
https://www.garlic.com/~lynn/2015h.html#59 IMPI (System/38 / AS/400 historical)
https://www.garlic.com/~lynn/2014m.html#115 Mill Computing talk in Estonia on 12/10/2104
https://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014c.html#76 assembler
https://www.garlic.com/~lynn/2013o.html#7 Something to Think About - Optimal PDS Blocking
https://www.garlic.com/~lynn/2013f.html#29 Delay between idea and implementation
https://www.garlic.com/~lynn/2011l.html#15 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2011d.html#71 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011.html#14 IBM Future System
https://www.garlic.com/~lynn/2010o.html#7 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2007t.html#72 Remembering the CDC 6600

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM, Unix, editors

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM, Unix, editors
Date: 16 Feb, 2024
Blog: Facebook
comment from the (mainframe) vi post (edit, red, zed, edgar, spf, xedit, ned)
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors

also, as undergraduate I had been hired fulltime responsible for OS/360 (running on a 360/67 as a 360/65; the machine originally was for TSS/360) ... the univ. shutdown the datacenter on weekends and I would have the place dedicated ... but 48hrs w/o sleep made monday classes hard. Then the science center came out and installed CP67 (precursor to vm370), the 3rd install after CSC itself and MIT Lincoln Labs ... I mostly got to play with it during the weekend dedicated time. There was a 2250 and I modified the CMS editor to have fullscreen 2250 support (leveraging a 2250 library that Lincoln Labs had done originally for Fortran). Then for MVT18+HASP, I added terminal support to HASP for CRJE, with an editor implementing the CMS edit syntax (HASP conventions were totally different from CMS, so I had to implement it from scratch).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
hasp, jes, nje, nji posts
https://www.garlic.com/~lynn/submain.html#hasp

a few posts mentioning in 60s, cms edit modified for 2250 and HASP modified for CRJE
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2005n.html#45 Anyone know whether VM/370 EDGAR is still available anywhere?
https://www.garlic.com/~lynn/99.html#109 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM, Unix, editors

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM, Unix, editors
Date: 16 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#105 IBM, Unix, editors

much more mellow: recent post about Learson trying (and failing) to block the bureaucrats, careerists and MBAs from destroying the Watson culture/legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

also mentions being blamed, in the late 70s and early 80s, for online computer conferencing (precursor to modern social media) on the IBM internal network (larger than arpanet/internet from just about the start until sometime mid/late 80s) ... folklore is that when the corporate executive committee was told, 5of6 wanted to fire me. A decade later (and two decades after Learson failed) ... IBM has one of the largest losses in US company history and was being reorged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

corporate support for vm370, modulo things like: in the wake of the FS failure, the head of POK managed to convince corporate to kill VM370, shutdown the development group, and transfer all the people to POK for MVS/XA (supposedly otherwise MVS/XA wouldn't ship on time). Endicott eventually managed to save the VM370 product mission, but had to recreate a development group from scratch. POK executives were also going around internal datacenters trying to bully them into moving off VM370/CMS to MVS/TSO.

a few recent refs about POK killing VM370 and bullying internal datacenters:
https://www.garlic.com/~lynn/2023g.html#100 VM Mascot
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
Newsgroups: comp.arch
Date: Fri, 16 Feb 2024 09:36:05 -1000
Brett <ggtgp@yahoo.com> writes:
A page with a bunch of links on IBM future systems:

https://people.computing.clemson.edu/~mark/fs.html#:~:text=The%20IBM%20Future%20System%20(FS,store%20with%20automatic%20data%20management.


re:
https://www.garlic.com/~lynn/2024.html#98 Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
https://www.garlic.com/~lynn/2024.html#101 Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)

trivia: upthread post I also mention web page
https://people.computing.clemson.edu/~mark/fs.html
and Smotherman references archive of my posts that mention future system
https://www.garlic.com/~lynn/subtopic.html#futuresys
but around two decades ago, I split the subtopic.html web page into several; it is now
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM, Unix, editors

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM, Unix, editors
Date: 16 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#105 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#106 IBM, Unix, editors

the 23jun69 unbundling announcement started charging for (application) products ... the pricing requirement was that revenue had to cover original development plus ongoing development/maintenance.

Mainstream MVT/SVS/MVS organizations tended to be extremely bloated with a large run rate. This was encountered with JES NJE. The original HASP code had "TUCC" in col. 68-71 ... and was then carried over into JES2. However, there was no forecasted NJE price at which the projected number of customers produced the required revenue. The VM370 group was trying to get VNET/RSCS announced ... which met the requirements at a few dollars/month ... but the head of POK was in the process of convincing corporate to kill the VM370 product ... so there was no prospect of getting the VNET/RSCS product approved (even tho the internal network was rapidly approaching 700 nodes, nearly all vm370). Then the JES group cut a deal announcing NJE with VNET/RSCS as a "joint" product (at $600/month) ... resulting in the joint product revenue meeting the requirements (effectively VNET/RSCS covering JES NJE costs).
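
A toy Python sketch of the mechanics of that pricing rule (all numbers hypothetical, purely to show why a "joint" product could clear the bar that NJE alone couldn't): forecast revenue over the product's life has to cover development plus ongoing costs.

def meets_revenue_requirement(monthly_price, customers, months,
                              dev_cost, monthly_ongoing):
    # unbundling rule: forecast revenue must cover original
    # development plus ongoing development/maintenance
    revenue = monthly_price * customers * months
    costs = dev_cost + monthly_ongoing * months
    return revenue >= costs

# bloated NJE organization alone: no plausible forecast works
print(meets_revenue_requirement(600, 300, 48, 20_000_000, 100_000))   # False
# "joint" product: VNET/RSCS pulls in far more customers against only
# slightly larger costs -- effectively covering the JES NJE costs
print(meets_revenue_requirement(600, 1000, 48, 21_000_000, 110_000))  # True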

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundling
HASP, jes, nje/nji posts
https://www.garlic.com/~lynn/submain.html#hasp
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

This tactic was further refined for ISPF ... it became sufficient that a product group's total revenue covered the total organization costs. There was no price for ISPF that met the total customer revenue requirement. The approach was to transfer three VM/370 performance products (along with 3 people) into the same (bloated) organization as ISPF ... effectively allowing the VM/370 performance products' revenue to cover ISPF.

a few posts mentioning having VM/370 performance products pay for ISPF
https://www.garlic.com/~lynn/2022e.html#63 IBM Software Charging Rules
https://www.garlic.com/~lynn/2022c.html#45 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims
https://www.garlic.com/~lynn/2021k.html#89 IBM PROFs
https://www.garlic.com/~lynn/2019e.html#126 23Jun1969 Unbundling
https://www.garlic.com/~lynn/2017i.html#23 progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017g.html#34 Programmers Who Use Spaces Paid More
https://www.garlic.com/~lynn/2013k.html#27 Unbuffered glass TTYs?
https://www.garlic.com/~lynn/2013i.html#36 The Subroutine Call
https://www.garlic.com/~lynn/2012n.html#64 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2011p.html#106 SPF in 1978
https://www.garlic.com/~lynn/2010g.html#50 Call for XEDIT freaks, submit ISPF requirements
https://www.garlic.com/~lynn/2010g.html#6 Call for XEDIT freaks, submit ISPF requirements

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM User Group SHARE

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM User Group SHARE
Date: 16 Feb, 2024
Blog: Facebook
TYMSHARE ... online commercial service bureau
https://en.wikipedia.org/wiki/Tymshare
and its TYMNET with lots of local phone numbers around US and the world
https://en.wikipedia.org/wiki/Tymnet

In Aug1976, Tymshare started offering its CMS-based online computer conferencing free to (user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

was complemented by the SHARE program library (dating back to the really early SHARE days, when customers wrote and shared their own operating systems)
https://en.wikipedia.org/wiki/History_of_IBM_mainframe_operating_systems
https://en.wikipedia.org/wiki/SHARE_Operating_System
and Univ Waterloo program library

I had cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE (and later PCSHARE) files for putting up on the internal IBM network (and various internal IBM systems, including the online, branch office, world-wide sales&marketing support HONE systems). My biggest problem was with lawyers who were concerned that internal employees would be contaminated if exposed to unfiltered customer information. For example, the CERN analysis comparing MVS/TSO with VM370/CMS was freely available at SHARE, but inside IBM copies were stamped "IBM Confidential - Restricted" ... 2nd highest security classification, available on a need to know basis only.

trivia: After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters and HONE was a long time customer. Initially online US HONE was a number of (virtual machine) CP67/CMS systems, then a few HONE installations started appearing outside the US. Then HONE was migrated to VM370/CMS and all the US HONE systems were consolidated in Palo Alto (trivia: when facebook 1st moved into Silicon Valley, it was into a new bldg built next door to the old consolidated US HONE datacenter), with larger numbers of HONE clones appearing around the world.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

some posts mentioning vmshare, cern, mvs/tso, vm370/cms
https://www.garlic.com/~lynn/2023e.html#66 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2022h.html#69 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2020.html#28 50 years online at home
https://www.garlic.com/~lynn/2014b.html#105 Happy 50th Birthday to the IBM Cambridge Scientific Center
https://www.garlic.com/~lynn/2010q.html#34 VMSHARE Archives

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM User Group SHARE

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM User Group SHARE
Date: 16 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#109 IBM User Group SHARE

Science Center besides responsible for virtual machines and the internal network ... also did a lot of online apps and GML was invented there in 1969 ... from one of the GML inventors:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

The CSC CP67 wide-area network balloons into the world-wide internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s). CTSS RUNOFF had been redone for CMS as SCRIPT, and after GML was invented, GML tag support was added to SCRIPT. After a decade, GML morphs into the ISO standard SGML, and after another decade morphs into HTML at CERN. trivia: the first webserver in the US was on the Stanford SLAC VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

SLAC also hosted/sponsored the monthly computer user BAYBUNCH meetings

co-worker at CSC responsible for internal corporate network ... technology also used for the corporate sponsored univ BITNET (and EARN) networks
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

Ed and I transfer out to SJR in 1977

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

technology also used for the corporate sponsored univ. BITNET (& EARN)
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/European_Academic_and_Research_Network
https://earn-history.net/technology/the-network/

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
gml, sgml, html posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet (& earn) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
hsdt network posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
NSFNET related email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM User Group SHARE

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM User Group SHARE
Date: 17 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#109 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#110 IBM User Group SHARE

I was blamed for online computer conferencing on the internal network in the late 70s and early 80s ... it really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem; only about 300 directly participated, but the claim is that 25,000 were reading (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me). Some more in this recent post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

One of the results was official conferencing software and sanctioned, moderated discussion forums. Another was that a researcher was paid to sit in the back of my office for nine months, taking notes on how I communicated (face-to-face and telephone), getting copies of all my incoming and outgoing email and logs of all instant messages ... the material was also used for conference talks, papers, books and a Stanford PhD (joint between language and AI; Winograd was advisor on the AI side).

Old email from former co-worker from IBM France that spent a year at the IBM Cambridge Science Center:

Date: 03/20/84 15:15:41
To: wheeler

Hello Lynn,

I have left LaGaude last September for a 3 years assignement to IBM Europe, where I am starting a network that IBM helps the universities to start.

This network, called EARN (European Academic and Research Network), is, roughly speaking, a network of VM/CMS machines, and it looks like our own VNET. It includes some non IBM machines (many VAX, some CDC, UNIVAC and some IBM compatible mainframes). EARN is a 'brother' of the US network BITNET to which it is connected.

EARN is starting now, and 9 countries will be connected by June. It includes some national networks, such as JANET in U.K., SUNET in Sweden.

I am now trying to find applications which could be of great interest for the EARN users, and I am open to all ideas you may have. Particularly, I am interested in computer conferencing.


... snip ... top of post, old email index, HSDT email

listserv, from Paris in 1985 on EARN, used on both EARN&BITNET for computer conferencing mailing lists (the internal IBM computer conferencing software could work in both usenet-style local repository mode and mailing list mode)
https://en.wikipedia.org/wiki/LISTSERV
https://www.lsoft.com/products/listserv-history.asp

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
bitnet&earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

In the early 80s, I also had the HSDT project (T1 and faster computer links) and was working with the NSF director; I was supposed to get $20M for the NSF supercomputer center interconnects. Then congress cut the budget, some other things happened and eventually NSF released an RFP (in part based on what we already had running). From 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

Late 85 email about a request for a CP internals class that had been distributed by FSD and forwarded on, eventually reaching me (our NSF activity had yet to be kneecapped):

Date: 11/14/85 09:33:21
From: wheeler

re: cp internals class;

I'm not sure about 3 days solid ... and/or how useful it might be all at once ... but I might be able to do a couple of half days here and there when I'm in washington for other reasons. I'm there (Alexandria) next tues, weds, & some of thursday.

I expect ... when the NSF joint study for the super computer center network gets signed ... i'll be down there more.

BTW, I'm looking for a IBM 370 processor in the wash. DC area running VM where I might be able to get a couple of userids and install some hardware to connect to a satellite earth station & drive PVM & RSCS networking. It would connect into the internal IBM pilot ... and possibly also the NSF supercomputer pilot.


... snip ... top of post, old email index, NSFNET email

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
NSFNET related email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM User Group SHARE

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM User Group SHARE
Date: 17 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#109 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#110 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#111 IBM User Group SHARE

In 1992, IBM had one of the largest losses in the history of US companies and was being reorg'ed into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)

Also in 1992, AMEX spun off a lot of mainframe dataprocessing and financial outsourcing in the largest IPO up until that time, as FDC. Later in the 90s, I was hired into FDC. At the turn of the century, one of the large FDC mainframe datacenters was handling over half of all credit card accounts in the US (account transactions, auths, settlements, statementing/billing, card embossing/personalization, call centers, etc). They had something like 40+ mainframe systems (@$30M) all running the same 450K statement cobol application (the number of systems needed to finish settlement in the overnight batch window). They had a large performance group that had for decades been managing throughput ... but possibly got a little myopic in their approach. I asked to use some different analysis technology (from the 70s at the IBM cambridge scientific center) and found a 14% throughput improvement.

A performance analyst (from the EU) was also brought in, who (during the IBM troubles of the early 90s) had acquired a descendant of the IBM cambridge scientific center APL analytical system model (and had run it through an APL->C converter), using it to find another 7% throughput improvement.

In the early 70s, a co-worker had written an APL analytical system model that was made available on the (world-wide, online, sales&marketing) HONE systems as the Performance Predictor; branch people could enter customer configuration and workload profiles and ask "what-if" questions about what happens when changes were made (to configuration and/or workload).

A modified version was also used to make (consolidated US) HONE workload balancing decisions. When the US HONE datacenters were consolidated in Palo Alto, HONE initially grew to eight 168s in a single-system image, loosely-coupled configuration sharing a large disk farm with load balancing and fall-over support (a configuration comparable to the largest ACP/TPF configurations). I then got around to adding CP67 multiprocessor support to VM370 (in the morph of CP67->VM370, lots of features had been dropped and/or greatly simplified), initially for HONE, so they could add a 2nd processor to each system (making HONE at least twice any ACP/TPF, since ACP/TPF didn't have multiprocessor support at the time and lots of ACP/TPF configurations were capped at four systems & processors).
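
The flavor of such an analytic "what-if" model can be sketched in a few lines of Python (a minimal single-queue approximation of mine; the actual Performance Predictor was a much richer APL model):

def response_time(service_secs: float, arrivals_per_sec: float) -> float:
    # simple M/M/1-style open-queue approximation:
    # response = service / (1 - utilization)
    util = arrivals_per_sec * service_secs
    if util >= 1.0:
        raise ValueError("saturated: offered load >= 100%")
    return service_secs / (1.0 - util)

# what-if: same transaction workload on a processor 1.5x faster
print(round(response_time(0.020, 40), 4))        # current config: 0.1s
print(round(response_time(0.020 / 1.5, 40), 4))  # upgrade: ~0.0286s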

Old article (some of the early stuff is a little garbled), Mar/Apr '05 eserver mag (mentions FDC)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
related
https://www.enterprisesystemsmedia.com/mainframehalloffame
and
http://mvmua.org/knights.html

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

posts mentioning performance predictor and FDC 450k statement cobol
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing

--
virtualization experience starting Jan1968, online at home since Mar1970

Cobol

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cobol
Date: 18 Feb, 2024
Blog: Facebook
a recent comment reply about a datacenter with some 40+ max-configured mainframes all running the same 450k-statement cobol app
https://www.garlic.com/~lynn/2024.html#26 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE

note: a few years earlier ... large financial institutions were investing billions in redoing overnight batch settlement (some cobol apps dating back to the 60s) as parallel straight-through settlement implemented on large numbers of "killer micros", using some open industry parallelization libraries. A large part of the motivation was that batch settlement workloads were starting to overrun the overnight processing window. Some of us tried to convince them that those parallelization libraries had 100 times the overhead of batch cobol ... and that no "reasonable" number of killer micros could overcome that huge parallelization overhead. It fell on deaf ears until some major pilot deployments went up in flames ... and they retrenched to the legacy implementation (with whispered comments that it would be unlikely to be retried for decades).
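
aside: the arithmetic of that argument is simple enough to show in a couple of lines (only the 100x overhead figure comes from the above; the per-micro speed ratio is a made-up placeholder):

# back-of-envelope: why 100x parallelization overhead was fatal
overhead = 100.0            # per-transaction cost vs batch cobol (from the text)
micro_vs_mainframe = 0.5    # assume one micro at half a mainframe processor (placeholder)

# micros needed just to match ONE mainframe processor's settlement throughput,
# before counting any coordination/scaling losses across the cluster:
print(overhead / micro_vs_mainframe)    # 200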

A couple of other things happened around the turn of the century. The killer-micro technology was redone with a hardware layer that translated instructions into risc micro-ops for actual execution (largely negating the throughput difference with real RISC processor implementations).
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#81 Benchmarks

from the above: the 2003 max-configured z990 was 32 processors and 9BIPS aggregate (281MIPS/proc); a 2003 Pentium4 processor was 9.7BIPS

Another was that major system and RDBMS (including IBM) vendors had been doing significant throughput optimization for (non-mainframe) parallelization/cluster operation. Some demo (non-COBOL) "straight-through settlement" implementations rewritten for SQL RDBMS (relying on the cluster RDBMS parallelization rather than roll-your-own with public libraries) showed many times the throughput of any existing legacy operation. A cluster of six Pentium4 multiprocessors (four processors each), an aggregate of 24 Pentium4 processors and 233BIPS, easily outperformed a max-configured z990's 9BIPS (but the industry was still reeling from the failures of the previous decade).
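
aside: the arithmetic behind those numbers (all figures taken from the comparison above):

z990_procs, z990_bips = 32, 9.0
print(z990_bips * 1000 / z990_procs)    # 281.25 MIPS per z990 processor

boxes, procs_per_box, p4_bips = 6, 4, 9.7
cluster_bips = boxes * procs_per_box * p4_bips
print(cluster_bips)                     # about 233 aggregate BIPS for the 24-way cluster
print(cluster_bips / z990_bips)         # about 26 times a max-configured z990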

Another recent comment reply compared the disk IOPS throughput of FICON to native FCS ... with the observation that no CKD DASD have been manufactured for decades, all being simulated on industry-standard fixed-block disks.
https://www.garlic.com/~lynn/2024.html#7 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#54 RS/6000 Mainframe

some recent posts mentioning "cobol batch", overnight batch window, "batch settlement", and straight through processing
https://www.garlic.com/~lynn/2023g.html#12 Vintage Future System
https://www.garlic.com/~lynn/2022g.html#69 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#3 Final Rules of Thumb on How Computing Affects Organizations and People
https://www.garlic.com/~lynn/2021k.html#123 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021g.html#18 IBM email migration disaster
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros

--
virtualization experience starting Jan1968, online at home since Mar1970

BAL

From: Lynn Wheeler <lynn@garlic.com>
Subject: BAL
Date: 18 Feb, 2024
Blog: Facebook
At the end of a semester after taking a two-credit-hr intro to fortran/computers, I was hired to rewrite 1401 MPIO in assembler for the 360/30 (I was given a lot of hardware & software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc) ... then within a year of taking the intro class, I was hired fulltime responsible for OS/360. Some recent posts
https://www.garlic.com/~lynn/2024.html#102 EBCDIC Card Punch Format
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#66 2540 "Column Binary"
https://www.garlic.com/~lynn/2023g.html#53 Vintage 2321, Data Cell
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#36 Timeslice, Scheduling, Interdata

More than a decade later, in 1977 after transferring to SJR, I got to wander around IBM & non-IBM datacenters in silicon valley, including disk engineering (bldg14) and disk product test (bldg15) across the street. They were doing 7x24, prescheduled, stand-alone mainframe testing and mentioned that they had recently tried MVS, but it had a 15min mean-time-between-failure (MTBF) in that environment (requiring manual re-ipl). I offered to rewrite the I/O supervisor (all BAL), making it bullet proof and never fail, enabling any amount of on-demand, concurrent testing and greatly improving productivity. I wrote an (internal IBM) research report and happened to mention the MVS 15min MTBF, bringing down the wrath of the MVS organization on my head. Later, just before 3380s were about to ship, FE had a test of 57 simulated hardware errors (ones they felt were likely to happen); in all 57 cases, MVS was still failing (requiring re-ipl) and in 2/3rds of the failures there was no indication of what caused the failure (& I didn't feel badly about it).

playing disk engineering posts
https://www.garlic.com/~lynn/subtopic.html#disk

Late 80s, we had started HA/6000 (Nick Donofrio had approved it), originally for the NYTimes to migrate their newspaper system (ATEX) off VAXCluster; I renamed it HA/CMP when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS. Also spent some time talking to the TA to the FSD president ... who was spending 2nd shift writing ADA code for the latest FAA modernization effort.
https://en.wikipedia.org/wiki/Ada_(programming_language)

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

trivia: never dealt with Fox in IBM; FAA ATC, The Brawl in IBM 1964
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514
Two mid air collisions 1956 and 1960 make this FAA procurement special. The computer selected will be in the critical loop of making sure that there are no more mid-air collisions. Many in IBM want to not bid. A marketing manager with but 7 years in IBM and less than one year as a manager is the proposal manager. IBM is in midstep in coming up with the new line of computers - the 360. Chaos sucks into the fray many executives- especially the next chairman, and also the IBM president. A fire house in Poughkeepsie N Y is home to the technical and marketing team for 60 very cold and long days. Finance and legal get into the fray after that.

... snip ...

Executive Qualities
https://www.amazon.com/Executive-Qualities-Joseph-M-Fox/dp/1453788794
After 20 years in IBM, 7 as a divisional Vice President, Joe Fox had his standard management presentation -to IBM and CIA groups - published in 1976 -entitled EXECUTIVE QUALITIES. It had 9 printings and was translated into Spanish -and has been offered continuously for sale as a used book on Amazon.com. It is now reprinted -verbatim- and available from Createspace, Inc - for $15 per copy. The book presents a total of 22 traits and qualities and their role in real life situations- and their resolution- encountered during Mr. Fox's 20 years with IBM and with major computer customers, both government and commercial. The presentation and the book followed a focus and use of quotations to Identify and characterize the role of the traits and qualities. Over 400 quotations enliven the text - and synthesize many complex ideas.

... snip ...

... but after leaving IBM, had a project with Fox and his company that also had some other former FSD FAA people.

posts mentioning 57 simulated errors all crashing MVS
https://www.garlic.com/~lynn/2024.html#88 IBM 360
https://www.garlic.com/~lynn/2023g.html#105 VM Mascot
https://www.garlic.com/~lynn/2023f.html#115 IBM RAS
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023d.html#97 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#72 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#80 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#35 Error Handling
https://www.garlic.com/~lynn/2021k.html#59 IBM Mainframe
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing is a wake-up call

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing is a wake-up call
Date: 18 Feb, 2024
Blog: Facebook
Boeing is a wake-up call. America's businesses gambled that 'greed is good.' Now they're losing that bet, big time.
https://www.businessinsider.com/boeing-disaster-american-businesses-greedy-broken-stock-market-wall-street-2024-2

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback

Boeing Anniversary/Century post
https://www.garlic.com/~lynn/2016e.html#20 The Boeing Century
some recent posts mentioning Boeing (and change after M/D merger)
https://www.garlic.com/~lynn/2024.html#77 Boeing's Shift from Engineering Excellence to Profit-Driven Culture: Tracing the Impact of the McDonnell Douglas Merger on the 737 Max Crisis
https://www.garlic.com/~lynn/2024.html#66 Further Discussion of Boeing's Slow Motion Liquidation: Rational Given Probable Airline Industry Shrinkage?
https://www.garlic.com/~lynn/2024.html#56 Did Stock Buybacks Knock the Bolts Out of Boeing?
https://www.garlic.com/~lynn/2023g.html#104 More IBM Downfall
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022g.html#64 Massachusetts, Boeing
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2022.html#90 Navy confirms video and photo of F-35 that crashed in South China Sea are real
https://www.garlic.com/~lynn/2021k.html#103 After deadly 737 Max crashes, damning whistleblower report reveals sidelined engineers, scarcity of expertise, more
https://www.garlic.com/~lynn/2021k.html#78 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#69 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#40 Boeing Built an Unsafe Plane, and Blamed the Pilots When It Crashed
https://www.garlic.com/~lynn/2021j.html#67 A Mini F-35?: Don't Go Crazy Over the Air Force's Stealth XQ-58A Valkyrie
https://www.garlic.com/~lynn/2021h.html#64 WWII Pilot Barrel Rolls Boeing 707
https://www.garlic.com/~lynn/2021f.html#78 The Long-Forgotten Flight That Sent Boeing Off Course
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021e.html#87 Congress demands records from Boeing to investigate lapses in production quality
https://www.garlic.com/~lynn/2021c.html#39 WA State frets about Boeing brain drain, but it's already happening
https://www.garlic.com/~lynn/2021c.html#36 GAO report finds DOD's weapons programs lack clear cybersecurity guidelines
https://www.garlic.com/~lynn/2021b.html#70 Boeing CEO Said Board Moved Quickly on MAX Safety; New Details Suggest Otherwise
https://www.garlic.com/~lynn/2021b.html#40 IBM & Boeing run by Financiers
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#11 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM's Unbundling

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM's Unbundling
Date: 20 Feb, 2024
Blog: Facebook
IBM's Unbundling
https://www.youtube.com/watch?feature=shared&v=rXS6lTYSgCw

The 23Jun1969 unbundling announcement started charging for (application) software (they made the case that kernel software was still free), SE services, maint., etc. During this period, standard SE training included a sort of journeyman position as part of a group of SEs at the customer datacenter ... however they couldn't figure out how *not* to charge for trainee SE time at customer locations. This was the initial motivation for the HONE (Hands-On Network Environment) virtual machine CP/67 systems: SEs in branch offices could login and practice with guest operating systems in virtual machines.

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

The cambridge science center had also ported APL\360 to CP67/CMS for CMS\APL, and HONE started to offer APL-based sales&marketing support applications ... it wasn't long before the sales&marketing use came to dominate all HONE activity (and guest operating system use dwindled away) ... and HONE (sales&marketing) clones started popping up all over the world (and migrated to VM370)

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Early 70s, IBM had the Future System effort (completely different from 370, intended to replace all 370s)
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

During FS, internal politics were killing off 370 efforts, and the lack of new 370 products is credited with giving the clone 370 makers (like Amdahl) their market foothold. Then when FS imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts. The rise of clone 370s was also motivation to transition to charging for kernel software, starting with incremental kernel add-ons ... eventually transitioning to charging for all kernel software in the early 80s ... and then the OCO-wars (object-code only, the decision to stop shipping source for software). One of the last nails in the FS coffin was an IBM Houston Science Center analysis that 370/195 applications redone for an FS machine made out of the fastest available hardware technology would have the throughput of a 370/145 (about a 30-times slowdown).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

One of my hobbies after joining IBM was enhanced production operating systems, and HONE was a long-time customer. Then some of my enhancements were selected as the initial guinea pig for kernel add-on charging, and I had to spend some amount of time with lawyers and business planners on kernel charging policies.

dynamic adaptive resource management (also part of original kernel add-on)
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging and page replacement algorithms (also part)
https://www.garlic.com/~lynn/subtopic.html#clock

trivia: Late 60s, Amdahl had won the battle that ACS should be 360 compatible. Then ACS/360 was killed; folklore is that executives were afraid it would advance the state-of-the-art too fast and IBM would lose control of the market. Amdahl left IBM shortly after (to do clone mainframes), before the start of FS ... the following also has some ACS/360 features that show up in the 90s with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html

trivia: For software charging, IBM was under a requirement that the charges had to cover initial development plus ongoing maintenance and development ... and some mainstream IBM organizations were having trouble adapting. It was common to do high, middle, and low charge forecasts ... forecasting expected customers at each price level. However, some mainstream IBM software couldn't find a price point where the revenue met the requirement. One thing found was that the MVS organizations had a significantly higher cost structure than the VM/370 organizations ... and the charging requirement was somewhat loosely defined. An example was announcing JES2 networking as a "combined" product with VM/370 networking ... with the same price ... where the VM/370 revenue could be used to cover the MVS costs.
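
aside: a minimal sketch of the high/middle/low forecast exercise (every number below is hypothetical; the point is that for some products no price level produced revenue meeting the cost-recovery requirement):

# required: initial development plus ongoing maintenance & development, $
required = 50_000_000

# (monthly license price, forecast licenses) at each charge level (hypothetical)
forecasts = {"high": (5000, 100), "middle": (3000, 250), "low": (1500, 600)}

for level, (price, licenses) in forecasts.items():
    revenue = price * licenses * 48     # e.g. 48 months of license revenue
    verdict = "meets requirement" if revenue >= required else "falls short"
    print(level, revenue, verdict)      # here every level falls short of $50M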

internal network posts (mostly vm370 rscs/vnet)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

post starting with Learson trying (& failing) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 20 Feb, 2024
Blog: Facebook
The communication group was fiercely fighting off client/server and distributed computing and trying to block the release of mainframe tcp/ip support ... when possibly some influential customers got that reversed, the communication group changed tactics and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate and used nearly a whole 3090 processor. I then did the support for RFC1044 and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
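
aside: only the 44kbytes/sec figure and the ~500x result come from the above; the MIPS and channel-rate placeholders below are assumptions, included just to show the shape of the bytes-moved-per-instruction calculation:

base_bytes_per_sec = 44_000             # from the text: 44 kbytes/sec aggregate
base_mips = 15.0                        # assumed 3090 processor MIPS (placeholder)
base_bpi = base_bytes_per_sec / (base_mips * 1e6)

rfc1044_bytes_per_sec = 450_000         # assumed sustained channel rate (placeholder)
rfc1044_mips = 1.2 * 0.25               # assumed modest fraction of a 4341 (placeholder)
rfc1044_bpi = rfc1044_bytes_per_sec / (rfc1044_mips * 1e6)

print(rfc1044_bpi / base_bpi)           # about 500x bytes moved per instruction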

rfc 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

A little later, a senior disk engineer got a talk scheduled at an internal, world-wide communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing data fleeing the datacenter to more distributed-computing-friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions, but they were all being vetoed by the communication group (since it had corporate responsibility for everything that crossed datacenter walls). As a partial countermeasure, the GPD/Adstar VP of software was investing in distributed computing startups that would use IBM disks, and would periodically ask us to stop by his investments to see if we could provide any assistance.

communication group fighting off client/server and distributed computing posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

The communication group datacenter stranglehold wasn't just affecting disks; a few short years later, IBM had one of the largest losses in the history of US companies and was being reorged into the 13 baby blues in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM, but got a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we got started, the board brought in the former president of Amex as CEO, who (somewhat) reversed the breakup (although it wasn't long before the disk division was gone)

gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

Also in the early 90s, the communication group had hired a silicon valley contractor to implement TCP/IP support directly in VTAM ... what he initially demoed was TCP running much faster than LU6.2. He was then told that everybody knows that LU6.2 is much faster than a "proper" TCP/IP implementation, and they would only be paying for a proper implementation.

trivia: IBM AWD had done their own (AT bus) 4mbit token-ring card for the PC/RT. Then for the RS/6000 with microchannel, they were told they couldn't do their own cards, but had to use the PS2 microchannel cards (which had all been severely performance-kneecapped by the communication group). It turned out that the PS2 microchannel 16mbit token-ring card had lower (card) throughput than the PC/RT 4mbit token-ring card (the joke was that a PC/RT token-ring server would have higher throughput than an RS/6000 token-ring server). The new Almaden bldg had been heavily provisioned with CAT4 wiring, presumably for 16mbit token-ring, but they found that 10mbit ethernet over CAT4 had higher aggregate LAN throughput and lower latency than 16mbit token-ring. Also, a $69 10mbit ethernet card had higher card throughput than an $800 16mbit token-ring card.
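
aside: only the $69 and $800 card prices are from the above; the 300-station building is a hypothetical configuration size, just to show how fast the card-cost difference adds up:

stations = 300
print(stations * 69)           # $20,700 for 10mbit ethernet cards
print(stations * 800)          # $240,000 for 16mbit token-ring cards
print(stations * (800 - 69))   # $219,300 difference for one building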

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
post starting with Learson trying (& failing) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Transfer SJR to YKT

From: Lynn Wheeler <lynn@garlic.com>
Subject: Transfer SJR to YKT
Date: 20 Feb, 2024
Blog: Facebook
In the early 80s I was transferred from SJR to YKT for some transgressions, but continued to live in San Jose with offices in SJR (later ALM) and Los Gatos ... but had to commute to YKT a couple times a month.

There was at least one person in YKT who was very active in Tandem Memos ... and also penned the executive summary. More information:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
... folklore is that 5 of the 6 members of the executive committee wanted to fire me.

Note: Learson tried to block the bureaucrats from destroying the Watson culture/legacy. 20yrs later (and a decade after Tandem Memos), IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM, but got a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we got started, the board brought in the former president of Amex as CEO, who (somewhat) reversed the breakup.

online computer conferencing (& tandem memos) posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former AMEX president and IBM CEO Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

Transfer SJR to YKT

From: Lynn Wheeler <lynn@garlic.com>
Subject: Transfer SJR to YKT
Date: 20 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#118 Transfer SJR to YKT

In the early 80s, I also had the HSDT project (T1 and faster computer links) and was working with the NSF director; we were supposed to get $20M for NSF supercomputer center interconnects. Then congress cut the budget, some other things happened, and eventually NSF released an RFP (in part based on what we already had running). From the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics would not allow us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet.

Between the $20M and the RFP ... we were supposed to have an NSF joint study/pilot ... and had a meeting set up in YKT with a number of the datacenter directors ... and then somebody in YKT called them all up and canceled the meeting.

Late '85 email about a request for a CP internals class that had been distributed by FSD and forwarded, eventually to me (our NSF activity had yet to be kneecapped):

Date: 11/14/85 09:33:21
From: wheeler

re: cp internals class;

I'm not sure about 3 days solid ... and/or how useful it might be all at once ... but I might be able to do a couple of half days here and there when I'm in washington for other reasons. I'm there (Alexandria) next tues, weds, & some of thursday.

I expect ... when the NSF joint study for the super computer center network gets signed ... i'll be down there more.

BTW, I'm looking for a IBM 370 processor in the wash. DC area running VM where I might be able to get a couple of userids and install some hardware to connect to a satellite earth station & drive PVM & RSCS networking. It would connect into the internal IBM pilot ... and possibly also the NSF supercomputer pilot.


... snip ... top of post, old email index, NSFNET email

Later, somebody was collecting a lot of executive email with a lot of sna/vtam misinformation with respect to NSFNET ... and forwarded it to us ... previously posted (extremely snipped and redacted to protect the guilty)

Date: 01/09/87 16:11:26
From: ?????

TO ALL IT MAY CONCERN-

I REC'D THIS TODAY. THEY HAVE CERTAINLY BEEN BUSY. THERE IS A HOST OF MISINFORMATION IN THIS, INCLUDING THE ASSUMPTION THAT TCP/IP CAN RUN ON TOP OF VTAM, AND THAT WOULD BE ACCEPTABLE TO NSF, AND THAT THE UNIVERSITIES MENTIONED HAVE IBM HOSTS WITH VTAM INSTALLED.

Forwarded From: ***** To: ***** Date: 12/26/86 13:41

1. Your suggestions to start working with NSF immediately on high speed (T1) networks is very good. In addition to ACIS I think that it is important to have CPD Boca involved since they own the products you suggest installing. I would suggest that ***** discuss this and plan to have the kind of meeting with NSF that ***** proposes.

< ... great deal more of the same; several more appended emails from several participants in the MISINFORMATION ... >


... snip ... top of post, old email index, NSFNET email

... an earlier '85 email in the series

Date: 09/30/85 17:27:27
To: CAMBRIDG xxxxx

re: channel attach box; fyi;

I'm meeting with NSF on weds. to negotiate joint project which will install HSDT as backbone network to tie together all super-computer centers ... and probably some number of others as well. Discussions are pretty well along ... they have signed confidentiality agreements and such.

For one piece of it, I would like to be able to use the cambridge channel attach box.

I'll be up in Milford a week from weds. to present the details of the NSF project to ACIS management.


... snip ... top of post, old email index, NSFNET email

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
NSFNET related email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

The Greatest Capitalist Who Ever Lived

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Greatest Capitalist Who Ever Lived
Date: 20 Jan, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived

Learson trying to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
two decades later, IBM had one of the largest losses in US company history and was being re-organized into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

AMEX was in competition with KKR for the private-equity (LBO) takeover of RJR, and KKR won. KKR then ran into trouble with RJR and hired the AMEX president to help.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco

Then the IBM board hired the former president of AMEX as the new CEO, who reversed the breakup and used some of the same tactics used at RJR (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
the above has some IBM-related specifics, from
https://www.amazon.com/Retirement-Heist-Companies-Plunder-American-ebook/dp/B003QMLC6K/

... turning IBM into a financial engineering company; Stockman, The Great Deformation: The Corruption of Capitalism in America
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall Street momo traders. It was actually a stock buyback contraption on steroids. During the five years ending in fiscal 2011, the company spent a staggering $67 billion repurchasing its own shares, a figure that was equal to 100 percent of its net income.

pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82 billion, or 122 percent, of net income over this five-year period. Likewise, during the last five years IBM spent less on capital investment than its depreciation and amortization charges, and also shrank its constant dollar spending for research and development by nearly 2 percent annually.

... snip ...

(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases have come to a total of over $159 billion since 2000.

... snip ...

(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a little?) shift focus, needing the money for the redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket

more financial engineering company

IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims. Lawsuit accuses Big Blue of cheating investors by shifting systems revenue to trendy cloud, mobile tech
https://www.theregister.com/2022/04/07/ibm_securities_lawsuit/
IBM has been sued by investors who claim the company under former CEO Ginni Rometty propped up its stock price and deceived shareholders by misclassifying revenues from its non-strategic mainframe business - and moving said sales to its strategic business segments - in violation of securities regulations.

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent posts mentioning financial engineering company and/or stock buyback contraption
https://www.garlic.com/~lynn/2023c.html#72 Father, Son & CO
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
https://www.garlic.com/~lynn/2022h.html#118 IBM Breakup
https://www.garlic.com/~lynn/2022h.html#105 IBM 360
https://www.garlic.com/~lynn/2022f.html#105 IBM Downfall
https://www.garlic.com/~lynn/2022d.html#83 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022c.html#46 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims
https://www.garlic.com/~lynn/2022b.html#115 IBM investors staged 2021 revolt over exec pay
https://www.garlic.com/~lynn/2022b.html#52 IBM History
https://www.garlic.com/~lynn/2022.html#108 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021k.html#11 General Electric Breaks Up
https://www.garlic.com/~lynn/2021k.html#3 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#101 Who Says Elephants Can't Dance?
https://www.garlic.com/~lynn/2021i.html#80 IBM Downturn

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VM/370 and VM/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VM/370 and VM/XA
Date: 21 Feb, 2024
Blog: Facebook
Future System was an effort in the early 70s to replace 370 with a completely different architecture ... internal politics was killing off 370 efforts, and the lack of new 370 products during the period is credited with giving the clone 370 makers (like Amdahl) their market foothold. When FS finally imploded there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick and dirty 3033 & 3081 efforts.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

The head of POK was also convincing corporate to kill the VM370 product, shut down the development group, and transfer all the people to POK for MVS/XA (supposedly otherwise MVS/XA wouldn't ship on time; POK executives were also trying to bully internal datacenters into moving from VM370 to MVS). Endicott eventually managed to save the VM370 product mission, but had to reconstitute a VM370 development group from scratch. Some of the transplanted VM370 people in POK did a simplified virtual machine, VMTOOL, that was supposed to be used only for MVS/XA development (and never released to customers).

Amdahl had done MACROCODE, 370-like instructions that ran in microcode mode and greatly simplified microcoding ... originally used to respond to a series of small microcode changes for the 3033 required by MVS ... and then used to implement HYPERVISOR, able to run MVS and MVS/XA concurrently. IBM was having trouble getting customers to convert to MVS/XA ... and Amdahl was having more success, being able to run MVS & MVS/XA concurrently. To respond to HYPERVISOR, IBM decided to release VMTOOL as VM/MA and VM/SF. In the meantime, an internal sysprog in Rochester had added 370/XA support to VM/370. Then a POK proposal for a couple (few?) hundred people to bring VMTOOL up to the feature/function/performance level of VM/370 won out over the alternative of releasing VM/370 with full 370/XA support.

It also took IBM nearly a decade to respond to the Amdahl HYPERVISOR (multiple domain facility), first with PR/SM
https://en.wikipedia.org/wiki/PR/SM
... and then eventually LPAR
https://en.wikipedia.org/wiki/Logical_partition

Other trivia: 370/XA required a special mechanism to enter virtual machine mode; for VMTOOL and the 3081, they did the SIE instruction. The problem was that the 3081 didn't have a lot of spare microcode space for new function, and since VMTOOL & SIE were intended purely for development (with no performance requirement for production), the 3081 SIE microcode was paged in&out.

recent posts mentioning vmtool, vm/ma, vm/sf, vm/xa, hypervisor
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2021c.html#56 MAINFRAME (4341) History

--
virtualization experience starting Jan1968, online at home since Mar1970

Assembler language and code optimization

From: Lynn Wheeler <lynn@garlic.com>
Subject: Assembler language and code optimization
Date: 21 Feb, 2024
Blog: Facebook
When I transferred out to SJR, I got to wander around IBM and non-IBM datacenters in silicon valley, including disk engineering (bldg14) and disk product test (bldg15) across the street. They were running 7x24, pre-scheduled, stand-alone testing. They mentioned that they had recently tried MVS, but it had a 15min mean-time-between-failure in that environment. I offered to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of on-demand concurrent testing and greatly improving productivity.

It was also 3-5 times faster than the original code and 10 times faster than the MVS code. I would make statements about how many different hands had worked on the original code until it had become spaghetti; starting from scratch I could eliminate all that. Trivia: I did an (internal only) research report that mentioned the MVS 15min MTBF (requiring manual re-ipl), which brought the wrath of the MVS organization down on my head (when they first talked to me, I thought it was to ask about fixing the bugs, but it turned out what they really wanted was to get me fired).

posts getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

Nearly 20yrs later, after leaving IBM, I was asked to come into the largest airline ACP/TPF operation to look at the ten things they couldn't do. They asked me to start with ROUTES (direct flights/connections for getting from "A" to "B"), which represented 20-25% of processing. They gave me a complete copy of OAG (all scheduled commercial airline flights in the world) and I came back after a couple of months with an implementation that included doing the impossible things. I started with a re-implementation from scratch that ran 100 times faster (I would claim that the existing code still had technology & design trade-offs from the 60s; starting completely from scratch I could make totally different trade-offs); adding the new features then cut that to only ten times faster (while consolidating 3-4 human transactions into a single transaction). Then the hand-wringing started ... part of the 60s trade-offs had required a couple hundred staff doing "stuff" ... starting from scratch, all that disappeared.

One of the issues was that they had a huge number of people munging the OAG data, building a DBMS of direct flights/connections covering (a subset of) A->B ... which was periodically used to update the data on TPF. I compressed all the OAG data into a 30mbyte memory image and could find all possible ways of getting from A to B in less time than doing a single TPF disk I/O (see the toy sketch below). Then I modified the sequence so it processed the memory image optimized for that specific processor's cache operation and got a factor of five times speedup. They wanted to retain the large staff responsible for the DBMS paradigm.
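
aside: nothing below is the actual ROUTES/TPF code or data layout; it is a toy sketch of the underlying idea, answering direct and one-connection A->B queries entirely from an in-memory table (schedule entries and flight numbers invented for illustration):

from collections import defaultdict

# schedule entries: (origin, destination, flight-number)
schedule = [
    ("SJC", "SFO", "TW10"), ("SFO", "JFK", "TW10"),   # change-of-equipment "direct"
    ("SFO", "SEA", "TW12"), ("SJC", "JFK", "UA88"),
]

by_origin = defaultdict(list)              # adjacency: origin -> [(dest, flight)]
for org, dst, flt in schedule:
    by_origin[org].append((dst, flt))

def routes(a, b):
    """All direct and one-connection ways of getting from a to b."""
    found = [[f] for d, f in by_origin[a] if d == b]              # direct
    for mid, f1 in by_origin[a]:                                  # one connection
        found += [[f1, f2] for d, f2 in by_origin[mid] if d == b]
    return found

print(routes("SJC", "JFK"))    # [['UA88'], ['TW10', 'TW10']]

(the TW10->TW10 pair is the "change of equipment" pattern described in the trivia below)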

trivia: Airlines had invented "change of equipment" OAG entries ... the issue was that in the paper books listing A->B flts, "direct" flts were listed before connecting flts. The first time I saw it was in 1970 at the San Jose Airport ... a TWA(?) Kennedy->SFO flt stayed overnight and returned to Kennedy in the morning ... but it was cheaper to stay/park overnight at San Jose. In the morning, the flt from San Jose->SFO left with two flt numbers: one was the "direct" to Kennedy and the other the "direct" to Seattle (with change of equipment at SFO; a connection by any other name, but listed with the direct flts). In the full OAG softcopy of every commercial scheduled flt segment in the world, the worst case I found was a dozen flt numbers that all left Honolulu at the same time and arrived in Los Angeles at the same time ... but then continued on to different destinations. I didn't bother to differentiate between connecting flts and (direct) change-of-equipment flts.

some past posts mentioning the "ROUTES" work
https://www.garlic.com/~lynn/2023g.html#90 Has anybody worked on SABRE for American Airlines
https://www.garlic.com/~lynn/2023g.html#74 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023c.html#8 IBM Downfall
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#76 IBM ITPS
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2015f.html#5 Can you have a robust IT system that needs experts to run it?
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF
https://www.garlic.com/~lynn/2013g.html#87 Old data storage or data base
https://www.garlic.com/~lynn/2011c.html#42 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2010j.html#53 Article says mainframe most cost-efficient platform

--
virtualization experience starting Jan1968, online at home since Mar1970

