List of Archived Posts

2022 Newsgroup Postings (06/12 - 07/19)

IBM Quota
IBM Games
IBM Games
IBM Games
Retiree sues IBM alleging shortchanged benefits
RED and XEDIT fullscreen editors
RED and XEDIT fullscreen editors
RED and XEDIT fullscreen editors
VM Workshop ... VM/370 50th birthday
VM/370 Going Away
VM/370 Going Away
VM/370 Going Away
VM/370 Going Away
VM/370 Going Away
IBM "Fast-Track" Bureaucrats
VM/370 Going Away
Context switch cost
VM Workshop
3270 Trivia
IBM "Fast-Track" Bureaucrats
3270 Trivia
IBM "Fast-Track" Bureaucrats
IBM "nine-net"
IBM "nine-net"
IBM "nine-net"
IBM "nine-net"
IBM "nine-net"
IBM "nine-net"
IBM "nine-net"
IBM "nine-net"
The "Animal Spirits of Capitalism" Are Devouring Us
Technology Flashback
IBM 37x5 Boxes
IBM 37x5 Boxes
IBM 37x5 Boxes
IBM 37x5 Boxes
IBM 23June1969 Unbundle
IBM 37x5 Boxes
Wall Street's Plot to Seize the White House
Single Loop Thinking: Non-Reflective Military Cycles of ENDS and MEANS
Best dumb terminal for serial connections
Wall Street's Plot to Seize the White House
WATFOR and CICS were both addressing some of the same OS/360 problems
WATFOR and CICS were both addressing some of the same OS/360 problems
IBM Chairman John Opel
IBM Chairman John Opel
IBM Chairman John Opel
Best dumb terminal for serial connections
Channel Program I/O Processing Efficiency
Channel Program I/O Processing Efficiency
Channel Program I/O Processing Efficiency
Channel Program I/O Processing Efficiency
Freakonomics
Channel Program I/O Processing Efficiency
Channel Program I/O Processing Efficiency
Channel Program I/O Processing Efficiency
Channel Program I/O Processing Efficiency
Behind the Scenes, McKinsey Guided Companies at the Center of the Opioid Crisis
Channel Program I/O Processing Efficiency
IBM CEO: Only 60% of office workers will ever return full-time
IBM CEO: Only 60% of office workers will ever return full-time
Channel Program I/O Processing Efficiency
Empire Burlesque. What comes after the American Century?
IBM Software Charging Rules
IBM Wild Ducks
IBM Wild Ducks
Channel Program I/O Processing Efficiency
SHARE LSRAD Report
Wealth of Two Nations: The US Racial Wealth Gap, 1860-2020
India Will Not Lift Windfall Tax On Oil Firms Until Crude Drops By $40
IBM Chairman John Opel
FedEx to Stop Using Mainframes, Close All Data Centers By 2024
FedEx to Stop Using Mainframes, Close All Data Centers By 2024
Technology Flashback
The Supreme Court Is Limiting the Regulatory State
FedEx to Stop Using Mainframes, Close All Data Centers By 2024
Washington Doubles Down on Hyper-Hypocrisy After Accusing China of Using Debt to "Trap" Latin American Countries
IBM Quota
IBM Quota
Enhanced Production Operating Systems
IBM Quota
There is No Nobel Prize in Economics
Enhanced Production Operating Systems
Enhanced Production Operating Systems
Enhanced Production Operating Systems
Enhanced Production Operating Systems
Enhanced Production Operating Systems
Enhanced Production Operating Systems
Enhanced Production Operating Systems
Enhanced Production Operating Systems
Enhanced Production Operating Systems
Enhanced Production Operating Systems
Enhanced Production Operating Systems
The Rice Paddy Navy
Enhanced Production Operating Systems
Enhanced Production Operating Systems II
Enhanced Production Operating Systems II
Enhanced Production Operating Systems II
Enhanced Production Operating Systems II
Enhanced Production Operating Systems II
Mainframe Channel I/O
The US's best stealth jets are pretty easy to spot on radar, but that doesn't make it any easier to stop them
Mainframe Channel I/O
John Boyd and IBM Wild Ducks
John Boyd and IBM Wild Ducks
FedEx to Stop Using Mainframes, Close All Data Centers By 2024
Price Wars
Price Wars

IBM Quota

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Quota
Date: 12 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022d.html#106

trivia: IBM asked me to teach a one-week cp67 class ... the science center had come out to install CP67/CMS at the univ ... and I got to play with it in my weekend time (the univ. had been shutting down the datacenter on weekends and I had it all to myself ... going back to when I was hired as a student programmer to reimplement 1401 MPIO on a 360/30 ... although 48hrs w/o sleep made monday classes a little hard). About 6 months after the science center brought cp67/cms out to the univ ... there was a cp67/cms one-week class at the beverly hills hilton. I arrived sunday night and got asked to teach the cp67 class ... the friday before, two days earlier, the cp67 people (who were to teach it) had resigned to join a commercial online cp67 startup.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
virtual machine commercial online operations
https://www.garlic.com/~lynn/submain.html#timeshare

past posts mentioning beverly hills hilton class:
https://www.garlic.com/~lynn/2017g.html#11 Mainframe Networking problems
https://www.garlic.com/~lynn/2010g.html#68 What is the protocal for GMT offset in SMTP (e-mail) header
https://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/99.html#131 early hardware

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Games

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Games
Date: 12 June 2022
Blog: Facebook
Date: 08/04/80 16:18:25
From: wheeler

to: distribution list; re: games?; new INV2 on d-disk, 'real time', still doesn't work well from 3101. Also new space war game MFF (& CSP, 'the umpire'). The umpire is currently up and running in VMTESTER this minute for users that wish to play with it.


... snip ... top of post, old email index

MFF & CSP were written by MFC in PLI ... a multiuser client/server spacewar game (I still have a copy of mff2.pli); the client was a 3270 spacewar display screen (the above ref. was 3270 emulation on an ascii glass teletype 3101) ... it used the internal SPM (originally done by the IBM Pisa Science Center for CP67 and ported to vm370 ... a superset of combined VMCF, IUCV, and SMSG) ... SPM was also supported by RSCS/VNET ... so clients could access the server via the internal network and didn't have to be on the same machine. However, almost immediately robot players started appearing and winning all the games ... a non-linear power increase was added for when move intervals dropped below human reaction time, in order to somewhat level the playing field.
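
A minimal sketch of that kind of countermeasure (the constants and exponent are assumptions for illustration; the post doesn't give the actual formula): as move intervals drop below human reaction time, the power/energy charged per move grows non-linearly, so machine-speed players gain little by moving faster than a human could.

# sketch: non-linear power-use penalty for moves faster than human reaction time
# (HUMAN_REACTION_SEC and EXPONENT are assumed values, not from the original game)
HUMAN_REACTION_SEC = 0.25   # assumed human reaction time
EXPONENT = 2.0              # assumed steepness of the penalty

def power_cost(base_cost: float, interval_sec: float) -> float:
    """Energy charged for a move issued interval_sec after the previous move."""
    if interval_sec >= HUMAN_REACTION_SEC:
        return base_cost                 # human-speed play: normal cost
    # below human reaction time, cost grows non-linearly as intervals shrink
    return base_cost * (HUMAN_REACTION_SEC / interval_sec) ** EXPONENT

for gap in (0.50, 0.25, 0.10, 0.02):     # robot players issuing very fast moves
    print(f"move every {gap:.2f}s -> cost x{power_cost(1.0, gap):.1f}")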

other "game" trivia ... after transferring to san jose research, I got to wander around ibm and non-ibm datacenters in silicon valley. One of the places was online commercial TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
In aug1976, they had provided their CMS-based online computer conferencing system "free" to SHARE as VMSHARE (and later PCSHARE) ... archives
http://vm.marist.edu/~vmshare

and I had arranged to get a monthly tape dump of all the VMSHARE files for making available inside IBM (which may have also contributed to my being blamed for online computer conferencing on the IBM internal network in the late 70s and early 80s ... precursor to TOOLSRUN). The biggest problem I had was with IBM lawyers worried that internal IBM employees might be contaminated by exposure to customer information (and/or find out that they were being fed customer misinformation). On one stop by TYMSHARE in the 70s, they demonstrated a game called ADVENTURE, which they had found on the (stanford) SAIL PDP10 and ported to VM370/CMS. I got a copy for making available inside IBM (accumulating quite a collection of "demo" programs ... i.e. games).
https://en.wikipedia.org/wiki/Adventure_game

Disclaimer: most IBM 3270 logon screens included "For IBM Business Use Only" ... the SJR 3270 logon screens had "For IBM Management Approved Use Only" (which could include "demo" programs).

some recent VMSHARE refs:
https://www.garlic.com/~lynn/2022c.html#62 IBM RESPOND
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022c.html#8 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#126 Google Cloud
https://www.garlic.com/~lynn/2022b.html#118 IBM Disks
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#93 IBM Color 3279
https://www.garlic.com/~lynn/2022b.html#49 4th generation language
https://www.garlic.com/~lynn/2022b.html#34 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022b.html#30 Online at home
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021k.html#98 BITNET XMAS EXEC
https://www.garlic.com/~lynn/2021k.html#50 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021j.html#77 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#71 book review: Broad Band: The Untold Story of the Women Who Made the Internet
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021j.html#48 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#99 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#95 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#75 IBM ITPS
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021h.html#47 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021h.html#1 Cloud computing's destiny
https://www.garlic.com/~lynn/2021g.html#90 Was E-mail a Mistake? The mathematics of distributed systems suggests that meetings might be better
https://www.garlic.com/~lynn/2021g.html#45 Cloud computing's destiny
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021e.html#55 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021e.html#30 Departure Email
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021d.html#42 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021c.html#12 Z/VM
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021b.html#81 The Golden Age of computer user groups
https://www.garlic.com/~lynn/2021b.html#69 Fumble Finger Distribution list
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#25 IBM Acronyms
https://www.garlic.com/~lynn/2021.html#14 Unbundling and Kernel Software
https://www.garlic.com/~lynn/2020.html#28 50 years online at home

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Games

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Games
Date: 13 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#1 IBM Games

other SPM trivia ... it was also used for developing numerous automated operator functions.

Note: in the 60s, the science center and two commercial online service bureaus (spinoffs from the science center) did a lot of CP67 work for 7x24 availability operation (including dark-room operation with no human present). At the time, IBM leased/rented machines, with charges (even "funny money" for IBM internal datacenters) based on the processor "system meter" ... which ran whenever any processor and/or channel was busy. Special channel programs were done for online terminals ... which would go idle, but were instant-on for arriving characters (allowing the system meter to stop when the system was otherwise idle; trivia: all processors and channels had to be idle for at least 400ms before the system meter would stop ... long after IBM had converted to sales, MVS still had a timer task that woke up every 400ms, guaranteeing the system meter would never stop).
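
A toy illustration of the 400ms rule (a simplified model, not actual system-meter logic): the meter can only stop after 400ms with everything continuously idle, so a timer task that wakes even briefly every 400ms keeps it running forever, while terminal-style traffic with long idle gaps lets it stop.

# toy model: the "system meter" stops only after 400ms with all processors
# and channels continuously idle (numbers and workloads invented for illustration)
IDLE_THRESHOLD_MS = 400

def meter_ever_stops(busy_intervals, horizon_ms):
    """busy_intervals: (start_ms, duration_ms) periods when a CPU/channel is busy."""
    idle_since = 0
    for start, duration in sorted(busy_intervals):
        if start - idle_since >= IDLE_THRESHOLD_MS:
            return True                  # a full 400ms all-idle gap: meter stops
        idle_since = start + duration
    return horizon_ms - idle_since >= IDLE_THRESHOLD_MS

# terminal-style workload: seconds of idle between keystrokes -> meter can stop
print(meter_ever_stops([(0, 50), (2000, 50)], horizon_ms=5000))                  # True
# a timer task waking every 400ms for even 1ms -> idle gaps never reach 400ms
print(meter_ever_stops([(t, 1) for t in range(0, 5000, 400)], horizon_ms=5000))  # False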

Other 60s 7x24 work included the system dump being automatically written to a disk file (instead of the printer) so the system could automagically re-ipl (with no human intervention).

IPCS, a large assembler application, was developed for processing the dump file (1st cp67 and later vm370) ... although the function was little different from dealing with a real paper dump. With the appearance of REX (before it was renamed REXX and released to customers), I wanted to demo that it wasn't just another pretty scripting language ... and chose to reimplement the (by then vm370) dump application in REX with ten times the performance (compared to the assembler implementation; some sleight of hand for interpreted REX) and ten times the function ... working half time over three months. I finished early, so started implementation of an automagic library that looked for the most common failure signatures.
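
Not the actual REX implementation, just a minimal sketch (in Python) of the failure-signature idea: scan formatted dump output for known patterns and report likely causes. The signature names and patterns below are invented for illustration.

# sketch of an automated failure-signature scan over formatted dump output
# (the signatures are invented examples, not real CP/VM failure patterns)
import re

SIGNATURES = [
    ("free-storage chain clobbered",  re.compile(r"FREE\s+CHAIN.*INVALID", re.I)),
    ("page I/O error loop",           re.compile(r"PAGING\s+ERROR.*RETRY\s+EXCEEDED", re.I)),
    ("dispatcher control block loop", re.compile(r"DISPATCH\s+LIST.*CIRCULAR", re.I)),
]

def scan_dump(formatted_dump_text):
    """Return (signature name, matching line) pairs found in the dump text."""
    hits = []
    for line in formatted_dump_text.splitlines():
        for name, pattern in SIGNATURES:
            if pattern.search(line):
                hits.append((name, line.strip()))
    return hits

sample = "... PAGING ERROR AT 01F3C0, RETRY EXCEEDED ...\n... normal trace entries ..."
for name, line in scan_dump(sample):
    print(f"possible cause: {name}  <-  {line}")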

I thought it would be released to customers in place of the existing application, but for various reasons it wasn't, even though it was in use by nearly every PSR and internal datacenter. Eventually I got permission to give presentations at user group meetings on the implementation ... and within a few months, non-IBM implementations started to appear.

Trivia: old email from the 3090 service processor group (3092)
https://en.wikipedia.org/wiki/IBM_3090
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
about including it as part of 3092 service processor
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Games

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Games
Date: 13 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022e.html#2 IBM Games

Late 70s & early 80s, I had been blamed for online computer conferencing (precursor to TOOLSRUN and the internal IBM forums). It really took off spring 1981, when I distributed a trip report of a visit to Jim Gray at Tandem (only about 300 directly participated, but claims were that upwards of 25,000 were reading; folklore is that when the corporate executive committee was told, 5 of 6 wanted to fire me). From the IBM Jargon Dictionary:
https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Retiree sues IBM alleging shortchanged benefits

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Retiree sues IBM alleging shortchanged benefits
Date: 14 June 2022
Blog: Facebook
Retiree sues IBM alleging shortchanged benefits
https://www.pionline.com/courts/retiree-sues-ibm-alleging-shortchanged-benefits

"Barbarians at the Gate"; AMEX was in competition with KKR for (private equity) LBO (reverse IPO) of RJR and KKR wins. KKR runs into trouble and hires away AMEX president to help with RJR (later goes on to be CEO of IBM)
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco

IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company; the article is gone 404, but lives on at the wayback machine.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

I had already left IBM, but got a call from the bowels of Armonk asking if I could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of those contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts. However, before getting started, the board brings in a new CEO who reverses the breakup ... the new CEO had also used some of the same techniques at RJR (article gone 404, but lives on at the wayback machine).
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
the above has some IBM-related specifics from
https://www.amazon.com/Retirement-Heist-Companies-Plunder-American-ebook/dp/B003QMLC6K/

pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity

--
virtualization experience starting Jan1968, online at home since Mar1970

RED and XEDIT fullscreen editors

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RED and XEDIT fullscreen editors
Date: 15 June 2022
Blog: Facebook
RED and XEDIT fullscreen editors, some old email
https://www.garlic.com/~lynn/2006u.html#26

RED was much more mature (in use around IBM for some time before XEDIT existed), had more feature/function, and was faster
https://www.garlic.com/~lynn/2006u.html#email790606

An apology to the RED editor's author: having sent email to Endicott about RED being better than XEDIT, the response was that it wasn't Endicott's fault that they shipped something not as good as RED, it was the RED author's fault that RED was so much better than XEDIT.
https://www.garlic.com/~lynn/2006u.html#email800311
https://www.garlic.com/~lynn/2006u.html#email800312

After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters, originally for CP67 (one of the earliest & long-time customers was the online sales&marketing support HONE), then CSC/VM for VM370, and after transferring to San Jose Research, SJR/VM
https://www.garlic.com/~lynn/2006u.html#email800429
https://www.garlic.com/~lynn/2006u.html#email800501

Originally, in the morph from CP67->VM370, lots of stuff was dropped (including multiprocessor support and lots of stuff I had done as an undergraduate) and/or greatly simplified; old email about starting to migrate lots of CP67 to VM370-R2PLC9 (for CSC/VM):
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

Some of the first stuff I moved to VM370 was my CP67 stress-test benchmarking (trivia: I originally did the "autolog" command for synthetic benchmarking, but it got picked up for a lot of automated operations) ... which consistently crashed VM370 ... so the next items were all the CP67 internal kernel serialization and other integrity features (in order to finish benchmarks w/o VM370 crashing). Later I added multiprocessor support to CSC/VM (VM370 release 3 base), originally for the consolidated US HONE datacenter in Palo Alto, so they could add a 2nd processor to each of their 168 systems.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark

--
virtualization experience starting Jan1968, online at home since Mar1970

RED and XEDIT fullscreen editors

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RED and XEDIT fullscreen editors
Date: 15 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#5 RED and XEDIT fullscreen editors

Note: co-worker at the science center was responsible for the internal network ... later also used for the corporate sponsored univ. "BITNET":
https://en.wikipedia.org/wiki/BITNET

the original distributed development project was between the science center and endicott to implement 370 virtual memory virtual machines in CP67 (running on a real 360/67). Then there was a 370 virtual memory modification of CP67 ("CP67I") to run in those 370 virtual machines. This was in regular use a year before the first engineering 370 (a 370/145) with virtual memory was operational (in fact "CP67I" was the first thing IPL'ed on that machine, which had a hardware glitch ... requiring a q&d temporary CP67I software change to get it up and running). Two people then came out from San Jose and added 3330 & 2305 device support to CP67I ... known as CP67SJ ... which was in regular use throughout the company on early 370s with virtual memory ... even for a time after VM370 became available. A lot of this was during the Future System period ... FS was completely different from 370 and was going to completely replace 370 ... internal politics was also killing off 370 efforts ... and the lack of new 370 products is credited with giving the 370 clone makers their market foothold. I continued to work on 360&370 all during the FS period ... even periodically ridiculing what they were doing (which wasn't exactly a career enhancing activity). When FS imploded, there was a mad rush to get stuff back into the 370 product pipeline ... including the quick&dirty 3033 and 3081; a lot more info:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

The Future System failure resulted in a major IBM culture change; reference from Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
.... reference to "Future System" ...

"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
virtualization experience starting Jan1968, online at home since Mar1970

RED and XEDIT fullscreen editors

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: RED and XEDIT fullscreen editors
Date: 15 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#5 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022e.html#6 RED and XEDIT fullscreen editors

Note2: During VM370 Release 2, a lot of resources were diverted to FS ... so coming up on Release 3 ... there wasn't a lot of stuff ready ... as a result, a little of my R2PLC9-based internal CSC/VM was picked up for VM370 Release 3 ... this included a small subset of my shared segment executables (which is referenced in one of the previous "RED" emails ... about how no work had yet been done to move "RED" into a read-only "shared segment").

The 23Jun1969 unbundling announcement started charging for SE services, maintenance and (application) software (but they were able to make the case that kernel software should still be free). In the wake of FS (contributing to the rise of clone 370s, and then FS imploding and the mad rush to get stuff back into the 370 product pipeline), there was a decision to transition to charging for kernel software ... starting out with charging for kernel add-ons, and eventually increasing to all kernel software being charged for. An additional selection of my internal VM370 changes was selected as the guinea pig for "charged-for" kernel add-ons (initially for Release 3 VM370). Trivia: the kernel structure reorg for multiprocessor support was included, but not the actual multiprocessor support. Then the decision was made to release actual multiprocessor support with Release 4. One of the initial "charged-for" rules was that new hardware support wouldn't be charged for (and couldn't have a pre-req of charged-for components). However, the (multiprocessor) kernel restructure was already part of my "charged-for" add-on ... the solution was to move around 90% of the code from my "charged-for" add-on into the "free" base (w/o changing the price of my kernel add-on) for Release 4.

Eventually in the 80s, all kernel code became charged-for ... along with the onset of the "OCO-wars" (making all code "object-code only", i.e. no free source). Some of the OCO-wars can be seen in VMSHARE posts ... aka in Aug1976, TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
provided their CMS-based online computer conferencing system "free" to the IBM SHARE user group
https://www.share.org/
VMSHARE archives here
http://vm.marist.edu/~vmshare

search of "MEMO", "NOTE", & "PROB" turns up 21 discussions for "OCO WARS" ... one of the comments

Append on 03/31/89 at 08:04 by Melinda Varian <BITNET: MAINT@PUCC>:

Steve, "VMFblah", "VMFbanana", etc., are generic terms to describe a collection of "tools" all of whose names start with the letters "VMF". Yes, we might just as well use "VMFxxxxx" here.

However, you should understand that these "tools" were introduced to do object-level maintenance, thus replacing our beloved, very elegant, extremely reliable source-level maintenance tools. The VMFxxxxx's have proven to be terribly buggy and difficult to use. They have greatly decreased both our peace of mind and the stability of our systems while greatly increasing our work load, for no other purpose than to allow IBM to introduce OCO, which many of us view as VM's death warrant.

Thus, I fear, our resentment shows.

Although the terms we use here are mild compared to those we use in private, there is no point in our gratuitously offending the IBMers who listen here, so we'll try to lighten it up. At the same time, I suspect that too many IBMers equate criticism with disloyalty. You need to keep in mind that the people here are the ones who are still loyal to IBM, who are still trying to keep it from shooting itself in the foot.

Melinda

*** APPENDED 03/31/89 08:04:59 BY PU/MELINDA ***


... snip ...

... other trivia: Melinda's history works
https://www.leeandmelindavarian.com/Melinda#VMHist

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
smp, multiprocessor, compare&swap, etc posts
https://www.garlic.com/~lynn/subtopic.html#smp
page mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap
location independent shared segment posts
https://www.garlic.com/~lynn/submain.html#adcon
23jun69 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle

some recent posts mentioning OCO-wars
https://www.garlic.com/~lynn/2022b.html#118 IBM Disks
https://www.garlic.com/~lynn/2022b.html#30 Online at home
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#50 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021h.html#55 even an old mainframer can do it
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021d.html#2 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2021.html#14 Unbundling and Kernel Software
https://www.garlic.com/~lynn/2018e.html#91 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2018b.html#6 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2018.html#43 VSAM usage for ancient disk models
https://www.garlic.com/~lynn/2017j.html#16 IBM open sources it's JVM and JIT code
https://www.garlic.com/~lynn/2017g.html#101 SEX
https://www.garlic.com/~lynn/2017g.html#23 Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)
https://www.garlic.com/~lynn/2017e.html#18 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017.html#59 The ICL 2900

--
virtualization experience starting Jan1968, online at home since Mar1970

VM Workshop ... VM/370 50th birthday

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM Workshop ... VM/370 50th birthday
Date: 16 June 2022
Blog: Facebook
from long ago and far away:

Date: 02/04/87 09:08:06 PST
From: wheeler

I've been invited to Share VM workshop to give a talk (held at Asilomar the week of 2/23) ... I have my SEAS presentation, History of VM Performance.

I'm also scheduled to give a talk on the prototype HSDT spool file system at the VMITE (week of 3/3).

Did you read HSDT027 on 9370 networking, I've gotten several inquiries both from Endicott and Raleigh to discuss it.


... snip ... top of post, old email index, NSFNET email

Date: 02/04/87 12:32:44 PST
From: wheeler

looks like i may also be giving the hsdt-wan (hsdt023) talk also at share vm workshop. i've given talk before outside and inside ibm several times (hsdt-wan has been presented to baybunch, several universities, and head of nsf).


... snip ... top of post, old email index, NSFNET email

The History of VM Performance presentation was made at SEAS 5-10Oct1986 (European SHARE, IBM mainframe user group); I gave it most recently at the WashDC Hillgang user group 16Mar2011
https://www.garlic.com/~lynn/hill0316g.pdf

The HSDT spool file system was for RSCS/VNET, which used the VM370 spool file system with a synchronous diagnose to access 4k blocked records. On a moderately loaded VM370, RSCS/VNET might get only 5-8 4k records/sec (aggregate, both read&write). I needed around 75 4k records/sec for just one T1 link. I reimplemented the VM370 spool file system in Pascal, running in a virtual address space.
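
Rough back-of-envelope for a number like 75 (assuming a full-duplex T1 at ~1.5Mbit/sec each direction and ~80% usable after framing/protocol overhead; the overhead fraction is an assumption, not a figure from the original work):

# back-of-envelope: 4k spool records/sec needed to keep one T1 link busy
T1_BITS_PER_SEC = 1.544e6      # T1 line rate, one direction
USABLE_FRACTION = 0.8          # assumed framing/protocol overhead allowance
RECORD_BYTES = 4096            # VM370 spool file block size

bytes_per_sec_one_way = T1_BITS_PER_SEC / 8 * USABLE_FRACTION
records_one_way = bytes_per_sec_one_way / RECORD_BYTES
records_full_duplex = 2 * records_one_way    # both directions (reads + writes)

print(f"~{records_one_way:.0f} records/sec one way, "
      f"~{records_full_duplex:.0f} records/sec full-duplex")
# -> roughly 38 one way, 75 full-duplex ... versus the 5-8/sec observed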

I had started HSDT in the early 80s with T1 and faster computer links, both terrestrial and satellite, and working with the NSF Director, was supposed to get $20M to interconnect the NSF Supercomputer Centers. Then congress cuts the budget, some other things happen and finally an RFP is released. Preliminary Announcement (28Mar1986):
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

internal IBM politics prevent us from bidding on the RFP. The NSF director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies, but that just makes the internal politics worse (as did claims that what we already had running was at least 5yrs ahead of the winning bid; RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

communication group (and other) executives were spreading all sorts of misinformation about how SNA products could be used for NSFnet ... somebody collected lots of that misinformation email and forwarded it to us ... old archived post with the email, heavily redacted and clipped to protect the guilty
https://www.garlic.com/~lynn/2006w.html#email870109

another recent history thread
https://www.garlic.com/~lynn/2022e.html#5 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022e.html#6 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022e.html#7 RED and XEDIT fullscreen editors

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

past posts mentioning HSDT spool file system (HSDTSFS)
https://www.garlic.com/~lynn/2021j.html#26 Programming Languages in IBM
https://www.garlic.com/~lynn/2021g.html#37 IBM Programming Projects
https://www.garlic.com/~lynn/2013n.html#91 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2012g.html#24 Co-existance of z/OS and z/VM on same DASD farm
https://www.garlic.com/~lynn/2012g.html#23 VM Workshop 2012
https://www.garlic.com/~lynn/2012g.html#18 VM Workshop 2012
https://www.garlic.com/~lynn/2011e.html#25 Multiple Virtual Memory

past posts mentioning history presentation at (DC) hillgang meeting
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2022.html#93 HSDT Pitches
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021h.html#82 IBM Internal network
https://www.garlic.com/~lynn/2021g.html#46 6-10Oct1986 SEAS
https://www.garlic.com/~lynn/2021e.html#65 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021c.html#41 Teaching IBM Class
https://www.garlic.com/~lynn/2021b.html#61 HSDT SFS (spool file rewrite)
https://www.garlic.com/~lynn/2021.html#17 Performance History, 5-10Oct1986, SEAS
https://www.garlic.com/~lynn/2019b.html#4 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017d.html#52 Some IBM Research RJ reports
https://www.garlic.com/~lynn/2015h.html#93 OT: Electrician cuts wrong wire and downs 25,000 square foot data centre
https://www.garlic.com/~lynn/2012g.html#18 VM Workshop 2012
https://www.garlic.com/~lynn/2011e.html#3 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011c.html#88 Hillgang -- VM Performance
https://www.garlic.com/~lynn/2011c.html#86 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#72 A History of VM Performance

--
virtualization experience starting Jan1968, online at home since Mar1970

VM/370 Going Away

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM/370 Going Away
Date: 17 June 2022
Blog: Facebook
VM/370 Going Away(?)

The Future System effort, 1st half of the 70s, was completely different from 370 and was going to completely replace 370 (internal politics was killing off 370 efforts during the period; the lack of new 370 products is credited with giving clone 370 makers their market foothold). With the demise of FS, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 projects in parallel. Also, the head of POK managed to convince corporate to kill the vm370 product, shutdown the development group and transfer all the people to POK (supposedly, otherwise MVS/XA wouldn't ship on time ... several yrs in the future). Endicott eventually managed to save the vm370 product mission, but had to reconstitute a development group from scratch ... comments about the (poor) code quality during this period can be seen in the vmshare archives:
http://vm.marist.edu/~vmshare

After graduating (and leaving the Boeing CFO office), I joined the IBM science center ... one of my hobbies was enhanced production operating systems for internal datacenters ... and the (world-wide sales&marketing support) HONE systems were long-time customers ... starting with CP67 and then moving to VM370 (I had continued 360&370 work all during the FS period, including periodically ridiculing what they were doing, which wasn't exactly a career enhancing activity).

In the later part of the 70s, HONE went through a series of new executives (branch managers promoted to an executive position in DPD hdqtrs), who would discover to their horror that HONE wasn't MVS-based and figure that they could "make their career" by directing HONE to move to MVS, directing all hands to work on the port. After 9-12 months, it would be shown it wouldn't work, be declared a success, the heads promoted uphill, and after a few months it would repeat again. Also, around the turn of that decade, a POK executive gave HONE a presentation on the future of POK products and said that while Endicott had saved the vm370 product, it would no longer be supported on POK machines. This caused such an uproar that he had to come back and explain how HONE had misunderstood what he had said.

Mid-70s, Endicott had con'ed me into helping with the 138/148 microcode assist (i.e. "ECPS"); the machines had 6kbytes of available microcode space and I was to identify the 6kbytes of highest-executed vm370 kernel paths ... for reprogramming in microcode (at approx byte-for-byte, with 10 times the performance) ... which would continue into the 4300s. Old archived post with the original analysis (the 6kbytes accounted for 80% of kernel execution):
https://www.garlic.com/~lynn/94.html#21
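
Not the original analysis method, just a minimal sketch of the selection idea (greedily take the hottest kernel paths, by execution share per byte, until the 6kbyte microcode budget is used up); the path names, sizes, and percentages below are invented for illustration.

# sketch: pick highest-executed kernel paths until the 6kbyte microcode budget is spent
# (hypothetical path names/sizes/percentages -- not the real 1975 data)
BUDGET = 6 * 1024   # bytes of available microcode space, approx byte-for-byte with 370 code

# (path name, 370 code size in bytes, % of total kernel execution time)
paths = [
    ("dispatch",         1200, 22.0),
    ("page fault",       1500, 20.0),
    ("priv-op simulate", 1800, 18.0),
    ("free storage",      900, 12.0),
    ("virtual I/O",       700,  8.0),
    ("everything else", 40000, 20.0),
]

chosen, used, covered = [], 0, 0.0
for name, size, pct in sorted(paths, key=lambda p: p[2] / p[1], reverse=True):
    if used + size <= BUDGET:            # greedy by execution-% per byte
        chosen.append(name)
        used += size
        covered += pct

print(f"moved to microcode: {chosen}, {used} bytes, ~{covered:.0f}% of kernel execution")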

Around the time POK was telling HONE that vm370 wouldn't run on new POK machines, I got approval to give presentations at the monthly BAYBUNCH user group meetings (hosted at Stanford SLAC) on the ECPS implementation. After the meetings, Amdahl people would corner me for more information on ECPS. They explained that they had done MACROCODE ... 370-like instructions running in microcode mode (originally done to quickly respond to the plethora of trivial 3033 microcode changes required for the latest MVS to run). They were then implementing HYPERVISOR support (hardware support to run multiple logical machines w/o vm370 software) ... note: high-end processors used horizontal microcode that was difficult and time-consuming to program; Amdahl's MACROCODE drastically reduced that difficulty (IBM's response to HYPERVISOR, LPAR & PR/SM on the 3090, came nearly 8yrs later).

Note that while POK had killed the VM370 development group and moved the people to POK to work on MVS/XA, they did develop VMTOOL, a small VM370 subset without the function, performance and features of vm370 ... *only* intended for MVS/XA development (*never* to ship to customers). Later, when customers weren't converting from MVS to MVS/XA on the 3081 as planned (except on Amdahl machines, where they could concurrently run MVS & MVS/XA under HYPERVISOR), the decision was made to provide VMTOOL as VM/MA and VM/SF ("migration aid") ... old email
https://www.garlic.com/~lynn/2007.html#email850304
https://www.garlic.com/~lynn/2008c.html#email850419

... VMTOOL relied on the 3081 SIE microcode instruction to help ... however, SIE was never intended for customer use either ... the 3081 not having sufficient microcode space ... so the SIE microcode had to be paged in each time it was invoked ... old email about "trout" (aka 3090) implementing SIE for production use:
https://www.garlic.com/~lynn/2006j.html#email810630

3081 trivia: the 3081 was supposed to be a multiprocessor-"only" machine; in the original 3081D, each processor was slower than a 3033. They then doubled the processor cache size (for the 3081K) to make each processor faster than a 3033 (however, the latest Amdahl single processor had at least the throughput of the two-processor 3081K). Some FS, 3033, & 3081 info:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

Then there was a big POK/Kingston push for a large development group to upgrade VMTOOL to the feature, performance and function of VM370 ... Endicott's only alternative was to pick up the full 370/XA support that Rochester had added to vm370. Some old references to Rochester's VM370 XA support:
https://www.garlic.com/~lynn/2011c.html#email860122
https://www.garlic.com/~lynn/2011c.html#email860123

My recent (IBM Retirees) discussion post
https://www.facebook.com/groups/62822320855/posts/10159776681680856/
has this email reference to this post
https://www.garlic.com/~lynn/2006u.html#26
with email from Rochester sending tape for copy of my SJR/VM system
https://www.garlic.com/~lynn/2006u.html#email800501

Then there was also a different attack for moving HONE to MVS ... claiming the reason HONE couldn't be moved to MVS was that they were running my enhanced system. HONE would be required to move to the standard supported product (because what would HONE do if I was hit by a bus?) ... after which it would then be possible to migrate HONE to MVS.

I had done (CP67) dynamic adaptive resource management algorithms as an undergraduate in the 60s ... which the science center picked up and shipped as part of cp67. In the morph of CP67->VM370 lots of stuff was dropped (including multiprocessor support and lots of stuff I had done as undergraduate) and/or greatly simplified. Then there were SHARE resolutions to put the "wheeler scheduler" back.

VM/HPO 3.4 (w/o "wheeler scheduler") from vmshare
https://www.garlic.com/~lynn/2007b.html#email860111
https://www.garlic.com/~lynn/2007b.html#email860113
also putting (my, originally done as undergraduate in the 60s) global LRU back into VM/HPO
https://www.garlic.com/~lynn/2006y.html#email860119

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
dynamic adaptive resource manager posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
global LRU page replacement posts
https://www.garlic.com/~lynn/subtopic.html#clock

a few threads this year, touching on same &/or similar subjects:
https://www.garlic.com/~lynn/2022e.html#8 VM Workshop ... VM/370 50th birthday
https://www.garlic.com/~lynn/2022e.html#7 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022e.html#6 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022e.html#5 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022d.html#94 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022d.html#44 CMS Personal Computing Precursor
https://www.garlic.com/~lynn/2022d.html#30 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#17 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#83 VMworkshop.og 2022
https://www.garlic.com/~lynn/2022c.html#62 IBM RESPOND
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022c.html#8 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#126 Google Cloud
https://www.garlic.com/~lynn/2022b.html#118 IBM Disks
https://www.garlic.com/~lynn/2022b.html#111 The Rise of DOS: How Microsoft Got the IBM PC OS Contract
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#93 IBM Color 3279
https://www.garlic.com/~lynn/2022b.html#54 IBM History
https://www.garlic.com/~lynn/2022b.html#49 4th generation language
https://www.garlic.com/~lynn/2022b.html#34 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022b.html#30 Online at home
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022b.html#22 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022b.html#20 CP-67
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#105 IBM PLI
https://www.garlic.com/~lynn/2022.html#89 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2022.html#62 File Backup
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2022.html#36 Error Handling

--
virtualization experience starting Jan1968, online at home since Mar1970

VM/370 Going Away

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM/370 Going Away
Date: 18 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away

For a time my wife reported to the manager of one of the FS technology sections ... she somewhat implies that in group meetings most of the other sections would talk about way-out blue sky ideas ... but had no idea how to make any production implementation.

One of the final nails in the FS coffin was analysis by the IBM Houston Science Center that if 370/195 applications were moved to an FS machine made out of the fastest available hardware technology, they would have the throughput of a 370/145 (about a 30 times slowdown). Folklore is that Rochester did a greatly simplified version of FS for the S/38 (plenty of performance headroom between the technology and the low-end office market). Then Rochester does the AS/400 as the follow-on for the combination of S/34, S/36, & S/38 ... eliminating some of the S/38 FS features.
https://en.wikipedia.org/wiki/IBM_AS/400

Part of my FS ridicule was the filesystem "single level store" architecture ... somewhat from TSS/360. I was responsible for OS/360 at the univ., running the 360/67 as a 360/65 (the 360/67 had been sold for TSS/360, which never came to production fruition; later, before I graduated, I was hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, and when I graduated, I left Boeing and joined the IBM science center). The univ would shut down the datacenter over the weekends and I had the whole place dedicated to myself for 48hrs straight.

The science center came out and installed CP67 (3rd location, after the science center itself and MIT Lincoln Labs), which I could play with on weekends. The IBM TSS/360 SE was still around and we put together a simulated interactive workload of fortran program edit, compile, and execute. The benchmark running CP67/CMS with 35 users had better throughput and interactive response than TSS/360 (on the same exact hardware) with only four users. Later I did a page-mapped filesystem for CMS and would claim I learned what "not to do" from observing TSS/360 (for moderate benchmarks exercising the filesystem, it had three times the throughput of the standard CMS filesystem).

However, I blamed the inability to get my CMS page-mapped filesystem shipped on the horrible reputation that page-mapped filesystems got from the FS failure (although it was deployed in internal datacenters ... including the sales&marketing support HONE systems).

Part of the S/38 filesystem simplification was that all disks were part of the same filesystem and a dataset might have pieces scattered across all disks ... backing up meant the whole filesystem had to be backed up as a single entity (all disks) ... and any single disk failure required the whole filesystem to be restored (possibly taking 24hrs elapsed time as disks were added). The implementation was totally impractical in a large mainframe datacenter with scores or even hundreds of disks. Trivia: S/38 backup/restore was so disastrous that S/38 was an early adopter of RAID.
https://en.wikipedia.org/wiki/RAID#History

In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named RAID 4.

... snip ...

disk trivia: when I transferred to San Jose Research, I got to wander around IBM and non-IBM datacenters, including bldg14 (disk engineering) and bldg15 (disk product test) across the street. At the time, bldg14 was running 7x24, around-the-clock, pre-scheduled, stand-alone mainframe testing. They said that they had recently tried MVS, but MVS had a 15min mean-time-between-failure (requiring manual re-ipl) in that environment. I offered to rewrite the I/O subsystem, making it bullet proof and never fail, allowing any amount of on-demand concurrent testing ... greatly improving productivity. The downside was that I was increasingly asked to play disk engineer ... and would periodically run into Ken (Ouchi). I wrote up what was done as an internal research report and happened to mention the MVS 15min MTBF ... bringing the wrath of the MVS organization down on my head. Informally, I was told they tried unsuccessfully to have me separated from the company ... and would then make my career as unpleasant as possible ... the joke was on them ... I had already been told a number of times that I had no career, promotions, or raises in IBM ... for offending various IBM careerists and bureaucrats, including at least for ridiculing FS ... periodically repeated from Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
.... reference to "Future System" ...

"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

In any case, later, just before 3380 drives were about to ship ... FE had a hardware regression test of 57 simulated 3380 errors likely to occur; in all 57 cases, MVS would fail (requiring re-ipl) and in 2/3rds of the cases there was no indication of what caused the failure (I didn't feel badly at all).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
CMS paged mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

VM/370 Going Away

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM/370 Going Away
Date: 18 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away

AS/400 PowerPC trivia:
https://en.wikipedia.org/wiki/IBM_AS/400#History
and
https://en.wikipedia.org/wiki/IBM_AS/400#The_move_to_PowerPC

The last product we did at IBM was HA/CMP. It started out as HA/6000, for the NYTimes to move their newspaper system (ATEX) off a (DEC) VAXCluster to RS/6000; it became HA/CMP as I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Ingres, Informix, Sybase, Oracle ... which all had vax/cluster support in the same source base with unix support ... lots of discussion on improving over vaxcluster and easing the RDBMS port to HA/CMP). However, nearly every time we visited national labs (including LANL & LLNL), we would get hate mail from the IBM Kingston supercomputer group (working on a more traditional supercomputer design).

During HA/CMP, the executive we reported to transfers over to head up Somerset (he had previously come from Motorola) ... to do power/pc ... a single-chip 801/risc. POWER was a large multichip undertaking; I would periodically claim that a lot of the transition to the single-chip power/pc bore heavy influence from Motorola's single-chip 88k RISC.
https://en.wikipedia.org/wiki/AIM_alliance
https://en.wikipedia.org/wiki/AIM_alliance#Launch

The development of the PowerPC is centered at an Austin, Texas, facility called the Somerset Design Center. The building is named after the site in Arthurian legend where warring forces put aside their swords, and members of the three teams that staff the building say the spirit that inspired the name has been a key factor in the project's success thus far.

... snip ...

Also, in Oct1991, the senior VP backing the IBM Kingston group retires and there are audits of his projects. Shortly later there is an announcement of an internal supercomputing conference (effectively trolling the company for technology). Early Jan1992, we have a meeting in Ellison's (Oracle CEO) conference room on (commercial) cluster scale-up: 16-way by mid92, 128-way by ye92. Then, within a few weeks of the Ellison meeting, cluster scale-up is transferred, announced as the IBM supercomputer (for technical/scientific *ONLY*), and we are told we can't work on anything with more than four processors. We leave IBM a few months later. Possibly contributing was the mainframe DB2 group complaining that if we were allowed to continue, it would be years ahead of them.

Note: Jan1979 I had been con'ed into doing vm/4341 benchmarks for national lab that was looking at getting 70 for compute farm ... sort of the leading edge of the coming cluster supercomputing tsunami.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801
posts mentioning some work on original sql/relational, System/R
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

VM/370 Going Away

From: Lynn Wheeler <lynn@garlic.com>
Subject: VM/370 Going Away
Date: 18 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#11 VM/370 Going Away

from (already posted more on as/400)
http://www.jfsowa.com/computer/memo125.htm

[GRAD was the code name for IBM's Future Systems project of the 1970s. The code names for its components were taken from various colleges and universities: RIPON was the hardware architecture, COLBY was the operating system, HOFSTRA/TULANE were the system programming language and library, and VANDERBILT was the largest of the three planned implementations. The smallest of the three, with considerable simplification, was eventually released as System/38, which evolved into the AS/400.

... snip ...

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
801/risc, iliad, romp, rios, power, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

VM/370 Going Away

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM/370 Going Away
Date: 18 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#11 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#12 VM/370 Going Away

As I've mentioned, as an undergraduate I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thought the Renton datacenter was possibly the largest in the world; there were lots of politics between the Renton manager and the CFO, who just had a 360/30 for payroll up at Boeing Field (although they enlarge the room for a 360/67 that I can play with when I'm not doing other stuff).

Later, one of the science center people who had written a lot of the CMS\APL software left IBM and joined BCS in DC. On one visit he talked about how they had a contract with the USPS and he was using CMS\APL to do the analysis to justify raising the postal rate.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

other recent posts mentioning Boeing CFO:
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota
https://www.garlic.com/~lynn/2022d.html#100 IBM Stretch (7030) -- Aggressive Uniprocessor Parallelism
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#42 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022c.html#8 Cloud Timesharing
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022b.html#126 Google Cloud
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022b.html#10 Seattle Dataprocessing
https://www.garlic.com/~lynn/2022.html#120 Series/1 VTAM/NCP
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#30 CP67 and BPS Loader
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2022.html#12 Programming Skills

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "Fast-Track" Bureaucrats

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "Fast-Track" Bureaucrats
Date: 18 June 2022
Blog: Facebook
Mid-80s, top executives were predicting that IBM revenue would shortly double, mostly based on mainframe sales ... and they were looking at needing to double (mainframe) manufacturing capacity and double the number of executives (needed to run the businesses). A side-effect was a large number of "fast-track" newly minted MBAs (rapidly rotating through positions running different victim business units).

It reminded me of a decade earlier when a prominent branch manager horribly offended one of IBM's largest financial industry customers. After joining IBM I got to wander IBM and customer locations, and the manager of this particular datacenter liked me to stop by and talk technology. The customer ordered an Amdahl machine in retaliation (up until then clone mainframe makers were selling into the technical, scientific, and univ market, but had yet to break into the true-blue commercial market; this would be the first). I was asked to go sit onsite at the customer to help obfuscate the reason for the order. I talked it over with the customer and then told IBM I declined the offer. I was then told the branch manager was a good sailing buddy of the IBM CEO and that if I refused, I could forget any career, promotions, and/or raises (wasn't the only time I was fed that line), reminding me of the epidemic "old boys network" and Learson's Management Briefing ZZ04-1312 about the IBM bureaucrats and careerists. Also from Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
.... reference to "Future System" failure:

"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

In my executive exit interview, I was told that they could have forgiven me for being wrong, but they would never forgive me for being right.

past posts referencing Learson's Management Briefing ZZ04-1312:
https://www.garlic.com/~lynn/2022d.html#89 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#76 "12 O'clock High" In IBM Management School
https://www.garlic.com/~lynn/2022d.html#71 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2022d.html#52 Another IBM Down Fall thread
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2021g.html#51 Intel rumored to be in talks to buy chip manufacturer GlobalFoundries for $30B
https://www.garlic.com/~lynn/2021g.html#32 Big Blue's big email blues signal terminal decline - unless it learns to migrate itself
https://www.garlic.com/~lynn/2021e.html#62 IBM / How To Stuff A Wild Duck
https://www.garlic.com/~lynn/2021d.html#51 IBM Hardest Problem(s)
https://www.garlic.com/~lynn/2021.html#0 IBM "Wild Ducks"
https://www.garlic.com/~lynn/2017j.html#23 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2017f.html#109 IBM downfall
https://www.garlic.com/~lynn/2017b.html#56 Wild Ducks
https://www.garlic.com/~lynn/2015d.html#19 Where to Flatten the Officer Corps
https://www.garlic.com/~lynn/2013.html#11 How do we fight bureaucracy and bureaucrats in IBM?
https://www.garlic.com/~lynn/2012f.html#92 How do you feel about the fact that India has more employees than US?

--
virtualization experience starting Jan1968, online at home since Mar1970

VM/370 Going Away

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM/370 Going Away
Date: 18 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#11 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#12 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#13 VM/370 Going Away

... periodically posted ... Learson on the IBM careerists and bureaucrats before the FS implosion (and *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers)

Management Briefing
Number 1-72: January 18, 1972
ZZ04-1312

TO ALL IBM MANAGERS:

Once again, I'm writing you a Management Briefing on the subject of bureaucracy. Evidently the earlier ones haven't worked. So this time I'm taking a further step: I'm going directly to the individual employees in the company. You will be reading this poster and my comment on it in the forthcoming issue of THINK magazine. But I wanted each one of you to have an advance copy because rooting out bureaucracy rests principally with the way each of us runs his own shop.

We've got to make a dent in this problem. By the time the THINK piece comes out, I want the correction process already to have begun. And that job starts with you and with me.

Vin Learson


... and ...


+-----------------------------------------+
|           "BUSINESS ECOLOGY"            |
|                                         |
|                                         |
|            +---------------+            |
|            |  BUREAUCRACY  |            |
|            +---------------+            |
|                                         |
|           is your worst enemy           |
|              because it -               |
|                                         |
|      POISONS      the mind              |
|      STIFLES      the spirit            |
|      POLLUTES     self-motivation       |
|             and finally                 |
|      KILLS        the individual.       |
+-----------------------------------------+
"I'M Going To Do All I Can to Fight This Problem . . ."
by T. Vincent Learson, Chairman

... snip ...

How to Stuff a Wild Duck

"We are convinced that any business needs its wild ducks. And in IBM we try not to tame them." - T.J. Watson, Jr.

"How To Stuff A Wild Duck", 1973, IBM poster
https://collection.cooperhewitt.org/objects/18618011/

--
virtualization experience starting Jan1968, online at home since Mar1970

Context switch cost

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Context switch cost
Newsgroups: comp.arch
Date: Sat, 18 Jun 2022 19:07:34 -1000
> I believe POWER uses a variation of the same scheme.

ROMP, 16 segment registers, each with a 12bit segment id (top 4bits of the 32bit virtual address indexed a segment register) ... they referred to it as 40bit addressing ... the 12bit segment id plus the 28bit address (within segment). No address space id ... designed for CP.r ... and would change segment register values as needed. Originally ROMP had no protection domain, inline code could change segment register values as easily as general registers could be changed ... however that had to be "fixed" when moving to a unix process model & programming environment.

POWER (RIOS) just doubled the segment id to 24bits ... and some documentation would still refer to it as 24+28=52bit addressing ... even tho the programming model had changed to unix with different processes and simulating virtual address space IDs with sets of segment ids.

370 table look-asides could be completely reset anytime the address space id changed (segment table address); higher-end 370s kept the most recent address space mappings and table lookaside entries were address space id (segment table address) associative. ROMP/RIOS entries were segment id associative (not address space associative).
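
As a rough illustration of the addressing described above, here is a minimal C sketch (the field layout, names, and software table are mine purely for illustration; the real hardware does this in the MMU): the top 4 bits of a 32-bit effective address select one of 16 segment registers, each holding a 12-bit segment id, which is concatenated with the low 28 bits to form the "40-bit" virtual address (RIOS widens the segment id to 24 bits, hence "52-bit").

#include <stdint.h>

static uint16_t seg_regs[16];          /* each holds a 12-bit segment id */

uint64_t virt40(uint32_t eff_addr)
{
    unsigned sr     = eff_addr >> 28;          /* which segment register   */
    uint64_t segid  = seg_regs[sr] & 0xFFF;    /* 12-bit segment id        */
    uint64_t offset = eff_addr & 0x0FFFFFFF;   /* 28-bit offset in segment */
    return (segid << 28) | offset;             /* 40-bit virtual address   */
}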

801/risc, iliad, romp, rios, power, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

VM Workshop

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM Workshop
Date: 18 June 2022
Blog: Facebook
hope everybody had a good time. Old (archived) 2012 postings from Linkedin discussion groups about the 40th anniversary (the original tiny URLs no longer work), including some older stuff about the 1987 workshop
https://www.garlic.com/~lynn/2012g.html#18 VM Workshop 2012
https://www.garlic.com/~lynn/2012g.html#23 VM Workshop 2012
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012g.html#41 VM Workshop 2012
https://www.garlic.com/~lynn/2012g.html#57 VM Workshop 2012

other trivia:

VM Knights
http://mvmua.org/knights.html
Mainframe Hall of Fame (alphabetical order)
https://www.enterprisesystemsmedia.com/mainframehalloffame
2005 esystems article (although they garbled some of the details); the site has since gone through name changes and the page more recently 404s
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
Performance History originally given at Oct86 SEAS (European SHARE), repeat presentation at 2011 (DC) Hillgang meeting
https://www.garlic.com/~lynn/hill0316g.pdf

--
virtualization experience starting Jan1968, online at home since Mar1970

3270 Trivia

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 3270 Trivia
Date: 18 June 2022
Blog: Facebook
3277 had .086sec hardware response ... 3278 moved a large amount of electronics back to the 3274 controller (cutting manufacturing cost) ... resulting in a huge amount of protocol chatter over the coax, driving hardware response to .3sec-.5sec ... this was in the period when lots of published studies showed an increase in productivity with quarter sec (or better) response ... i.e. a 3277 needed .25-.086=.164sec system response for the human to see (.164+.086) quarter sec response ... in order to have quarter sec response with a 3278, you needed a time machine with negative system response ... i.e. .5sec (hardware response) minus negative .25sec (system response). Letters to the 3278 product administrator about the 3278 being much worse for interactive use resulted in a reply that the 3278 wasn't meant for interactive computing, but for data entry (aka electronic key punch).
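
A tiny worked example of that response-time arithmetic (a sketch only; the 0.4sec figure is just a representative value in the .3-.5sec range quoted above):

#include <stdio.h>

int main(void)
{
    const double target = 0.25;    /* quarter-sec productivity threshold        */
    const double hw3277 = 0.086;   /* 3277 hardware response, sec               */
    const double hw3278 = 0.40;    /* 3278, representative of the .3-.5sec range */

    /* system response budget = target minus terminal hardware latency */
    printf("3277 system budget: %.3f sec\n", target - hw3277);  /* 0.164    */
    printf("3278 system budget: %.3f sec\n", target - hw3278);  /* negative */
    return 0;
}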

3277 had logic in the keyboard where a little soldering could change the repeat-key delay and repeat-key rate. I had the repeat-key rate faster than the cursor-on-screen update ... had to learn the timing to lift my finger so the cursor would coast to a stop at the desired location. The 3270 was also half-duplex and the keyboard would lock up if you typed at the same time as a screen write update. A FIFO box was built: unplug the keyboard from the screen, plug the FIFO box into the screen, and plug the keyboard into the FIFO box ... no more keyboard lockup (which was horrible for interactive computing) ... none of this worked with the 3278. Also the 3277 had enough electronics that a large Tektronix graphics screen could be wired into the 3277 for the 3277GA ... sort of an inexpensive 2250/3250

... the 3278 protocol so degraded throughput that the later IBM PC 3277 hardware-emulation card had 3-4 times the upload/download throughput of a 3278 hardware-emulation card

some 3270 (FCS and other) trivia

1980 STL (now SVL) was bursting at the seams and they were moving 300 people from the IMS group (and 300 3270 terminals) to an offsite bldg with dataprocessing service back to the STL datacenter. They had tried "remote" 3270 (over telco lines), but found the human factors totally unacceptable (compared to channel connected 3270 controllers and my enhanced production operating systems). I get con'ed into doing channel-extender support to the offsite bldg so they could have channel attached controllers at the offsite bldg with no perceptible difference in response. The hardware vendor then tries to get IBM to release my support, but there are some engineers in POK playing with some serial stuff who were afraid that it would make it harder to release their stuff ... and they get it veto'ed.

Note that 3270 controllers were relatively slow with exceptionally high channel busy ... getting them off the real IBM channels and behind a fast channel-extender interface box increased STL 370/168 throughput by 10-15% (the 3270 controllers had been spread across the same channels shared with DASD ... and were interfering with DASD throughput; the fast channel-extender box radically cut the channel busy for the same amount of 3270 I/O). STL considered using the channel-extender box for all their 3270 controllers (even those purely in house).

In 1988, the IBM branch office asks me to help LLNL (national lab) get some serial stuff LLNL is playing with released as a standard ... which quickly becomes the fibre channel standard (including some stuff I had done in 1980) ... starting at full-duplex 1gbit, 2gbit aggregate, 200mbyte/sec. Then in 1990, the POK engineers get their stuff released (when it is already obsolete) with ES/9000 as ESCON (17mbytes/sec).

Then some POK engineers become involved in FCS and define a protocol that radically reduces throughput, which is eventually released as FICON. The latest published FICON numbers are the z196 "peak I/O" benchmark, which got 2M IOPS with 104 FICON (running over 104 FCS). About the same time an FCS was announced for E5-2600 blades (commonly used in cloud datacenters) getting over a million IOPS (two such native FCS having higher throughput than 104 FICON running over 104 FCS).

... after joining IBM, one of my hobbies was production operating systems for internal datacenters. After transferring to san jose research in the 70s, I got to wander around IBM and non-IBM datacenters in silicon valley ... including bldg14 (disk engineering) and bldg15 (disk product test) across the street. At the time they were running around the clock, 7x24, prescheduled, stand-alone mainframe time. They said they had recently tried MVS, but it had 15min mean-time-between-failure (in that environment). I offered to rewrite the I/O supervisor to be bullet proof and never fail, allowing any amount of on-demand, concurrent testing (greatly improving productivity). Downside was they kept calling and I had to spend an increasing amount of time playing disk engineer (diagnosing problems). I was also getting .11sec trivial interactive system response for my SJR/VM systems ... when the normal production MVS systems rarely got even 1sec trivial interactive system response.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "Fast-Track" Bureaucrats

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "Fast-Track" Bureaucrats
Date: 19 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#14 IBM "Fast-Track" Bureaucrats

In the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. The first time, I tried to do it through plant site employee education. At first they agreed, but as I provided more information about how to prevail/win in competitive situations, they changed their mind. They said that IBM spends a great deal of money training managers on how to handle employees and it wouldn't be in IBM's best interest to expose general employees to Boyd; I should limit the audience to senior members of competitive analysis departments. The first briefing was in the bldg28 auditorium, open to all. One of his quotes:

"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To Be or To Do, that is the question."

... snip ...

Trivia: In 89/90 the Commandant of the Marine Corps leverages Boyd for a make-over of the corps ... at a time when IBM was desperately in need of a make-over ... in 1992 IBM has one of the largest losses in the history of US companies and was being reorg'ed into the 13 "baby blues" in preparation for breaking up the company (the board brings in a new CEO who reverses the breakup); ref gone behind paywall, mostly free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

When Boyd passes in 1997, the USAF had pretty much disowned him and it was the Marines at Arlington. There have continued to be Boyd conferences at Marine Corps Univ. in Quantico ... including lots of discussions about careerists and bureaucrats (as well as the "old boy networks" and "risk averse").

Chuck's tribute to John
http://www.usni.org/magazines/proceedings/1997-07/genghis-john
for those w/o subscription
http://radio-weblogs.com/0107127/stories/2002/12/23/genghisJohnChuckSpinneysBioOfJohnBoyd.html
John Boyd - USAF. The Fighter Pilot Who Changed the Art of Air Warfare
http://www.aviation-history.com/airmen/boyd.htm
"John Boyd's Art of War; Why our greatest military theorist only made colonel"
http://www.theamericanconservative.com/articles/john-boyds-art-of-war/
40 Years of the 'Fighter Mafia'
https://www.theamericanconservative.com/articles/40-years-of-the-fighter-mafia/
Fighter Mafia: Colonel John Boyd, The Brain Behind Fighter Dominance
https://www.avgeekery.com/fighter-mafia-colonel-john-boyd-the-brain-behind-fighter-dominance/
Updated version of Boyd's Aerial Attack Study
https://tacticalprofessor.wordpress.com/2018/04/27/updated-version-of-boyds-aerial-attack-study/
A New Conception of War
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/

Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html

specific past posts mentioning To Be or To Do:
https://www.garlic.com/~lynn/2022d.html#75 "12 O'clock High" In IBM Management School
https://www.garlic.com/~lynn/2022b.html#0 Dataprocessing Career
https://www.garlic.com/~lynn/2022b.html#21 To Be Or To Do
https://www.garlic.com/~lynn/2022b.html#26 To Be Or To Do
https://www.garlic.com/~lynn/2021h.html#80 Warthog/A-10
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2021f.html#36 The Blind Strategist: John Boyd and the American Art of War
https://www.garlic.com/~lynn/2021f.html#51 Martial Arts "OODA-loop"
https://www.garlic.com/~lynn/2020.html#44 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2021.html#0 IBM "Wild Ducks"
https://www.garlic.com/~lynn/2021.html#8 IBM CEOs
https://www.garlic.com/~lynn/2019c.html#25 virtual memory
https://www.garlic.com/~lynn/2019.html#12 Employees Come First
https://www.garlic.com/~lynn/2019.html#61 Employees Come First
https://www.garlic.com/~lynn/2019.html#82 The Sublime: Is it the same for IBM and Special Ops?
https://www.garlic.com/~lynn/2017k.html#13 Now Hear This-Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2017k.html#68 Innovation?, Government, Military, Commercial
https://www.garlic.com/~lynn/2019e.html#138 Half an operating system: The triumph and tragedy of OS/2
https://www.garlic.com/~lynn/2017j.html#104 Now Hear This-Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2019d.html#69 Decline of IBM
https://www.garlic.com/~lynn/2017g.html#47 The rise and fall of IBM
https://www.garlic.com/~lynn/2017f.html#14 Fast OODA-Loops increase Maneuverability
https://www.garlic.com/~lynn/2018.html#55 Now Hear This--Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2018e.html#29 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2017b.html#2 IBM 1970s
https://www.garlic.com/~lynn/2017c.html#64 Most people are secretly threatened by creativity
https://www.garlic.com/~lynn/2017c.html#74 Being Lazy Is the Key to Success, According to the Best-Selling Author of 'Moneyball'
https://www.garlic.com/~lynn/2017c.html#92 An OODA-loop is a far-from-equilibrium, non-linear system with feedback
https://www.garlic.com/~lynn/2018c.html#4 Cutting 'Old Heads' at IBM
https://www.garlic.com/~lynn/2017.html#89 The ICL 2900
https://www.garlic.com/~lynn/2016f.html#41 Misc. Success of Failure
https://www.garlic.com/~lynn/2016e.html#14 Leaked IBM email says cutting 'redundant' jobs is a 'permanent and ongoing' part of its business model
https://www.garlic.com/~lynn/2016c.html#20 To Be or To Do
https://www.garlic.com/~lynn/2016d.html#8 What Does School Really Teach Children
https://www.garlic.com/~lynn/2016.html#49 Strategy
https://www.garlic.com/~lynn/2015.html#54 How do we take political considerations into account in the OODA-Loop?
https://www.garlic.com/~lynn/2014h.html#52 EBFAS
https://www.garlic.com/~lynn/2014i.html#7 You can make your workplace 'happy'
https://www.garlic.com/~lynn/2014i.html#12 Let's Face It--It's the Cyber Era and We're Cyber Dumb
https://www.garlic.com/~lynn/2014i.html#13 IBM & Boyd
https://www.garlic.com/~lynn/2014d.html#91 IBM layoffs strike first in India; workers describe cuts as 'slaughter' and 'massive'
https://www.garlic.com/~lynn/2014e.html#9 Boyd for Business & Innovation Conference
https://www.garlic.com/~lynn/2014c.html#83 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2013m.html#23 "There IS no force, just inertia"
https://www.garlic.com/~lynn/2013m.html#28 The Reformers
https://www.garlic.com/~lynn/2013k.html#48 John Boyd's Art of War
https://www.garlic.com/~lynn/2013g.html#63 What Makes collecting sales taxes Bizarre?
https://www.garlic.com/~lynn/2014m.html#7 Information Dominance Corps Self Synchronization
https://www.garlic.com/~lynn/2014m.html#56 The Road Not Taken: Knowing When to Keep Your Mouth Shut
https://www.garlic.com/~lynn/2014m.html#61 Decimation of the valuation of IBM
https://www.garlic.com/~lynn/2013e.html#10 The Knowledge Economy Two Classes of Workers
https://www.garlic.com/~lynn/2013e.html#39 As an IBM'er just like the Marines only a few good men and women make the cut,
https://www.garlic.com/~lynn/2012o.html#65 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#71 Is orientation always because what has been observed? What are your 'direct' experiences?
https://www.garlic.com/~lynn/2012j.html#32 Microsoft's Downfall: Inside the Executive E-mails and Cannibalistic Culture That Felled a Tech Giant
https://www.garlic.com/~lynn/2012i.html#51 Is this Boyd's fundamental postulate, 'to improve our capacity for independent action'? thoughts please
https://www.garlic.com/~lynn/2012k.html#40 Core characteristics of resilience
https://www.garlic.com/~lynn/2012k.html#66 The Perils of Content, the Perils of the Information Age
https://www.garlic.com/~lynn/2012k.html#67 Coping With the Bounds: Speculations on Nonlinearity in Military Affairs
https://www.garlic.com/~lynn/2012h.html#17 Hierarchy
https://www.garlic.com/~lynn/2012h.html#21 The Age of Unsatisfying Wars
https://www.garlic.com/~lynn/2012h.html#24 Baby Boomer Guys -- Do you look old? Part II
https://www.garlic.com/~lynn/2012h.html#63 Is this Boyd's fundamental postulate, 'to improve our capacity for independent action'?
https://www.garlic.com/~lynn/2012f.html#23 Time to competency for new software language?
https://www.garlic.com/~lynn/2012f.html#52 Does the Experiencing Self "Out-OODA" the Remembering Self?
https://www.garlic.com/~lynn/2012e.html#14 Are mothers naturally better at OODA because they always have the Win in mind?
https://www.garlic.com/~lynn/2012e.html#20 Are mothers naturally better at OODA because they always have the Win in mind?
https://www.garlic.com/~lynn/2012e.html#60 Candid Communications & Tweaking Curiosity, Tools to Consider
https://www.garlic.com/~lynn/2012e.html#70 Disruptive Thinkers: Defining the Problem
https://www.garlic.com/~lynn/2012e.html#72 Sunday Book Review: Mind of War
https://www.garlic.com/~lynn/2012c.html#14 Strategy subsumes culture
https://www.garlic.com/~lynn/2012c.html#51 How would you succinctly desribe maneuver warfare?
https://www.garlic.com/~lynn/2012d.html#40 Strategy subsumes culture
https://www.garlic.com/~lynn/2012b.html#42 Strategy subsumes culture
https://www.garlic.com/~lynn/2012b.html#68 Original Thinking Is Hard, Where Good Ideas Come From
https://www.garlic.com/~lynn/2011k.html#88 Justifying application of Boyd to a project manager
https://www.garlic.com/~lynn/2011i.html#57 Low Carb Mavericks, John Boyd and the Art of War
https://www.garlic.com/~lynn/2011d.html#3 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#6 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#12 I actually miss working at IBM
https://www.garlic.com/~lynn/2011d.html#49 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011d.html#79 Mainframe technology in 2011 and beyond; who is going to run these Mainframes?
https://www.garlic.com/~lynn/2011g.html#13 The Seven Habits of Pointy-Haired Bosses
https://www.garlic.com/~lynn/2011c.html#37 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011e.html#45 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#90 PDCA vs. OODA
https://www.garlic.com/~lynn/2010q.html#60 I actually miss working at IBM
https://www.garlic.com/~lynn/2011.html#80 Chinese and Indian Entrepreneurs Are Eating America's Lunch
https://www.garlic.com/~lynn/2010p.html#82 TCM's Moguls documentary series
https://www.garlic.com/~lynn/2010i.html#32 Death by Powerpoint
https://www.garlic.com/~lynn/2010i.html#38 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010h.html#18 How many mainframes are there?
https://www.garlic.com/~lynn/2010h.html#20 How many mainframes are there?
https://www.garlic.com/~lynn/2010c.html#84 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010e.html#39 Agile Workforce
https://www.garlic.com/~lynn/2010e.html#40 Byte Tokens in BASIC
https://www.garlic.com/~lynn/2010f.html#20 Would you fight?
https://www.garlic.com/~lynn/2010f.html#43 F.B.I. Faces New Setback in Computer Overhaul
https://www.garlic.com/~lynn/2009s.html#4 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009s.html#41 Why Coder Pay Isn't Proportional To Productivity
https://www.garlic.com/~lynn/2009r.html#50 "Portable" data centers
https://www.garlic.com/~lynn/2009r.html#62 some '83 references to boyd
https://www.garlic.com/~lynn/2009q.html#37 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009p.html#34 big iron mainframe vs. x86 servers
https://www.garlic.com/~lynn/2009p.html#60 MasPar compiler and simulator
https://www.garlic.com/~lynn/2009o.html#47 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009h.html#5 mainframe replacement (Z/Journal Does it Again)
https://www.garlic.com/~lynn/2009h.html#71 My Vintage Dream PC
https://www.garlic.com/~lynn/2009h.html#74 My Vintage Dream PC
https://www.garlic.com/~lynn/2009b.html#25 The recently revealed excesses of John Thain, the former CEO of Merrill Lynch, while the firm was receiving $25 Billion in TARP funds makes me sick
https://www.garlic.com/~lynn/2008s.html#5 Greed - If greed was the cause of the global meltdown then why does the biz community appoint those who so easily succumb to its temptations?
https://www.garlic.com/~lynn/2008b.html#45 windows time service
https://www.garlic.com/~lynn/2007j.html#61 Lean and Mean: 150,000 U.S. layoffs for IBM?
https://www.garlic.com/~lynn/2007h.html#74 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007.html#20 MS to world: Stop sending money, we have enough - was Re: Most ... can't run Vista
https://www.garlic.com/~lynn/2000e.html#35 War, Chaos, & Business (web site), or Col John Boyd

--
virtualization experience starting Jan1968, online at home since Mar1970

3270 Trivia

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 3270 Trivia
Date: 19 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#18 3270 Trivia

folklore is that the motivation for token-ring was 3270 controller-to-terminal interconnect; some customer bldgs were starting to exceed load limits with the weight of 3270 coax runs from datacenters to terminals. The communication group was also fiercely fighting off client/server and distributed computing, trying to protect their dumb terminal paradigm and install base.

Late 80s, a senior disk engineer gets a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance ... but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The communication group had a stranglehold on mainframe datacenters with their corporate strategic responsibility for everything that crossed the datacenter walls and were fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm). The disk division was seeing data fleeing mainframe datacenters to more distributed computing friendly platforms, with drops in disk sales. They had come up with a number of solutions ... which were constantly being vetoed by the communication group.

A couple short yrs later, the company has one of the largest losses ever in US corporate history and was being reorganized into the 13 "baby blues" in preparation for breaking up the company ... behind paywall, but mostly lives free at wayback machine.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

The IBM workstation business unit had done their own 4mbit token-ring card (PC/RT had a PC/AT bus). For the (microchannel) RS/6000 they were told that they couldn't do their own cards, but had to use the PS2 microchannel cards, which had been severely performance-kneecapped by the communication group. For instance, the microchannel 16mbit token-ring card had lower per-card throughput than the PC/RT 4mbit token-ring card (a PC/RT 4mbit token-ring server would have higher throughput than an RS/6000 16mbit token-ring server).

The communication group had also fiercely fought off release of mainframe TCP/IP support. When they lost, they changed their tactics and said that since they had corporate strategic ownership of everything that crosses datacenter walls, it had to be released through them. What shipped got aggregate 44kbyte/sec throughput using nearly a whole 3090 processor. I did the changes to support RFC1044 and in some tuning tests at Cray Research between a 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

Later the communication group hires a silicon valley contractor to implement TCP/IP support directly inside VTAM. What he initially demo'ed had TCP/IP much faster than LU6.2. He was then told everybody "knows" that LU6.2 is much faster than a "proper" TCP/IP implementation and they would only be paying for a "proper" TCP/IP implementation.

RFC 1044 support posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
dumb terminal posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "Fast-Track" Bureaucrats

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "Fast-Track" Bureaucrats
Date: 19 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#14 IBM "Fast-Track" Bureaucrats
https://www.garlic.com/~lynn/2022e.html#19 IBM "Fast-Track" Bureaucrats

Boyd had a story from when he ran the Lightweight Fighter program in the Pentagon. One day, the 1star (he reported to) came in and found the room in animated technical argument. The 1star claimed it wasn't behavior befitting an officer and called a meeting in a Pentagon auditorium with lots of people, publicly firing Boyd. A week later a USAF 4star called a meeting in the same auditorium with the same people, rehired Boyd, and told the 1star to never do that again.

Boyd also had stories about planning Spinney's Time front page article, including making sure there was written approval for every piece of information; gone behind paywall, but mostly lives free at wayback machine
https://web.archive.org/web/20070320170523/http://www.time.com/time/magazine/article/0,9171,953733,00.html
also
https://content.time.com/time/magazine/article/0,9171,953733,00.html

SECDEF was really angry about the article and wanted to prosecute them for release of classified information (but they were fully covered). SECDEF then directed Boyd transferred to Alaska and banned from the Pentagon for life. At the time, Boyd had cover in congress, and a week later Boyd was invited to the Pentagon and asked what kind of office and furnishings he would like.

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

some recent posts mentioning Spinney's Time article
https://www.garlic.com/~lynn/2021j.html#96 IBM 3278
https://www.garlic.com/~lynn/2021.html#0 IBM "Wild Ducks"
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2019e.html#128 Republicans abandon tradition of whistleblower protection at impeachment hearing
https://www.garlic.com/~lynn/2018d.html#34 Military Reformers
https://www.garlic.com/~lynn/2018b.html#63 Major firms learning to adapt in fight against start-ups: IBM
https://www.garlic.com/~lynn/2018b.html#45 More Guns Do Not Stop More Crimes, Evidence Shows
https://www.garlic.com/~lynn/2018.html#39 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2017h.html#23 This Is How The US Government Destroys The Lives Of Patriotic Whistleblowers
https://www.garlic.com/~lynn/2017c.html#27 Pentagon Blocks Littoral Combat Ship Overrun From a GAO Report
https://www.garlic.com/~lynn/2017b.html#60 Why Does Congress Accept Perpetual Wars?
https://www.garlic.com/~lynn/2016h.html#96 This Is How The US Government Destroys The Lives Of Patriotic Whistleblowers
https://www.garlic.com/~lynn/2016h.html#21 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#18 The Winds of Reform

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "nine-net"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "nine-net"
Date: 19 June 2022
Blog: Facebook
old archived email about a co-worker at SJR getting nine-net
https://www.garlic.com/~lynn/2006j.html#email881216

I had a PC/RT in a (non-IBM) booth at Interop88, across a corner of the central courtyard from the SUN booth ... Case was in the SUN booth doing SNMP and I talked him into installing it on the PC/RT. Trivia: over the weekend into the early hrs, packet floods were crashing the floor nets ... led to some spec in RFC1122.

Since the early 80s, I had the HSDT project, T1 and faster computer links (both terrestrial and satellite), and was working with the NSF director; we were supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released. Preliminary Announcement (28Mar1986):
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics did not allow us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid; the RFP was awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

other trivia: a co-worker at the science center was responsible for the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s), and its technology was also used for the corporate sponsored BITNET
https://en.wikipedia.org/wiki/BITNET

we transfer out to SJR in the late 70s and then he transfers to FSD San Diego. SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional references from Ed's website (Ed passed aug2020)
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "nine-net"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "nine-net"
Date: 19 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#22 IBM "nine-net"

... more than you ever want to know, from a post
https://www.garlic.com/~lynn/2022e.html#20 3270 Trivia
today in an IBM group

folklore is that the motivation for token-ring was 3270 controller-to-terminal interconnect; some customer bldgs were starting to exceed load limits with the weight of 3270 coax runs from datacenters to terminals. The communication group was also fiercely fighting off client/server and distributed computing, trying to protect their dumb terminal paradigm and install base.

Late 80s, a senior disk engineer gets a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance ... but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The communication group had a stranglehold on mainframe datacenters with their corporate strategic responsibility for everything that crossed the datacenter walls and were fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm). The disk division was seeing data fleeing mainframe datacenters to more distributed computing friendly platforms, with drops in disk sales. They had come up with a number of solutions ... which were constantly being vetoed by the communication group.

A couple short yrs later, the company has one of the largest losses ever in US corporate history and was being reorganized into the 13 "baby blues" in preparation for breaking up the company ... behind paywall, but mostly lives free at wayback machine.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company (in my executive exit interview I was told they could have forgiven me for being wrong, but they were never going to forgive me for being right), but get a call from the bowels of Armonk (corporate hdqtrs) about helping with the breakup of the company. Business units were using "MOUs" for supplier contracts in other units ... after the breakup many of the supplier contracts would be in different companies. All the MOUs would have to be cataloged and turned into their own contracts (before we get started, the board brings in a new CEO who reverses the breakup).

The IBM workstation business unit had done their own 4mbit token-ring card (PC/RT had a PC/AT bus). For the (microchannel) RS/6000 they were told that they couldn't do their own cards, but had to use the PS2 microchannel cards, which had been severely performance-kneecapped by the communication group. For instance, the microchannel 16mbit token-ring card had lower per-card throughput than the PC/RT 4mbit token-ring card (a PC/RT 4mbit token-ring server would have higher throughput than an RS/6000 16mbit token-ring server).

The communication group had also fiercely fought off release of mainframe TCP/IP support. When they lost, they changed their tactics and said that since they had corporate strategic ownership of everything that crosses datacenter walls, it had to be released through them. What shipped got aggregate 44kbyte/sec throughput using nearly a whole 3090 processor. I did the changes to support RFC1044 and in some tuning tests at Cray Research between a 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

Later the communication group hires a silicon valley contractor to implement TCP/IP support directly inside VTAM. What he initially demo'ed had TCP/IP much faster than SNA LU6.2. He was then told everybody "knows" that SNA LU6.2 is much faster than a "proper" TCP/IP implementation and they would only be paying for a "proper" TCP/IP implementation.

RFC 1044 support posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
dumb terminal posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "nine-net"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "nine-net"
Date: 20 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#22 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#23 IBM "nine-net"

a co-worker left research/IBM in the early 80s and was doing lots of consulting/contracting work in silicon valley. He did a lot of fortran enhancements, fixes, and optimization for the company selling HSPICE. For a large VLSI shop, he had fixed up and improved the code optimization of a somewhat problematic C compiler port to the mainframe and used it to port a lot of BSD chip tools. One day an IBM marketing guy stopped by and asked him what he was doing. He said he was doing mainframe ethernet support to use SGI as graphic workstation frontends for the mainframe. The marketing guy said he should be doing token-ring instead or the company might not find their mainframe service as timely as in the past. I get a phone call and had to listen to an hour of four letter words. Next morning the senior VP of engineering has a press conference to say they were moving everything off IBM mainframes to SUN servers. Then IBM has a lot of task forces to investigate why silicon valley wasn't using IBM mainframes ... but they weren't allowed to consider some of the fundamental reasons.

The new IBM research bldg in Almaden was heavily provisioned with CAT4, assuming 16mbit token-ring. However, they found that 10mbit ethernet over that CAT4 had higher aggregate network throughput, lower network latency and higher ethernet card throughput (AMD Lance chip) than 16mbit token-ring.

For the RS/6000, the performance-kneecapped microchannel cards weren't just token-ring but also all the rest of the cards: graphics, scsi, etc. Eventually they came out with the RS6000/730 with VMEbus as a workaround to the corporate politics ... able to use high-performance VMEbus workstation cards.

My wife was asked to respond to a gov. request for a super-secure, distributed large campus environment. She wrote in a 3-tier network, super high-speed router backbones, ethernet, etc. We were then doing customer executive presentations of the design (significantly higher performance/throughput at a significantly lower price) and taking lots of arrows in the back from the SNA & token-ring forces (if they didn't like 2tier client/server, they really hated 3tier) ... all misinformation and innuendo since they couldn't argue with any actual data.

misc past posts mentioning 3-tier networking
https://www.garlic.com/~lynn/2021k.html#28 APL
https://www.garlic.com/~lynn/2021d.html#52 IBM Hardest Problem(s)
https://www.garlic.com/~lynn/2021d.html#18 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021c.html#85 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2017i.html#35 IBM Shareholders Need Employee Enthusiasm, Engagemant And Passions
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2017d.html#21 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014d.html#50 Can we logon to TSO witout having TN3270 up ?
https://www.garlic.com/~lynn/2014c.html#21 The PDP-8/e and thread drifT?
https://www.garlic.com/~lynn/2013i.html#17 Should we, as an industry, STOP using the word Mainframe and find (and start using) something more up-to-date
https://www.garlic.com/~lynn/2013b.html#56 Dualcase vs monocase. Was: Article for the boss
https://www.garlic.com/~lynn/2013b.html#34 Ethernet at 40: Its daddy reveals its turbulent youth
https://www.garlic.com/~lynn/2013b.html#31 Ethernet at 40: Its daddy reveals its turbulent youth
https://www.garlic.com/~lynn/2011f.html#33 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011d.html#41 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011c.html#40 Other early NSFNET backbone
https://www.garlic.com/~lynn/2010o.html#4 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2010f.html#57 Handling multicore CPUs; what the competition is thinking
https://www.garlic.com/~lynn/2010d.html#45 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2008s.html#42 Welcome to Rain Matrix: The Cloud Computing Network
https://www.garlic.com/~lynn/2008r.html#47 pc/370
https://www.garlic.com/~lynn/2008r.html#6 What if the computers went back to the '70s too?
https://www.garlic.com/~lynn/2008l.html#10 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008h.html#67 New test attempt
https://www.garlic.com/~lynn/2008e.html#21 MAINFRAME Training with IBM Certification and JOB GUARANTEE
https://www.garlic.com/~lynn/2008d.html#64 Interesting ibm about the myths of the Mainframe
https://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007g.html#76 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006k.html#9 Arpa address

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "nine-net"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "nine-net"
Date: 20 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#22 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#23 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#24 IBM "nine-net"

trivia: my wife is a co-inventor on an early IBM token-passing patent ... I think used in the IBM Series/1 "chat ring".

Before that she was co-author of AWP39, Peer-Coupled Network architecture ... about the same period as SNA was starting to emerge ... had to use "peer-coupled" to differentiate from SNA (System Network Architecture ... which had co-opted "network") ... since SNA wasn't a system, wasn't a network, and wasn't an architecture.

Later she was con'ed into going to POK to be in charge of mainframe loosely-coupled (aka cluster) architecture where she authored Peer-Coupled Shared Data architecture. She didn't remain long because 1) little uptake (until much later with sysplex and parallel sysplex) except for IMS DBMS hot-standby and 2) constant battles with communication group trying to force her into using SNA/VTAM for loosely-coupled control.

The last product we did at IBM was HA/CMP. It started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) off VAXcluster to RS/6000. I rename it HA/CMP (High Availability Cluster Multi-Processing) when I start doing technical/scientific cluster scale-up work with national labs and commercial cluster scale-up work with RDBMS vendors (Ingres, Informix, Oracle, Sybase). They all had VAXcluster support in the same source base as their UNIX support (lots of discussions about how to ease adapting cluster support to unix and improve on the original vaxcluster). Old archived post about the Jan1992 meeting in Ellison's conference room (Oracle CEO) on cluster scale-up ... 16-system cluster by mid-92, 128-system cluster by ye-92
https://www.garlic.com/~lynn/95.html#13

Within a few weeks of the Ellison meeting, cluster scale-up is transferred, announced as an IBM supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later). Possibly contributing was the mainframe DB2 group complaining that if we were allowed to go ahead, it would be years ahead of them.

Other trivia: A decade earlier (Jan1979) I had been con'ed into doing VM/4341 benchmarks for a national lab that was looking at getting 70 of them for a compute farm, sort of the leading edge of the coming cluster supercomputing tsunami. About the same time I had also done some work with Jim Gray and Vera Watson on the original SQL/relational implementation, System/R.

Peer-Coupled Shared Data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
SQL/Relational "System/R" posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "nine-net"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "nine-net"
Date: 20 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#22 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#23 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#24 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#25 IBM "nine-net"

1980, I'm con'ed into doing (mainframe) channel-extender support for STL (now SVL) that was bursting at the seams and moving 300 people from the IMS DBMS group to an offsite bldg (and 300 3270 terminals). They had tried "remote 3270" (19.2kbit telco links) but found the human factors totally unacceptable. Channel-extender support allowed local channel-attached controllers to be placed at the offsite bldg ... with no perceptible difference between offsite and at STL (running my enhanced SJR/VM operating systems with .11sec trivial interactive response). The hardware vendor then tries to get IBM to release my support, but there were some engineers in POK playing with some serial stuff who get it veto'ed (afraid that if it was in the market, it would make it more difficult to release their stuff).

1988, the IBM branch office wants me to help LLNL get some serial stuff they are playing with standardized, which quickly becomes the fibre channel standard (1gbit full-duplex, 2gbit aggregate, 200mbyte/sec). Then in 1990, POK gets their stuff released with ES/9000 as ESCON, when it is already obsolete (17mbyte/sec). Then some POK engineers become involved with FCS and define a heavyweight protocol that radically reduces the throughput, which is eventually released as FICON. The latest published benchmark I can find is the z196 "peak I/O" getting 2M IOPS using 104 FICON (over 104 FCS). About the same time an FCS was announced for E5-2600 blades claiming over a million IOPS (two such FCS getting higher throughput than 104 FICON running over 104 FCS).

Note: STL needed a large number of 3270 channel controllers spread across all the channels with DASD ... but they were (relatively) slow boxes and interfered with DASD throughput. Moving the 3270 channel controllers behind the high-speed channel-extender box improved 370/168 system throughput by 10-15% because of reduced channel busy (for the same amount of 3270 I/O activity). There was talk of using it for in-house 3270 controllers ... not needing the channel-extender function ... but to improve system throughput.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "nine-net"

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "nine-net"
Date: 20 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#22 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#23 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#24 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#25 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#26 IBM "nine-net"

At the 1jan1983 conversion from HOST/IMP protocol to internetworking protocol, there were approx 100 IMP nodes and 255 hosts, at a time when the internal network was rapidly approaching 1000 nodes (the internal network was larger than the arpanet/internet from just about the beginning until sometime mid/late 80s) ... archived post with a list of company locations that added one or more network nodes during 1983:
https://www.garlic.com/~lynn/2006k.html#8

The internal network was somewhat kneecapped because corporate required all links to be encrypted (especially a problem when links crossed national boundaries) and official SNA products topped out at 56kbits/sec. I was starting the HSDT project with T1 and faster computer links (terrestrial and satellite) and had to really be inventive to get T1 (and faster) links as well as the encryptors. One of the 1st long-haul links was a T1 satellite link between the Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston (hudson valley, east coast), which eventually had a whole boatload of Floating Point Systems boxes.
https://en.wikipedia.org/wiki/Floating_Point_Systems

In part because of satellite round-trip latency ... from nearly the first, HSDT did dynamic adaptive rate-based pacing as part of congestion control. In the 2nd half of the 80s, I was on the XTP (Greg Chesson at SGI) technical advisory board and wrote rate-based pacing into the XTP specification.
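
A minimal sketch of the rate-based pacing idea (a toy illustration only, not the HSDT or XTP implementation; the names and adaptation constants are assumptions): instead of waiting a satellite round trip for window acknowledgements, the sender spaces packets at a target rate and adjusts that rate from congestion feedback.

struct pacer {
    double rate_bps;        /* current sending rate, bits per second   */
    double next_send;       /* earliest time the next packet may leave */
};

/* Returns the time at which a packet of pkt_bits may be transmitted,
 * and books the transmission so packets stay evenly spaced. */
double pace_packet(struct pacer *p, double now, int pkt_bits)
{
    double interval = pkt_bits / p->rate_bps;   /* spacing at current rate */
    double t = (now > p->next_send) ? now : p->next_send;
    p->next_send = t + interval;
    return t;
}

/* Adapt the rate from feedback (e.g. measured loss or delay) instead of
 * shrinking a window: speed up gently, slow down quickly on congestion. */
void adapt_rate(struct pacer *p, int congested)
{
    if (congested)
        p->rate_bps *= 0.5;
    else
        p->rate_bps *= 1.05;
}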

TCP/IP traffic only had the transmission issues to deal with ... however Ed's VNET/RSCS used the VM370 spool file system as its store&forward and delivery mechanism. VNET/RSCS had a synchronous 4kbyte buffer interface to the spool file system ... and typically got 5-8 blocks/sec (20-30kbytes/sec) ... not bad with 56kbit links. I needed more like 70 blocks/sec per T1. I did a rewrite of the spool file system in Pascal and moved it out of the kernel to a virtual address space ... with an enormous number of enhancements and asynchronous optimization.

I had earlier done a CMS paged mapped filesystem with a lot of throughput enhancements and tweaked that API for use by VNET/RSCS.

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
paged-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

misc. posts mentioning HSDT spool file system
https://www.garlic.com/~lynn/2022e.html#8 VM Workship ... VM/370 50th birthday
https://www.garlic.com/~lynn/2021j.html#26 Programming Languages in IBM
https://www.garlic.com/~lynn/2021g.html#37 IBM Programming Projects
https://www.garlic.com/~lynn/2013n.html#91 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2012g.html#24 Co-existance of z/OS and z/VM on same DASD farm
https://www.garlic.com/~lynn/2012g.html#23 VM Workshop 2012
https://www.garlic.com/~lynn/2012g.html#18 VM Workshop 2012
https://www.garlic.com/~lynn/2011e.html#25 Multiple Virtual Memory

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "nine-net"

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "nine-net"
Date: 20 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#22 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#23 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#24 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#25 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#26 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#27 IBM "nine-net"

back to a little internet lore; after leaving IBM we were brought into a small client/server startup as consultants. Two of the former Oracle people (that had been in Ellison's meetings) were there responsible for something called "commerce server", and they wanted to do payment transactions on the server. The startup had also invented this technology they called "SSL" they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for the payment gateway that talked to the financial payment networks and for the internet connections between commerce servers and payment gateway. I was planning on doing "high availability" ... but the backbone was transitioning to hierarchical routing and the servers had to have multiple connections into different places in the backbone using "multiple A-records".

I gave classes to the browser people on multiple A-records but they said it was too complex, even when shown client examples from 4.3 reno/tahoe client source. I made a snide remark that if it wasn't in Stevens' book, they didn't do it ... it took another year. One of the early adopters was a large sporting goods company that advertised during sunday football half time ... but this was when service providers still had rolling shutdowns on sundays for maintenance ... even tho their servers also had multiple connections into different parts of the backbone ... w/o multiple A-record support there would still be "black-outs" ... it took another year to get browser multiple A-record support.
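
The client-side idea is simple; a minimal C sketch along the lines of the old BSD-style client code (not the actual reno/tahoe or browser source; host name and port below are just examples): walk every address DNS returns for the name and try connect() until one answers.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* try every A-record for host until a connect() succeeds */
int connect_any(const char *host, unsigned short port)
{
    struct hostent *hp = gethostbyname(host);
    if (hp == NULL) return -1;
    for (char **ap = hp->h_addr_list; *ap != NULL; ap++) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) return -1;
        struct sockaddr_in sin;
        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        memcpy(&sin.sin_addr, *ap, hp->h_length);
        if (connect(s, (struct sockaddr *)&sin, sizeof sin) == 0)
            return s;      /* first reachable address wins */
        close(s);          /* that address is down/unreachable, try the next */
    }
    return -1;
}

int main(void)
{
    int s = connect_any("www.example.com", 80);   /* hypothetical host */
    printf(s < 0 ? "no address reachable\n" : "connected\n");
    if (s >= 0) close(s);
    return 0;
}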

An early scale-up problem was that http/https used TCP sessions ... which have a minimum 7-packet exchange, and implementations had a linear scan of the FINWAIT (session close) list for every incoming packet ... under load, webservers were quickly hitting 95% utilization scanning the FINWAIT list. Finally the startup (for their own use) installed a multiprocessor Sequent server ... DYNIX having recently gotten a fix for dealing with long FINWAIT lists. It took another six months before starting to see other platforms with fixes for the FINWAIT scanning problem.
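
As a rough worked example (illustrative numbers only, not measurements from that server): 500 HTTP hits/sec, each leaving its closed connection lingering on the FINWAIT list for a minute, builds a list of ~30,000 entries; at a minimum of 7 packets per HTTP transaction that's ~3,500 incoming packets/sec, each one triggering a linear scan of those ~30,000 entries ... on the order of 100 million control-block compares/sec, which is where the 95% utilization was going.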

Another issue was that trouble desks at the payment networks had a 5min first-level problem determination criterion for connectivity problems with payment card machines ... but it was heavily dependent on the circuit paradigm ... I had to do a whole lot of documentation, software, and troubleshooting work to get anywhere close to 5mins for the packet environment. Postel sponsored my talk on "Why The Internet Isn't Business Critical Dataprocessing" based on all the compensating work I had to develop.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

posts mentioning "Why The Internet Isn't Business Critical Dataprocessing"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#108 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017j.html#42 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2017e.html#47 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2017d.html#92 Old hardware

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "nine-net"

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "nine-net"
Date: 20 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#22 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#23 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#24 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#25 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#26 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#27 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"

trivia: VNET/RSCS used the vm370 spool file system (delivering simulated card & printer images). I was talking to some of the people at the internal network "backbone" nodes, which were starting to face a spool bottleneck because of the number of 56kbit links a backbone node would have. I was getting ready to give a presentation on my rewrite at the next corporate backbone meeting when I got email that corporate backbone meetings were going to be restricted to management only ... the SNA group was generating all sorts of misinformation about how the internal network would collapse if it wasn't converted to SNA (and they apparently didn't want any technical people at the meetings to contradict their claims ... as well as anything else on the agenda besides the conversion to SNA).

some email about SNA misinformation:
https://www.garlic.com/~lynn/2006x.html#email870302
https://www.garlic.com/~lynn/2011.html#email870306

they also had been generating a lot of misinformation about how SNA/VTAM could be used for NSFNET; somebody had collected a lot of the email and forwarded it to me ... heavily redacted and clipped (to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109

posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
posts mentioning NSFNET
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

The "Animal Spirits of Capitalism" Are Devouring Us

From: Lynn Wheeler <lynn@garlic.com>
Subject: The "Animal Spirits of Capitalism" Are Devouring Us
Date: 22 June 2022
Blog: Facebook
The "Animal Spirits of Capitalism" Are Devouring Us. The chase for profits is killing journalism--and a lot of other public goods.
https://www.motherjones.com/media/2022/06/the-animal-spirits-of-capitalism/

It's called "private" equity for a reason

Unless you closely read the finance pages, private equity is mostly hidden from view, but its effect on our lives can be more dramatic than what Congress does. Nearly 1 in 14 Americans now works for a company controlled by private equity. PE investors might own the building where you live, the daycare your toddler attends, the nursing home that cares for your mother, the pet store where you pick up kibble. And they are squeezing the lifeblood out of all of them. As Hannah reports,


.... snip ...

... note: the industry had gotten such a bad name in the 80s S&L crisis that they changed their name to Private Equity and "junk" bonds became "high-yield" bonds.

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
S&L crisis
https://www.garlic.com/~lynn/submisc.html#s&l.crisis

--
virtualization experience starting Jan1968, online at home since Mar1970

Technology Flashback

From: Lynn Wheeler <lynn@garlic.com>
Subject: Technology Flashback
Date: 22 June 2022
Blog: Facebook
360/370 "IPL" read 24 bytes, PSW at zero and two channel commands at +8; IPL I/O would "TIC" (branch +8) to first channel command read to continue I/O execution. The channel commands would read more data, either additional channel commands (to continue the I/O) and/or instructions. When I/O eventually finishes, it would "load" the initial IPL PSW (at zero which presumably branches to loaded instructions). For operating system problems, "PSW Restart" would store the current PSW at +8 and LPSW at zero ... typically used if operating system was misbehaving and setup to do some sort of diagnostic function.

I had taken a two credit hr intro to fortran/computers; at the end of the semester I got a programming job to re-implement 1401 MPIO on a 360/30. The Univ. had a 709/1401 and was sold a 360/67 for TSS/360 to replace the 709/1401 (709 tape->tape, 1401 tape<->unit record front end for the 709) ... pending availability of the 360/67, the 1401 was replaced with a 360/30 (for the univ. to gain some 360 experience; the 360/30 did have 1401 simulation and could directly execute 1401 programs). The univ. would shutdown the datacenter on weekends and I had the whole place to myself for 48hrs straight (although 48hrs w/o sleep could make monday morning classes hard). I was given a bunch of 360 hardware and software documents to study and within a few weeks had a 2000-card 360 assembler program that implemented the 1401 MPIO function (I got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc).

Within a year of taking the intro course, when the 360/67 came in, I was hired fulltime responsible for OS/360 (tss/360 never really came to production fruition, so it ran as a 360/65 with os/360). The first sysgen I did was MFT release 9.5. Student fortran jobs had run in less than a second on the 709 tape->tape. Initially on the 360/65 they took over a minute. I installed HASP and that cut the time in half. Then for MFT release 11, I redid the SYSGEN to carefully place datasets and PDS members for optimized arm seek and multi-track search, cutting student fortran time by another 2/3rds to avg 12.9 secs. Never beat the 709 until I got Univ. of Waterloo WATFOR.

The Univ. library got an ONR grant to do an online catalog and part of the money went for a 2321 datacell. The project was also selected to be betatest for the original CICS product and debugging CICS was added to my list. First problem was it wouldn't start; turns out that CICS had some hardcoded, undocumented BDAM options and the library had built their BDAM datasets with a different set of options.

CICS &/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

Then before I graduated, I was hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the data, including offering services to non-Boeing entities). I thot the Renton datacenter was possibly the largest in the world, a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly being staged in hallways around the machine room. Lots of politics between the Renton manager and the CFO, who only had a 360/30 for payroll up at Boeing field (although they enlarged the machine room to install a 360/67 for me to play with when I wasn't doing other stuff).

Trivia: both Boeing people and the IBM Boeing team told the tale that at the 360 announcement, Boeing walked into the IBM marketing rep's office and ordered lots of 360s (the rep hardly knew what a 360 was). This was when IBM sales was still on commission and the marketing rep was the highest paid employee that year. The next year, IBM converts to "quota" ... but the marketing rep makes his year's quota by the end of Jan. (on another big Boeing order). IBM recalculates his quota; he leaves IBM.

Many years later at IBM, I was doing some work with Jim Gray and Vera Watson on the original SQL/relational, System/R. Then when Jim leaves for Tandem, he palms off a bunch of stuff on me ... including DBMS consulting with the IMS group.

posts mentioning original SQL/relational, System/R
https://www.garlic.com/~lynn/submain.html#systemr

Random trivia: a decade ago, I was asked to track down the IBM decision to change all 370s to virtual memory. Found a technical assistant that reported to the deciding executive. Basically OS/360(370) MVT storage management was so bad that regions typically had to be four times larger than actually used ... as a result, a standard 1mbyte 370/165 would only have four regions (insufficient to keep the machine utilized/justified). Moving to 16mbyte virtual memory could increase the number of (concurrently running) regions by a factor of four with little or no paging.

I had done a lot of OS/360 and CP67/CMS work as an undergraduate. Along the way I implemented 2741 & tty ascii terminal support and an interactive editor supporting the CMS edit syntax ... in HASP (HASP programming conventions were totally different than CMS ... so the editor was from scratch) ... which I thot was better than IBM's CRJE ... running on OS MVT release 18.

posts mentioning HASP, JES, and/or NJE
https://www.garlic.com/~lynn/submain.html#hasp

CPS trivia: After graduating I joined the IBM science center on the 4th flr (CP40/CMS, CP67/CMS, GML, internal network, technology for univ BITNET, etc). The IBM Boston Programming Center was on the 3rd flr and was responsible for CPS. When some of the CP67/CMS people split off from the science center (for the vm370 development group), they initially moved to the 3rd flr and took over most of the people there (who even did a CPS port to CMS). Some amount of CPS was subcontracted out
http://www.bitsavers.org/pdf/allen-babcock/cps/
http://www.bitsavers.org/pdf/allen-babcock/cps/CPS_Progress_Report_may66.pdf

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

other recent posts mentioning 1401 MPIO
https://www.garlic.com/~lynn/2022e.html#0 IBM Quota
https://www.garlic.com/~lynn/2022d.html#87 Punch Cards
https://www.garlic.com/~lynn/2022d.html#45 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#35 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#35 Error Handling
https://www.garlic.com/~lynn/2022.html#26 Is this group only about older computers?
https://www.garlic.com/~lynn/2022.html#1 LLMPS, MPIO, DEBE
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021e.html#44 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021e.html#43 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021e.html#38 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021e.html#27 Learning EBCDIC
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#27 DEBE?
https://www.garlic.com/~lynn/2021.html#81 Keypunch
https://www.garlic.com/~lynn/2021.html#61 Mainframe IPL
https://www.garlic.com/~lynn/2021.html#41 CADAM & Catia
https://www.garlic.com/~lynn/2020.html#32 IBM TSS

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 37x5 Boxes

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 37x5 Boxes
Date: 23 June 2022
Blog: Facebook
Some of the people at the Cambridge Science Center unsuccessfully tried to talk CPD into using the (Series/1) Peachtree processor rather than the horribly anemic UC processor.

Later in the 80s, the IBM branch office for one of the baby bells talked me into (trying to) turn the NCP/VTAM emulator that they had implemented on Series/1 clusters into an IBM type-1 product. However, the communication group was notorious for corporate political tricks ... so lots of IBMers tried to build countermeasures for everything the SNA/VTAM forces might come up with.

I took detailed live traffic information from a baby bell with greater than 64k terminals ... ran it through the communication group's HONE 3725 configurator and presented the comparison at the Fall 1986 SNA Architecture Review Board meeting in Raleigh. The tech people thought it was really great, but the response I got from the director was wanting to know who authorized me to present. Part of the presentation is in a post archived here:
https://www.garlic.com/~lynn/99.html#67

also part of a presentation one of the baby bell people gave at the spring '86 COMMON (IBM user group) meeting
https://www.garlic.com/~lynn/99.html#70

Note: the communication group constantly disputed the comparison ... but they never were able to say why, since the baby bell data was from live operation and the equivalent 3725 information came from the communication group's own HONE configurator. What the communication group then did to torpedo the project can only be described as truth is stranger than fiction.

trivia: as an undergraduate in the 60s, the univ. had hired me fulltime responsible for OS/360 (IBM originally sold the 360/67 to replace the 709/1401 for TSS/360, but that never came to production fruition so it ran as a 360/65 with OS/360). The Univ. shut down the datacenter on weekends and I would have the place dedicated to myself for 48hrs straight (although 48hrs w/o sleep could make Monday classes hard). The Science Center came out and installed CP67/CMS (3rd installation after the science center itself and MIT Lincoln Labs) and it was mostly limited to my weekend use (rewriting large amounts of CP67&CMS code) except for periodic evening demo sessions. CP67 came with 1052&2741 support with automagic terminal recognition (switching port scanner type with the communication controller SAD CCW). The Univ. had some number of ASCII TTY terminals ... so I added TTY terminal support (integrated with the automagic terminal type and port scanner type switching). Trivia: the TTY port scanner "upgrade" arrived in a Heathkit box. I then wanted to have a single dial-in number ("hunt group")
https://en.wikipedia.org/wiki/Line_hunting
which didn't quite work since IBM had taken a shortcut and hard-wired the line speed for each port.
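
A conceptual sketch of the "automagic" recognition loop (all the helper functions here are hypothetical stubs, not CP67 code): try each candidate line-scanner type on the port, via the controller SAD command, until the terminal answers sensibly. The same trick couldn't be played with line speed, which was hard-wired per port.

#include <stdio.h>

enum term_type { TERM_1052, TERM_2741, TERM_TTY, TERM_UNKNOWN };

/* hypothetical stubs standing in for the real controller I/O */
static void select_scanner(int port, enum term_type t) { (void)port; (void)t; }
static int  probe_terminal(int port) { return port == 1; /* pretend port 1 answers */ }

static enum term_type identify_terminal(int port)
{
    const enum term_type candidates[] = { TERM_1052, TERM_2741, TERM_TTY };
    for (int i = 0; i < 3; i++) {
        select_scanner(port, candidates[i]);  /* e.g. issue the SAD CCW to switch scanner type */
        if (probe_terminal(port))             /* write a prompt, see if the response makes sense */
            return candidates[i];
    }
    return TERM_UNKNOWN;
}

int main(void)
{
    printf("port 1 recognized as type %d\n", identify_terminal(1));
    return 0;
}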

Thus was born the univ project to build our own clone controller ... building a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition of automatic line-speed support. Later it was enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin-Elmer) sold it commercially as an IBM clone controller. Four of us at the univ. got written up as responsible for (some part of) the clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 37x5 Boxes

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 37x5 Boxes
Date: 23 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#32 IBM 37x5 Boxes

The IBM communication group was generating massive amounts of misinformation as part of a fierce battle fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm and install base. misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
... including that their standard 37x5 boxes didn't support links faster than 56kbit/sec.

In the mid-80s, this included a presentation to the executive committee that customers weren't interested in 1.5mbit/T1 links, at least until sometime well into the 90s. They showed a study of 37x5 "fat links", multiple parallel 56kbit links treated as a single logical link ... the number of customers with 2, 3, 4, ... etc parallel links ... dropping to zero around six links. What they didn't know, or avoided reporting, was that at the time the typical T1/1.5mbit telco tariff was about the same as 5 or 6 56kbit links ... customers just jumped to full T1s supported by non-IBM hardware.
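
Rough arithmetic: a full T1 is 1.544mbits/sec, about the capacity of 24 64kbit circuits (or roughly 27 56kbit links), so once the tariff for a whole T1 was about the same as 5 or 6 56kbit links, jumping to a full T1 on non-IBM hardware was an easy call.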

I had the HSDT project starting in the early 80s with T1 and faster links (both terrestrial and satellite) ... and was having some hardware built on the other side of the Pacific. The Friday before a visit ... I got an announcement email from Raleigh about a new internal online forum with the following definitions:

low-speed: 9.6kbits/sec,
medium speed: 19.2kbits/sec,
high-speed: 56kbits/sec,
very high-speed: 1.5mbits/sec


Monday morning, on the wall of a conference room on the other side of the Pacific, there were these definitions:

low-speed: <20mbits/sec,
medium speed: 100mbits/sec,
high-speed: 200mbits-300mbits/sec,
very high-speed: >600mbits/sec


Finally, the communication group was backed into a corner and came out with the 3737 for T1. The issue was that VTAM had a limited "window" pacing algorithm that shut down transmission when the window was exhausted, waiting until it started receiving replies that data had arrived. Even on a shorthaul T1/1.5mbit terrestrial link, the VTAM window was almost immediately exhausted ... so the link would spend the majority of the time idle with no transmission (a rough illustration follows the email references below). Trying to mask the problem, the 3737 had a boatload of M68k processors and a whole boatload of memory running a mini-VTAM. The 3737 was defined as a CTCA, with the 3737 immediately telling the host VTAM that the data had arrived (even before it had been transmitted) ... trying to encourage the host VTAM to keep sending data. Even with all the processing and memory, the 3737 was limited to about 2mbits/sec aggregate (US T1 is 1.5mbit/sec full-duplex, 3mbit/sec aggregate; EU T1 is 2mbit/sec full-duplex, 4mbit/sec aggregate). Some old 3737 email:
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2018f.html#email880715
https://www.garlic.com/~lynn/2011g.html#email881005
related
https://www.garlic.com/~lynn/2018f.html#email870725
earlier email
https://www.garlic.com/~lynn/2018f.html#email840606
https://www.garlic.com/~lynn/2018f.html#email840606b
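
To put some (purely illustrative) numbers on the window-exhaustion problem above: a link can only be kept busy if the pacing window covers the bandwidth*round-trip-delay product. The sketch below uses made-up values for the window and round trip, not VTAM's actual ones.

#include <stdio.h>

int main(void)
{
    double rate_bps = 1544000.0;        /* US T1 */
    double rtt_sec  = 0.060;            /* hypothetical shorthaul RTT incl. host turnaround */
    double window   = 7.0 * 256 * 8;    /* hypothetical: 7 outstanding 256-byte RUs, in bits */

    double in_flight = rate_bps * rtt_sec;   /* bits needed "in flight" to keep the link full */
    double util = window / in_flight;
    if (util > 1.0) util = 1.0;

    printf("bandwidth*delay = %.0f bits, window = %.0f bits\n", in_flight, window);
    printf("best-case link utilization ~%.0f%%\n", util * 100.0);
    return 0;
}

With those made-up numbers the link tops out around 15% busy no matter how fast the boxes at each end are ... which is the hole the 3737's early-acknowledgement trick was trying to paper over.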

Part of above reference is to early HSDT link, satellite T1 between Los Gatos and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston (hudson valley, east coast), which eventually had a whole boatload of Floating Point Systems boxes.
https://en.wikipedia.org/wiki/Floating_Point_Systems

I was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cut the budget, some other things happened, and eventually an RFP was released. Preliminary Announcement (28Mar1986):
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

IBM internal politics did not allow us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet.
https://www.technologyreview.com/s/401444/grid-computing/

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 37x5 Boxes

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 37x5 Boxes
Date: 24 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#32 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#33 IBM 37x5 Boxes

SNA/VTAM didn't have any routing until APPN showed up.

For a while when I was doing HSDT, I reported to the same executive that the person responsible for AWP164/APPN reported to. I would periodically needle him to come work on real networking (TCP/IP) ... since the SNA/VTAM people weren't going to appreciate what he was doing. In fact, the SNA/VTAM group non-concurred with the announcement of APPN. It wasn't announced until after the announcement letter was redone so there was NO implication that SNA & APPN were in any way related. It wasn't until much later that they would describe APPN as "SNA Advanced Peer-to-Peer Networking", with references to LU7 and LU8.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

AWP164/APPN posts
https://www.garlic.com/~lynn/2017d.html#29 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2015h.html#99 Systems thinking--still in short supply
https://www.garlic.com/~lynn/2014e.html#15 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014.html#99 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#26 SNA vs TCP/IP
https://www.garlic.com/~lynn/2013j.html#66 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2013g.html#44 What Makes code storage management so cool?
https://www.garlic.com/~lynn/2012o.html#52 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012k.html#68 ESCON
https://www.garlic.com/~lynn/2012c.html#41 Where are all the old tech workers?
https://www.garlic.com/~lynn/2011l.html#26 computer bootlaces
https://www.garlic.com/~lynn/2010q.html#73 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010e.html#5 What is a Server?
https://www.garlic.com/~lynn/2010d.html#62 LPARs: More or Less?
https://www.garlic.com/~lynn/2009q.html#83 Small Server Mob Advantage
https://www.garlic.com/~lynn/2009l.html#3 VTAM security issue
https://www.garlic.com/~lynn/2009i.html#26 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009e.html#56 When did "client server" become part of the language?
https://www.garlic.com/~lynn/2008d.html#71 Interesting ibm about the myths of the Mainframe
https://www.garlic.com/~lynn/2008b.html#42 windows time service
https://www.garlic.com/~lynn/2007r.html#10 IBM System/3 & 3277-1
https://www.garlic.com/~lynn/2007q.html#46 Are there tasks that don't play by WLM's rules
https://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
https://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications
https://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
https://www.garlic.com/~lynn/2007b.html#49 6400 impact printer
https://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 37x5 Boxes

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 37x5 Boxes
Date: 24 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#32 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#33 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#34 IBM 37x5 Boxes

early/mid 80s, majority of IBM revenue still came from mainframe hardware.

late 80s, a senior disk engineer gets a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance ... but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division (including pointing a finger at the head of the communication group). The communication group had a stranglehold on mainframe datacenters with their corporate strategic responsibility for everything that crossed the datacenter wall and were fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm). The disk division was seeing data fleeing mainframe datacenters to more distributed-computing-friendly platforms, with drops in disk sales. They had come up with a number of solutions ... which were constantly being vetoed by the communication group.

The GPD/ADSTAR VP of software was trying to get around corporate politics by investing in distributed computing startups and funding MVS Posix support (aka OMVS, didn't actually cross datacenter walls, but made it easier for startups to integrate mainframe into distributed computing environment). The software VP would periodically have us in to talk about his investments and ask us to stop by and provide any assistance.

As frequently mentioned, a couple short yrs later, the company had one of the largest losses ever in US corporate history (the stranglehold was cutting all mainframe revenue, not just disks) and was being reorganized into the 13 "baby blues" in preparation for breaking up the company ... article behind paywall, but mostly lives free at the wayback machine.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left the company (in my executive exit interview I was told they could have forgiven me for being wrong, but they were never going to forgive me for being right) but got a call from the bowels of corporate hdqtrs/Armonk about helping with the breakup of the company. Business units were using "MOUs" for supplier contracts with other units ... after the breakup many of the supplier contracts would be in different companies. All the MOUs would have to be cataloged and turned into their own contracts (before we got started, the board brought in a new CEO who reversed the breakup).

Turn of the century, reports were that mainframe hardware was a few percent of revenue and declining. EC12 time-frame, reports were that mainframe hardware was a couple percent of revenue (and still declining) but mainframe group was 25% of revenue (and 40% of profit) ... aka software&services.

communication group dumb terminal posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23June1969 Unbundle

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23June1969 Unbundle
Date: 24 June 2022
Blog: Facebook
After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters ... and HONE was a long-time customer back to CP67 days.

The 23jun1969 unbundling announcement started charging for SE services, (application) software (IBM managed to make the case that kernel software was still free), maint., etc. The online sales&marketing support HONE system originally started out in the wake of the 23Jun1969 unbundling. Part of SE training had been time in an SE group onsite at the customer; with unbundling, they couldn't figure out how NOT to charge for trainee SE time at the customer site. HONE started out with SEs practicing with guest operating systems running in CP67 virtual machines. The Science center had also ported APL\360 to CP67/CMS as CMS\APL and redone storage management so it could go from 16kbyte workspaces to large virtual memory (up to 16mbyte) demand-paged workspaces ... and also had done APIs for system services ... like file I/O. HONE then started offering CMS\APL-based sales&marketing support applications ... which eventually came to dominate all HONE activity (and SEs practicing with guest virtual machines evaporated).

a little drift on 23jun1969 unbundling

When Big Blue Went to War: A History of the IBM Corporation's Mission in Southeast Asia During the Vietnam War (1965-1975)
https://www.amazon.com/When-Big-Blue-Went-War-ebook/dp/B07923TFH5/
loc4695-4700:

Why IBM insisted on making us un-bundle in a war zone I never did understand. Yes, we were a part of the Data Processing Division, but an exception in a war zone could have been made if anyone higher up had argued the case. That policy change caused me to convert, overnight, four or five especially talented Systems Engineers to Marketing Representatives, because, according to the new IBM rules, SEs could not be on customer premises without billing for their time but our Marketing Reps could come and go as they pleased. We suddenly had a few new technical salesmen who continued to teach COBOL and FORTRAN as needed during their new, perhaps unwelcome and temporary careers.

... snip ...

In the first half of the 70s, IBM started the Future System project, completely different from 360/370 and intended to completely replace 370. Internal politics was shutting down 370 projects ... and the lack of new 370 products is credited with giving the clone 370 makers their market foothold. When FS imploded there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. More FS background:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

I had continued to work on 360/370 stuff all during FS and periodically ridiculed their efforts (which wasn't exactly a career-enhancing activity). With the rise of the 370 clone makers, the decision was changed to start charging for kernel (operating system) software ... and some of my stuff was selected to be released as the charged/priced kernel add-on guinea pig (and I had to spend time with business planners and lawyers on kernel software pricing policy).

23June1969 Unbundle posts
https://www.garlic.com/~lynn/submain.html#unbundle
Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
Future Systems posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 37x5 Boxes

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 37x5 Boxes
Date: 24 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#32 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#33 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#34 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#35 IBM 37x5 Boxes

A co-worker at the Cambridge Science Center was responsible for the (non-SNA) internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s) ... he also was one of the people that tried to get CPD to use the Series/1 (significantly better) Peachtree processor for the 37x5 (rather than the UC). We transferred to San Jose Research in the late 70s; he then transferred to FSD San Diego and then left IBM (passed away Aug2020).

"It's Cool to Be Clever: The Story of Edson C. Hendricks, the Genius Who Invented the Design for the Internet"
https://www.amazon.com/Its-Cool-Be-Clever-Hendricks/dp/1897435630/
Edson Hendricks
https://en.wikipedia.org/wiki/Edson_Hendricks
SJMerc article about Edson and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional references from Ed's website (Ed passed aug2020)
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

list of some of the correspondence (from above):

030481_2.gif -- images of pages one and two of the letter sent by Mike Engel to IBM CEO John Opel, requesting a formal review ("Open Door") of IBM's decision to cancel our Internet initiative.
051581.gif -- an image of Mike Engel' follow-up letter to the above, complaining that IBM was being unresponsive and requesting a personal meeting with Mr. Opel to discuss the importance of the emerging Internet to IBM.
051981a.gif -- an image of John Opel's response to Mike Engel's first letter, supporting the decision to terminate our "VNET/ARPANET" project due to lack of "business potential."
060381.gif -- an image of John Opel's response to Mike Engel's second letter, denying any further review and refusing to meet in person to discuss the matter, technically violating IBM's Open Door Policy as loudly proclaimed by IBM back then.
031981.gif -- Dale Johnson's request to Thomas J. Watson, Jr.(who was still on IBM's board at that time), requesting his review of IBM's decision-making in canceling our effort to join IBM with the emerging Internet.
051981b.gif -- A rejection letter to Dale Johnson from John Opel (making three) in response to his request for review to Thomas J. Watson, Jr., above.
091181_2.gif -- pages one and two of John Opel's "Management Briefing" giving glossy lip service to his and IBM's claim of management excellence.


... snip ...

I've already commented quite a bit in this post about my HSDT project and other dealings with the communication group. The HSDT spool file system rewrite was for RSCS/VNET, which used the VM370 spool file system with a synchronous diagnose to access 4k blocked disk records. On a moderately loaded VM370, RSCS/VNET might get only 5-8 4k records/sec (aggregate, both read&write). I needed 70 4k records/sec for just one T1 link. I implemented a (replacement) VM370 spool file system in Pascal running in a virtual address space.

I was talking to the corporate backbone people; some of the backbone systems were starting to feel the problem as the number of 56kbit/sec links increased. I was getting ready to give a talk at the next backbone meeting. Then I got email that corporate backbone meetings were going to be restricted to managers only. The communication group had started spreading misinformation about how the internal network would collapse if it wasn't converted to SNA (and they apparently didn't want any technical people at the meetings to contradict their claims ... as well as anything else on the agenda besides the conversion to SNA). Some email about SNA misinformation:

https://www.garlic.com/~lynn/2006x.html#email870302
https://www.garlic.com/~lynn/2011.html#email870306

communication group (and other) executives also were spreading all sorts of misinformation about how SNA products could be used for NSFnet ... somebody collected lots of that misinformation email and forwarded it to us ... old archived post with the email, heavily redacted and clipped to protect the guilty
https://www.garlic.com/~lynn/2006w.html#email870109

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Wall Street's Plot to Seize the White House

From: Lynn Wheeler <lynn@garlic.com>
Subject: Wall Street's Plot to Seize the White House
Date: 24 June 2022
Blog: Facebook
Wall Street's Plot to Seize the White House: Facing the Corporate Roots of American Fascism
http://coat.ncf.ca/our_magazine/links/53/53-index.html
The Fascist Plot to Seize Washington
http://coat.ncf.ca/our_magazine/links/53/Plot1.html
The American Liberty League
http://coat.ncf.ca/our_magazine/links/53/all-both.html

recent posts mentioning US banks/businesses supporting Fascists/Nazis/Hitler:
https://www.garlic.com/~lynn/2022c.html#113 The New New Right Was Forged in Greed and White Backlash
https://www.garlic.com/~lynn/2021k.html#7 The COVID Supply Chain Breakdown Can Be Traced to Capitalist Globalization
https://www.garlic.com/~lynn/2021j.html#80 "The Spoils of War": How Profits Rather Than Empire Define Success for the Pentagon
https://www.garlic.com/~lynn/2021j.html#72 In U.S., Far More Support Than Oppose Separation of Church and State
https://www.garlic.com/~lynn/2021j.html#20 Trashing the planet and hiding the money isn't a perversion of capitalism. It is capitalism
https://www.garlic.com/~lynn/2021i.html#59 The Uproar Ovear the "Ultimate American Bible"
https://www.garlic.com/~lynn/2021f.html#46 Under God
https://www.garlic.com/~lynn/2021d.html#11 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2021.html#33 Fascism
https://www.garlic.com/~lynn/2020.html#0 The modern education system was designed to teach future factory workers to be "punctual, docile, and sober"

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
racism posts
https://www.garlic.com/~lynn/submisc.html#racism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war

Smedley Butler
https://en.wikipedia.org/wiki/Smedley_Butler
Business Plot
https://en.wikipedia.org/wiki/Business_Plot
War Is a Racket
https://en.wikipedia.org/wiki/War_Is_a_Racket
War profiteering
https://en.wikipedia.org/wiki/War_profiteering
"War Is a Racket" wiki includes reference to: Perpetual war
https://en.wikipedia.org/wiki/Perpetual_war

some past posts mentioning Smedley Butler
https://www.garlic.com/~lynn/2022.html#51 Haiti, Smedley Butler, and the Rise of American Empire
https://www.garlic.com/~lynn/2022.html#9 Capitol rioters' tears, remorse don't spare them from jail
https://www.garlic.com/~lynn/2021j.html#104 Who Knew ?
https://www.garlic.com/~lynn/2021i.html#56 "We are on the way to a right-wing coup:" Milley secured Nuclear Codes, Allayed China fears of Trump Strike
https://www.garlic.com/~lynn/2021i.html#54 The Kill Chain
https://www.garlic.com/~lynn/2021i.html#37 9/11 and the Saudi Connection. Mounting evidence supports allegations that Saudi Arabia helped fund the 9/11 attacks
https://www.garlic.com/~lynn/2021i.html#33 Afghanistan's Corruption Was Made in America
https://www.garlic.com/~lynn/2021h.html#101 The War in Afghanistan Is What Happens When McKinsey Types Run Everything
https://www.garlic.com/~lynn/2021h.html#96 The War in Afghanistan Is What Happens When McKinsey Types Run Everything
https://www.garlic.com/~lynn/2021h.html#38 $10,000 Invested in Defense Stocks When Afghanistan War Began Now Worth Almost $100,000
https://www.garlic.com/~lynn/2021g.html#67 Does America Like Losing Wars?
https://www.garlic.com/~lynn/2021g.html#50 Who Authorized America's Wars? And Why They Never End
https://www.garlic.com/~lynn/2021g.html#22 What America Didn't Understand About Its Longest War
https://www.garlic.com/~lynn/2021f.html#80 After WW2, US Antifa come home
https://www.garlic.com/~lynn/2021f.html#21 A People's Guide to the War Industry
https://www.garlic.com/~lynn/2021c.html#96 How Ike Led
https://www.garlic.com/~lynn/2021b.html#91 American Nazis Rally in New York City
https://www.garlic.com/~lynn/2021.html#66 Democracy is a threat to white supremacy--and that is the cause of America's crisis
https://www.garlic.com/~lynn/2021.html#32 Fascism
https://www.garlic.com/~lynn/2019e.html#145 The Plots Against the President
https://www.garlic.com/~lynn/2019e.html#112 When The Bankers Plotted To Overthrow FDR
https://www.garlic.com/~lynn/2019e.html#107 The Great Scandal: Christianity's Role in the Rise of the Nazis
https://www.garlic.com/~lynn/2019e.html#106 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#91 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#69 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019e.html#63 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019c.html#36 Is America A Christian Nation?
https://www.garlic.com/~lynn/2019c.html#17 Family of Secrets
https://www.garlic.com/~lynn/2017f.html#41 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017e.html#105 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017e.html#60 The Illusion Of Victory: America In World War I
https://www.garlic.com/~lynn/2017e.html#23 Ironic old "fortune"
https://www.garlic.com/~lynn/2016h.html#69 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#38 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#11 Smedley Butler
https://www.garlic.com/~lynn/2016h.html#3 Smedley Butler
https://www.garlic.com/~lynn/2016h.html#2 Smedley Butler
https://www.garlic.com/~lynn/2016c.html#79 Qbasic
https://www.garlic.com/~lynn/2016b.html#39 Failure as a Way of Life; The logic of lost wars and military-industrial boondoggles
https://www.garlic.com/~lynn/2016b.html#31 Putin holds phone call with Obama, urges better defense cooperation in fight against ISIS
https://www.garlic.com/~lynn/2016.html#31 I Feel Old
https://www.garlic.com/~lynn/2015g.html#3 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015c.html#13 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2012l.html#58 Singer Cartons of Punch Cards

--
virtualization experience starting Jan1968, online at home since Mar1970

Single Loop Thinking: Non-Reflective Military Cycles of ENDS and MEANS

From: Lynn Wheeler <lynn@garlic.com>
Subject: Single Loop Thinking: Non-Reflective Military Cycles of ENDS and MEANS
Date: 25 June 2022
Blog: Facebook
Single Loop Thinking: Non-Reflective Military Cycles of ENDS and MEANS
https://benzweibelson.medium.com/single-loop-thinking-non-reflective-military-cycles-of-ends-and-means-c7643fe51290

Systematic logic seeks to break things down (reductionism) into inputs linked to outputs, or where 'A plus B leads to C' in a reliable, uniform, repeatable and verifiable manner. An institution curates such systematic constructs formulaically so that users in the future can refer to an increasing stockpile of solutions paired with historical problems; we become armed with solutions searching along our paths in reality for possible matches to emerging problems in our way.[3] "Single loop learners are task oriented, oriented exclusively to identifying the best means to meet their defined ends...[s]ingle loop learners are isolationist in this way."[4] This elevation of 'goal-rational orientation' suggests a fixation on goals/ENDS where everything is reduced to a means-end calculation. Rutgers criticizes this logic as it "disguises how and by whom the goals in question are to be established and which values underlie them."[5] Single-loop thinking prevents any operator inquiry into those values as they violate the closed, single-loop cycle. An illustration of how modern militaries engage in 'single loop learning' can be seen below from the 2020 edition of Joint Planning publication 5-0.[6]

... snip ...

Failure as a Way of Life; The logic of lost wars and military-industrial boondoggles
http://www.theamericanconservative.com/articles/failure-as-a-way-of-life/

"This completes the loop in what is a classic closed system, where the outside world does not matter and is not allowed to intrude. Col. John Boyd, America's greatest military theorist, said that all closed systems collapse. The Washington establishment cannot adjust, it cannot adapt, it cannot learn. It cannot escape serial failure."

... snip ...

note in Boyd briefings, talking about OODA-loop
https://en.wikipedia.org/wiki/OODA_loop

he would emphasize constantly observing from every facet (countermeasure to biases, aka observation, orientation, confirmation, cognitive, etc).

Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html

Success of Failure posts
https://www.garlic.com/~lynn/submisc.html#success.of.failuree
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

--
virtualization experience starting Jan1968, online at home since Mar1970

Best dumb terminal for serial connections

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Best dumb terminal for serial connections
Newsgroups: alt.folklore.computers
Date: Sat, 25 Jun 2022 08:53:08 -1000
Anssi Saari <as@sci.fi> writes:

Did they? I remember buying an ISA card by SIIG with the 16550 UART that had the huge 16 byte buffer back in 1993 when I got a 14400 bps modem. With the V.42bis compression I think you could have at least 2x compression so double data rate over the serial link. Might've had a 486 by then though.

In my computers the built in ports had just the no buffer 16450s. Same with the multi-I/O boards (2xserial, 1xparallel, 2xIDE on single board).

Maybe the buffered UARTs were common later? I moved away from modems to ISDN (ISA card) around 1996 or 1997, then ADSL (PCI card) and then mostly ethernet for communication.


1993, I had left IBM ... and was doing work from home. I got an offer to do modem drivers (unix, windows, dos) for PAGESAT in return for a downlink with full netnews feed. I had an RS6000/320 and SGI Indy on my desk and a couple 486 machines (one running waffle, the bulletin board software that I made the netnews feed available on) ... which had 16550 UART boards. Also wrote an article on the modem drivers & pagesat for boardwatch magazine (with a picture of me in the backyard with the PAGESAT dish). Started out at 9600, but they had to double it to 19.2 with the increase in images (and there were periodic further increases).
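
For what it's worth, the whole 16550-vs-16450 difference is one register write, the FIFO Control Register at base+2. A minimal Linux/x86 sketch (illustration only, not the PAGESAT drivers; assumes the conventional COM1 port address, needs root, build with gcc -O2):

#include <stdio.h>
#include <sys/io.h>

#define COM1_BASE 0x3f8
#define UART_FCR  (COM1_BASE + 2)   /* FIFO Control Register */

int main(void)
{
    if (ioperm(COM1_BASE, 8, 1) != 0) { perror("ioperm"); return 1; }
    outb(0xc7, UART_FCR);   /* enable FIFOs, clear rx/tx FIFOs, 14-byte receive trigger */
    printf("16550 FIFOs enabled on COM1\n");
    return 0;
}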

16550
https://en.wikipedia.org/wiki/16550_UART
Pagesat
http://www.art.net/lile/pagesat/netnews.html
Pagesat at 115.2kbps
http://www.art.net/lile/ncit/service.html
Boardwatch
https://en.wikipedia.org/wiki/Boardwatch
Waffle
https://en.wikipedia.org/wiki/Waffle_(BBS_software)

misc. past posts mentioning pagesat
https://www.garlic.com/~lynn/2022b.html#7 USENET still around
https://www.garlic.com/~lynn/2022.html#11 Home Computers
https://www.garlic.com/~lynn/2021i.html#99 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#95 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2018e.html#55 usenet history, was 1958 Crisis in education
https://www.garlic.com/~lynn/2018e.html#51 usenet history, was 1958 Crisis in education
https://www.garlic.com/~lynn/2017h.html#118 AOL
https://www.garlic.com/~lynn/2017h.html#110 private thread drift--Re: Demolishing the Tile Turtle
https://www.garlic.com/~lynn/2017g.html#51 Stopping the Internet of noise
https://www.garlic.com/~lynn/2017b.html#21 Pre-internet email and usenet (was Re: How to choose the best news server for this newsgroup in 40tude Dialog?)
https://www.garlic.com/~lynn/2016g.html#59 The Forgotten World of BBS Door Games - Slideshow from PCMag.com
https://www.garlic.com/~lynn/2015h.html#109 25 Years: How the Web began
https://www.garlic.com/~lynn/2015d.html#57 email security re: hotmail.com
https://www.garlic.com/~lynn/2013l.html#26 Anyone here run UUCP?
https://www.garlic.com/~lynn/2012b.html#92 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2010g.html#82 [OT] What is the protocal for GMT offset in SMTP (e-mail) header time-stamp?
https://www.garlic.com/~lynn/2010g.html#70 What is the protocal for GMT offset in SMTP (e-mail) header
https://www.garlic.com/~lynn/2009l.html#21 Disksize history question
https://www.garlic.com/~lynn/2009j.html#19 Another one bites the dust
https://www.garlic.com/~lynn/2007g.html#77 Memory Mapped Vs I/O Mapped Vs others
https://www.garlic.com/~lynn/2006m.html#11 An Out-of-the-Main Activity
https://www.garlic.com/~lynn/2005l.html#20 Newsgroups (Was Another OS/390 to z/OS 1.4 migration
https://www.garlic.com/~lynn/2001h.html#66 UUCP email
https://www.garlic.com/~lynn/2000e.html#39 I'll Be! Al Gore DID Invent the Internet After All ! NOT
https://www.garlic.com/~lynn/aepay4.htm#miscdns misc. other DNS

--
virtualization experience starting Jan1968, online at home since Mar1970

Wall Street's Plot to Seize the White House

From: Lynn Wheeler <lynn@garlic.com>
Subject: Wall Street's Plot to Seize the White House
Date: 24 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#38 Wall Street's Plot to Seize the White House

"Griftopia" and "Economic Hit Man" continuation of "War is a Racket"

Griftopia
https://www.amazon.com/Griftopia-Machines-Vampire-Breaking-America-ebook/dp/B003F3FJS2/
Griftopia
https://en.wikipedia.org/wiki/Griftopia
griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia

The New Confessions of an Economic Hit Man
https://www.amazon.com/New-Confessions-Economic-Hit-Man-ebook/dp/B017MZ8EBM/
Confessions of an Economic Hit Man (also references "War Is a Racket" and "War Is a Racket" references "Economic Hit Man")
https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit_Man
John Perkins (hit man author)
https://en.wikipedia.org/wiki/John_Perkins_(author)

Five Examples of How Economic Hit Men Still Operate Globally Today
https://www.bkconnection.com/bkblog/jeevan-sivasubramaniam/five-examples-of-how-economic-hit-men-still-operate-globally-today

from above:

Matt Taibbi: Eric Holder Back to Wall Street-Tied Law Firm After Years of Refusing to Jail Bankers
https://www.democracynow.org/2015/7/8/eric_holder_returns_to_wall_street
Companies Avoid Paying $200 Billion in Tax. Businesses avoid taxes by channeling their overseas' investments through offshore financial hubs
https://www.wsj.com/articles/companies-avoid-paying-200-billion-in-tax-1435161106

economic mess posts (note Jan1999 I was asked to help try and stop the coming economic mess, we failed)
https://www.garlic.com/~lynn/submisc.html#economic.mess
too big to fail (too big to prosecute, too big to jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
tax fraud, tax evasion, tax avoidance, tax havens
https://www.garlic.com/~lynn/submisc.html#tax.evasion
capitalism
https://www.garlic.com/~lynn/submisc.html#capitalism

recent posts mentioning "Economic Hit Man:
https://www.garlic.com/~lynn/2022c.html#88 How the Ukraine War - and COVID-19 - is Affecting Inflation and Supply Chains
https://www.garlic.com/~lynn/2022b.html#104 Why Nixon's Prediction About Putin and Ukraine Matters
https://www.garlic.com/~lynn/2021k.html#71 MI6 boss warns of China 'debt traps and data traps'
https://www.garlic.com/~lynn/2021k.html#21 Obama's Failure to Adequately Respond to the 2008 Crisis Still Haunts American Politics
https://www.garlic.com/~lynn/2021i.html#97 The End of World Bank's "Doing Business Report": A Landmark Victory for People & Planet
https://www.garlic.com/~lynn/2021h.html#29 More than a Decade After the Volcker Rule Purported to Outlaw It, JPMorgan Chase Still Owns a Hedge Fund
https://www.garlic.com/~lynn/2021f.html#34 Obama Was Always in Wall Street's Corner
https://www.garlic.com/~lynn/2021f.html#26 Why We Need to Democratize Wealth: the U.S. Capitalist Model Breeds Selfishness and Resentment
https://www.garlic.com/~lynn/2021e.html#97 How capitalism is reshaping cities
https://www.garlic.com/~lynn/2021e.html#71 Bill Black: The Best Way to Rob a Bank Is to Own One (Part 1/9)
https://www.garlic.com/~lynn/2021d.html#75 The "Innocence" of Early Capitalism is Another Fantastical Myth
https://www.garlic.com/~lynn/2019e.html#106 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#92 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#38 World Bank, Dictatorship and the Amazon
https://www.garlic.com/~lynn/2019e.html#18 Before the First Shots Are Fired
https://www.garlic.com/~lynn/2019d.html#79 Bretton Woods Institutions: Enforcers, Not Saviours?
https://www.garlic.com/~lynn/2019d.html#54 Global Warming and U.S. National Security Diplomacy
https://www.garlic.com/~lynn/2019d.html#52 The global economy is broken, it must work for people, not vice versa
https://www.garlic.com/~lynn/2019c.html#40 When Dead Companies Don't Die - Welcome To The Fat, Slow World
https://www.garlic.com/~lynn/2019c.html#36 Is America A Christian Nation?
https://www.garlic.com/~lynn/2019c.html#17 Family of Secrets
https://www.garlic.com/~lynn/2019.html#85 LUsers
https://www.garlic.com/~lynn/2019.html#45 Jeffrey Skilling, Former Enron Chief, Released After 12 Years in Prison
https://www.garlic.com/~lynn/2019.html#43 Billionaire warlords: Why the future is medieval
https://www.garlic.com/~lynn/2019.html#42 Army Special Operations Forces Unconventional Warfare
https://www.garlic.com/~lynn/2019.html#41 Family of Secrets
https://www.garlic.com/~lynn/2019.html#13 China's African debt-trap ... and US Version

--
virtualization experience starting Jan1968, online at home since Mar1970

WATFOR and CICS were both addressing some of the same OS/360 problems

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: WATFOR and CICS were both addressing some of the same OS/360 problems
Date: 26 June 2022
Blog: Facebook
WATFOR and CICS were both addressing some of the same major OS/360 problems

I took a 2 credit hr intro to Fortran/computers; the univ had a 709/1401, 709 IBSYS tape->tape (with 1401 unit record front end) where student fortran jobs ran in less than a second. The univ was sold a 360/67 for TSS/360, but TSS/360 never came to production fruition ... especially for all the admin & business school Cobol apps. Within a year of taking the 2 credit hr intro class, the 360/67 came in and I was hired fulltime responsible for OS/360 (360/67 running as 360/65 with OS/360). Initially student Fortran ran over a minute. I then installed HASP and cut the time in half. For MFT release 11 SYSGEN, I rearranged cards to carefully place datasets and PDS members to optimize arm seek and multi-track search ... cutting student Fortran time by another 2/3rds to 12.9secs. Student Fortran never beat the 709 until I installed Univ. of Waterloo WATFOR.

Big problem was that OS/360 had heavy disk activity for step processing and file open/close (open/close SVCs were actually a long series of SVCLIB members that were individually, sequentially loaded and executed ... for every open/close). Single step processing and associated file open/close was approx. 4.3secs. WATFOR on the 360/65 ran at 20,000 "cards" per minute (or approx 333/sec) ... a full tray of batched student jobs (3000 cards & approx 100 jobs) took 4.3secs plus 9secs, or 13.3secs (a little less than 10 jobs/sec). WATFOR was a single step job that would open the necessary files and then batch compile&execute all the incoming jobs (similar to CICS processing for transactions) ... making as little use of OS/360 heavy weight system services as possible while running.

... i.e. in the move 709->360 ... student fortran jobs went from <1sec to >1min, 100 student jobs went from a little over a minute to 2hrs. WATFOR brought a 100 job batch (around 3000 cards) down to 9secs plus the OS360 one-step job overhead (approx. 4.3secs after careful placement for optimized arm seek and multi-track search).
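
As a quick sanity check of the arithmetic above, a minimal python sketch (the card rate, tray size, and per-step overhead are the figures from the text; everything else is derived):

cards_per_min = 20000        # WATFOR compile&execute rate on 360/65 (from text)
cards_per_sec = cards_per_min / 60.0   # approx 333 cards/sec
tray_cards = 3000            # one tray of batched student jobs (from text)
jobs_per_tray = 100          # approx number of student jobs in the tray (from text)
step_overhead = 4.3          # secs, single OS/360 step + open/close (from text)

watfor_secs = tray_cards / cards_per_sec    # approx 9 secs compile&execute
total_secs = step_overhead + watfor_secs    # approx 13.3 secs for the whole tray
print(round(watfor_secs, 1), round(total_secs, 1),
      round(jobs_per_tray / total_secs, 1))  # 9.0 13.3 7.5 (jobs/sec)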

The other problem that both WATFOR and CICS faced was OS/360 storage management ... obtain all the necessary storage at startup and then provide their own memory management while running. This problem is claimed to also be the motivation for moving all 370s to virtual memory. A decade ago, I was asked to track down the decision and found an assistant to the executive. Turns out OS/360 storage management was so bad that regions typically had to be four times larger than actually used; as a result a standard 1mbyte 370/165 would only have four (concurrently executing) regions, insufficient to maintain 165 utilization & justification. Moving MVT to VS2 virtual memory would allow increasing the number of regions by a factor of four with little or no paging. Old archived post with pieces of that email exchange on justification for 370 virtual memory and other subjects like spooling and my implementing terminal & edit support inside HASP for a CRJE-like facility:
https://www.garlic.com/~lynn/2011d.html#73

The univ library had gotten an ONR grant for online catalog and used part of the money for a 2321 datacell. The effort was also selected as betatest for the CICS program product ... and debugging CICS was added to my tasks. Early problem was CICS wouldn't come up ... turns out it had some undocumented hard coded BDAM dataset options and the library had created its datasets with a different set of options. CICS at startup would do all its dataset opens and storage acquisition and then do as much of its own (memory and other) resource management as possible ... minimizing use of OS/360 system services while actually running (analogous to the WATFOR implementation).

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thought the Renton datacenter was possibly the largest in the world, something like a couple hundred million in 360 systems (360/65s were arriving faster than they could be installed, boxes constantly being staged in hallways around the machine room). Lots of politics between the Renton manager and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarged it to install a single processor 360/67 for me to play with when I wasn't doing other stuff).

During that period, they also brought the Boeing Huntsville two processor 360/67 up to Seattle (Boeing had been sold it for TSS/360 like the univ, but ran it as two single processor 360/65s with OS/360). The Huntsville 360/67 had been for 2250 CAD design applications ... but found running under OS/360 the storage management problems increased the longer the application ran (2250 CAD applications severely aggravated the OS/360 memory management problem). Boeing Huntsville had essentially done sort of an early version of VS2, modifying MVT Release 13 to run with 360/67 virtual memory ... it didn't do any page in/out, it just fiddled the virtual memory tables to work around OS/360 memory management problems.

some CICS history (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm

Waterloo WATFIV has some WATFOR history
https://en.wikipedia.org/wiki/WATFIV

mentions Boeing Computer Services, which wasn't officially formed until after I had graduated and joined IBM.
https://www.boeing.com/news/frontiers/archive/2003/august/cover4.html

WATFOR, CICS, and/or Boeing Computer Services posts
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022e.html#13 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota
https://www.garlic.com/~lynn/2022d.html#100 IBM Stretch (7030) -- Aggressive Uniprocessor Parallelism
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#78 US Takes Supercomputer Top Spot With First True Exascale Machine
https://www.garlic.com/~lynn/2022d.html#69 Mainframe History: How Mainframe Computers Evolved Over the Years
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#20 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022d.html#8 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#70 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#39 After IBM
https://www.garlic.com/~lynn/2022c.html#8 Cloud Timesharing
https://www.garlic.com/~lynn/2022c.html#3 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022b.html#126 Google Cloud
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022b.html#89 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#73 IBM Disks
https://www.garlic.com/~lynn/2022b.html#59 CICS, BDAM, DBMS, RDBMS
https://www.garlic.com/~lynn/2022b.html#35 Dataprocessing Career
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2022b.html#10 Seattle Dataprocessing
https://www.garlic.com/~lynn/2022.html#120 Series/1 VTAM/NCP
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#38 IBM CICS
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#12 Programming Skills

--
virtualization experience starting Jan1968, online at home since Mar1970

WATFOR and CICS were both addressing some of the same OS/360 problems

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: WATFOR and CICS were both addressing some of the same OS/360 problems
Date: 26 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems

PTFs (program temporary fixes) frequently "replaced" members in system libraries (especially SVCLIB and LINKLIB) ... by making the existing member null and adding the replacement to the end ... destroying my carefully optimized ordering ... degrading performance. Sometimes the performance degradation rate was high enough that I would effectively have to do a partial SYSGEN process (rebuild SYSRES) to get the optimized ordering and performance back ... other times it was slow enough that I managed to limp to the next release (before being forced to do a new SYSGEN).

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Chairman John Opel

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Chairman John Opel
Date: 26 June 2022
Blog: Facebook
Note: contains repeat of many comments in recent threads

Chairman Learson trying to block the rise of the careerists and bureaucrats destroying the Watson legacy:

Management Briefing
Number 1-72: January 18,1972
ZZ04-1312

TO ALL IBM MANAGERS:

Once again, I'm writing you a Management Briefing on the subject of bureaucracy. Evidently the earlier ones haven't worked. So this time I'm taking a further step: I'm going directly to the individual employees in the company. You will be reading this poster and my comment on it in the forthcoming issue of THINK magazine. But I wanted each one of you to have an advance copy because rooting out bureaucracy rests principally with the way each of us runs his own shop.

We've got to make a dent in this problem. By the time the THINK piece comes out, I want the correction process already to have begun. And that job starts with you and with me.

Vin Learson


... and ...


+-----------------------------------------+
|           "BUSINESS ECOLOGY"            |
|                                         |
|                                         |
|            +---------------+            |
|            |  BUREAUCRACY  |            |
|            +---------------+            |
|                                         |
|           is your worst enemy           |
|              because it -               |
|                                         |
|      POISONS      the mind              |
|      STIFLES      the spirit            |
|      POLLUTES     self-motivation       |
|             and finally                 |
|      KILLS        the individual.       |
+-----------------------------------------+
"I'M Going To Do All I Can to Fight This Problem . . ."
by T. Vincent Learson, Chairman

... snip ...

How to Stuff a Wild Duck

"We are convinced that any business needs its wild ducks. And in IBM we try not to tame them." - T.J. Watson, Jr.

"How To Stuff A Wild Duck", 1973, IBM poster
https://collection.cooperhewitt.org/objects/18618011/

however, on the budding Future System disaster in the early 70s, from Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993 ....
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

more FS info
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html

According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, kildall worked on cp/67-cms at npg (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html
npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

trivia: (CP67)/CMS was a precursor to personal computing; some of the MIT/7094 CTSS people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
went to the 5th flr, project mac, and MULTICS.
https://en.wikipedia.org/wiki/Multics
Others went to the 4th flr, IBM Cambridge Science Center, did virtual machine CP40/CMS (on 360/40 with hardware mods for virtual memory, morphs into CP67/CMS when 360/67 standard with virtual memory becomes available, precursor to vm370), online and performance apps, CTSS RUNOFF

Future System posts:
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Chairman John Opel

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Chairman John Opel
Date: 26 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel

Note: contains repeat of many comments in recent threads

early/mid 80s, majority of IBM revenue still came from mainframe hardware.

late 80s, a senior disk engineer gets a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance ... but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division (including pointing a finger at the head of the communication group). The communication group had a stranglehold on mainframe datacenters with their corporate strategic responsibility for everything that crossed the datacenter wall, and were fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm). The disk division was seeing data fleeing mainframe datacenters to more distributed-computing friendly platforms, with drops in disk sales. They had come up with a number of solutions ... which were constantly being vetoed by the communication group.

The GPD/ADSTAR VP of software was trying to get around corporate politics by investing in distributed computing startups and funding MVS Posix support (aka OMVS, didn't actually cross datacenter walls, but made it easier for startups to integrate mainframe into distributed computing environment). The software VP would periodically have us in to talk about his investments and ask us to stop by and provide any assistance.

frequently mentioned: a couple short yrs later, the company has one of the largest losses ever in US corporate history (the stranglehold was cutting all mainframe revenue, not just disks) and was being reorganized into the 13 "baby blues" in preparation for breaking up the company ... behind paywall, but mostly lives free at wayback machine.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left the company (in my executive exit interview I was told they could have forgiven me for being wrong, but they were never going to forgive me for being right) but get a call from the bowels of corporate hdqtrs/Armonk about helping with the breakup of the company. Business units were using "MOUs" for supplier contracts in other units ... after the breakup many of the supplier contracts would be in different companies. All the MOUs would have to be cataloged and turned into their own contracts (before we get started, the board brings in a new CEO who reverses the breakup).

Turn of the century, reports were that mainframe hardware was a few percent of revenue and declining. EC12 time-frame, reports were that mainframe hardware was a couple percent of revenue (and still declining) but mainframe group was 25% of revenue (and 40% of profit) ... aka software&services.

mid-80s, top executives were saying that IBM revenue was about to double (on mainframes). They were funding a big increase in mainframe manufacturing, and there were also large numbers of "fast-track" (newly minted) MBAs (to manage the increase in business) being quickly rotated through decision positions in selected victim business units.

Late 70s and early 80s, I was blamed for online computer conferencing (precursor to the IBM forums and modern social media) on the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s). It really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem. Only about 300 directly participated, but claims were that upwards of 25,000 were reading. There were six copies of approx. 300 pages printed, along with an executive summary and a summary of the summary, packaged in Tandem 3-ring binders and sent to the executive committee (folklore is 5of6 wanted to fire me) ... from the summary of the summary:

• The perception of many technical people in IBM is that the company is rapidly heading for disaster. Furthermore, people fear that this movement will not be appreciated until it begins more directly to affect revenue, at which point recovery may be impossible

• Many technical people are extremely frustrated with their management and with the way things are going in IBM. To an increasing extent, people are reacting to this by leaving IBM. Most of the contributors to the present discussion would prefer to stay with IBM and see the problems rectified. However, there is increasing skepticism that correction is possible or likely, given the apparent lack of commitment by management to take action.

• There is a widespread perception that IBM management has failed to understand how to manage technical people and high-technology development in an extremely competitive environment.


... snip ...

... took another decade (1981-1992) ... from IBM Jargon
https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
communication group dumb terminal posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Somewhat related ... I had taken a two credit hr intro to fortran/computers; at the end of the semester I got a programming job to re-implement 1401 MPIO on a 360/30. The Univ. had a 709/1401 and was sold a 360/67 for TSS/360 to replace the 709/1401 (709 tape->tape, 1401 tape<->unit record front end for the 709) ... pending availability of the 360/67, the 1401 was replaced with a 360/30 (for the univ. to gain some 360 experience; the 360/30 did have 1401 simulation and could directly execute 1401 programs). The univ. would shut down the datacenter on weekends and I had the whole place to myself for 48hrs straight (although 48hrs w/o sleep could make monday morning classes hard). I was given a bunch of 360 hardware and software documents to study and within a few weeks had a 2000 card 360 assembler program that implemented the 1401 MPIO function (I got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc).

Within a yr of taking the intro class, the 360/67 had come in and I was hired fulltime to be responsible for OS/360 (TSS/360 never came to production fruition, so the 360/67 ran as a 360/65 with OS/360).

recent posts/comments on WATFOR & CICS (and MPIO)
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022e.html#43 WATFOR and CICS were both addressing some of the same OS/360 problems

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Chairman John Opel

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Chairman John Opel
Date: 26 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022e.html#45 IBM Chairman John Opel

Note: contains repeat of many comments in recent threads

In the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. The first time, I tried to do it through plant site employee education. At first they agreed, but as I provided more information about how to prevail/win in competitive situations, they changed their mind. They said that IBM spends a great deal of money training managers on how to handle employees and it wouldn't be in IBM's best interest to expose general employees to Boyd; I should limit the audience to senior members of competitive analysis departments. The first briefing was in the bldg28 auditorium, open to all. One of his quotes:

"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or To Do, that is the question."

... snip ...

Trivia: In 89/90 the commandant of the Marine Corps leverages Boyd for a make-over of the corps ... at a time when IBM was desperately in need of a make-over ... 1992 had one of the largest losses in the history of US companies and IBM was being reorg'ed into the 13 "baby blues" in preparation for breaking up the company (see "IBM Left Behind" up thread).

When Boyd passes in 1997, the USAF had pretty much disowned him and it was the Marines at Arlington ... and his effects go to the Gray Library and Research Center at Quantico. There have continued to be Boyd conferences at Marine Corps Univ. in Quantico ... we discuss careerists and bureaucrats (as well as the "old boy networks" and "risk averse").

Chuck's tribute to John
http://www.usni.org/magazines/proceedings/1997-07/genghis-john
for those w/o subscription
http://radio-weblogs.com/0107127/stories/2002/12/23/genghisJohnChuckSpinneysBioOfJohnBoyd.html
John Boyd - USAF. The Fighter Pilot Who Changed the Art of Air Warfare
http://www.aviation-history.com/airmen/boyd.htm
"John Boyd's Art of War; Why our greatest military theorist only made colonel"
http://www.theamericanconservative.com/articles/john-boyds-art-of-war/
40 Years of the 'Fighter Mafia'
https://www.theamericanconservative.com/articles/40-years-of-the-fighter-mafia/
Fighter Mafia: Colonel John Boyd, The Brain Behind Fighter Dominance
https://www.avgeekery.com/fighter-mafia-colonel-john-boyd-the-brain-behind-fighter-dominance/
Updated version of Boyd's Aerial Attack Study
https://tacticalprofessor.wordpress.com/2018/04/27/updated-version-of-boyds-aerial-attack-study/
A New Conception of War
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/

At one of the Boyd conferences at MCU, the former commandant comes in and wants to talk (for an hour or two) ... totally throws the schedule off, but nobody complains. I was in the back corner of the room with a laptop ... after he finishes he makes a beeline for me ... all I could think of was that some marines had set me up ... however the former commandant and I were the only ones in the room that actually knew Boyd. Boyd told a large number of stories ... his bios don't always reflect how he told the stories.

For instance, "40 sec Boyd" ... he always won in 20sec ... asked why 40sec ... he said that there might be somebody in the world almost as good as he was ... and he might need the extra time.

Boyd also had stories about the detailed planning that went into Spinney's TIME front-page article, including making sure there was written approval for every piece of information; gone behind paywall, but mostly lives free at wayback machine
https://web.archive.org/web/20070320170523/http://www.time.com/time/magazine/article/0,9171,953733,00.html
also
https://content.time.com/time/magazine/article/0,9171,953733,00.html

SECDEF was really angry about the article and wanted to prosecute them for release of classified information (but it was fully covered). SECDEF then directed that Boyd be transferred to Alaska and banned from the Pentagon for life. At the time, Boyd had cover in congress, and a week later Boyd was invited to the Pentagon and asked what kind of office and furnishings he would like.

Boyd had a story from when he ran the Lightweight Fighter program in the Pentagon. One day, the 1star (he reported to) came in and found the room in animated technical argument. The 1star claimed it wasn't behavior befitting an officer and called a meeting in a Pentagon auditorium with lots of people, publicly firing Boyd. A week later a USAF 4star called a meeting in the same auditorium with the same people, rehired Boyd, and told the 1star to never do that again.

Boyd also told a lot of stories about plane design ... I had to research the subject to at least have some idea. It would just be the two of us ... and it seemed like he liked carrying on multiple conversations at the same time ... and it could be really tiring figuring out which conversation his latest statement belonged to, so I would have the context for a response.

"Boyd" Posts & web references
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Best dumb terminal for serial connections

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Best dumb terminal for serial connections
Newsgroups: alt.folklore.computers
Date: Mon, 27 Jun 2022 08:18:20 -1000
Ahem A Rivet's Shot <steveo@eircom.net> writes:

[1] Of course these days we have serial attached SCSI.

re:
https://www.garlic.com/~lynn/2022e.html#40 Best dumb terminal for serial connections

trivia: circa 1990, IBM Hursley did the 9333 SCSI disk drives ... serial copper running a packetized SCSI protocol, originally 80mbits/sec full duplex (160mbits/sec aggregate). In 1988, I was asked to work with LLNL on standardizing some serial stuff they were playing with, which quickly becomes fibre channel standard (initially 1gbit full duplex, 2gbit aggregate, 200mbyte/sec). I had hoped that 9333 could morph into interoperable 1/8 & 1/4 speed FCS, but instead it morphs into SSA
https://en.wikipedia.org/wiki/Serial_Storage_Architecture

old archived post mentioning HA/CMP, FCS, and 9333
https://www.garlic.com/~lynn/95.html#13

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
fiber channel standard/FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Channel Program I/O Processing Efficiency

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Channel Program I/O Processing Efficiency
Date: 28 June 2022
Blog: Facebook
Somewhat referenced in these recent posts/comments about optimizing arm seek and multi-track search for OS/360 in the 60s
https://www.garlic.com/~lynn/2022e.html#43 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems

recent posts about the OS/360 (PDS directory) multi-track search paradigm ... more than a decade after optimizing PDS directory members at the univ as an undergraduate
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2019b.html#15 Tandem Memo

All the standard IBM performance experts had been brought through the customer before they got around to having me in. The issue was that OS/360 CKD in the mid-60s was a technology trade-off with I/O resource relatively more abundant than real storage resource, using channel, controller, and disk resources with multi-track search to find VTOC and PDS directory information. However, by the mid-70s, that trade-off was inverted and it was becoming more efficient to cache location information in storage than to use multi-track search.

Peak load from hundreds of store controller applications requesting member loads would happen by time-zone and sort of roll across the country's time-zones. Multi-track search of the store-controller PDS directory would saturate at two/sec (120/min), with each request having nearly 1/2sec busy for the disk, controller, and channel (not just the store controller PDS dataset disk busy, but also the associated controller and channels, locking out access to other disks) ... so there would be a several-minute delay for each request ... as well as slowing down any disk activity requiring the same controller and/or channel.
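
A minimal python sketch of that saturation arithmetic (assuming a 3330-class drive at 3600rpm with 19 tracks/cylinder; the nearly-1/2sec busy and two/sec figures are from the text, and a directory spilling into a second cylinder would add another partial-cylinder search):

rev_secs = 60.0 / 3600            # approx 16.7ms per revolution at 3600rpm (assumption)
cyl_search_secs = 19 * rev_secs   # approx 0.32sec to multi-track search one full cylinder
busy_per_request = 0.5            # "nearly 1/2sec" disk/controller/channel busy (from text)
print(round(cyl_search_secs, 2), round(1.0 / busy_per_request))   # 0.32, 2 requests/sec
# and the shared controller & channel are locked out for that entire time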

There were comments about CDC, Memorex and STK. Multi-track search disk (controller and channel) busy was primarily based on disk rotation speed. However CDC, Memorex, and STK disk controllers had other controller-busy improvements. VM370 formatted three page records per 3330 track and had chained page I/O channel programs to try to transfer (read or write) three page records per rotation, even when the records at consecutive rotational positions were on different tracks (of the same cylinder). VM370 formatting placed "dummy" records between the page records to provide rotational latency, allowing time for the chained channel program to switch heads between records.

Old archived post discussing "dummy record" and maximizing transfer per rotation (channel program latencies involved the combined latencies of disk command latency, controller command latency, and channel latency).
https://www.garlic.com/~lynn/2013e.html#61
from above

turns out 370 channel timing architecture requires 110 byte short dummy record to provide the latency for execution of 3330 seek head fetch & execute ... but the 3330 tracks only had space for three 4k data records plus 101 byte short dummy records. Turns out 145, 148, 165, 168 with 3830 controller actually could do the switch head in the 101 byte latency (and many OEM disk controllers could do the head switch operation in the latency of a 50byte short dummy record) ... only the 158 integrated channels were so slow that they required the latency of full 110 byte short dummy record to perform the switch head operation.

... snip ...

I had written a channel program that could reformat the dummy record size for live, actively used disks ... and used it to test the efficiency of the switch-head operation with different length dummy records on a large number of different (internal) processors, and also made it available to some customers to test OEM disk controllers.
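
To put those dummy record sizes in time terms, a minimal python sketch (the 806kbytes/sec nominal 3330 data rate is my added assumption; the byte counts are from the quoted excerpt):

data_rate = 806000.0              # bytes/sec, nominal 3330 transfer rate (assumption)
for dummy_bytes in (50, 101, 110):
    print(dummy_bytes, round(dummy_bytes / data_rate * 1e6), "microseconds")
# approx 62us (many OEM controllers), 125us (3830 with most 370 channels),
# 136us (what the slow 158 integrated channel needed)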

Note: for the quick&dirty 303x effort (after failure of the Future System effort), they took a 158 engine with just the integrated channel microcode for the 303x channel director. A 3031 was two 158 engines, one with just the integrated channel microcode and one with just the 370 instruction microcode. A 3032 was a 168-3 redone to use the 303x channel director for external channels. A 3033 started out as 168-3 logic remapped to 20% faster chips ... and used the 303x channel director. All 303x machines had the same I/O efficiency characteristics as the 158 ... as did the later 3081 channels.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
posts about CKD, FBA, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

Channel Program I/O Processing Efficiency

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Channel Program I/O Processing Efficiency
Date: 28 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency

some other related I/O information

In the middle of the 70s, I was increasingly vocal about the trade-off of I/O (like multi-track search) versus caching the information in real memory. In the early 80s, I wrote that relative system disk throughput had declined by an order of magnitude (a factor of ten) since the mid-60s (systems got 40-50 times faster, disks got 3-5 times faster). A GPD disk division executive took exception and directed the disk division performance group to refute the claim. After a few weeks they came back and essentially said that I had slightly understated the problem. They then respun the analysis into how to configure datasets for improved throughput and used it for a (mainframe user group) SHARE
https://www.share.org/
presentation (16Aug1984, SHARE 63, B874).
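
The "order of magnitude" figure falls out of a simple ratio; a minimal python sketch using the mid-range of the numbers above:

system_speedup = 45.0    # mid-range of "40-50 times faster" systems
disk_speedup = 4.0       # mid-range of "3-5 times faster" disks
print(round(system_speedup / disk_speedup))   # approx 11x decline in relative disk throughput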

After transferring to IBM SJR in 1977, I got to wander around IBM and customer datacenters in silicon valley, including bldg14 (disk engineering) and bldg15 (disk product test) across the street. They were running prescheduled, stand-alone, 7x24 mainframe testing. They said they had tried MVS (for testing), but it had a 15min mean-time-between-failure (requiring manual re-IPL in that environment). I offered to rewrite the I/O supervisor to make it bullet-proof and never fail ... allowing any amount of ondemand, concurrent testing, greatly improving productivity. Downside was they developed an almost kneejerk reaction of blaming me for any problems, and I had to spend increasing amounts of time shooting their hardware problems. I then wrote an (internal) research report about the work needed to be done and happened to mention the MVS 15min MTBF, bringing down the wrath of the MVS group on my head. Later, when 3380 disks were about to be released, FE had a regression test of 57 errors likely to occur and MVS was failing (requiring re-IPL) in all cases (and in 2/3rds of the errors, no indication of what caused the failure). I didn't feel badly.

Bldg15 got the 1st engineering 3033 (outside POK) and since testing only took a percent or two of the processor, we found a spare 3830 and a couple strings of 3330 drives and set up a private online service (ran 3270 coax under the street and added it to the 3270 terminal switch on my desk). One Monday, I got an irate call asking what I had done to the 3033 system (significant degradation; they claimed they did nothing). Eventually found that the 3830 controller had been replaced with an engineering 3880 controller. The 3830 had a fast horizontal microcode processor. The 3880 had a special hardware path for data transfer, but an extremely slow processor for everything else ... significantly driving up channel busy (and radically cutting the amount of concurrent activity). They managed to mask some of the degradation before customer ship. However, trout/3090 had designed the number of channels for target throughput based on the assumption that the 3880 was the same as the 3830 (but supporting the 3380 3mbyte/sec data rate). When they found out how bad 3880 channel busy really was, they realized they had to significantly increase the number of channels to achieve the target throughput. The increase in channels required an additional TCM ... and the 3090 group semi-facetiously said that they would bill the 3880 controller group for the increase in 3090 manufacturing cost.

note that marketing respun the large number of channels that the 3090 required (because of the big increase in 3880 controller channel busy) as the 3090 being a fabulous I/O machine

1980, STL (later renamed SVL) was bursting at the seams and 300 people from the IMS group were being moved to an offsite bldg with dataprocessing service back to the STL datacenter. They had tried remote 3270 support, but found the human factors intolerable. I get con'ed into doing channel-extender support, allowing channel-attached 3270 controllers to be placed at the offsite bldg, with no perceptible human factors difference between offsite and in STL. The hardware vendor then tries to get IBM to release my support. There were some engineers in POK playing with some serial stuff who were afraid that if it was in the market, it would make it harder to get their stuff released, and got it vetoed.

In 1988, the IBM branch office gets me to help LLNL (national lab) get some serial stuff they were playing with standardized; it quickly becomes the Fibre Channel Standard (including some of the stuff I had done in 1980), initially 1gbit, full-duplex, 2gbit aggregate, 200mbyte/sec. Then in 1990, the POK engineers get their stuff released with ES/9000 as ESCON, when it is already obsolete, 17mbytes/sec. Then some POK engineers get involved in FCS and define a heavy-weight protocol that radically reduces the throughput ... which is eventually released as FICON. The latest public benchmark I can find is the "peak I/O" benchmark for Z196 that used 104 FICON (running over 104 FCS) to get 2M IOPS. At about the same time there is an FCS announced for E5-2600 blades claiming over a million IOPS (i.e. two such FCS get higher throughput than 104 FICON running over 104 FCS).
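
A minimal python sketch of the per-link comparison implied by those two benchmark figures (both numbers are from the text; nothing else is assumed):

z196_iops = 2000000        # z196 "peak I/O" benchmark result
ficon_links = 104          # FICON links (each running over an FCS)
native_fcs_iops = 1000000  # "over a million IOPS" claimed for the E5-2600 era FCS
per_ficon = z196_iops / ficon_links          # approx 19,200 IOPS per FICON link
print(round(per_ficon), round(native_fcs_iops / per_ficon))   # 19231, approx 52x per link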

posts about getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent posts mentioning SHARE B874 presentation
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
https://www.garlic.com/~lynn/2021j.html#78 IBM 370 and Future System
https://www.garlic.com/~lynn/2021i.html#23 fast sort/merge, OoO S/360 descendants
https://www.garlic.com/~lynn/2021g.html#44 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021e.html#33 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021.html#79 IBM Disk Division
https://www.garlic.com/~lynn/2021.html#59 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2021.html#17 Performance History, 5-10Oct1986, SEAS
https://www.garlic.com/~lynn/2019d.html#63 IBM 3330 & 3380
https://www.garlic.com/~lynn/2019b.html#94 MVS Boney Fingers
https://www.garlic.com/~lynn/2019.html#78 370 virtual memory
https://www.garlic.com/~lynn/2018e.html#93 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2017j.html#96 thrashing, was Re: A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017i.html#46 Temporary Data Sets
https://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2017e.html#5 TSS/8, was A Whirlwind History of the Computer
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017b.html#70 The ICL 2900
https://www.garlic.com/~lynn/2017b.html#32 Virtualization's Past Helps Explain Its Current Importance

--
virtualization experience starting Jan1968, online at home since Mar1970

Channel Program I/O Processing Efficiency

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Channel Program I/O Processing Efficiency
Date: 28 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency

post mentioning WATFOR, CICS and optimizing OS/360
https://www.garlic.com/~lynn/2022e.html#43 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems

as an undergraduate I was hired fulltime responsible for OS/360 (and applications). The univ shut down the datacenter on weekends and I had the whole place dedicated, although 48hrs w/o sleep could make monday classes hard. CSC came out and installed CP67/CMS (3rd installation after Cambridge and MIT Lincoln Labs); I was mostly limited to playing with it on weekends. The first few months were primarily rewriting CP67 code for running OS/360 in a virtual machine. Archived post with pieces of an old SHARE presentation about the early work
https://www.garlic.com/~lynn/94.html#18

OS/360 time w/o CP67, 322sec; initial under CP67, 856sec ... CP67 cpu 534sec; after a few months rewriting code, under CP67, 435sec ... CP67 cpu 113sec ... reduction CP67 cpu from 534sec to 113sec.

I then did a dynamic adaptive resource manager (frequently referred to as the "wheeler scheduler"), new page replacement algorithm and thrashing controls, ordered arm seek, and chained page I/O requests (to maximize transfers per revolution ... the fixed head 2301 went from 80 transfers/sec to 270 transfers/sec). IBM would pick up most of the stuff (as well as other stuff like ascii/tty terminal support) and include it in the shipped product.

After I graduated and joined the science center, one of my hobbies was enhanced production operating systems for internal datacenters (the world-wide sales&marketing HONE systems were a long time customer). Some of the science center people spin off and take over the Boston Programming Center (on the 3rd flr) to form the vm370 group. In the morph of CP67->vm370 they greatly simplify and/or drop lots of stuff, including multiprocessor support as well as much of the stuff I did before joining IBM. I continue to work on CP67 and then port lots of CP67 into VM370 (for internal production CSC/VM), all during the Future System period (even periodically ridiculing FS ... which wasn't exactly a career enhancing activity) ... FS info
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

note that during FS, the lack of new 370 stuff is credited with giving the clone 370 system makers their market foothold. Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

During VM370 R2, I got most of the CP67 integrity work (for production use and to stop VM370 constantly crashing during benchmarks) and other enhancements moved to VM370 for CSC/VM. Little 370 work was being done during FS; after FS imploded, I got hardware multiprocessor support into my R3-based CSC/VM ... originally primarily for HONE. The US HONE datacenters had been consolidated in Palo Alto and enhanced with single-system image, loosely-coupled operation with a large disk farm, load balancing and fall-over across the eight system complex (each 3330 string was "string-switched" to two 3830 controllers, and each 3830 controller had a four channel switch, giving each 3330 drive connectivity to all 8 systems). Hardware multiprocessing support allowed them to add a 2nd 168-3 CPU to each of the systems. Old email about my initial (internal IBM) CSC/VM
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

HONE was originally created after the 23JUN1969 unbundling (starting to charge for application software, maintenance, SE services, but decision was made that kernel software would still be free) as part of branch office SE training (but CMS\APL sales&marketing support applications were also added which then came to dominate all HONE activity). With FS implosion, rise of clone 370s, and mad rush to get stuff back into 370 product pipeline, it was decided to start charging for kernel software. Initially it was just operating system addons (as part of transition to charging for all kernel addons) and some of my dynamic adaptive resource management was chosen to be guinea pig for the initial charged-for addon (and I got to spend a lot of time with lawyers and business people about kernel software charging policy). Also, SHARE resolutions had been asking for the re-introduction of the CP67 "wheeler scheduler" for VM370.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging technology related posts
https://www.garlic.com/~lynn/subtopic.html#clock
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
multiprocessor and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

Channel Program I/O Processing Efficiency

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Channel Program I/O Processing Efficiency
Date: 28 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#50 Channel Program I/O Processing Efficiency

VM370 "wheeler scheduler" trivia/joke: Initial review by somebody in corporate was that I had no manual tuning knobs (like MVS SRM). I tried to explain that "dynamic adaptive" met constantly monitoring configuration and workload and dynamically adapting&tuning. He said all "modern" systems had manual tuning knobs and he wouldn't sign off on announce/ship until I had manual tuning knobs. So I created manual tuning knobs which could be changed with an "SRM" command (parody/ridiculing MVS), provided full documentation and formulas on how they worked. What very few realized that in "degress of freedon" (from dynamic feedback/feedfoward) for the SRM manual tuning knobs, it was less than the dynamic adaptive algorithms ... so the dynamic adaptive algorithms could correct for any manual tuning knobs setting.

Other trivia: as part of the final release, there were 2000 benchmarks done that took 3 months elapsed time. The first 1000 benchmarks varied configuration and workload across known environments, with added stress testing for various compute intensive, file I/O intensive, page I/O intensive, memory size intensive, etc. combinations. One of the people at the science center had done an APL-based analytical system model (incidentally it had been made available on HONE as the Performance Predictor, where SEs could enter customer workload and configuration info and ask "what-if" questions about changes in workload and/or configuration) which would predict the result for each benchmark based on configuration & workload and then compare its prediction with the actual result. The second 1000 benchmark configurations and workloads were purely driven by the APL application searching for possible anomalous combinations.

dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

some recent performance predictor posts
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history
https://www.garlic.com/~lynn/2019d.html#106 IBM HONE
https://www.garlic.com/~lynn/2019c.html#85 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019b.html#27 Online Computer Conferencing
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2017j.html#109 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2017j.html#103 why VM, was thrashing
https://www.garlic.com/~lynn/2017h.html#68 Pareto efficiency
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017b.html#27 Virtualization's Past Helps Explain Its Current Importance
https://www.garlic.com/~lynn/2016c.html#5 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016b.html#109 Bimodal Distribution
https://www.garlic.com/~lynn/2016b.html#54 CMS\APL
https://www.garlic.com/~lynn/2016b.html#36 Ransomware
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015f.html#69 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015c.html#71 A New Performance Model
https://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?
https://www.garlic.com/~lynn/2014b.html#83 CPU time
https://www.garlic.com/~lynn/2014b.html#81 CPU time
https://www.garlic.com/~lynn/2013g.html#6 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2012n.html#27 System/360--50 years--the future?
https://www.garlic.com/~lynn/2012f.html#60 Hard Disk Drive Construction
https://www.garlic.com/~lynn/2012.html#50 Can any one tell about what is APL language

--
virtualization experience starting Jan1968, online at home since Mar1970

Freakonomics

From: Lynn Wheeler <lynn@garlic.com>
Subject: Freakonomics
Date: 28 June 2022
Blog: Facebook
Freakonomics has a chapter on how, in the early 90s, there was an expectation of a big explosion in crime that never appeared. The analysis was that with the big upswing in abortions after Roe/Wade, the unwanted children who would have been responsible for those crimes were never born.

Freakonomics Rev Ed: A Rogue Economist Explores the Hidden Side of Everything
https://www.amazon.com/Freakonomics-Rev-Ed-Economist-Everything-ebook-dp-B000MAH66Y/dp/B000MAH66Y/
Abortion and Crime, Revisited
https://freakonomics.com/podcast/abortion-and-crime-revisited/
Freakonomics Summary and Analysis of Chapter 4
https://www.gradesaver.com/freakonomics/study-guide/summary-chapter-4
Freakonomics
https://en.wikipedia.org/wiki/Freakonomics

somewhat intertwined with this is the US's high incarceration rate (having been the highest in the world)
https://en.wikipedia.org/wiki/United_States_incarceration_rate
and the rise of the for-profit prison industry, wanting young, mostly docile/non-violent offenders
https://en.wikipedia.org/wiki/United_States_incarceration_rate#Prison_privatization

posts mentioning freakonomics
https://www.garlic.com/~lynn/2021h.html#4 Getting the lead out: a quirky tale of saving the world
https://www.garlic.com/~lynn/2021b.html#94 Fecalnomics
https://www.garlic.com/~lynn/2017f.html#12 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016f.html#3 E.R. Burroughs
https://www.garlic.com/~lynn/2015g.html#27 OT: efforts to repeal strict public safety laws
https://www.garlic.com/~lynn/2015e.html#74 prices, was Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2013d.html#46 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012k.html#12 The Secret Consensus Among Economists
https://www.garlic.com/~lynn/2012i.html#57 a clock in it, was Re: Interesting News Article
https://www.garlic.com/~lynn/2012e.html#57 speculation
https://www.garlic.com/~lynn/2011c.html#30 The first personal computer (PC)
https://www.garlic.com/~lynn/2011b.html#88 NASA proves once again that, for it, the impossible is not even difficult
https://www.garlic.com/~lynn/2011.html#53 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2007h.html#55 ANN: Microsoft goes Open Source

other posts mentioning for-profit prison industry
https://www.garlic.com/~lynn/2019b.html#89 How Private Equity Is Turning Public Prisons Into Big Profits
https://www.garlic.com/~lynn/2017f.html#19 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017f.html#12 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016c.html#89 Qbasic
https://www.garlic.com/~lynn/2016c.html#39 Qbasic
https://www.garlic.com/~lynn/2016b.html#70 Qbasic
https://www.garlic.com/~lynn/2015h.html#4 Decimal point character and billions
https://www.garlic.com/~lynn/2015e.html#85 prices, was Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014c.html#10 Royal Pardon For Turing
https://www.garlic.com/~lynn/2013j.html#82 copyright protection/Doug Englebart

--
virtualization experience starting Jan1968, online at home since Mar1970

Channel Program I/O Processing Efficiency

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Channel Program I/O Processing Efficiency
Date: 28 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#50 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency

IBM had 2303 & 2301 fixed-head drums. The 2301 was similar to the 2303 but read/wrote four heads in parallel: four times the transfer rate, 1/4 the number of tracks, each track four times larger. CP67 "borrowed" the TSS/360 2301 format with 9 4k blocks formatted across two tracks (with one record spanning the end of one track and the start of the next). At 60revs/sec, it was capable of 30*9 4k transfers, or 270/sec. CP67 had a FIFO queue of DASD requests, doing one transfer at a time ... where the 2301 peaked at around 80/sec because of the average rotational delay per transfer.
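
As a back-of-the-envelope check (a sketch only; the geometry and rates are just the ones quoted above, nothing from IBM documentation), both peak numbers fall out of simple arithmetic:

    # 2301 drum throughput, using the format described above
    REVS_PER_SEC = 60           # drum rotation rate
    BLOCKS_PER_TRACK_PAIR = 9   # 4k blocks formatted across two tracks

    # back-to-back chained transfers: 9 blocks every two revolutions
    chained_rate = (REVS_PER_SEC / 2) * BLOCKS_PER_TRACK_PAIR    # = 270/sec

    # FIFO, one transfer per I/O: average half-revolution rotational delay
    # plus the transfer time of one 4k block (2/9 of a revolution)
    rev_time = 1.0 / REVS_PER_SEC
    per_request = 0.5 * rev_time + (2.0 / BLOCKS_PER_TRACK_PAIR) * rev_time
    fifo_rate = 1.0 / per_request                                # ~83/sec

    print(f"chained peak: {chained_rate:.0f}/sec, FIFO peak: {fifo_rate:.0f}/sec")

which is why chaining multiple queued requests per revolution (next paragraph) mattered so much.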

I did ordered arm seek for the 2311 & 2314 ... and chaining of multiple page requests for the same cylinder (ordered to optimize transfers per revolution) ... on a fixed-head drum the "cylinder" is all tracks ... but chaining could require a "seek head" to a different track between transfers. The processing delay for that added "seek head" CCW was the basis for needing dummy records between page records on 3330 DASD ... a 101-byte dummy record was enough for all 370 channels except the 158 integrated channel (and the later 303x channel director and 3081 channels) ... the 158's slow processing required 110 bytes (space which wasn't available on 3330 tracks).
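
A minimal sketch of that queueing idea (hypothetical structures and field names, not CP67 code): collect the queued page requests for one cylinder and order them by rotational position so several transfers can be chained per revolution; on a movable-arm device the cylinder itself is picked by a simple one-direction scan as a stand-in for the ordered-seek queueing.

    from collections import defaultdict

    def build_chain(queue, current_cyl):
        """Pick the next cylinder (one-direction scan) and return its requests
        sorted by rotational slot, so several page transfers can be chained
        in a single revolution."""
        by_cyl = defaultdict(list)
        for req in queue:                   # req: {"cyl": n, "head": n, "slot": n}
            by_cyl[req["cyl"]].append(req)
        ahead = sorted(c for c in by_cyl if c >= current_cyl)
        target = ahead[0] if ahead else max(by_cyl)   # sweep up; else reverse
        chain = sorted(by_cyl[target], key=lambda r: r["slot"])
        return target, chain

    queue = [{"cyl": 40, "head": 0, "slot": 1},
             {"cyl": 10, "head": 3, "slot": 2},
             {"cyl": 10, "head": 1, "slot": 0}]
    print(build_chain(queue, current_cyl=12))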

trivia: it turns out that 4341 channel processing was so fast that, with a little tweak (data streaming), the engineering 4341 in bldg. 15 could be used for testing 3380s at 3mbyte/sec.

posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Channel Program I/O Processing Efficiency

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Channel Program I/O Processing Efficiency
Date: 29 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#50 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#53 Channel Program I/O Processing Efficiency

Part of masking how slow the 3880 was, it cached a bunch of information from the most recent I/O (including the channel path) ... if you hit it from a different channel path, all of that had to be undone. I rewrote the I/O supervisor for bldgs 14 & 15 (disk engineering & product test) ... including a superfast instruction path for channel load balancing and alternate pathing. With 3830 controllers that got higher throughput ... with 3880 controllers it gave much worse performance (sort of analogous to running applications with processor cache affinity on a multiprocessor machine, aka restarting an application on the same processor because lots of the application is already in the cache; restarting on a different processor can mean flushing data out of the previous cache and reloading it into the new cache).
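
A hedged sketch of that trade-off (hypothetical structures, not the actual I/O supervisor code): with 3830-style controllers, spreading starts across the free channel paths raises throughput; with a 3880 that caches state for the most recently used path, preferring the last-used path avoids the per-switch penalty.

    def pick_path(device, prefer_last_path):
        """Choose a channel path for the next start I/O."""
        free = [p for p in device["paths"] if not p["busy"]]
        if not free:
            return None                                  # all paths busy; queue it
        if prefer_last_path:                             # "3880": path affinity
            for p in free:
                if p["id"] == device["last_path"]:
                    return p
        return min(free, key=lambda p: p["starts"])      # "3830": load balance

    device = {"last_path": "B",
              "paths": [{"id": "A", "busy": False, "starts": 3},
                        {"id": "B", "busy": False, "starts": 9}]}
    print(pick_path(device, prefer_last_path=True)["id"])   # B (affinity)
    print(pick_path(device, prefer_last_path=False)["id"])  # A (least used)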

I claimed I could do I/O redrive after an interrupt, in 370 instructions, almost as fast as XA could with its dedicated outboard processor. A huge justification for its inclusion in XA was that the MVS pathlength for I/O redrive was 5k to 10k instructions ... which meant the device was idle between the end of the previous interrupt and the start of the redrive of pending, queued I/O.

The first time this showed up was when they replaced a 3830 on our bldg15 system with a 3880 over the weekend. IBM had guidelines that new products should have nearly the same or better throughput than the previous ones. They had started out trying to mask how slow the 3880 was by presenting the end-of-operation interrupt before the 3880 had actually cleaned up all the associated information ... expecting that the operating system "redrive" (restarting the device with new queued I/O) would take at least a couple thousand instructions (more than enough time for the 3880 to clean things up). They had done initial verification and acceptance tests with MVS and found it acceptable. When they dropped the 3880 into bldg15 with my system running on the 3033 ... things completely fell apart. Under load, I was always doing a redrive of any queued operation well before the 3880 was actually done ... so the 3880 had to respond with CC=1, SM+BUSY (controller busy) ... and then when it really was done, it had to present a CUE interrupt.
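
A rough sketch of that interaction (hypothetical helper names; neither MVS nor VM code): the interrupt handler immediately restarts the next queued channel program, and if the control unit hasn't finished its housekeeping the start comes back CC=1 with SM+BUSY and is retried when the control-unit-end (CUE) interrupt arrives.

    CC_OK, CC_BUSY = 0, 1

    def io_interrupt(device, start_io):
        """On device end: immediately redrive the next queued request."""
        if not device["queue"]:
            return
        cc = start_io(device, device["queue"][0])   # issue SIO for next request
        if cc == CC_OK:
            device["queue"].pop(0)                  # accepted; transfer in flight
        else:                                       # CC=1 SM+BUSY: controller busy
            device["await_cue"] = True              # retry when CUE arrives

    def cue_interrupt(device, start_io):
        """Control-unit-end: the control unit has finished its housekeeping."""
        if device.pop("await_cue", False):
            io_interrupt(device, start_io)

    # toy control unit that is still "cleaning up" on the first start
    state = {"ready": False}
    def start_io(device, req):
        return CC_OK if state["ready"] else CC_BUSY

    dev = {"queue": ["page-in #1"], "await_cue": False}
    io_interrupt(dev, start_io)      # too early: CC=1, controller busy
    state["ready"] = True
    cue_interrupt(dev, start_io)     # CUE arrives: redrive succeeds
    print(dev["queue"])              # []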

I had lots of ridicule for the XA features that were done to compensate for horrible MVS code (analogous to the justification for making all 370 machines virtual memory ... because it was needed to compensate for how bad MVT storage management was). ref to the decision:
https://www.garlic.com/~lynn/2011d.html#73

posts getting to play disk engineer in bldg 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

some past posts mentioning redrive and 3880 controller
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2017g.html#64 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2016h.html#50 Resurrected! Paul Allen's tech team brings 50-year -old supercomputer back from the dead
https://www.garlic.com/~lynn/2016e.html#56 IBM 1401 vs. 360/30 emulation?
https://www.garlic.com/~lynn/2016b.html#79 Asynchronous Interrupts
https://www.garlic.com/~lynn/2015f.html#88 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2014k.html#22 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2013n.html#69 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013n.html#56 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2012p.html#17 What is a Mainframe?
https://www.garlic.com/~lynn/2012o.html#28 IBM mainframe evolves to serve the digital world
https://www.garlic.com/~lynn/2012m.html#6 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012c.html#23 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2012c.html#20 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2012b.html#2 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2011p.html#120 Start Interpretive Execution
https://www.garlic.com/~lynn/2011k.html#86 'smttter IBMdroids
https://www.garlic.com/~lynn/2011.html#36 CKD DASD
https://www.garlic.com/~lynn/2010e.html#30 SHAREWARE at Its Finest
https://www.garlic.com/~lynn/2009r.html#52 360 programs on a z/10
https://www.garlic.com/~lynn/2009q.html#74 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2008d.html#52 Throwaway cores
https://www.garlic.com/~lynn/2007t.html#77 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2007h.html#9 21st Century ISA goals?
https://www.garlic.com/~lynn/2007h.html#6 21st Century ISA goals?
https://www.garlic.com/~lynn/2006g.html#0 IBM 3380 and 3880 maintenance docs needed
https://www.garlic.com/~lynn/2004p.html#61 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2003m.html#43 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003f.html#40 inter-block gaps on DASD tracks
https://www.garlic.com/~lynn/2002b.html#2 Microcode? (& index searching)
https://www.garlic.com/~lynn/2001h.html#28 checking some myths.
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2000c.html#75 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/96.html#19 IBM 4381 (finger-check)

--
virtualization experience starting Jan1968, online at home since Mar1970

Channel Program I/O Processing Efficiency

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Channel Program I/O Processing Efficiency
Date: 29 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#50 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#53 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#54 Channel Program I/O Processing Efficiency

A little (cache-related) topic drift ... when I put multiprocessor support into VM370 release 3 (it had been dropped in the initial morph of CP67->VM370) ... it was originally for HONE, so they could add a 2nd 168 processor to all their systems in a "single system image" complex ... they also had a 158 system for development/test that had a 2nd processor added. Now, 370 two-processor machines had the processor speed slowed down by 10% (to allow for cross-cache protocol overhead between the two processors), so the hardware of a two-processor system was only 1.8 times a single processor.

MVS also had much of the operating system bracketed with a serialization lock, allowing only one processor to execute that code at a time (forcing the other to stall in a lock-spin loop waiting); as a result, MVS documentation would only claim that a two-processor system had 1.2-1.5 times the throughput of a single processor.

The multiprocessor code I had for HONE had almost no lock-spin time and there were various cache-affinity features. As a result, applications tended to have a better cache hit ratio than when running on a single processor machine ... which would be switching between executable code significantly more often (resulting in previously cached information being flushed and lots of cache-miss waits for new information being loaded). The lack of any significant lock-spin or other multiprocessor "overhead", plus the improved cache hit rate, meant that the processors were executing instructions faster ... with system throughput even hitting two times a single processor (the base hardware was only 1.8 times a single processor, but that was offset by the improved cache hit rate).
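
The arithmetic behind those numbers (the cache-affinity gain below is simply the figure needed to make 1.8x come out to 2x, not a measured value from the text):

    single = 1.0
    hw_two_way = 2 * 0.9 * single      # each CPU clocked 10% slower: 1.8x hardware

    mvs_claim = (1.2, 1.5)             # MVS documentation's two-processor range

    affinity_gain = 1.0 / 0.9          # ~11% better effective rate per processor
    vm_two_way = hw_two_way * affinity_gain    # 1.8 * 1.11... = ~2.0x a single CPU

    print(f"hardware {hw_two_way:.1f}x, MVS claim {mvs_claim}, "
          f"with affinity ~{vm_two_way:.1f}x")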

SMP, multiprocessing, and/or compare&swap instruction post
https://www.garlic.com/~lynn/subtopic.html#smp
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Channel Program I/O Processing Efficiency

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Channel Program I/O Processing Efficiency
Date: 29 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#50 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#53 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#54 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#55 Channel Program I/O Processing Efficiency

The HSDTSFS did contiguous allocation, read-ahead, write-behind, multi-record transfers, and much faster and better logging and recovery. I had previously done a page-mapped CMS filesystem for CP67 that ran much faster and then ported it to VM370 ... I used the same API with a few enhancements. The data format appeared similar enough that existing applications using the existing spool file diagnose API would continue to work.

I had started HSDT in the early 80s with T1 and faster computer links (both terrestrial and satellite) and was working with the NSF director and was supposed to get $20M to interconnect (TCP/IP) the NSF supercomputer centers. Then congress cut the budget, some other things happened, and eventually an RFP was released; preliminary announce:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

Internal IBM politics prevented us from bidding on the RFP. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies, but that just made the internal politics worse (as did claims that what we already had running was at least 5yrs ahead of the winning bid; the RFP was awarded 24Nov87). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

... in any case, recent HSDT & HSDTSFS post concerning the "VM Workshop"
https://www.garlic.com/~lynn/2022e.html#8 VM Workship ... VM/370 50th birthday

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
page mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some other posts mentioning HSDTSFS
https://www.garlic.com/~lynn/2021j.html#26 Programming Languages in IBM
https://www.garlic.com/~lynn/2021g.html#37 IBM Programming Projects
https://www.garlic.com/~lynn/2013n.html#91 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2012g.html#24 Co-existance of z/OS and z/VM on same DASD farm
https://www.garlic.com/~lynn/2012g.html#23 VM Workshop 2012
https://www.garlic.com/~lynn/2012g.html#18 VM Workshop 2012
https://www.garlic.com/~lynn/2011e.html#25 Multiple Virtual Memory

--
virtualization experience starting Jan1968, online at home since Mar1970

Behind the Scenes, McKinsey Guided Companies at the Center of the Opioid Crisis

From: Lynn Wheeler <lynn@garlic.com>
Subject: Behind the Scenes, McKinsey Guided Companies at the Center of the Opioid Crisis.
Date: 29 June 2022
Blog: Facebook
Behind the Scenes, McKinsey Guided Companies at the Center of the Opioid Crisis. The consulting firm offered clients "in-depth experience in narcotics," from poppy fields to pills more powerful than Purdue's OxyContin.
https://www.nytimes.com/2022/06/29/business/mckinsey-opioid-crisis-opana.html

a couple other recent articles:

Congress Has to Ask How Much McKinsey Hurt the F.D.A.
https://www.nytimes.com/2022/04/26/opinion/mckinsey-fda-opioids.html
Lawmakers Dismiss McKinsey's Apology on Opioid Crisis as 'Empty'
https://www.nytimes.com/2022/04/27/business/mckinsey-congress-opioids.html

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

specific posts mentioning McKinsey
https://www.garlic.com/~lynn/2022d.html#84 Destruction Of The Middle Class
https://www.garlic.com/~lynn/2022c.html#97 Why Companies Are Becoming B Corporations
https://www.garlic.com/~lynn/2021i.html#36 We've Structured Our Economy to Redistribute a Massive Amount of Income Upward
https://www.garlic.com/~lynn/2021h.html#101 The War in Afghanistan Is What Happens When McKinsey Types Run Everything
https://www.garlic.com/~lynn/2021h.html#96 The War in Afghanistan Is What Happens When McKinsey Types Run Everything
https://www.garlic.com/~lynn/2021g.html#54 Republicans Have Taken a Brave Stand in Defense of Tax Cheats
https://www.garlic.com/~lynn/2021f.html#17 Jamie Dimon: Some Americans 'don't feel like going back to work'
https://www.garlic.com/~lynn/2021.html#21 ESG Drives a Stake Through Friedman's Legacy
https://www.garlic.com/~lynn/2012l.html#53 CALCULATORS

--
virtualization experience starting Jan1968, online at home since Mar1970

Channel Program I/O Processing Efficiency

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Channel Program I/O Processing Efficiency
Date: 30 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
other recent post in thread:
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#50 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#53 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#54 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#55 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#56 Channel Program I/O Processing Efficiency

re: Performance Predictor. I had long left IBM, but at the turn of the century I was brought into a large financial outsourcing datacenter doing something like half of all US credit card processing (for banks and other financial institutions). They had something like 40+ mainframes (@$30M, constant rolling upgrades, none older than the previous generation), all running a 450K cobol statement application; that number of systems was needed to finish settlement in the overnight batch window. They had a large performance group that had been managing the care & feeding for decades. I used some other performance analysis technology (from science center days) and found a 14% improvement.

There was another performance consultant from Europe who, during the IBM troubles of the early 90s (when IBM was unloading lots of stuff), had acquired the rights to a descendant of the Performance Predictor, ran it through an APL->C converter, and was using it in a large (IBM mainframe and non-IBM) datacenter performance consulting business ... he found a different 7% improvement ... for a total 21% improvement (savings on >$1B of IBM mainframes).

trivia: the outsourcing company had been spun off from AMEX in 1992 in the largest IPO up until that time ... several of the executives had previously reported to Gerstner (when he was president of AMEX).

Posts mentioning Gerstner
https://www.garlic.com/~lynn/submisc.html#gerstner

some past posts mentioning the 450K cobol statement application
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#68 How Gerstner Rebuilt IBM
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021c.html#49 IBM CEO
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs
https://www.garlic.com/~lynn/2019e.html#155 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#11 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019b.html#62 Cobol

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CEO: Only 60% of office workers will ever return full-time

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CEO: Only 60% of office workers will ever return full-time
Date: 30 June 2022
Blog: Facebook
IBM CEO: Only 60% of office workers will ever return full-time. "I think we've learned a new normal," Fortune 500 exec says
https://therealdeal.com/2022/06/28/ibm-ceo-only-60-of-office-workers-will-ever-return-full-time/

Possibly for being blamed for online computer conferencing (predating IBM forums and social media) in the late 70s and early 80s (folklore is that 5of6 of the corporate executive committee wanted to fire me), I was transferred from San Jose Research to Yorktown, but continued to live in San Jose (with an office in SJR, later Almaden, and a wing in Los Gatos with offices and labs) ... I had to commute to Yorktown a couple times a month ... and continued to be periodically told I had no career, no promotions, no raises.

I did have a situation where I did a written open door about no raises and got a written response back from the head of HR that, after a careful review of my whole career, I was being paid exactly what I was supposed to be. I then responded (with copies of the original open door and the head of HR's response) that I was being asked to interview new graduates for a group that would be working under my technical direction and they were getting offers 30% higher than I was making. I never got a response, but a few weeks later I got a 30% raise (putting me on a level playing field with new graduate offers). It was one of the many times that co-workers had to remind me that in IBM, Business Ethics was an OXYMORON.

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning Business Ethics is OXYMORON (in IBM)
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#125 IBM Clone Controllers
https://www.garlic.com/~lynn/2021j.html#39 IBM Registered Confidential
https://www.garlic.com/~lynn/2021i.html#82 IBM Downturn
https://www.garlic.com/~lynn/2021h.html#61 IBM Starting Salary
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021e.html#15 IBM Internal Network
https://www.garlic.com/~lynn/2021c.html#42 IBM Suggestion Program
https://www.garlic.com/~lynn/2021c.html#41 Teaching IBM Class
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021b.html#12 IBM "811", 370/xa architecture
https://www.garlic.com/~lynn/2021.html#83 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2018f.html#96 IBM Career
https://www.garlic.com/~lynn/2017e.html#9 Terminology - Datasets
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
https://www.garlic.com/~lynn/2012k.html#28 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2011b.html#59 Productivity And Bubbles
https://www.garlic.com/~lynn/2010g.html#44 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010g.html#0 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010b.html#38 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009r.html#50 "Portable" data centers
https://www.garlic.com/~lynn/2009o.html#36 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2009e.html#37 How do you see ethics playing a role in your organizations current or past?
https://www.garlic.com/~lynn/2009.html#53 CROOKS and NANNIES: what would Boyd do?
https://www.garlic.com/~lynn/2007j.html#72 IBM Unionization

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CEO: Only 60% of office workers will ever return full-time

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CEO: Only 60% of office workers will ever return full-time
Date: 30 June 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#59 IBM CEO: Only 60% of office workers will ever return full-time

All during the FS period, I continued working on 360 & 370 and would periodically ridicule FS ... which wasn't exactly a career-enhancing activity. FS was completely different from 370 and was going to completely replace it ... more information:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

Internal politics was killing off 370 efforts, and the claim was that the lack of new IBM 370 products during FS is what gave the clone 370 makers their market foothold. After joining IBM, I still got to wander around IBM and customer datacenters; one of my hobbies was enhanced production operating systems for internal datacenters (the world-wide sales & marketing support HONE systems were a long-time customer), I attended user group meetings (like SHARE) ... and the manager of one of the largest financial datacenters (a vast sea of IBM mainframes) liked me to stop by and talk technology.

Then the branch manager did something that horribly offended the customer ... and in retaliation, they ordered an Amdahl machine (Amdahl had been selling into univ, scientific, and technical markets, but this would be the first for a true blue commercial customer). I was then asked to go onsite for a year to help obfuscate why the customer was ordering an Amdahl. I talked it over with the customer and told IBM nope. I was then told that the branch manager was a good sailing buddy of the IBM CEO and if I didn't do this I could forget having a career, promotions, raises ("old boys": you scratch their back, they scratch yours).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

Chairman Learson trying to block the rise of the (old boy) careerists and bureaucrats destroying the Watson legacy.

Management Briefing
Number 1-72: January 18,1972
ZZ04-1312

TO ALL IBM MANAGERS:

Once again, I'm writing you a Management Briefing on the subject of bureaucracy. Evidently the earlier ones haven't worked. So this time I'm taking a further step: I'm going directly to the individual employees in the company. You will be reading this poster and my comment on it in the forthcoming issue of THINK magazine. But I wanted each one of you to have an advance copy because rooting out bureaucracy rests principally with the way each of us runs his own shop.

We've got to make a dent in this problem. By the time the THINK piece comes out, I want the correction process already to have begun. And that job starts with you and with me.

Vin Learson


....


+-----------------------------------------+
|           "BUSINESS ECOLOGY"            |
|                                         |
|                                         |
|            +---------------+            |
|            |  BUREAUCRACY  |            |
|            +---------------+            |
|                                         |
|           is your worst enemy           |
|              because it -               |
|                                         |
|      POISONS      the mind              |
|      STIFLES      the spirit            |
|      POLLUTES     self-motivation       |
|             and finally                 |
|      KILLS        the individual.       |
+-----------------------------------------+
"I'M Going To Do All I Can to Fight This Problem . . ."
by T. Vincent Learson, Chairman

... snip ...

How to Stuff a Wild Duck

"We are convinced that any business needs its wild ducks. And in IBM we try not to tame them." - T.J. Watson, Jr.

"How To Stuff A Wild Duck", 1973, IBM poster
https://collection.cooperhewitt.org/objects/18618011/

past posts mentioning Learson "Business Ecology"
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022e.html#37 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#15 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#14 IBM "Fast-Track" Bureaucrats
https://www.garlic.com/~lynn/2022d.html#103 IBM'S MISSED OPPORTUNITY WITH THE INTERNET
https://www.garlic.com/~lynn/2022d.html#89 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#76 "12 O'clock High" In IBM Management School
https://www.garlic.com/~lynn/2022d.html#71 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2022d.html#52 Another IBM Down Fall thread
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2021g.html#51 Intel rumored to be in talks to buy chip manufacturer GlobalFoundries for $30B
https://www.garlic.com/~lynn/2021g.html#32 Big Blue's big email blues signal terminal decline - unless it learns to migrate itself
https://www.garlic.com/~lynn/2021e.html#62 IBM / How To Stuff A Wild Duck
https://www.garlic.com/~lynn/2021d.html#51 IBM Hardest Problem(s)
https://www.garlic.com/~lynn/2021.html#0 IBM "Wild Ducks"
https://www.garlic.com/~lynn/2017j.html#23 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2017f.html#109 IBM downfall
https://www.garlic.com/~lynn/2017b.html#56 Wild Ducks
https://www.garlic.com/~lynn/2015g.html#60 [Poll] Computing favorities
https://www.garlic.com/~lynn/2015d.html#19 Where to Flatten the Officer Corps
https://www.garlic.com/~lynn/2013.html#18 How do we fight bureaucracy and bureaucrats in IBM?
https://www.garlic.com/~lynn/2013.html#11 How do we fight bureaucracy and bureaucrats in IBM?
https://www.garlic.com/~lynn/2012k.html#65 How do you feel about the fact that India has more employees than US?
https://www.garlic.com/~lynn/2012f.html#92 How do you feel about the fact that India has more employees than US?

and from the budding Future System disaster in the 70s, from Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993 ....
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

recent "The Post-IBM World" posts
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022e.html#14 IBM "Fast-Track" Bureaucrats
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#6 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022d.html#89 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#71 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2022d.html#52 Another IBM Down Fall thread
https://www.garlic.com/~lynn/2022d.html#49 IBM Dug A Hole
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022b.html#99 CDC6000
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#88 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#58 Interdata Computers
https://www.garlic.com/~lynn/2022b.html#50 100 days after IBM split, Kyndryl signs strategic cloud pact with AWS
https://www.garlic.com/~lynn/2022b.html#37 Leadership
https://www.garlic.com/~lynn/2022b.html#12 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#53 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#39 Mainframe I/O
https://www.garlic.com/~lynn/2021j.html#113 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#93 IBM 3278
https://www.garlic.com/~lynn/2021j.html#76 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#70 IBM Wild Ducks
https://www.garlic.com/~lynn/2021i.html#79 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#64 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021h.html#3 Cloud computing's destiny
https://www.garlic.com/~lynn/2021g.html#51 Intel rumored to be in talks to buy chip manufacturer GlobalFoundries for $30B
https://www.garlic.com/~lynn/2021g.html#43 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021g.html#32 Big Blue's big email blues signal terminal decline - unless it learns to migrate itself
https://www.garlic.com/~lynn/2021g.html#5 IBM's 18-month company-wide email system migration has been a disaster, sources say
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#24 IBM Remains Big Tech's Disaster
https://www.garlic.com/~lynn/2021e.html#62 IBM / How To Stuff A Wild Duck
https://www.garlic.com/~lynn/2021d.html#66 IBM CEO Story
https://www.garlic.com/~lynn/2021c.html#15 IBM Wild Ducks
https://www.garlic.com/~lynn/2021b.html#97 IBM Glory days
https://www.garlic.com/~lynn/2021b.html#7 IBM & Apple
https://www.garlic.com/~lynn/2021.html#39 IBM Tech
https://www.garlic.com/~lynn/2021.html#0 IBM "Wild Ducks"

--
virtualization experience starting Jan1968, online at home since Mar1970

Channel Program I/O Processing Efficiency

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Channel Program I/O Processing Efficiency
Date: 01 July 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#50 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#53 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#54 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#55 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#56 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency

a recent comment about 360 & IBM POK:

POK people are doomed and damned since Fred Brooks ate Gene Andahl and robed his legacy.

Just because Frederick Phillips Brooks Jr. is fortunate son of the descendants of pilgrim fathers and Gene Amdahl was just a commoner from rural areas of South Dakota.

Brooks obviously incompetent and has made a career out of exploiting people from the lower classes for free. S/360 was made by a 12-men team.

To takeover that golden goose, Brooks fired Amdahl. More is better, isnt it? So Brooks's desicion was made at scale: 150 units of the POK drones were ordered to overperform the Amdahl's creation. The Future Systems, you know. With The Future Systems IBM occured ine of the all-time greatest FAILS of all time. S/370 mod. 168 used 1/16 hardware compared to 3081 with same or better performance on real-world applications.

All of these Royal banquet of FS payed by S/370 budged just for price of cancelling many of promising S/370 hardware and software projects.

That's why 370's channels produced so weak design.

In 1975 Gene Amdahl launched in production 470/V6 designed and built with the large scale IC for magnitude less costs of design and production. With non-cutted channels, exactly.


... snip ...

... then there is the cancellation of ACS/360 ... claims were that (Amdahl's) ACS/360 would advance the computer state-of-the-art too fast and IBM would lose control of the market
https://people.cs.clemson.edu/~mark/acs_end.html

lots more future system
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

oh, and a contribution to spring/summer 81 "tandem memos" (I was blamed for online computer conferencing):

Date: 04/23/81 09:57:42
To: wheeler

your ramblings concerning the corp(se?) showed up in my reader yesterday. like all good net people, i passed them along to 3 other people. like rabbits interesting things seem to multiply on the net. many of us here in pok experience the sort of feelings your mail seems so burdened by: the company, from our point of view, is out of control. i think the word will reach higher only when the almighty $$$ impact starts to hit. but maybe it never will. its hard to imagine one stuffed company president saying to another (our) stuffed company president i think i'll buy from those inovative freaks down the street. '(i am not defending the mess that surrounds us, just trying to understand why only some of us seem to see it).

bob tomasulo and dave anderson, the two poeple responsible for the model 91 and the (incredible but killed) hawk project, just left pok for the new stc computer company. management reaction: when dave told them he was thinking of leaving they said 'ok. 'one word. 'ok. ' they tried to keep bob by telling him he shouldn't go (the reward system in pok could be a subject of long correspondence). when he left, the management position was 'he wasn't doing anything anyway. '

in some sense true. but we haven't built an interesting high-speed machine in 10 years. look at the 85/165/168/3033/trout. all the same machine with treaks here and there. and the hordes continue to sweep in with faster and faster machines. true, endicott plans to bring the low/middle into the current high-end arena, but then where is the high-end product development?


... snip ... top of post, old email index

aka 1985, "trout" would be announced as 3090.
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

... at the science center we used to comment that POK liked failures because it required larger departments for executives (failures demonstrating how hard the problem was ... not the wrong people)

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Empire Burlesque. What comes after the American Century?

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Empire Burlesque. What comes after the American Century?
Date: 2 July 2022
Blog: Facebook
Empire Burlesque. What comes after the American Century?
https://harpers.org/archive/2022/07/what-comes-after-the-american-century/

In February 1941, as Adolf Hitler's armies prepared to invade the Soviet Union, the Republican oligarch and publisher Henry Luce laid out a vision for global domination in an article titled the american century. World War II, he argued, was the result of the United States' immature refusal to accept the mantle of world leadership after the British Empire had begun to deteriorate in the wake of World War I. American foolishness, the millionaire claimed, had provided space for Nazi Germany's rise. The only way to rectify this mistake and prevent future conflict was for the United States to join the Allied effort.

... snip ...

.... an alternative is that the members of congress behind the "neutrality" laws were motivated by the enormous US war profiteering they saw during WW1 ... then the oligarchs and war profiteers funded a press campaign in which neutrality was respun as isolationism.

John Foster Dulles played a major role in rebuilding Germany's economy, industry, and military from the 20s thru the early 40s. The Brothers: John Foster Dulles, Allen Dulles, and Their Secret World War,
https://www.amazon.com/Brothers-Foster-Dulles-Allen-Secret-ebook/dp/B00BY5QX1K/
loc865-68:

In mid-1931 a consortium of American banks, eager to safeguard their investments in Germany, persuaded the German government to accept a loan of nearly $500 million to prevent default. Foster was their agent. His ties to the German government tightened after Hitler took power at the beginning of 1933 and appointed Foster's old friend Hjalmar Schacht as minister of economics.

loc905-7:

Foster was stunned by his brother's suggestion that Sullivan & Cromwell quit Germany. Many of his clients with interests there, including not just banks but corporations like Standard Oil and General Electric, wished Sullivan & Cromwell to remain active regardless of political conditions.

loc938-40:

At least one other senior partner at Sullivan & Cromwell, Eustace Seligman, was equally disturbed. In October 1939, six weeks after the Nazi invasion of Poland, he took the extraordinary step of sending Foster a formal memorandum disavowing what his old friend was saying about Nazism

... snip ...

From the law of unintended consequences: for the 1943 US Strategic Bombing Program, they needed German industrial and military targets and locations; they got the information and detailed plans from Wall Street.

In June 1940, Germany had a victory celebration at the NYC Waldorf-Astoria with major industrialists. Lots of them were there to hear how to do business with the Nazis
https://www.amazon.com/Man-Called-Intrepid-Incredible-Narrative-ebook/dp/B00V9QVE5O/

Later (a replay of the NAZI celebration), 5000 corporations/industrialists from across the US had a conference (also) at the NYC Waldorf-Astoria, and in part because they had gotten such a bad reputation for the depression and for supporting Nazi Germany, they funded a large propaganda campaign to equate capitalism with Christianity (as part of refurbishing their horribly corrupt and venal image)
https://www.amazon.com/One-Nation-Under-God-Corporate-ebook/dp/B00PWX7R56/

in part, by the early 50s this included adding "under god" to the pledge of allegiance; slightly cleaned up version:
https://en.wikipedia.org/wiki/Pledge_of_Allegiance

Roosevelt appointed Kennedy as ambassador to England ... supposedly as a counterforce to any British biases
https://worldhistoryproject.org/1937/president-roosevelt-appoints-joseph-p-kennedy-sr-ambassador-to-britain

... but then there was intelligence that Kennedy was involved with the Nazis; Intrepid points the finger at Ambassador ("papa") Kennedy ... they start bugging the US embassy because classified information was leaking to the Germans. They eventually identified a clerk as responsible but couldn't prove ties to Kennedy. However, Kennedy was claiming credit for Chamberlain capitulating to Hitler on many issues ... also making speeches in Britain and the US that Britain could never win a war with Germany and that if he were president, he would be on the best of terms with Hitler.
https://www.amazon.com/Man-Called-Intrepid-Incredible-Narrative-ebook/dp/B00V9QVE5O/
loc2645-52:

The Kennedys dined with the Roosevelts that evening. Two days later, Joseph P. Kennedy spoke on nationwide radio. A startled public learned he now believed "Franklin D. Roosevelt should be re-elected President." He told a press conference: "I never made anti-British statements or said, on or off the record, that I do not expect Britain to win the war." British historian Nicholas Bethell wrote: "How Roosevelt contrived the transformation is a mystery." And so it remained until the BSC Papers disclosed that the President had been supplied with enough evidence of Kennedy's disloyalty that the Ambassador, when shown it, saw discretion to be the better part of valor.

... snip ...

The Coming of American Fascism, 1920-1940
https://www.historynewsnetwork.org/article/the-coming-of-american-fascism-19201940
American Nazis Rally in New York City. On February 20, 1939, the pro-Nazi German American Bund drew more than 20,000 people to a rally in Madison Square Garden.
https://newspapers.ushmm.org/events/american-nazis-rally-in-new-york-city
When The Bankers Plotted To Overthrow FDR
https://www.npr.org/2012/02/12/145472726/when-the-bankers-plotted-to-overthrow-fdr
The Plots Against the President: FDR, A Nation in Crisis, and the Rise of the American Right
https://www.amazon.com/Plots-Against-President-Nation-American-ebook/dp/B07N4BLR77/

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
racism posts
https://www.garlic.com/~lynn/submisc.html#racism

some recent posts mentioning fascism/fascist
https://www.garlic.com/~lynn/2022e.html#38 Wall Street's Plot to Seize the White House
https://www.garlic.com/~lynn/2022d.html#4 Alito's Plan to Repeal Roe--and Other 20th Century Civil Rights
https://www.garlic.com/~lynn/2022c.html#113 The New New Right Was Forged in Greed and White Backlash
https://www.garlic.com/~lynn/2022.html#107 The Cult of Trump is actually comprised of MANY other Christian cults
https://www.garlic.com/~lynn/2022.html#28 Capitol rioters' tears, remorse don't spare them from jail
https://www.garlic.com/~lynn/2022.html#9 Capitol rioters' tears, remorse don't spare them from jail
https://www.garlic.com/~lynn/2021k.html#33 How Delaware Became the World's Biggest Offshore Haven
https://www.garlic.com/~lynn/2021k.html#7 The COVID Supply Chain Breakdown Can Be Traced to Capitalist Globalization
https://www.garlic.com/~lynn/2021k.html#2 Who Knew ?
https://www.garlic.com/~lynn/2021j.html#104 Who Knew ?
https://www.garlic.com/~lynn/2021j.html#80 "The Spoils of War": How Profits Rather Than Empire Define Success for the Pentagon
https://www.garlic.com/~lynn/2021j.html#72 In U.S., Far More Support Than Oppose Separation of Church and State
https://www.garlic.com/~lynn/2021j.html#20 Trashing the planet and hiding the money isn't a perversion of capitalism. It is capitalism
https://www.garlic.com/~lynn/2021i.html#60 How Did America's Sherman Tank Win against Superior German Tanks in World War II?
https://www.garlic.com/~lynn/2021i.html#59 The Uproar Ovear the "Ultimate American Bible"
https://www.garlic.com/~lynn/2021i.html#57 "We are on the way to a right-wing coup:" Milley secured Nuclear Codes, Allayed China fears of Trump Strike
https://www.garlic.com/~lynn/2021i.html#56 "We are on the way to a right-wing coup:" Milley secured Nuclear Codes, Allayed China fears of Trump Strike
https://www.garlic.com/~lynn/2021h.html#101 The War in Afghanistan Is What Happens When McKinsey Types Run Everything
https://www.garlic.com/~lynn/2021g.html#58 The Storm Is Upon Us
https://www.garlic.com/~lynn/2021f.html#80 After WW2, US Antifa come home
https://www.garlic.com/~lynn/2021f.html#46 Under God
https://www.garlic.com/~lynn/2021d.html#59 WW2 Strategic Bombing
https://www.garlic.com/~lynn/2021c.html#96 How Ike Led
https://www.garlic.com/~lynn/2021c.html#93 How 'Owning the Libs' Became the GOP's Core Belief
https://www.garlic.com/~lynn/2021c.html#23 When Nazis Took Manhattan
https://www.garlic.com/~lynn/2021c.html#19 When Nazis Took Manhattan
https://www.garlic.com/~lynn/2021c.html#18 When Nazis Took Manhattan
https://www.garlic.com/~lynn/2021b.html#92 American Nazis Rally in New York City
https://www.garlic.com/~lynn/2021b.html#91 American Nazis Rally in New York City
https://www.garlic.com/~lynn/2021.html#66 Democracy is a threat to white supremacy--and that is the cause of America's crisis
https://www.garlic.com/~lynn/2021.html#51 Sacking the Capital and Honor
https://www.garlic.com/~lynn/2021.html#46 Barbarians Sacked The Capital
https://www.garlic.com/~lynn/2021.html#44 American Fascism
https://www.garlic.com/~lynn/2021.html#34 Fascism
https://www.garlic.com/~lynn/2021.html#33 Fascism
https://www.garlic.com/~lynn/2021.html#32 Fascism
https://www.garlic.com/~lynn/2020.html#16 Boyd: The Fighter Pilot Who Loathed Lean?
https://www.garlic.com/~lynn/2020.html#14 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2020.html#6 Onward, Christian fascists
https://www.garlic.com/~lynn/2020.html#0 The modern education system was designed to teach future factory workers to be "punctual, docile, and sober"

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Software Charging Rules

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Software Charging Rules
Date: 2 July 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#61 Channel Program I/O Processing Efficiency

A little follow-up to the growing hordes of people in Hudson Valley. After 23jun1969 and starting to charge for software (initially just "application" software ... but after the FS disaster contributed to the rise of clone 370 makers, the transition was to charge for all software), the "rules" were that revenue had to cover original development plus ongoing support, maintenance, and any additional development. Business people would typically do three forecasts at low, medium, and high price, in part to see if there was price sensitivity ... but forecast #customers times price had to meet the revenue rules. Because of growing staffs, there were some products where costs exceeded the revenue requirements.
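
A toy illustration of that forecasting rule (all the numbers below are made up; only the rule itself, that forecast customers times price has to cover development plus ongoing support, comes from the above):

    def meets_revenue_rule(price, forecast_customers, months,
                           development_cost, monthly_support_cost):
        revenue = price * forecast_customers * months
        required = development_cost + monthly_support_cost * months
        return revenue >= required

    for price, customers in [(30, 2000), (120, 900), (600, 150)]:   # low/med/high
        ok = meets_revenue_rule(price, customers, months=48,
                                development_cost=2_000_000,
                                monthly_support_cost=40_000)
        print(f"price ${price}/month, {customers} customers -> {ok}")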

The first gimmick was to find a similar product that did meet the rules and announce them as a "combined" product ... where the revenue from one covered the cost of the other. The first that I'm aware of was JES2 networking, which didn't meet the revenue rules under any circumstances. They then found VM370 VNET/RSCS, which met the requirement at $30/month (but would never get corporate approval for announce, because POK was in the process of convincing corporate to kill VM370). It was announced as combined JES2/VNET at $600/month ... where the VNET revenue easily covered the JES2 networking costs (note that early JES2 networking software inside IBM still carried "TUCC" in cols 68-71 of the source, from the univ that had created it originally for HASP).

The next ploy was merging products into the same organization, which happened with the VM370 performance products underwriting ISPF. ISPF had 200 people, and its revenue would never meet costs ... however, the VM370 performance products had only three people and their revenue would easily cover ISPF costs.

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
hasp, jes, nje, etc posts
https://www.garlic.com/~lynn/submain.html#hasp

posts mentioning JES2 NJE "TUCC" in cols 68-71
https://www.garlic.com/~lynn/2022c.html#121 Programming By Committee
https://www.garlic.com/~lynn/2022.html#78 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2021k.html#89 IBM PROFs
https://www.garlic.com/~lynn/2021h.html#92 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#53 PROFS
https://www.garlic.com/~lynn/2021e.html#14 IBM Internal Network
https://www.garlic.com/~lynn/2021b.html#75 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021.html#57 ES/9000 as POK was being scaled way back
https://www.garlic.com/~lynn/2019e.html#126 23Jun1969 Unbundling
https://www.garlic.com/~lynn/2019b.html#30 This Paper Map Shows The Extent Of The Entire Internet In 1973
https://www.garlic.com/~lynn/2018e.html#63 EBCDIC Bad History
https://www.garlic.com/~lynn/2017g.html#34 Programmers Who Use Spaces Paid More
https://www.garlic.com/~lynn/2017g.html#7 Mainframe Networking problems
https://www.garlic.com/~lynn/2017g.html#4 Mapping the decentralized world of tomorrow
https://www.garlic.com/~lynn/2017e.html#25 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017d.html#81 Mainframe operating systems?
https://www.garlic.com/~lynn/2016g.html#75 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016e.html#124 Early Networking
https://www.garlic.com/~lynn/2016d.html#46 PL/I advertising
https://www.garlic.com/~lynn/2015g.html#99 PROFS & GML

other posts mentioning ISPF and VM370 performance products
https://www.garlic.com/~lynn/2022c.html#45 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims
https://www.garlic.com/~lynn/2021j.html#83 Happy 50th Birthday, EMAIL!
https://www.garlic.com/~lynn/2019e.html#120 maps on Cadillac Seville trip computer from 1978
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2017i.html#23 progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017e.html#25 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017c.html#2 ISPF (was Fujitsu Mainframe Vs IBM mainframe)
https://www.garlic.com/~lynn/2013i.html#36 The Subroutine Call
https://www.garlic.com/~lynn/2012n.html#64 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012k.html#33 Using NOTE and POINT simulation macros on CMS?
https://www.garlic.com/~lynn/2012f.html#62 Hard Disk Drive Construction
https://www.garlic.com/~lynn/2011p.html#106 SPF in 1978
https://www.garlic.com/~lynn/2011m.html#42 CMS load module format
https://www.garlic.com/~lynn/2010m.html#84 Set numbers off permanently
https://www.garlic.com/~lynn/2010g.html#50 Call for XEDIT freaks, submit ISPF requirements
https://www.garlic.com/~lynn/2010g.html#6 Call for XEDIT freaks, submit ISPF requirements
https://www.garlic.com/~lynn/2009s.html#46 DEC-10 SOS Editor Intra-Line Editing
https://www.garlic.com/~lynn/2008h.html#43 handling the SPAM on this group
https://www.garlic.com/~lynn/2007g.html#4 ISPF Limitations (was: Need for small machines ... )
https://www.garlic.com/~lynn/2003o.html#42 misc. dmksnt
https://www.garlic.com/~lynn/2001m.html#33 XEDIT on MVS

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Wild Ducks

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Wild Ducks
Date: 3 July 2022
Blog: LinkedIn
Note that in the IBM century/100yrs celebration, one of the 100 videos was on wild ducks ... but it was customer wild ducks ... all references to employee wild ducks had been expunged

Chairman Learson trying to block the rise of the (old boy) careerists and bureaucrats destroying the Watson legacy.

Management Briefing
Number 1-72: January 18,1972
ZZ04-1312

TO ALL IBM MANAGERS:

Once again, I'm writing you a Management Briefing on the subject of bureaucracy. Evidently the earlier ones haven't worked. So this time I'm taking a further step: I'm going directly to the individual employees in the company. You will be reading this poster and my comment on it in the forthcoming issue of THINK magazine. But I wanted each one of you to have an advance copy because rooting out bureaucracy rests principally with the way each of us runs his own shop.

We've got to make a dent in this problem. By the time the THINK piece comes out, I want the correction process already to have begun. And that job starts with you and with me.

Vin Learson


... and ...


+-----------------------------------------+
|           "BUSINESS ECOLOGY"            |
|                                         |
|                                         |
|            +---------------+            |
|            |  BUREAUCRACY  |            |
|            +---------------+            |
|                                         |
|           is your worst enemy           |
|              because it -               |
|                                         |
|      POISONS      the mind              |
|      STIFLES      the spirit            |
|      POLLUTES     self-motivation       |
|             and finally                 |
|      KILLS        the individual.       |
+-----------------------------------------+
"I'M Going To Do All I Can to Fight This Problem . . ."
by T. Vincent Learson, Chairman

... snip ...

How to Stuff a Wild Duck

"We are convinced that any business needs its wild ducks. And in IBM we try not to tame them." - T.J. Watson, Jr.

"How To Stuff A Wild Duck", 1973, IBM poster
https://collection.cooperhewitt.org/objects/18618011/

and on the budding Future System disaster in the 70s, from Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993 ....
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

more FS info
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

Late 70s and early 80s, I was blamed for online computer conferencing (precursor to the IBM forums and modern social media) on the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s). It really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem. Only about 300 directly participated, but claims were that upwards of 25,000 were reading. There were six copies of approx. 300 pages printed, along with an executive summary and a summary of the summary, packaged in Tandem 3-ring binders and sent to the executive committee (folklore is 5of6 wanted to fire me) ... from the summary of the summary:

• The perception of many technical people in IBM is that the company is rapidly heading for disaster. Furthermore, people fear that this movement will not be appreciated until it begins more directly to affect revenue, at which point recovery may be impossible

• Many technical people are extremely frustrated with their management and with the way things are going in IBM. To an increasing extent, people are reacting to this by leaving IBM. Most of the contributors to the present discussion would prefer to stay with IBM and see the problems rectified. However, there is increasing skepticism that correction is possible or likely, given the apparent lack of commitment by management to take action

• There is a widespread perception that IBM management has failed to understand how to manage technical people and high-technology development in an extremely competitive environment.


... from IBM Jargon
https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...

... but it takes another decade (1981-1992) ... IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company. The Time article has gone behind a paywall, but mostly lives free at the wayback machine.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

I had already left IBM, but got a call from the bowels of Armonk asking if I could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts. However, before getting started, the board brings in a new CEO and reverses the breakup.

some past IBM "Wild Ducks" posts
https://www.garlic.com/~lynn/2022.html#53 Automated Benchmarking
https://www.garlic.com/~lynn/2021j.html#70 IBM Wild Ducks
https://www.garlic.com/~lynn/2021g.html#32 Big Blue's big email blues signal terminal decline - unless it learns to migrate itself
https://www.garlic.com/~lynn/2021e.html#62 IBM / How To Stuff A Wild Duck
https://www.garlic.com/~lynn/2021c.html#41 Teaching IBM Class
https://www.garlic.com/~lynn/2021c.html#17 IBM Wild Ducks
https://www.garlic.com/~lynn/2021c.html#16 IBM Wild Ducks
https://www.garlic.com/~lynn/2021c.html#15 IBM Wild Ducks
https://www.garlic.com/~lynn/2021b.html#97 IBM Glory days
https://www.garlic.com/~lynn/2021.html#0 IBM "Wild Ducks"
https://www.garlic.com/~lynn/2019.html#82 The Sublime: Is it the same for IBM and Special Ops?
https://www.garlic.com/~lynn/2019.html#61 Employees Come First
https://www.garlic.com/~lynn/2019.html#33 Cluster Systems
https://www.garlic.com/~lynn/2017j.html#23 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2017c.html#93 An OODA-loop is a far-from-equilibrium, non-linear system with feedback
https://www.garlic.com/~lynn/2017b.html#56 Wild Ducks
https://www.garlic.com/~lynn/2016e.html#96 IBM Wild Ducks
https://www.garlic.com/~lynn/2016e.html#14 Leaked IBM email says cutting 'redundant' jobs is a 'permanent and ongoing' part of its business model
https://www.garlic.com/~lynn/2015g.html#60 [Poll] Computing favorities
https://www.garlic.com/~lynn/2015g.html#17 There's No Such Thing as Corporate DNA
https://www.garlic.com/~lynn/2015.html#80 Here's how a retired submarine captain would save IBM

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Wild Ducks

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Wild Ducks
Date: 3 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#64 IBM Wild Ducks

In the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. The first time, I tried to do it through plant site employee education. At first they agreed, but as I provided more information about how to prevail/win in competitive situations, they changed their mind. They said that IBM spends a great deal of money training managers on how to handle employees and it wouldn't be in IBM's best interest to expose general employees to Boyd; I should limit the audience to senior members of competitive analysis departments. The first briefing was in the bldg28 auditorium, open to all.

Trivia: In 89/90 the commandant of the Marine Corps leverages Boyd for a make-over of the corps ... at a time when IBM was desperately in need of a make-over ... in 1992 IBM has one of the largest losses in the history of US companies and was being reorg'ed into the 13 "baby blues" in preparation for breaking up the company.

When Boyd passes in 1997, the USAF had pretty much disowned him and it was the Marines at Arlington (and his effects go to Gray Research Center & Library in Quantico). There have continued to be Boyd conferences at Marine Corps Univ. in Quantico ... including lots of discussions about careerists and bureaucrats (as well as the "old boy networks" and "risk averse").

Chuck's tribute to John
http://www.usni.org/magazines/proceedings/1997-07/genghis-john
for those w/o subscription
http://radio-weblogs.com/0107127/stories/2002/12/23/genghisJohnChuckSpinneysBioOfJohnBoyd.html
John Boyd - USAF. The Fighter Pilot Who Changed the Art of Air Warfare
http://www.aviation-history.com/airmen/boyd.htm
"John Boyd's Art of War; Why our greatest military theorist only made colonel"
http://www.theamericanconservative.com/articles/john-boyds-art-of-war/
40 Years of the 'Fighter Mafia'
https://www.theamericanconservative.com/articles/40-years-of-the-fighter-mafia/
Fighter Mafia: Colonel John Boyd, The Brain Behind Fighter Dominance
https://www.avgeekery.com/fighter-mafia-colonel-john-boyd-the-brain-behind-fighter-dominance/
Updated version of Boyd's Aerial Attack Study
https://tacticalprofessor.wordpress.com/2018/04/27/updated-version-of-boyds-aerial-attack-study/
A New Conception of War
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/

One of his quotes:

"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To Be or To Do, that is the question."

... snip ...

Trivia: one of Boyd's stories is that he was quite vocal that the electronics across the trail wouldn't work (possibly as punishment, he is put in command of "spook base"). Boyd's biography has "spook base" as a $2.5B "windfall" for IBM.

Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Channel Program I/O Processing Efficiency

From: Lynn Wheeler <lynn@garlic.com>
Subject: Channel Program I/O Processing Efficiency
Date: 03 July 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#50 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#53 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#54 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#55 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#56 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#61 Channel Program I/O Processing Efficiency

VM370 "wheeler scheduler" trivia/joke (repeated post also in this thread): Initial review by somebody in corporate was that I had no manual tuning knobs (like MVS SRM). I tried to explain that "dynamic adaptive" met constantly monitoring configuration and workload and dynamically adapting&tuning. He said all "modern" systems had manual tuning knobs and he wouldn't sign off on announce/ship until I had manual tuning knobs. So I created manual tuning knobs which could be changed with an "SRM" command (parody/ridiculing MVS), provided full documentation and formulas on how they worked. What very few realized that in "degrees of freedom," (from dynamic feedback/feed forward) for the SRM manual tuning knobs, it was less than the dynamic adaptive algorithms ... so the dynamic adaptive algorithms could correct for any manual tuning knobs setting.

dynamic adaptive resource manager posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

Later, the last product did at IBM was HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

was on an HA/CMP marketing trip to hong kong, riding up the elevator in the "tinkertoy" bank bldg with the customer and some from the branch ... and from the back of the elevator came the question whether I was the wheeler of the "wheeler scheduler"; he said they had studied it at the univ. (newly minted IBMer who had graduated from Univ. of Waterloo). I said yes ... and asked if anybody had mentioned my SRM "joke".

trivia: HA/CMP had started out as HA/6000, originally for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I renamed it HA/CMP (High Availability Cluster Multi-Processing) when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (started with Informix, Ingres, Oracle, Sybase, which had vaxcluster support in the same source base with unix ... and did various things to aid the migration); 16-way mid92, 128-system ye92. Then cluster scale-up is transferred, announced as IBM Supercomputer (for technical/scientific *ONLY*), and we were told we can't work with anything having more than four processors. We leave IBM a few months later.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

SHARE LSRAD Report

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: SHARE LSRAD Report
Date: 04 July 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#128 SHARE LSRAD Report

In jan1979, I was con'ed into doing benchmarks on an engineering vm/4341 system for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). Then in the 80s (with 4341 shipping), large customers were ordering hundreds of vm/4341s for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Inside IBM, departmental conference rooms were becoming a scarce commodity, with many taken over for vm/4341 systems. MVS really wanted to play in that market, but all the new non-datacenter DASD were FBA (and MVS never did FBA support; the only new CKD DASD was the datacenter 3880/3380). Eventually there is the CKD 3375 (CKD emulated on 3370) for MVS. It didn't do them much good; MVS installations traditionally had operators and support numbering in the tens of people per system ... distributed was unattended dark rooms and tens of distributed vm/4341s per support person. Trivia: there hasn't been CKD DASD manufactured for decades, all being simulated on industry standard fixed-block disks. Even 3380 was starting the transition ... which can be seen in the formulas calculating records/track ... where record size has to be rounded up to a fixed cell size.
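
A minimal sketch of the kind of records/track calculation being described, with hypothetical numbers (not actual 3380 geometry): each record plus its overhead is rounded up to a whole number of fixed-size cells, and records/track is the track's cell count divided by the cells per record.

  import math

  def records_per_track(record_len, overhead, cell_size, cells_per_track):
      # record size (plus per-record overhead) rounded up to whole cells --
      # the "round up to fixed cell size" effect; all numbers hypothetical
      cells_per_record = math.ceil((record_len + overhead) / cell_size)
      return cells_per_track // cells_per_record

  # e.g. made-up numbers: 4096-byte records, 500-byte overhead,
  # 32-byte cells, 1500 cells/track
  print(records_per_track(4096, 500, 32, 1500))   # -> 10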

4300s sold in the same mid-range market as VAX/VMS and in about the same numbers for single or small-unit orders. The big difference was the large vm/4300 orders for distributed computing. This is an old archived post with a decade of VAX sales, sliced and diced by year, model, US/non-US ... starting in the mid-80s, the mid-range market can be seen moving to workstations and large PC servers.
https://www.garlic.com/~lynn/2002f.html#0

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

Wealth of Two Nations: The US Racial Wealth Gap, 1860-2020

From: Lynn Wheeler <lynn@garlic.com>
Subject: Wealth of Two Nations: The US Racial Wealth Gap, 1860-2020
Date: 04 July 2022
Blog: Facebook
Wealth of Two Nations: The US Racial Wealth Gap, 1860-2020
https://www.nakedcapitalism.com/2022/07/wealth-of-two-nations-the-us-racial-wealth-gap-1860-2020.html

How Generations of Black Americans Lost Their Land to Tax Liens. Sales of repossessed assets have stripped thousands of families of their property--along with the potential to increase wealth.
https://www.bloomberg.com/news/features/2022-06-29/tax-liens-cost-generations-of-black-americans-their-land

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
related capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

India Will Not Lift Windfall Tax On Oil Firms Until Crude Drops By $40

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: India Will Not Lift Windfall Tax On Oil Firms Until Crude Drops By $40
Date: 05 July 2022
Blog: Facebook
India Will Not Lift Windfall Tax On Oil Firms Until Crude Drops By $40
https://oilprice.com/Latest-Energy-News/World-News/India-Will-Not-Lift-Windfall-Tax-On-Oil-Firms-Until-Crude-Drops-By-40.html

... and what is the US doing about the huge unearned/windfall profit explosions at Big Oil????

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia

Some "Big Oil" posts
https://www.garlic.com/~lynn/2022d.html#96 Goldman Sachs predicts $140 oil as gas prices spike near $5 a gallon
https://www.garlic.com/~lynn/2022c.html#117 Documentary Explores How Big Oil Stalled Climate Action for Decades
https://www.garlic.com/~lynn/2021i.html#28 Big oil's 'wokewashing' is the new climate science denialism
https://www.garlic.com/~lynn/2021g.html#72 It's Time to Call Out Big Oil for What It Really Is
https://www.garlic.com/~lynn/2021g.html#16 Big oil and gas kept a dirty secret for decades
https://www.garlic.com/~lynn/2021g.html#13 NYT Ignores Two-Year House Arrest of Lawyer Who Took on Big Oil
https://www.garlic.com/~lynn/2021g.html#3 Big oil and gas kept a dirty secret for decades
https://www.garlic.com/~lynn/2021e.html#77 How climate change skepticism held a government captive
https://www.garlic.com/~lynn/2018d.html#112 NASA chief says he changed mind about climate change because he 'read a lot'
https://www.garlic.com/~lynn/2014m.html#27 LEO
https://www.garlic.com/~lynn/2013e.html#43 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012e.html#30 Senators Who Voted Against Ending Big Oil Tax Breaks Received Millions From Big Oil
https://www.garlic.com/~lynn/2012d.html#61 Why Republicans Aren't Mentioning the Real Cause of Rising Prices at the Gas Pump
https://www.garlic.com/~lynn/2007s.html#67 Newsweek article--baby boomers and computers

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Chairman John Opel

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Chairman John Opel
Date: 05 July 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022e.html#45 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022e.html#46 IBM Chairman John Opel

Boyd would tell a story about being asked to review a new USAF air-to-air missile. They presented him lots of stats and film where the missile hit flares (on a drone) every time. Boyd said replay the film, asked them to stop it just before the missile hit the flare, and asked what sort of guidance it has. They say heat seeking ... he asks what sort of heat seeking ... and eventually got them to say "pin-point" heat seeking. He then asks what is the hottest part of a jet. They say the jet engine. He says no, it's in the plume 30yds behind the jet ... the missile will be lucky to hit 10% of the time ... when it is shooting straight up the tailpipe. They gather up all their material and leave. Roll forward to Vietnam and he is proved correct. Boyd claims that the USAF general on the ground in Vietnam at one point grounded all fighter jets and had the USAF missiles replaced with Navy sidewinders (with better than twice the hit rate of the USAF missile). The general lasts three months before being replaced and called to the Pentagon; he had violated fundamental Pentagon USAF rules ... reducing the USAF budget ... but the absolute worst was increasing the Navy budget (using Navy missiles instead of USAF).

Trivia: in the late 90s, I happened to run into somebody who had been involved in the original sidewinder guidance ... also got involved with a person who was up for the MacArthur Genius award (computer medical diagnosis from x-ray and slide images) ... but he got turned down; during the interview he mentioned having worked on the original guidance system for cruise missiles, and he was told that working on military projects disqualified him for selection. MacArthur Fellows Program
https://en.wikipedia.org/wiki/MacArthur_Fellows_Program

The MacArthur Fellows Program, also known as the MacArthur Fellowship and commonly but unofficially known as the "Genius Grant", is a prize awarded annually by the John D. and Catherine T. MacArthur Foundation typically to between 20 and 30 individuals, working in any field, who have shown "extraordinary originality and dedication in their creative pursuits and a marked capacity for self-direction" and are citizens or residents of the United States.[1]

... snip ...

Boyd posts and web URLs:
https://www.garlic.com/~lynn/subboyd.html

specific past posts with Boyd story of USAF missiles being replaced with Sidewinders
https://www.garlic.com/~lynn/2021f.html#60 Martial Arts "OODA-loop"
https://www.garlic.com/~lynn/2021c.html#9 Air Force thinking of a new F-16ish fighter
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2019e.html#15 Before the First Shots Are Fired
https://www.garlic.com/~lynn/2018c.html#108 F-35
https://www.garlic.com/~lynn/2018c.html#2 FY18 budget deal yields life-sustaining new wings for the A-10 Warthog
https://www.garlic.com/~lynn/2017j.html#74 A-10
https://www.garlic.com/~lynn/2017h.html#105 Iraq, Longest War
https://www.garlic.com/~lynn/2017b.html#55 60 Minutes interview with Grace Hopper
https://www.garlic.com/~lynn/2016h.html#40 The F-22 Raptor Is the World's Best Fighter (And It Has a Secret Weapon That Is Out in the Open)
https://www.garlic.com/~lynn/2016h.html#21 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016d.html#90 "Computer & Automation" later issues--anti-establishment thrust
https://www.garlic.com/~lynn/2016d.html#8 What Does School Really Teach Children
https://www.garlic.com/~lynn/2014l.html#61 How Comp-Sci went from passing fad to must have major
https://www.garlic.com/~lynn/2014g.html#48 The Pentagon Is Playing Games With Its $570-Billion Budget
https://www.garlic.com/~lynn/2013o.html#28 ELP weighs in on the software issue:
https://www.garlic.com/~lynn/2013m.html#28 The Reformers
https://www.garlic.com/~lynn/2013m.html#27 US Air Force Converts F-16 Fighters into Drones
https://www.garlic.com/~lynn/2013e.html#32 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012k.html#19 SnOODAn: Boyd, Snowden, and Resilience
https://www.garlic.com/~lynn/2012i.html#64 Early use of the word "computer"
https://www.garlic.com/~lynn/2012i.html#51 Is this Boyd's fundamental postulate, 'to improve our capacity for independent action'? thoughts please
https://www.garlic.com/~lynn/2012i.html#2 Interesting News Article
https://www.garlic.com/~lynn/2012h.html#63 Is this Boyd's fundamental postulate, 'to improve our capacity for independent action'?
https://www.garlic.com/~lynn/2012h.html#21 The Age of Unsatisfying Wars
https://www.garlic.com/~lynn/2011n.html#88 What separates Sun Tzu & John Boyd as Martial thinkers
https://www.garlic.com/~lynn/2011j.html#33 China Builds Fleet of Small Warships While U.S. Drifts
https://www.garlic.com/~lynn/2011g.html#13 The Seven Habits of Pointy-Haired Bosses
https://www.garlic.com/~lynn/2011.html#75 America's Defense Meltdown
https://www.garlic.com/~lynn/2010o.html#66 They always think we don't understand
https://www.garlic.com/~lynn/2010k.html#16 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010f.html#70 Handling multicore CPUs; what the competition is thinking
https://www.garlic.com/~lynn/2010.html#94 Daylight Savings Time again
https://www.garlic.com/~lynn/2009q.html#62 Did anybody ever build a Simon?
https://www.garlic.com/~lynn/2008c.html#52 Current Officers
https://www.garlic.com/~lynn/99.html#120 atomic History

--
virtualization experience starting Jan1968, online at home since Mar1970

FedEx to Stop Using Mainframes, Close All Data Centers By 2024

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: FedEx to Stop Using Mainframes, Close All Data Centers By 2024
Date: 06 July 2022
Blog: Facebook
FedEx to Stop Using Mainframes, Close All Data Centers By 2024. The company is moving to the cloud and in the process will save $400 million a year.
https://www.pcmag.com/news/fedex-to-stop-using-mainframes-close-all-data-centers-by-2024

FedEx signals 'zero mainframe, zero datacenter' operations by 2024. Going completely cloud-native will save it $400m a year, CIO estimates
https://www.theregister.com/2022/07/05/fedex_to_close_all_datacenters/

Along with closing its remaining datacenters, FedEx said its closure plans will eliminate the 20 percent of its mainframe fleet that's still in operation, with an eye toward "zero mainframe, zero datacenter" operations by 2024.

... aka has already successfully eliminated 80%

Early/mid 80s, the majority of IBM revenue was from mainframe hardware. Around the turn of the century, reports were that mainframe hardware was only a few percent of IBM revenue and dropping. An analysis in the EC12 time-frame found that mainframe hardware was then only a couple percent of IBM revenue (and continued dropping), but that the mainframe group was 25% of IBM revenue (and 40% of profit) ... aka almost all software and services.

Also, earlier machines were still run against an industry standard benchmark, with the number of program iterations compared to 370/158 iterations (assumed to be a 1MIPS machine); later figures are calculated using IBM numbers for throughput compared to the previous generation.

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
z16, 200?? processors, ???BIPS (???MIPS/proc),

* pubs say z15 1.25 times z14 (1.25*150BIPS or 190BIPS)


At the same time as the z196, there were E5-2600 blades running the same industry standard benchmark getting 500BIPS (ten times a max-configured z196). Since then the blades have increased that (ten times) throughput advantage over max-configured mainframes.
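
The per-processor figures in the list above, and the (ten times) blade comparison, are just simple division over the quoted aggregate numbers, e.g.:

  # per-processor throughput from the quoted aggregate numbers
  machines = {            # name: (processors, aggregate BIPS)
      "z196": (80, 50),
      "EC12": (101, 75),
      "z15": (190, 190),
  }
  for name, (procs, bips) in machines.items():
      print(f"{name}: {bips * 1000 / procs:.0f} MIPS/processor")   # 625, 743, 1000

  # E5-2600 blade quoted at 500 BIPS vs max-configured z196 at 50 BIPS
  print(f"blade vs z196: {500 / 50:.0f}x")   # -> 10x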

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

recent specific posts mentioning mainframes this century
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#111 Financial longevity that redhat gives IBM
https://www.garlic.com/~lynn/2022c.html#67 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#54 IBM Z16 Mainframe
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022c.html#12 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022c.html#9 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#63 Mainframes
https://www.garlic.com/~lynn/2022b.html#57 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#45 Mainframe MIPS
https://www.garlic.com/~lynn/2022b.html#32 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022.html#112 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#96 370/195
https://www.garlic.com/~lynn/2022.html#84 Mainframe Benchmark
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021i.html#92 How IBM lost the cloud
https://www.garlic.com/~lynn/2021i.html#2 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#44 OoO S/360 descendants
https://www.garlic.com/~lynn/2021f.html#41 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#18 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#68 Amdahl
https://www.garlic.com/~lynn/2021d.html#55 Cloud Computing
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?

--
virtualization experience starting Jan1968, online at home since Mar1970

FedEx to Stop Using Mainframes, Close All Data Centers By 2024

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: FedEx to Stop Using Mainframes, Close All Data Centers By 2024
Date: 06 July 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#71 FedEx to Stop Using Mainframes, Close All Data Centers By 2024

Also at the time (before IBM sold off its server business), IBM had a base list price for an E5-2600 blade of $1815 ($3.63/BIPS) and $30M for a max-configured z196 ($600,000/BIPS) ... however cloud operators have been claiming that they assemble their own blades at 1/3rd the cost of brand name blades ($1.21/BIPS) and will provision for something like ten times nominal use in order to handle "on-demand" peaks. As a result, large cloud operator costs have increasingly shifted to power and cooling, and they have put increasing pressure on server chip makers to improve computational power efficiency, as well as power use dropping to zero when idle but "instant on" when needed; also, savings from the power use improvements of newer models frequently offset the costs of replacing older systems with newer models.
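
The $/BIPS figures follow directly from the quoted list prices and benchmark numbers; a quick check:

  # price-per-BIPS from the quoted numbers
  blade_price, blade_bips = 1815, 500          # E5-2600 blade list price, BIPS
  z196_price, z196_bips = 30_000_000, 50       # max-configured z196

  print(blade_price / blade_bips)              # 3.63 $/BIPS
  print(z196_price / z196_bips)                # 600000.0 $/BIPS
  print(blade_price / 3 / blade_bips)          # 1.21 $/BIPS (self-assembled at 1/3 cost)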

Other z196 trivia: in 1988, an IBM branch asks me to help LLNL (national lab) get some serial stuff they are working with standardized ... which quickly becomes the fibre channel standard (FCS, initially 1gbit, full-duplex, 2gbit aggregate, 200mbyte/sec), including some stuff I had done in 1980. In 1990, POK announces ESCON (17mbytes/sec, when it is already obsolete) with ES/9000. Then some POK engineers get involved with FCS and define a heavy weight protocol that radically reduces throughput ... which is announced as FICON. The most recent published benchmark is z196 "PEAK I/O" getting 2M IOPS with 104 FICON (running over 104 FCS). At the same time a FCS was announced for E5-2600 blades claiming over a million IOPS (two native FCS having higher throughput than 104 running FICON). Other trivia: no CKD DASD have been made for decades, using CKD simulation with industry standard fixed-block disks.
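
The FICON vs native FCS comparison reduces to IOPS per channel; using the quoted numbers:

  # z196 "Peak I/O": 2M IOPS spread over 104 FICON (each running over FCS)
  # vs an E5-2600 FCS adapter claiming over 1M IOPS natively
  ficon_per_channel = 2_000_000 / 104
  print(f"{ficon_per_channel:,.0f} IOPS per FICON")                          # ~19,231
  print(f"{1_000_000 / ficon_per_channel:.0f}x per-channel for native FCS")  # ~52x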

1980 trivia: STL was bursting at the seams and was transferring 300 people from the IMS group to an offsite bldg, with dataprocessing back to the STL datacenter. They had tried "remote 3270" and found the human factors unacceptable (compared to inside STL). I'm con'ed into doing channel-extender support so channel-attached 3270 controllers can be placed at the off-site bldg with no perceptible human factors difference (between STL and off-site).

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Technology Flashback

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Technology Flashback
Date: 06 July 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback

The story told about the Boeing IBM account team and the move from commission to quota: S/360 announcement day, 1964 (highest paid employee), and 1965 (left IBM after quota was adjusted).

Perot was in Texas, left in 1962 to form EDS, needed to borrow $1000 from his wife
https://blog.chron.com/txpotomac/2008/06/today-in-texas-history-ross-perot-born/

Perot left IBM in 1962 to found Electronic Data Systems in Dallas. He borrowed $1,000 from his wife, Margot, to start the company and was refused a contract 77 times. He used his IBM salesman skills to court CEOs to use his data processing services.

... snip ...

other posts mentioning Boeing order on S/360 announce
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota
https://www.garlic.com/~lynn/2022d.html#100 IBM Stretch (7030) -- Aggressive Uniprocessor Parallelism
https://www.garlic.com/~lynn/2021e.html#80 Amdahl
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021.html#48 IBM Quota
https://www.garlic.com/~lynn/2019d.html#60 IBM 360/67
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2019b.html#39 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2019b.html#38 Reminder over in linkedin, IBM Mainframe announce 7April1964
https://www.garlic.com/~lynn/2017.html#46 Hidden Figures and the IBM 7090 computer
https://www.garlic.com/~lynn/2015b.html#36 IBM CEO Rometty gets bonus despite company's woes
https://www.garlic.com/~lynn/2015.html#93 Ginni gets bonus, plus raise, and extra incentives
https://www.garlic.com/~lynn/2014k.html#76 HP splits, again
https://www.garlic.com/~lynn/2014j.html#32 Univac 90 series info posted on bitsavers
https://www.garlic.com/~lynn/2014f.html#80 IBM Sales Fall Again, Pressuring Rometty's Profit Goal
https://www.garlic.com/~lynn/2013o.html#18 Why IBM chose MS-DOS, was Re: 'Free Unix!' made30yearsagotoday
https://www.garlic.com/~lynn/2012g.html#3 Quitting Top IBM Salespeople Say They Are Leaving In Droves
https://www.garlic.com/~lynn/2012g.html#0 Top IBM Salespeople Are Leaving In Droves, Say Those Who Have Quit
https://www.garlic.com/~lynn/2011l.html#37 movie "Airport" on cable
https://www.garlic.com/~lynn/2011j.html#65 Who was the Greatest IBM President and CEO of the last century?
https://www.garlic.com/~lynn/2011b.html#7 Mainframe upgrade done with wire cutters?
https://www.garlic.com/~lynn/2010m.html#59 z196 sysplex question
https://www.garlic.com/~lynn/2010d.html#29 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2007u.html#26 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2004o.html#58 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2002n.html#72 bps loader, was PLX
https://www.garlic.com/~lynn/2002j.html#43 Killer Hard Drives - Shrapnel?

--
virtualization experience starting Jan1968, online at home since Mar1970

The Supreme Court Is Limiting the Regulatory State

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The Supreme Court Is Limiting the Regulatory State
Date: 07 July 2022
Blog: Facebook
The Supreme Court Is Limiting the Regulatory State. The Brennan Center's Martha Kinsella joins the podcast to discuss the high court's decision on regulatory power.
https://www.govexec.com/oversight/2022/07/govexec-daily-supreme-court-limiting-regulatory-state/373918/

recent articles:

The Supreme Court Deals a Major Blow to the EPA, and All Agencies
https://www.govexec.com/management/2022/06/supreme-court-major-blow-epa-agencies/368821/
How Charles Koch Purchased the EPA Supreme Court Decision
https://theintercept.com/2022/06/30/supreme-court-epa-climate-charles-koch/
The US Supreme Court just gutted federal climate policy
https://www.technologyreview.com/2022/06/30/1055272/supreme-court-climate-policy-epa/
Supreme Court Limits EPA's Power to Regulate Climate-Warming Carbon Dioxide
https://www.cnet.com/science/climate/supreme-court-limits-epas-power-to-regulate-climate-warming-carbon-dioxide/
It's Time for Charles Koch to Testify About His Climate Disinformation
https://www.counterpunch.org/2022/04/01/its-time-for-charles-koch-to-testify-about-his-climate-disinformation-campaign/

also

The Money Trail to the Ginni Thomas Emails to Overturn Biden's Election Leads to Charles Koch
https://www.counterpunch.org/2022/03/30/the-money-trail-to-the-ginni-thomas-emails-to-overturn-bidens-election-leads-to-charles-koch/

regulatory capture posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture
merchants of doubt posts
https://www.garlic.com/~lynn/submisc.html#merchants.of.doubt
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
tax avoidance, tax fraud, tax evasion, tax havens, etc posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion

specific posts mentioning Koch, Koch brothers, Koch industry
https://www.garlic.com/~lynn/2022c.html#118 The Death of Neoliberalism Has Been Greatly Exaggerated
https://www.garlic.com/~lynn/2022c.html#59 Rags-to-Riches Stories Are Actually Kind of Disturbing
https://www.garlic.com/~lynn/2022c.html#58 Rags-to-Riches Stories Are Actually Kind of Disturbing
https://www.garlic.com/~lynn/2022c.html#35 40 Years of the Reagan Revolution's Libertarian Experiment Have Brought Us Crisis & Chaos
https://www.garlic.com/~lynn/2021k.html#20 Koch Funding for Campuses Comes With Dangerous Strings Attached
https://www.garlic.com/~lynn/2021j.html#43 Koch Empire
https://www.garlic.com/~lynn/2021i.html#98 The Koch Empire Goes All Out to Sink Joe Biden's Agenda -- and His Presidency, Too
https://www.garlic.com/~lynn/2021g.html#40 Why do people hate universal health care? It turns out -- they don't
https://www.garlic.com/~lynn/2021f.html#13 Elizabeth Warren hammers JPMorgan Chase CEO Jamie Dimon on pandemic overdraft fees
https://www.garlic.com/~lynn/2021c.html#77 Meet the "New Koch Brothers"
https://www.garlic.com/~lynn/2021c.html#51 In Biden's recovery plan, an overdue rebuke of trickle-down economics
https://www.garlic.com/~lynn/2021.html#27 We must stop calling Trump's enablers 'conservative.' They are the radical right
https://www.garlic.com/~lynn/2021.html#20 Trickle Down Economics Started it All
https://www.garlic.com/~lynn/2020.html#5 Book: Kochland : the secret history of Koch Industries and corporate power in America
https://www.garlic.com/~lynn/2020.html#4 Bots Are Destroying Political Discourse As We Know It
https://www.garlic.com/~lynn/2020.html#3 Meet the Economist Behind the One Percent's Stealth Takeover of America
https://www.garlic.com/~lynn/2019e.html#134 12 EU states reject move to expose companies' tax avoidance
https://www.garlic.com/~lynn/2019d.html#116 David Koch Was the Ultimate Climate Change Denier
https://www.garlic.com/~lynn/2019d.html#103 David Koch Was the Ultimate Climate Change Denier
https://www.garlic.com/~lynn/2019d.html#97 David Koch Was the Ultimate Climate Change Denier
https://www.garlic.com/~lynn/2019d.html#64 How the Supreme Court Is Rebranding Corruption
https://www.garlic.com/~lynn/2019c.html#47 Day of Reckoning for KPMG-Failures in Ethics
https://www.garlic.com/~lynn/2019.html#45 Jeffrey Skilling, Former Enron Chief, Released After 12 Years in Prison
https://www.garlic.com/~lynn/2019.html#37 Democracy in Chains
https://www.garlic.com/~lynn/2018f.html#11 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018e.html#102 Can we learn from financial lessons of 90 years ago?
https://www.garlic.com/~lynn/2018e.html#64 Mystery of the Underpaid American Worker
https://www.garlic.com/~lynn/2018d.html#77 Nassim Nicholas Taleb
https://www.garlic.com/~lynn/2018d.html#11 Hell is ... ?
https://www.garlic.com/~lynn/2018c.html#91 Economists and the Powerful: Convenient Theories, Distorted Facts, Ample Rewards
https://www.garlic.com/~lynn/2018c.html#83 Economists and the Powerful: Convenient Theories, Distorted Facts, Ample Rewards
https://www.garlic.com/~lynn/2018b.html#45 More Guns Do Not Stop More Crimes, Evidence Shows
https://www.garlic.com/~lynn/2018.html#84 The Warning
https://www.garlic.com/~lynn/2017i.html#47 Retirement Heist: How Firms Plunder Workers' Nest Eggs
https://www.garlic.com/~lynn/2017h.html#13 What the Enron E-mails Say About Us
https://www.garlic.com/~lynn/2017f.html#6 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017b.html#5 Trump to sign cyber security order
https://www.garlic.com/~lynn/2016g.html#32 Ma Bell is coming back and, boy, is she pissed! She bought Bugs Bunny!
https://www.garlic.com/~lynn/2016g.html#31 Economic Mess
https://www.garlic.com/~lynn/2016b.html#110 The Koch-Fueled Plot to Destroy the VA
https://www.garlic.com/~lynn/2016b.html#107 Qbasic - lies about Medicare
https://www.garlic.com/~lynn/2016.html#38 Shout out to Grace Hopper (State of the Union)
https://www.garlic.com/~lynn/2016.html#31 I Feel Old
https://www.garlic.com/~lynn/2015e.html#52 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015e.html#4 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014m.html#27 LEO
https://www.garlic.com/~lynn/2013h.html#52 "Highway Patrol" back on TV
https://www.garlic.com/~lynn/2011o.html#72 Public misperception about scientific agreement on global warming undermines climate policy support
https://www.garlic.com/~lynn/2011o.html#64 Civilization, doomed?

--
virtualization experience starting Jan1968, online at home since Mar1970

FedEx to Stop Using Mainframes, Close All Data Centers By 2024

From: Lynn Wheeler <lynn@garlic.com>
Subject: FedEx to Stop Using Mainframes, Close All Data Centers By 2024
Date: 07 July 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#71 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#72 FedEx to Stop Using Mainframes, Close All Data Centers By 2024

A cloud operator may have a dozen or more "megadatacenters" around the world ... each with half a million or more blade systems, each blade system highly optimized for power&cooling use ... and each with ten times or more the processing of a max-configured IBM mainframe. There is also massive automation; a typical megadatacenter will be staffed by 80-120 people ... something like 5,000-10,000 systems/staff (equivalent of 50,000-100,000 max-configured IBM mainframes per staff)
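
The systems-per-staff and mainframe-equivalent figures are straightforward arithmetic over the numbers above (illustrative, using the low-end figures):

  # half a million blades, ~100 staff, each blade ~10x a max-configured mainframe
  systems, staff, blade_vs_mainframe = 500_000, 100, 10

  per_staff = systems / staff
  print(per_staff)                          # 5000 systems per staff member
  print(per_staff * blade_vs_mainframe)     # 50000 mainframe-equivalents per staff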

old article from last decade discussing some of the details
https://www.datacenterknowledge.com/archives/2014/10/15/how-is-a-mega-data-center-different-from-a-massive-one

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

Washington Doubles Down on Hyper-Hypocrisy After Accusing China of Using Debt to "Trap" Latin American Countries

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Washington Doubles Down on Hyper-Hypocrisy After Accusing China of Using Debt to "Trap" Latin American Countries
Date: 08 July 2022
Blog: Facebook
Washington Doubles Down on Hyper-Hypocrisy After Accusing China of Using Debt to "Trap" Latin American Countries
https://www.nakedcapitalism.com/2022/07/us-hypocrisy-plumbs-new-lows-as-pentagon-accuses-china-of-using-debt-to-trap-latin-american-countries.html

Nonetheless, Richardson's warning reeks of rank hypocrisy. After all, no country has done more to trap the economies of Latin America (and beyond) under an insurmountable mountain of toxic debt than the US. Since the 1980s over exuberant lending on the part of the largely US-controlled World Bank, regional development banks, US and European commercial banks and investors has repeatedly fuelled speculative booms that have quickly turned to bust. Once that happens, the IMF swoops in with a prescription for crippling austerity medicine.

... snip ...

a couple past posts mentioning China "debt traps"
https://www.garlic.com/~lynn/2021k.html#71 MI6 boss warns of China 'debt traps and data traps'
https://www.garlic.com/~lynn/2019.html#13 China's African debt-trap ... and US Version

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

and other posts mentioning "Confessions of Economic Hitman"
https://www.garlic.com/~lynn/2022e.html#41 Wall Street's Plot to Seize the White House
https://www.garlic.com/~lynn/2022b.html#104 Why Nixon's Prediction About Putin and Ukraine Matters
https://www.garlic.com/~lynn/2021i.html#97 The End of World Bank's "Doing Business Report": A Landmark Victory for People & Planet
https://www.garlic.com/~lynn/2021i.html#33 Afghanistan's Corruption Was Made in America
https://www.garlic.com/~lynn/2021g.html#68 Four Officers Rip Into Trump, Give Moving Testimony About January 6 Riot
https://www.garlic.com/~lynn/2021f.html#26 Why We Need to Democratize Wealth: the U.S. Capitalist Model Breeds Selfishness and Resentment
https://www.garlic.com/~lynn/2021d.html#75 The "Innocence" of Early Capitalism is Another Fantastical Myth
https://www.garlic.com/~lynn/2019e.html#106 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#92 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#38 World Bank, Dictatorship and the Amazon
https://www.garlic.com/~lynn/2019e.html#18 Before the First Shots Are Fired
https://www.garlic.com/~lynn/2019d.html#79 Bretton Woods Institutions: Enforcers, Not Saviours?
https://www.garlic.com/~lynn/2019d.html#54 Global Warming and U.S. National Security Diplomacy
https://www.garlic.com/~lynn/2019d.html#52 The global economy is broken, it must work for people, not vice versa
https://www.garlic.com/~lynn/2019c.html#40 When Dead Companies Don't Die - Welcome To The Fat, Slow World
https://www.garlic.com/~lynn/2019c.html#36 Is America A Christian Nation?
https://www.garlic.com/~lynn/2019c.html#17 Family of Secrets
https://www.garlic.com/~lynn/2019.html#85 LUsers
https://www.garlic.com/~lynn/2019.html#45 Jeffrey Skilling, Former Enron Chief, Released After 12 Years in Prison
https://www.garlic.com/~lynn/2019.html#43 Billionaire warlords: Why the future is medieval
https://www.garlic.com/~lynn/2019.html#42 Army Special Operations Forces Unconventional Warfare
https://www.garlic.com/~lynn/2019.html#41 Family of Secrets
https://www.garlic.com/~lynn/2018c.html#44 Anatomy of Failure: Why America Loses Every War It Starts
https://www.garlic.com/~lynn/2018b.html#60 Revealed - the capitalist network that runs the world
https://www.garlic.com/~lynn/2018b.html#30 free, huh, was Bitcoin confusion?
https://www.garlic.com/~lynn/2018.html#82 DEC and HVAC
https://www.garlic.com/~lynn/2018.html#14 Predicting the future in five years as seen from 1983
https://www.garlic.com/~lynn/2017k.html#66 Innovation?, Government, Military, Commercial
https://www.garlic.com/~lynn/2017i.html#64 The World America Made
https://www.garlic.com/~lynn/2017e.html#105 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017e.html#103 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016h.html#38 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#3 Smedley Butler
https://www.garlic.com/~lynn/2016f.html#22 US and UK have staged coups before
https://www.garlic.com/~lynn/2016c.html#69 Qbasic
https://www.garlic.com/~lynn/2016c.html#7 Why was no one prosecuted for contributing to the financial crisis? New documents reveal why
https://www.garlic.com/~lynn/2016b.html#39 Failure as a Way of Life; The logic of lost wars and military-industrial boondoggles
https://www.garlic.com/~lynn/2016b.html#31 Putin holds phone call with Obama, urges better defense cooperation in fight against ISIS
https://www.garlic.com/~lynn/2015h.html#122 For those who like to regress to their youth? :-)
https://www.garlic.com/~lynn/2015g.html#14 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015g.html#11 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015f.html#45 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015f.html#44 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2015f.html#30 Analysis: Root of Tattered US-Russia Ties Date Back Decades
https://www.garlic.com/~lynn/2015e.html#67 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015c.html#13 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015b.html#68 Why do we have wars?
https://www.garlic.com/~lynn/2015b.html#8 Shoot Bank Of America Now---The Case For Super Glass-Steagall Is Overwhelming
https://www.garlic.com/~lynn/2015b.html#5 Swiss Leaks lifts the veil on a secretive banking system
https://www.garlic.com/~lynn/2015b.html#4 Pay Any Price: Greed, Power, and Endless War
https://www.garlic.com/~lynn/2015b.html#1 do you blame Harvard for Puten
https://www.garlic.com/~lynn/2014j.html#104 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014g.html#66 Revamped PDP-11 in Brooklyn
https://www.garlic.com/~lynn/2014d.html#47 Stolen F-35 Secrets Now Showing Up in China's Stealth Fighter
https://www.garlic.com/~lynn/2014d.html#38 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014d.html#37 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014c.html#49 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014c.html#41 UK government plans switch from Microsoft Office to open source
https://www.garlic.com/~lynn/2014b.html#62 UK government plans switch from Microsoft Office to open source
https://www.garlic.com/~lynn/2014b.html#38 Can America Win Wars
https://www.garlic.com/~lynn/2014.html#40 Royal Pardon For Turing
https://www.garlic.com/~lynn/2013m.html#80 The REAL Reason U.S. Targets Whistleblowers
https://www.garlic.com/~lynn/2013k.html#69 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013i.html#78 Has the US Lost Its Grand Strategic Mind?
https://www.garlic.com/~lynn/2013e.html#51 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013e.html#25 What Makes bank regulation and insurance Bizarre?
https://www.garlic.com/~lynn/2013e.html#7 How to Cut Megabanks Down to Size
https://www.garlic.com/~lynn/2013d.html#98 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013d.html#95 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013d.html#93 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012o.html#2 OT: Tax breaks to Oracle debated
https://www.garlic.com/~lynn/2012n.html#83 Protected: R.I.P. Containment
https://www.garlic.com/~lynn/2012n.html#60 The IBM mainframe has been the backbone of most of the world's largest IT organizations for more than 48 years
https://www.garlic.com/~lynn/2012k.html#45 If all of the American earned dollars hidden in off shore accounts were uncovered and taxed do you think we would be able to close the deficit gap?
https://www.garlic.com/~lynn/2012j.html#81 GBP13tn: hoard hidden from taxman by global elite
https://www.garlic.com/~lynn/2012f.html#70 The Army and Special Forces: The Fantasy Continues
https://www.garlic.com/~lynn/2012e.html#70 Disruptive Thinkers: Defining the Problem
https://www.garlic.com/~lynn/2012d.html#57 Study Confirms The Government Produces The Buggiest Software
https://www.garlic.com/~lynn/2012.html#25 You may ask yourself, well, how did I get here?
https://www.garlic.com/~lynn/2011p.html#111 Matt Taibbi with Xmas Message from the Rich
https://www.garlic.com/~lynn/2011p.html#80 The men who crashed the world
https://www.garlic.com/~lynn/2011p.html#71 A question for the readership
https://www.garlic.com/~lynn/2011p.html#63 21st Century Management approach?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Quota

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Quota
Date: 09 July 2022
Blog: Facebook

https://www.garlic.com/~lynn/2022e.html#73 Technology Flashback
https://www.garlic.com/~lynn/2022e.html#0 IBM Quota
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota

In the late 80s, my wife had been asked to co-author a response to a request from a gov. high-security agency ... in the response she included 3-tier (network) architecture, ethernet, tcp/ip, etc. We were then out pitching our HSDT project (T1 and faster computer links) and her architecture to customer executives. This was at a time when the communication group was fiercely fighting off client/server (2-tier) and distributed computing (and we were taking all sorts of misinformation arrows in the back from their SNA, SAA, and token-ring organizations). At one point, we had detailed discussions with GM/EDS and they happened to mention that they had decided to move off SNA to X.25. The next week we were in a meeting in Raleigh and happened to mention what GM/EDS said ... the Raleigh people angrily argued for several minutes and then left the room. When they came back, they said, OK, GM/EDS has decided to move to X.25 ... but it won't make any difference since they had already spent that year's budget on 3725s.

3-tier posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Some folklore: the last product we did at IBM was HA/CMP. It started out as HA/6000 for the NYTimes to allow them to migrate their newspaper system (ATEX) from VAXCluster to RS/6000. I rename it HA/CMP (High Availability Cluster Multi-Processing) after working with national labs on technical/scientific cluster scale-up and RDBMS vendors (Ingres, Informix, Oracle, Sybase) that had VAXCluster support in the same source base with unix (lots of work easing their VAXCluster RDBMS support to a cluster unix base). Then cluster scale-up was transferred, announced as IBM Supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Later we are brought in as consultants to a small client/server startup; two of the former Oracle people (that we had worked with on HA/CMP RDBMS) were there, responsible for something called "commerce server". The startup had also invented something they called "SSL" that they wanted to use; the result is now frequently called electronic commerce.

Later, around the turn of the century, we are asked to spend a year in the Redmond/Kirkland area (not far from my old Boeing stomping grounds), working with several companies in the area on electronic commerce related stuff, including a large software company in Redmond. One of the smaller companies was a Kerberos security company (which had a contract to port Kerberos to the platform where it became Active Directory). They had a CEO "for hire" that we would meet with a couple times a month ... who had previously headed IBM POK and then IBM Boca ... and became a CEO "for hire" after leaving IBM. One of his first such positions was CEO of Perot Systems, with the assignment to take it public.

some old electronic commerce (gateway) posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
some kerberos posts
https://www.garlic.com/~lynn/subpubkey.html#kerberos

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Quota

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Quota
Date: 10 July 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#77 IBM Quota
https://www.garlic.com/~lynn/2022e.html#73 Technology Flashback
https://www.garlic.com/~lynn/2022e.html#0 IBM Quota
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota

Kerberos topic-drift. Kerberos was done at MIT Project Athena. Both IBM and DEC had contributed $25M to Project Athena and each got an Asst. Director. The IBM Asst. Director I had worked with many years earlier at the IBM Cambridge Science Center, and I have mentioned several times that he had invented the 370 Compare-and-Swap instruction (chosen because "CAS" are his initials) when he was doing CP67 fine-grain multiprocessor locking.

While doing our HA/CMP product, we were asked to periodically drop by MIT to review Project Athena efforts ... on one such visit, I sat through discussions on creating "cross-domain" support. Trivia: after the turn of the century, I was sitting through a presentation on SAML cross-domain deployment between US & allied gov agencies. Afterwards, I commented to the presenter that the SAML flows looked identical to the Kerberos cross-domain flows.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
SMP, multiprocessor and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts mentioning SAML &/or Kerberos cross-domain
https://www.garlic.com/~lynn/2016d.html#100 Multithreaded output to stderr and stdout
https://www.garlic.com/~lynn/2011m.html#11 PKI "fixes" that don't fix PKI
https://www.garlic.com/~lynn/2011h.html#56 pdp8 to PC- have we lost our way?
https://www.garlic.com/~lynn/2010f.html#3 Why is Kerberos ever used, rather than modern public key cryptography?
https://www.garlic.com/~lynn/2010e.html#15 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2008p.html#23 Your views on the increase in phishing crimes such as the recent problem French president Sarkozy faces
https://www.garlic.com/~lynn/2007q.html#2 Windows Live vs Kerberos
https://www.garlic.com/~lynn/2005q.html#49 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005g.html#2 Cross-Realm Authentication
https://www.garlic.com/~lynn/2003h.html#53 Question about Unix "heritage"
https://www.garlic.com/~lynn/aadsm27.htm#23 Identity resurges as a debate topic
https://www.garlic.com/~lynn/aadsm20.htm#25 Cross logins
https://www.garlic.com/~lynn/aadsm2.htm#pkikrb PKI/KRB

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 10 July 2022
Blog: LinkedIn
After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters ... I got to wander around a lot of IBM datacenters and observe what was going on. The science center had done activity monitors and deployed them on their own systems as well as on most other systems running CP67 and then VM370 ... aggregating the results at the science center. Had years of data on hundreds of systems, a large number of workload types, and wide variation in workload combinations across those hundreds of systems.

I had originally done dynamic adaptive resource management as an undergraduate in the 60s ... and would significantly improve on it after joining IBM (as well as pathlength optimization, I/O optimization, paging optimization, etc). In the morph from CP67->VM370, lots of features were dropped and/or significantly simplified. Originally for CP67 we had developed automated benchmarking and parameterised synthetic workloads ... so could specify various combinations of different workloads representing different kinds of aggregate activity (including stress testing many times greater than any actual activity observed). When I first started the migration from CP67 to VM370 Release 2 ... the VM370 system would consistently fail under even light and moderate benchmarking profiles ... so some of the initial code migration was moving the CP67 integrity and serialization mechanisms to VM370 (to eliminate the constant system failures as well as hung/zombie users).

There was also a CMS\APL-based analytical system model developed at the science center which was made available on the world-wide sales&marketing support HONE systems as the performance predictor. Customer configuration and/or workload information could be entered and then "what-if" questions asked about the results of configuration and/or workload changes.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone

During the Future System effort in the 70s (FS was completely different from 370 and was going to completely replace it; I continued to work with 360&370 all during this period, including periodically ridiculing the FS activity), internal politics were suspending/killing 370 efforts (the limited new 370 during the FS period is credited with giving the clone 370 makers their market foothold). When FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. Some more FS history
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

As part of the 23jun1969 unbundling, IBM started charging for application software, but managed to make the case that kernel software should still be free. With the demise of FS (and the rise of the clone 370 makers), there was a decision to start charging for kernel software, initially just for new, add-on kernel packages (but with an eventual migration to all kernel software being charged for). A bunch of stuff from my enhanced systems for internal datacenters was selected to be the initial guinea pig.

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

Preparing for the first release, I had 2000 automated benchmarks that took three months elapsed time to run with varied configurations and workloads. From the years of activity data on hundreds of live systems, a multi-dimensional configuration/workload space was defined; configurations and workloads for the first 1000 benchmarks were uniformly distributed through this space, with a couple hundred exceeding (stress testing) anything observed on live systems. A modified version of the performance predictor would predict the result of each benchmark and then compare the actual results with the prediction (helping validate both the CMS\APL-based model and my dynamic adaptive resource management). Then for the 2nd thousand benchmarks, the modified performance predictor would select each benchmark's configuration & workload ... searching for possibly anomalous combinations (that might have been missed with the uniform distribution).
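
As a rough illustration of the approach (not the actual APL/CMS tooling), here is a minimal C sketch of drawing benchmark points uniformly from a multi-dimensional configuration/workload space and comparing a model's prediction against a measured result; the dimension names, ranges, and the stand-in predict/run_benchmark functions are purely hypothetical.

#include <stdio.h>
#include <stdlib.h>

#define DIMS 4   /* e.g. users, working-set size, I/O rate, CPU intensity */

/* stand-ins for the analytic model and an actual benchmark run */
static double predict(const double c[DIMS])
{
    return 1000.0 / (1.0 + 0.01*c[0] + 0.02*c[1] + 0.001*c[2] + 0.03*c[3]);
}
static double run_benchmark(const double c[DIMS])
{
    return predict(c) * (0.9 + 0.2 * rand() / (double)RAND_MAX);  /* fake measurement */
}

int main(void)
{
    const double lo[DIMS] = {1, 1, 1, 1};
    const double hi[DIMS] = {400, 64, 500, 100};  /* stress cases go beyond the observed max */
    double cfg[DIMS];

    for (int b = 0; b < 10; b++) {                /* 1000 in the real exercise */
        for (int d = 0; d < DIMS; d++)            /* uniform point in the space */
            cfg[d] = lo[d] + (hi[d] - lo[d]) * (rand() / (double)RAND_MAX);
        double p = predict(cfg), m = run_benchmark(cfg);
        printf("bench %d: predicted %.1f measured %.1f (%+.1f%%)\n",
               b, p, m, 100.0 * (m - p) / p);
    }
    return 0;
}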

automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
dynamic adaptive resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
working set & paging systems
https://www.garlic.com/~lynn/subtopic.html#clock

Later, after transferring to research in silicon valley, I got to wander around both IBM and customer datacenters, including bldg14 (disk engineering) and bldg15 (disk product test) across the street. At the time they were doing prescheduled, 7x24, stand-alone machine testing. They said that they had recently tried MVS, but it had a 15min mean-time-between-failure (in that environment). I offered to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of on-demand, concurrent testing ... greatly improving productivity. I then wrote up an (internal) research report on the work and happened to mention the MVS 15min MTBF ... bringing down the wrath of the MVS organization on my head. Informally I was told they tried to have me separated from the IBM company; when that failed, they would make my time at IBM unpleasant in other ways (however the joke was on them, I was already being told I had no career, no promotions, no awards and no raises). A few years later when 3380s were about to ship, FE had a regression test of 57 errors that were likely to occur; in all 57 cases, MVS would fail (requiring re-ipl) and in 2/3rds of the cases there was no indication of what caused the failure. I didn't feel bad.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

some posts mentioning the 57 errors and MVS failure
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#35 Error Handling
https://www.garlic.com/~lynn/2019b.html#53 S/360
https://www.garlic.com/~lynn/2018f.html#57 DASD Development
https://www.garlic.com/~lynn/2015f.html#89 Formal definition of Speed Matching Buffer

some recent posts mentioning performance predictor
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Quota

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Quota
Date: 10 July 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#78 IBM Quota
https://www.garlic.com/~lynn/2022e.html#77 IBM Quota
https://www.garlic.com/~lynn/2022e.html#73 Technology Flashback
https://www.garlic.com/~lynn/2022e.html#0 IBM Quota
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota

The VM group started shipping PID tapes (periodic "releases" and monthly PLC) with assembled text decks on the front ... followed by full source (assemble files and incremental updates for the PLC changes).

CMS had an update command to apply update changes to source files, creating a temporary work file. Endicott came out and started distributed development to update CP67 to support 370 virtual machines, including support for the unannounced 370 virtual memory (tables somewhat different from the 360/67 tables). Thus was born the incremental update effort (originally all in exec). The production system on the real 360/67 was CP67L (with lots of my changes); running in a 360/67 virtual machine was CP67H (modifications to provide 370 virtual machines); and running in a 370 virtual machine was CP67I (modifications to run with 370 instructions and virtual memory tables). This was regularly running a year before the first engineering 370/145 (in Endicott) was operational (in fact, CP67I was used to test its operation).

The reason that CP67H ran in a virtual machine ... rather than on the bare hardware ... was that the Cambridge system had a lot of staff & student users from local universities, and they wanted to make sure unannounced 370 virtual memory details didn't leak.

Mid-80s, Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist
asked if I had a copy of the original incremental update code. Luckily for her I was able to pull it off CSC backup tapes I had in the Almaden tape library ... a couple months later Almaden had an operational problem with random tapes being mounted as scratch ... I lost a dozen tapes, including all my (triple replicated) CSC backup tapes.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

recent posts mentioning cp67l, cp67h, cp67i, cp67sj
https://www.garlic.com/~lynn/2022e.html#6 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022d.html#62 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#18 Computer Server Market
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021h.html#53 PROFS
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021c.html#5 Z/VM

other posts mentioning Almaden tape library
https://www.garlic.com/~lynn/2022c.html#83 VMworkshop.og 2022
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2021g.html#89 Keeping old (IBM) stuff
https://www.garlic.com/~lynn/2021.html#22 Almaden Tape Library
https://www.garlic.com/~lynn/2017d.html#52 Some IBM Research RJ reports
https://www.garlic.com/~lynn/2014g.html#98 After the Sun (Microsystems) Sets, the Real Stories Come Out
https://www.garlic.com/~lynn/2014d.html#19 Write Inhibit
https://www.garlic.com/~lynn/2013n.html#60 Bridgestone Sues IBM For $600 Million Over Allegedly 'Defective' System That Plunged The Company Into 'Chaos'
https://www.garlic.com/~lynn/2013f.html#73 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013e.html#61 32760?
https://www.garlic.com/~lynn/2013b.html#61 Google Patents Staple of '70s Mainframe Computing
https://www.garlic.com/~lynn/2012i.html#22 The Invention of Email
https://www.garlic.com/~lynn/2011o.html#16 Dennis Ritchie
https://www.garlic.com/~lynn/2011m.html#12 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011g.html#29 Congratulations, where was my invite?
https://www.garlic.com/~lynn/2011c.html#4 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#3 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#39 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
https://www.garlic.com/~lynn/2010q.html#45 Is email dead? What do you think?
https://www.garlic.com/~lynn/2010e.html#32 Need tool to zap core
https://www.garlic.com/~lynn/2006w.html#42 vmshare

--
virtualization experience starting Jan1968, online at home since Mar1970

There is No Nobel Prize in Economics

From: Lynn Wheeler <lynn@garlic.com>
Subject: There is No Nobel Prize in Economics
Date: 11 July 2022
Blog: Facebook
There is No Nobel Prize in Economics
https://www.nakedcapitalism.com/2022/07/there-is-no-nobel-prize-in-economics.html

More specifically, the fake Nobel Prize in economics has strongly favored adherents to the Chicago School of Economics' neoliberal dogma. So it has served to act as an enforcer of conservative, capital-backing, anti-labor policies. The Nobel Foundation has bleated about the Swedish Riksbank hijacking of their name, but clearly gave up.

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess

past posts specifically mentioning milton friedman &/or chicago school
https://www.garlic.com/~lynn/2022d.html#84 Destruction Of The Middle Class
https://www.garlic.com/~lynn/2022c.html#97 Why Companies Are Becoming B Corporations
https://www.garlic.com/~lynn/2022c.html#96 Why Companies Are Becoming B Corporations
https://www.garlic.com/~lynn/2021k.html#30 Why Mislead Readers about Milton Friedman and Segregation?
https://www.garlic.com/~lynn/2021j.html#34 Chicago Boys' 100% Private Pension System in Chile Is in Big Trouble
https://www.garlic.com/~lynn/2021i.html#36 We've Structured Our Economy to Redistribute a Massive Amount of Income Upward
https://www.garlic.com/~lynn/2021h.html#22 Neoliberalism: America Has Arrived at One of History's Great Crossroads
https://www.garlic.com/~lynn/2021f.html#17 Jamie Dimon: Some Americans 'don't feel like going back to work'
https://www.garlic.com/~lynn/2021.html#21 ESG Drives a Stake Through Friedman's Legacy
https://www.garlic.com/~lynn/2020.html#25 Huawei 5G networks
https://www.garlic.com/~lynn/2020.html#15 The Other 1 Percent": Morgan Stanley Spots A Market Ratio That Is "Unprecedented Even During The Tech Bubble"
https://www.garlic.com/~lynn/2019e.html#158 Goliath
https://www.garlic.com/~lynn/2019e.html#149 Why big business can count on courts to keep its deadly secrets
https://www.garlic.com/~lynn/2019e.html#64 Capitalism as we know it is dead
https://www.garlic.com/~lynn/2019e.html#51 Big Pharma CEO: 'We're in Business of Shareholder Profit, Not Helping The Sick
https://www.garlic.com/~lynn/2019e.html#50 Economic Mess and Regulations
https://www.garlic.com/~lynn/2019e.html#32 Milton Friedman's "Shareholder" Theory Was Wrong
https://www.garlic.com/~lynn/2019e.html#31 Milton Friedman's "Shareholder" Theory Was Wrong
https://www.garlic.com/~lynn/2019e.html#14 Chicago Theory
https://www.garlic.com/~lynn/2019d.html#48 Here's what Nobel Prize-winning research says will make you more influential
https://www.garlic.com/~lynn/2019c.html#73 Wage Stagnation
https://www.garlic.com/~lynn/2019c.html#68 Wage Stagnation
https://www.garlic.com/~lynn/2018f.html#117 What Minimum-Wage Foes Got Wrong About Seattle
https://www.garlic.com/~lynn/2018f.html#107 Politicians have caused a pay 'collapse' for the bottom 90 percent of workers, researchers say
https://www.garlic.com/~lynn/2018e.html#115 Economists Should Stop Defending Milton Friedman's Pseudo-science
https://www.garlic.com/~lynn/2018c.html#83 Economists and the Powerful: Convenient Theories, Distorted Facts, Ample Rewards
https://www.garlic.com/~lynn/2018c.html#81 What Lies Beyond Capitalism And Socialism?
https://www.garlic.com/~lynn/2018b.html#87 Where Is Everyone???
https://www.garlic.com/~lynn/2018b.html#82 The Real Reason the Investor Class Hates Pensions
https://www.garlic.com/~lynn/2018.html#25 Trump's Infrastructure Plan Is Actually Pence's--And It's All About Privatization
https://www.garlic.com/~lynn/2017i.html#60 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017i.html#47 Retirement Heist: How Firms Plunder Workers' Nest Eggs
https://www.garlic.com/~lynn/2017h.html#116 The Real Reason Wages Have Stagnated: Our Economy Is Optimized For Financialization
https://www.garlic.com/~lynn/2017h.html#92 'X' Marks the Spot Where Inequality Took Root: Dig Here
https://www.garlic.com/~lynn/2017h.html#9 Corporate Profit and Taxes
https://www.garlic.com/~lynn/2017g.html#107 Why IBM Should -- and Shouldn't -- Break Itself Up
https://www.garlic.com/~lynn/2017g.html#83 How can we stop algorithms telling lies?
https://www.garlic.com/~lynn/2017g.html#79 Bad Ideas
https://www.garlic.com/~lynn/2017g.html#49 Shareholders Ahead Of Employees
https://www.garlic.com/~lynn/2017g.html#19 Financial, Healthcare, Construction, Education complexity
https://www.garlic.com/~lynn/2017g.html#6 Mapping the decentralized world of tomorrow
https://www.garlic.com/~lynn/2017f.html#53 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017f.html#45 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017f.html#44 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017f.html#16 Conservatives and Spending
https://www.garlic.com/~lynn/2017e.html#96 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017e.html#44 [CM] cheap money, was What was your first home computer?
https://www.garlic.com/~lynn/2017e.html#7 Arthur Laffer's Theory on Tax Cuts Comes to Life Once More
https://www.garlic.com/~lynn/2017d.html#93 United Air Lines - an OODA-loop perspective
https://www.garlic.com/~lynn/2017d.html#77 Trump delay of the 'fiduciary rule' will cost retirement savers $3.7 billion
https://www.garlic.com/~lynn/2017d.html#67 Economists are arguing over how their profession messed up during the Great Recession. This is what happened
https://www.garlic.com/~lynn/2017b.html#43 when to get out???
https://www.garlic.com/~lynn/2017b.html#17 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017b.html#11 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#102 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#97 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#92 Trump's Rollback of the Neoliberal Market State
https://www.garlic.com/~lynn/2017.html#34 If economists want to be trusted again, they should learn to tell jokes
https://www.garlic.com/~lynn/2017.html#31 Milton Friedman's Cherished Theory Is Laid to Rest
https://www.garlic.com/~lynn/2017.html#29 Milton Friedman's Cherished Theory Is Laid to Rest
https://www.garlic.com/~lynn/2017.html#26 Milton Friedman's Cherished Theory Is Laid to Rest
https://www.garlic.com/~lynn/2017.html#24 Destruction of the Middle Class
https://www.garlic.com/~lynn/2017.html#17 Destruction of the Middle Class
https://www.garlic.com/~lynn/2016d.html#72 Five Outdated Leadership Ideas That Need To Die
https://www.garlic.com/~lynn/2013f.html#34 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012p.html#64 IBM Is Changing The Terms Of Its Retirement Plan, Which Is Frustrating Some Employees
https://www.garlic.com/~lynn/2008o.html#18 Once the dust settles, do you think Milton Friedman's economic theories will be laid to rest
https://www.garlic.com/~lynn/2008c.html#16 Toyota Sales for 2007 May Surpass GM

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 11 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems

Besides wandering around IBM datacenters after joining IBM ... I was still allowed to attend user group meetings and drop in on customers. The manager of one of the largest financial datacenters liked me to stop by and talk technology. At one point, the IBM branch manager horribly offended the customer ... and in retaliation they ordered an Amdahl system (a lone Amdahl system in a vast sea of "blue"). Amdahl had been selling into the tech/science/university market, but this would be the first in a commercial, "true blue" account. I was asked to go onsite at the customer for 6-12 months to help obfuscate why the customer was ordering an Amdahl system. I talk it over with the customer and then decline IBM's offer. I was then told that the branch manager was a good sailing buddy of the IBM CEO, and if I didn't do this, I could forget having a career, promotions, and/or raises.

Later in the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM ... and could readily relate to one of his quotations:

"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or To Do, that is the question."

... snip ...

Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html

some posts mentioning branch manager good sailing buddy of IBM CEO
https://www.garlic.com/~lynn/2022e.html#60 IBM CEO: Only 60% of office workers will ever return full-time
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022d.html#21 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021h.html#61 IBM Starting Salary
https://www.garlic.com/~lynn/2021e.html#66 Amdahl
https://www.garlic.com/~lynn/2021e.html#15 IBM Internal Network
https://www.garlic.com/~lynn/2021d.html#85 Bizarre Career Events
https://www.garlic.com/~lynn/2021d.html#66 IBM CEO Story
https://www.garlic.com/~lynn/2021c.html#37 Some CP67, Future System and other history
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
https://www.garlic.com/~lynn/2021.html#8 IBM CEOs
https://www.garlic.com/~lynn/2019e.html#138 Half an operating system: The triumph and tragedy of OS/2
https://www.garlic.com/~lynn/2019e.html#29 IBM History
https://www.garlic.com/~lynn/2018f.html#68 IBM Suits
https://www.garlic.com/~lynn/2018e.html#27 Wearing a tie cuts circulation to your brain
https://www.garlic.com/~lynn/2018c.html#27 Software Delivery on Tape to be Discontinued
https://www.garlic.com/~lynn/2018.html#55 Now Hear This--Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2017c.html#92 An OODA-loop is a far-from-equilibrium, non-linear system with feedback
https://www.garlic.com/~lynn/2016h.html#86 Computer/IBM Career
https://www.garlic.com/~lynn/2016e.html#95 IBM History
https://www.garlic.com/~lynn/2016.html#41 1976 vs. 2016?
https://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2013l.html#22 Teletypewriter Model 33

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 11 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems

For Boyd's first briefing, I tried to do it through plant site employee education. At first they agreed, but as I provided more information about how to prevail/win in competitive situations, they changed their mind. They said that IBM spends a great deal of money training managers on how to handle employees and it wouldn't be in IBM's best interest to expose general employees to Boyd; I should limit the audience to senior members of competitive analysis departments. The first briefing was in the bldg28 auditorium, open to all.

Trivia: In 89/90 the commandant of the Marine Corps leverages Boyd for a make-over of the corps ... at a time when IBM was desperately in need of a make-over ... in 1992 IBM has one of the largest losses in US corporate history and was being reorg'ed into the 13 "baby blues" in preparation for breaking up the company.

When Boyd passes in 1997, the USAF had pretty much disowned him and it was the Marines at Arlington (and his effects go to Gray Research Center & Library in Quantico). There have continued to be Boyd conferences at Marine Corps Univ. in Quantico ... including lots of discussions about careerists and bureaucrats (as well as the "old boy networks" and "risk averse").

Chuck's tribute to John
http://www.usni.org/magazines/proceedings/1997-07/genghis-john
for those w/o subscription
http://radio-weblogs.com/0107127/stories/2002/12/23/genghisJohnChuckSpinneysBioOfJohnBoyd.html
John Boyd - USAF. The Fighter Pilot Who Changed the Art of Air Warfare
http://www.aviation-history.com/airmen/boyd.htm
"John Boyd's Art of War; Why our greatest military theorist only made colonel"
http://www.theamericanconservative.com/articles/john-boyds-art-of-war/
40 Years of the 'Fighter Mafia'
https://www.theamericanconservative.com/articles/40-years-of-the-fighter-mafia/
Fighter Mafia: Colonel John Boyd, The Brain Behind Fighter Dominance
https://www.avgeekery.com/fighter-mafia-colonel-john-boyd-the-brain-behind-fighter-dominance/
Updated version of Boyd's Aerial Attack Study
https://tacticalprofessor.wordpress.com/2018/04/27/updated-version-of-boyds-aerial-attack-study/
A New Conception of War
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/

Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 11 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#83 Enhanced Production Operating Systems

The last product we did at IBM was HA/CMP. It started out as HA/6000 for the NYTimes, to allow them to migrate their newspaper system (ATEX) from VAXCluster to RS/6000. I rename it HA/CMP (High Availability Cluster Multi-Processing) after working with national labs on technical/scientific cluster scale-up and with RDBMS vendors (Ingres, Informix, Oracle, Sybase) that had VAXCluster support in the same source base as unix (lots of work easing the port of their VAXCluster RDBMS support to a cluster unix base). Old post about the Jan1992 cluster scale-up meeting with the Oracle CEO: 16-way mid1992, 128-way ye1992.
https://www.garlic.com/~lynn/95.html#13

Within a few weeks of the Oracle CEO meeting, cluster scale-up was transferred, announced as IBM Supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Later we are brought into a small client/server startup as consultants. A couple of the former Oracle people (that we worked with on cluster scale-up and who were in the Ellison meeting) were there, responsible for something called the commerce server, and wanted to do financial transactions on the server; the startup had also invented technology they called "SSL" ... the result is now sometimes referred to as electronic commerce. I had responsibility for everything between webservers and the payment networks.

ecommerce payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

Having done "electronic commerce", invited into participating finanncial standards body (X9). Then in Jan1999 am asked to help try and stop the coming economic mess (we failed). A decade later (living in DC area), am asked to web'ize the Pecora Hearings (30s senate hearings into '29 crash, resulted in criminal convictions and jail sentences) with lots of internal HREFs and URLs between what happened then and what happened this time (comments that new congress might have appetite to do something). I work on it for a couple months and then get a call saying it won't be needed after all (comments that capital hill was totally buried under enormous mountains of wallstreet cash). We don't stay much longer in DC area.

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
Pecora &/or class-steagall posts
https://www.garlic.com/~lynn/submisc.html#Pecora&/orGlass-Steagall

Being heavily involved in financial standards ... including security, cryptography, and hardware ... I would get invited into gov. agencies (even tho I didn't have clearance). I gave a talk at the NIST security conference about taking a $500 mil-spec chip and cost-reducing it by at least two orders of magnitude (to under $1), while increasing security.
http://csrc.nist.gov/nissc/1998/index.html
part of presentation
https://www.garlic.com/~lynn/nissc21.zip

A senior technical director to the agency DDI up at Ft. Meade was doing an assurance panel in the trusted computing track at IDF and asked me to participate ... giving a talk on the chip (and financial security). The guy running TPM was in the front row, so I say that it is nice to see TPM starting to look more like my chip; he responds that I didn't have a committee of 200 people helping me ... gone 404, but lives on at the wayback machine
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13
part of presentation
https://www.garlic.com/~lynn/iasrtalk.zip

x9.59 posts
https://www.garlic.com/~lynn/subpubkey.html#x959
AADS refs
https://www.garlic.com/~lynn/x959.html#aads

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 11 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#83 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems

other trivia: I was blamed for online computer conferencing in the late 70s and early 80s on the internal computer network. It really took off spring 1981, when I distributed a trip report of a visit to Jim Gray at Tandem (only about 300 directly participated, but claims upwards of 25,000 were reading; folklore is that when the corporate executive committee was told, 5of6 wanted to fire me). From the IBM Jargon Dictionary:
https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...

300 pages were printed and, along with an executive summary and a summary of the summary, packaged in Tandem 3-ring binders and sent to the corporate executive committee (folklore is 5of6 wanted to fire me). From the summary of the summary:

• The perception of many technical people in IBM is that the company is rapidly heading for disaster. Furthermore, people fear that this movement will not be appreciated until it begins more directly to affect revenue, at which point recovery may be impossible

• Many technical people are extremely frustrated with their management and with the way things are going in IBM. To an increasing extent, people are reacting to this by leaving IBM. Most of the contributors to the present discussion would prefer to stay with IBM and see the problems rectified. However, there is increasing skepticism that correction is possible or likely, given the apparent lack of commitment by management to take action

• There is a widespread perception that IBM management has failed to understand how to manage technical people and high-technology development in an extremely competitive environment.


... snip ...

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

... but it takes another decade (1981-1992) ... IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company. The article has gone behind a paywall, but mostly lives free at the wayback machine.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM, but I get a call from the bowels of Armonk asking if I could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts. However, before getting started, the board brings in a new CEO and reverses the breakup.

IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 11 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#83 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#85 Enhanced Production Operating Systems

In the wake of the death of FS, the head of POK also managed to convince corporate to kill the vm370 product, shutdown the development group and transfer all the people to POK (for MVS/XA); the claim was that otherwise MVS/XA wouldn't be able to ship on time.

Part of the shutdown was trashing ongoing new vm370 work ... including significant extensions to CMS OS/simulation.

POK was not planning on telling the group until just before the move, to minimize the number that might escape; however, the information managed to leak and a number escaped (there was a witch hunt for the source of the leak, but fortunately nobody gave me up). This was also in the early days of DEC VAX/VMS and there was a joke that the head of POK was a major contributor to VMS. Endicott eventually managed to acquire the VM370 product mission, but had to reconstitute a development group from scratch.

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 12 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#83 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#85 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#86 Enhanced Production Operating Systems

Note the Watsons had Boyd's "To Be or To Do" types in IBM ... they were called "wild ducks" and were needed. Note that for the IBM century/100yrs celebration, one of the 100 videos was on wild ducks ... but it was customer wild ducks ... all references to employee wild ducks had been expunged. Below, Chairman Learson trying to block the rise of the (old boy) careerists and bureaucrats destroying the Watson legacy.

Management Briefing
Number 1-72: January 18,1972
ZZ04-1312

TO ALL IBM MANAGERS:

Once again, I'm writing you a Management Briefing on the subject of bureaucracy. Evidently the earlier ones haven't worked. So this time I'm taking a further step: I'm going directly to the individual employees in the company. You will be reading this poster and my comment on it in the forthcoming issue of THINK magazine. But I wanted each one of you to have an advance copy because rooting out bureaucracy rests principally with the way each of us runs his own shop.

We've got to make a dent in this problem. By the time the THINK piece comes out, I want the correction process already to have begun. And that job starts with you and with me.

Vin Learson


... and ...


+-----------------------------------------+
|           "BUSINESS ECOLOGY"            |
|                                         |
|                                         |
|            +---------------+            |
|            |  BUREAUCRACY  |            |
|            +---------------+            |
|                                         |
|           is your worst enemy           |
|              because it -               |
|                                         |
|      POISONS      the mind              |
|      STIFLES      the spirit            |
|      POLLUTES     self-motivation       |
|             and finally                 |
|      KILLS        the individual.       |
+-----------------------------------------+
"I'M Going To Do All I Can to Fight This Problem . . ."
by T. Vincent Learson, Chairman

... snip ...

How to Stuff a Wild Duck

"We are convinced that any business needs its wild ducks. And in IBM we try not to tame them." - T.J. Watson, Jr.

"How To Stuff A Wild Duck", 1973, IBM poster
https://collection.cooperhewitt.org/objects/18618011/

and regarding the budding Future System disaster in the 70s, from Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993 ....
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
Boyd posts and web url refs:
https://www.garlic.com/~lynn/subboyd.html
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 12 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#83 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#85 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#86 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#87 Enhanced Production Operating Systems

... other trivia: I had started HSDT in the early 80s with T1 and faster computer links (both terrestrial and satellite) and was working with the NSF director; I was supposed to get $20M to interconnect (TCP/IP) the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running); preliminary announce:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

internal IBM politics prevent us from bidding on the RFP. The NSF director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies, but that just makes the internal politics worse (as did claims that what we already had running was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). The winning bid didn't even have T1 links, just 440kbit/sec ... possibly to make it look like it conformed, they have T1 trunks with telco multiplexors running multiple links/trunk. As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/
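
Working the link numbers above: a T1 runs at roughly 1.5mbit/sec, so carving T1 trunks into 440kbit/sec links gives on the order of three links per trunk ... i.e. the trunks were T1, but none of the individual links was.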

one of the first HSDT T1 satellite links was between Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston (hudson valley, east coast), which eventually had a whole boatload of Floating Point Systems boxes.
https://en.wikipedia.org/wiki/Floating_Point_Systems

... in any case, recent HSDT & HSDTSFS post concerning the "VM Workshop", .... VM/370 50th birthday
https://www.garlic.com/~lynn/2022e.html#8 VM Workship ... VM/370 50th birthday

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 12 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#83 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#85 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#86 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#87 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#88 Enhanced Production Operating Systems

In 1980, STL (now SVL) was bursting at the seams and they were moving 300 people from the IMS group (and 300 3270 terminals) to an offsite bldg, with dataprocessing service back to the STL datacenter. They had tried "remote" 3270 (over telco lines), but found the human factors totally unacceptable (compared to channel connected 3270 controllers and my enhanced production operating systems). I get con'ed into doing channel-extender support to the offsite bldg so they could have channel attached controllers at the offsite bldg with no perceptible difference in response. The hardware vendor then tries to get IBM to release my support, but there are some engineers in POK playing with some serial stuff who were afraid that it would make it harder to release their stuff ... and get it veto'ed.

Note that 3270 controllers were relatively slow with exceptionally high channel busy ... getting them off the real IBM channels and onto a fast channel-extender interface box increased STL 370/168 throughput by 10-15% (the 3270 controllers had been spread across the same channels shared with DASD ... and were interfering with DASD throughput; the fast channel-extender box radically cut the channel busy for the same amount of 3270 I/O). STL considered using the channel-extender box for all their 3270 controllers (even those purely in house).

In 1988, the IBM branch office asks me to help LLNL (national lab) get some serial stuff they are playing with released as a standard ... which quickly becomes the fibre channel standard (FCS, including some stuff I had done in 1980) ... starting out full-duplex 1gbit/sec, 2gbit/sec aggregate, 200mbyte/sec. Then in 1990, the POK engineers get their stuff released (when it is already obsolete) with ES/9000 as ESCON (17mbytes/sec).

Then some POK engineers become involved in FCS and define a protocol that radically reduces the throughput, which is eventually released as FICON. The latest published FICON numbers are the z196 "peak I/O" benchmark, which got 2M IOPS using 104 FICON (running over 104 FCS). About the same time, an FCS was announced for E5-2600 blades (commonly used in cloud datacenters) getting over a million IOPS (two such native FCS having higher throughput than the 104 FICON running over 104 FCS).
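
Working the published numbers through: 2M IOPS spread across 104 FICON is roughly 2,000,000 / 104 ≈ 19,000 IOPS per FICON channel, versus over 1,000,000 IOPS for a single native FCS ... which is why just two of the native FCS exceed the aggregate of all 104 FICON (each of which is itself running over an FCS capable of far more).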

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 12 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#83 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#85 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#86 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#87 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#88 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#89 Enhanced Production Operating Systems

VM370 "wheeler scheduler" trivia/joke: A review by somebody in corporate was that I had no manual tuning knobs (like MVS SRM). I tried to explain that "dynamic adaptive" met constantly monitoring configuration and workload and dynamically adapting&tuning. He said all "modern" systems had manual tuning knobs and he wouldn't sign off on announce/ship until I had manual tuning knobs. So I created manual tuning knobs which could be changed with an "SRM" command (parody/ridiculing MVS), provided full documentation and formulas on how they worked. What very few realized that in "degrees of freedom," (from dynamic feedback/feed forward) for the SRM manual tuning knobs, it was less than the dynamic adaptive algorithms ... so the dynamic adaptive algorithms could correct for any manual tuning knobs setting.

Was on an HA/CMP marketing trip to hong kong, riding up in the elevator of the "tinkertoy" bank bldg with the customer and some from the branch ... and from the back of the elevator came the question, was I the Wheeler of the "wheeler scheduler"; he said they had studied it at univ. (a newly minted IBMer having graduated from Univ. of Waterloo). I said yes ... and asked if anybody had mentioned my SRM "joke".

dynamic adaptive resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

some past posts mentioning manual tuning knobs
https://www.garlic.com/~lynn/2022e.html#66 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021b.html#15 IBM Recruiting
https://www.garlic.com/~lynn/2021b.html#14 IBM Recruiting
https://www.garlic.com/~lynn/2021b.html#10 IBM Marketing Trips
https://www.garlic.com/~lynn/2019c.html#89 A New Theory On Time Indicates Present And Future Exist Simultaneously
https://www.garlic.com/~lynn/2019b.html#92 MVS Boney Fingers
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2013n.html#22 Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013b.html#61 Google Patents Staple of '70s Mainframe Computing
https://www.garlic.com/~lynn/2011k.html#85 The Grand Message in the Conceptual Spiral
https://www.garlic.com/~lynn/2011g.html#6 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2010m.html#81 Nostalgia
https://www.garlic.com/~lynn/2008m.html#10 Unbelievable Patent for JCL
https://www.garlic.com/~lynn/2007g.html#56 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006h.html#22 The Pankian Metaphor
https://www.garlic.com/~lynn/2002k.html#66 OT (sort-of) - Does it take math skills to do data processing ?
https://www.garlic.com/~lynn/2002c.html#16 OS Workloads : Interactive etc

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 12 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#83 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#85 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#86 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#87 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#88 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#89 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#90 Enhanced Production Operating Systems

trivia: There was an unannounced decision to convert all 370s to virtual memory and a copy of an internal document had leaked to the computer press. One result was adding a unique identifier to IBM company copy machines that would show up on every page copied. For Future System they tried to move to "soft copy" documents ... specially modified VM370 systems that would only allow read access to the documents from special 3270 terminals (allowing no copying, printing or anything except simple reading). I was starting the effort porting my stuff to VM370 and got some weekend time in the (VM370 development, out at Burlington Mall) machine room. I stopped by Friday afternoon to check that everything was set up for my weekend time. They mentioned that one of the machines ran the modified VM370 with FS documents and all the extra security ... and harassed me that even I wouldn't be able to bypass the security (even left alone in the machine room for the whole weekend). I eventually got tired of it and said, ok, it will take less than a minute. I asked them to disable all external access to the machine. I then used the front panel to find the password checking code and patched a branch instruction ... so everything typed in for a password was accepted as valid.

other trivia: A decade ago, a customer asked me if I could track down the decision to make all 370s virtual memory. I eventually found a member of the staff of the executive making the decision. Effectively, MVT storage management was so bad that MVT regions had to be specified four times larger than actually used. As a result, a typical 1mbyte 370/165 would only have four regions, not enough to have sufficient throughput to justify the machine. Moving MVT to virtual memory would allow increasing the number of regions by a factor of four with little or no paging (VS2/SVS was not significantly different from running MVT in a CP67 16mbyte virtual machine). Old archived post with some of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73
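
Back-of-envelope version of that argument: if regions have to be specified 4x larger than what is actually touched, a 1mbyte 165 holding four such regions only has about a quarter of real storage doing useful work; giving the regions a 16mbyte virtual address space means roughly four times as many regions can be running before the pages actually touched add up to the real 1mbyte, i.e. with little or no paging.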

I mention having stopped by to see Ludlow, who was doing the MVT modifications for the SVS prototype on a 360/67. It had a little bit of code to create the virtual memory tables and enter virtual memory mode. The biggest task was EXCP/SVC0 handling of channel programs. EXCP/SVC0 was now getting channel programs built with virtual addresses (similar to CP67 getting channel programs from virtual machines) and so borrowed the CP67 CCWTRANS (which created copies of the virtual channel programs, substituting real addresses) for crafting into EXCP.
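
A minimal, hypothetical sketch of what a CCWTRANS-style routine does conceptually: make a shadow copy of a channel program that was built with virtual addresses, substituting real addresses (with the pages fixed in storage) before the real I/O is started. The struct layout and the fix_and_translate helper are illustrative only; the real code also has to handle data chaining, page-crossing buffers, self-modifying channel programs, etc.

#include <stdint.h>
#include <stdlib.h>

typedef struct {        /* simplified CCW: command code, data address, flags, byte count */
    uint8_t  cmd;
    uint32_t addr;      /* data address as built by the program (virtual) */
    uint8_t  flags;
    uint16_t count;
} ccw_t;

/* assumed helper: fix (pin) the page and return the real address for a virtual one */
extern uint32_t fix_and_translate(uint32_t vaddr);

/* Build a shadow copy of an n-entry channel program with real data addresses;
 * the caller starts the actual I/O against the returned copy. */
ccw_t *ccwtrans(const ccw_t *vprog, size_t n)
{
    ccw_t *rprog = malloc(n * sizeof *rprog);
    if (!rprog)
        return NULL;
    for (size_t i = 0; i < n; i++) {
        rprog[i] = vprog[i];
        rprog[i].addr = fix_and_translate(vprog[i].addr);  /* virtual -> real, page pinned */
    }
    return rprog;
}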

past posts referencing the 370 virtual memory decision
https://www.garlic.com/~lynn/2022e.html#54 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022d.html#93 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#61 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#51 IBM Spooling
https://www.garlic.com/~lynn/2022d.html#20 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#18 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#50 IBM 3033 Personal Computing
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022b.html#92 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#76 Link FEC and Encryption
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#89 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#58 Computer Security
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2022.html#10 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021k.html#1 PCP, MFT, MVT OS/360, VS1, & VS2
https://www.garlic.com/~lynn/2021j.html#82 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#77 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#66 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#23 fast sort/merge, OoO S/360 descendants
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#48 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021g.html#70 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2021g.html#43 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021g.html#39 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021g.html#25 Execute and IBM history, not Sequencer vs microcode
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021e.html#32 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021d.html#53 IMS Stories
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#38 Some CP67, Future System and other history
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#59 370 Virtual Memory
https://www.garlic.com/~lynn/2019e.html#121 Virtualization
https://www.garlic.com/~lynn/2019e.html#108 Dyanmic Adaptive Resource Manager
https://www.garlic.com/~lynn/2019d.html#120 IBM Acronyms
https://www.garlic.com/~lynn/2019d.html#63 IBM 3330 & 3380
https://www.garlic.com/~lynn/2019d.html#26 direct couple
https://www.garlic.com/~lynn/2019c.html#25 virtual memory
https://www.garlic.com/~lynn/2019b.html#92 MVS Boney Fingers
https://www.garlic.com/~lynn/2019b.html#53 S/360
https://www.garlic.com/~lynn/2019.html#78 370 virtual memory
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2019.html#26 Where's the fire? | Computerworld Shark Tank
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2018e.html#97 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018e.html#95 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018d.html#22 The Rise and Fall of IBM
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2018.html#93 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2018.html#92 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2017k.html#13 Now Hear This-Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2017j.html#96 thrashing, was Re: A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017j.html#87 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017j.html#84 VS/Repack
https://www.garlic.com/~lynn/2017h.html#61 computer component reliability, 1951
https://www.garlic.com/~lynn/2017g.html#102 SEX
https://www.garlic.com/~lynn/2017g.html#101 SEX
https://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017f.html#33 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2017e.html#63 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017e.html#19 MVT doesn't boot in 16mbytes
https://www.garlic.com/~lynn/2017e.html#5 TSS/8, was A Whirlwind History of the Computer
https://www.garlic.com/~lynn/2017d.html#83 Mainframe operating systems?
https://www.garlic.com/~lynn/2017d.html#76 Mainframe operating systems?
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#50 Univ. 709
https://www.garlic.com/~lynn/2017c.html#81 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#80 Great mainframe history(?)
https://www.garlic.com/~lynn/2017b.html#8 BSAM vs QSAM
https://www.garlic.com/~lynn/2017.html#90 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#98 A Christmassy PL/I tale
https://www.garlic.com/~lynn/2016h.html#45 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016g.html#48 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016g.html#40 Floating point registers or general purpose registers
https://www.garlic.com/~lynn/2016c.html#9 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2016.html#56 Compile error
https://www.garlic.com/~lynn/2015h.html#116 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015h.html#86 Old HASP
https://www.garlic.com/~lynn/2015g.html#90 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2015f.html#79 Limit number of frames of real storage per job
https://www.garlic.com/~lynn/2015d.html#35 Remember 3277?
https://www.garlic.com/~lynn/2015c.html#69 A New Performance Model ?
https://www.garlic.com/~lynn/2015c.html#26 OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainframe - Forbes
https://www.garlic.com/~lynn/2015b.html#50 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2015.html#94 Update History Documentaries | Watson | PBS Video
https://www.garlic.com/~lynn/2015.html#43 z13 "new"(?) characteristics from RedBook
https://www.garlic.com/~lynn/2014m.html#134 A System 360 question
https://www.garlic.com/~lynn/2014m.html#105 IBM 360/85 vs. 370/165
https://www.garlic.com/~lynn/2014l.html#66 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2014l.html#19 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2014l.html#11 360/85
https://www.garlic.com/~lynn/2014l.html#10 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2014k.html#87 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2014j.html#99 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014j.html#56 R.I.P. PDP-10?
https://www.garlic.com/~lynn/2014j.html#33 Univac 90 series info posted on bitsavers
https://www.garlic.com/~lynn/2014i.html#66 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2014g.html#102 Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2014d.html#54 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2014c.html#71 assembler
https://www.garlic.com/~lynn/2014b.html#49 Mac at 30: A love/hate relationship from the support front
https://www.garlic.com/~lynn/2013n.html#92 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013n.html#84 'Free Unix!': The world-changing proclamationmade30yearsagotoday
https://www.garlic.com/~lynn/2013n.html#24 Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013m.html#51 50,000 x86 operating system on single mainframe
https://www.garlic.com/~lynn/2013l.html#69 model numbers; was re: World's worst programming environment?
https://www.garlic.com/~lynn/2013l.html#18 A Brief History of Cloud Computing
https://www.garlic.com/~lynn/2013i.html#47 Making mainframe technology hip again
https://www.garlic.com/~lynn/2013h.html#47 Storage paradigm [was: RE: Data volumes]
https://www.garlic.com/~lynn/2013h.html#13 Is newer technology always better? It almost is. Exceptions?
https://www.garlic.com/~lynn/2013c.html#51 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013b.html#22 Rejoice! z/OS 2.1 addresses some long term JCL complaints from here:
https://www.garlic.com/~lynn/2013b.html#11 what makes a computer architect great?
https://www.garlic.com/~lynn/2013.html#22 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2012p.html#4 Query for IBM Systems Magazine website article on z/OS community
https://www.garlic.com/~lynn/2012n.html#42 history of Programming language and CPU in relation to each
https://www.garlic.com/~lynn/2012m.html#2 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#76 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012l.html#73 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012k.html#55 Simulated PDP-11 Blinkenlight front panel for SimH
https://www.garlic.com/~lynn/2012i.html#55 Operating System, what is it?
https://www.garlic.com/~lynn/2012f.html#78 What are you experiences with Amdahl Computers and Plug-Compatibles?
https://www.garlic.com/~lynn/2012f.html#10 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012e.html#56 Typeface (font) and city identity
https://www.garlic.com/~lynn/2012e.html#4 Memory versus processor speed
https://www.garlic.com/~lynn/2012e.html#3 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2012d.html#33 TINC?
https://www.garlic.com/~lynn/2012b.html#100 5 Byte Device Addresses?
https://www.garlic.com/~lynn/2012.html#69 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011m.html#15 Any candidates for best acronyms?
https://www.garlic.com/~lynn/2011f.html#73 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
https://www.garlic.com/~lynn/2011e.html#47 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#44 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#26 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#10 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#8 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#5 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#1 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#74 Multiple Virtual Memory

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 12 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#83 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#85 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#86 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#87 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#88 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#89 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#90 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#91 Enhanced Production Operating Systems

FS motivation: the claim is that a major motivation for FS was as a countermeasure to clone controllers ... make things so complex that clone (controller) makers couldn't keep up. Some of it is discussed here:
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

From the law of unintended consequences: internal politics shutting down 370 efforts during FS resulted in a lack of new 370 products during the period, which is claimed to have given rise to clone systems (the countermeasure for clone controllers giving rise to clone system makers).

trivia: CP67 delivered to the univ had automagic terminal type identification for 2741 and 1052 terminals (switching the port scanner type with the terminal controller SAD CCW). The univ had some number of tty/ascii terminals, so I integrated tty/ascii support, including automagic terminal type identification. I then wanted a single dialup number (hunt group)
https://en.wikipedia.org/wiki/Line_hunting
which didn't quite work, since IBM had taken a shortcut and hard-wired the line speed for each port.

Thus was born the univ project to build our own clone controller ... building a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition of automatic line-speed support. Later it was enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin-Elmer) sold it commercially as an IBM clone controller. Four of us at the univ get written up as responsible for (some part of) the clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer
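
Purely as an illustration of the automatic line-speed idea mentioned above (a hypothetical sketch, not the Interdata code; rates and timings are assumed): time the start bit of a known first character and pick the nearest standard rate.

# Hypothetical sketch of automatic line-speed detection (auto-baud):
# time the first start bit of a known character (e.g. user hits CR)
# and map the measured bit-cell time to the nearest standard rate.

STANDARD_RATES = [110, 134.5, 150, 300, 600, 1200]   # bits/sec

def detect_line_speed(start_bit_microseconds):
    """Given the measured width of the start bit in microseconds,
    return the closest standard line speed."""
    measured_rate = 1_000_000 / start_bit_microseconds   # bit cell -> bits/sec
    return min(STANDARD_RATES, key=lambda r: abs(r - measured_rate))

# e.g. a ~3333us start bit indicates a 300-baud terminal
assert detect_line_speed(3333) == 300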

360 clone (plug compatible) controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

The Rice Paddy Navy

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Rice Paddy Navy
Date: 13 July 2022
Blog: Facebook
The Rice Paddy Navy
https://www.amazon.com/Rice-Paddy-Navy-Undercover-Military-ebook/dp/B01DPPPV1E/
loc1833-38:

YEARS AFTER THE WAR WAS OVER, with the Communists ruling the People's Republic of China, Chiang Kai-shek heading a second China in Taiwan, and SACO long dissolved, Rear Admiral Milton Miles was fond of quoting a letter he had received from General Claire Chennault in 1958. The general reflected on his own experience in China: I always found the Chinese friendly and cooperative. The Japanese gave me a little trouble at times, but not very much. The British in Burma were quite difficult sometimes. But Washington gave me trouble night and day throughout the whole war!

... snip ...

I've read Miles' "A Different Kind of War"
https://www.amazon.com/Different-Kind-War-Guerrilla-Forces/dp/B000NTNH8U/
pg587:

Fleet Admiral Leahy to Miles: "You go out and sink the guys that sold China down the river".

... snip ...

In the past, I've periodically characterized the first half of Miles' book as being about being sent to China to set up the coast watchers (which quickly expanded into lots of other activities) ... and the 2nd half of the book as being about how the OSS (Donovan & precursor to CIA), the British, factions of the US Army (Wedemeyer) and others gave China to the Communists.

The Dragon's War goes into more detail, based on more recently released classified material ... it would seem that it was mostly Stilwell who had deceived Milton, and Milton then blames Wedemeyer, who was trying to clean up Stilwell's mess. Stilwell appears to have been trying to get Chiang Kai-shek to turn over all of China to Stilwell ... which wasn't going well.
https://www.amazon.com/Dragons-War-Allied-Operations-1937-1947-ebook/dp/B00DY0OLQC/
pg166/loc3392-97:

To satisfy Stilwell's demand, an elaborate scheme was worked out to ensure that Stilwell would ultimately get his command of the Chinese army. The central part of the scheme was to give Stilwell complete control over the China-bound Lend-Lease materiel as bargaining leverage over Chiang Kai-shek. Consequently, throughout his tenure until his unceremonious recall in late October 1944, Stilwell used his control over the Lend-Lease materiel to force Chiang into doing whatever Stilwell wanted done and used his power to satisfy his desire for command by granting favors to one particular Chinese commander over the others. In the end, Stilwell's inattention and chaotic management style over the Lend-Lease materiel prevented much larger amounts of American military aid from going to China.

... snip ...

I've also run into the son of HD Wandling ... the family seems to have self-published "A Navy Mustang Sailor", which includes his time as a member of SACO (I traded a copy of Miles' "A Different Kind of War" for a scan of their book).

SACO Veterans
http://saconavy.net
more SACO material, recently gone 404, but lives on at the wayback machine
https://web.archive.org/web/20210610051851/http://www.delsjourney.com/saco/saco.htm

some past posts:
https://www.garlic.com/~lynn/2021e.html#31 The Dragon's War: Allied Operations and the Fate of China, 1937-1947
https://www.garlic.com/~lynn/2021d.html#91 OSS in China: Prelude to Cold War
https://www.garlic.com/~lynn/2021d.html#78 OSS in China: Prelude to Cold War
https://www.garlic.com/~lynn/2021d.html#69 OSS in China: Prelude to Cold War
https://www.garlic.com/~lynn/2021d.html#67 OSS in China: Prelude to Cold War
https://www.garlic.com/~lynn/2021d.html#11 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2019e.html#94 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#62 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019e.html#60 Reviewing The China Mission
https://www.garlic.com/~lynn/2019c.html#72 This Kind of War: The Classic Military History of the Korean War
https://www.garlic.com/~lynn/2019.html#81 LUsers
https://www.garlic.com/~lynn/2018f.html#19 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018d.html#102 The Persistent Myth of U.S. Precision Bombing
https://www.garlic.com/~lynn/2018d.html#89 The China Mission: George Marshall's Unfinished War, 1945-1947
https://www.garlic.com/~lynn/2018c.html#107 Post WW2 red hunt
https://www.garlic.com/~lynn/2018c.html#82 The Redacted Testimony That Fully Explains Why General MacArthur Was Fired
https://www.garlic.com/~lynn/2018c.html#45 Counterinsurgency Lessons from Malaya and Vietnam: Learning to Eat Soup with a Knife
https://www.garlic.com/~lynn/2017k.html#5 The 1970s engineering recession
https://www.garlic.com/~lynn/2017k.html#3 Pearl Harbor
https://www.garlic.com/~lynn/2017j.html#57 About Unconventional warfare
https://www.garlic.com/~lynn/2017j.html#56 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017j.html#36 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017j.html#24 What if the Kuomintang Had Won the Chinese Civil War?
https://www.garlic.com/~lynn/2017i.html#75 WW II cryptography
https://www.garlic.com/~lynn/2017h.html#105 Iraq, Longest War
https://www.garlic.com/~lynn/2017f.html#18 5 Naval Battles That Changed History Forever
https://www.garlic.com/~lynn/2016h.html#80 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2015c.html#60 past of nukes, was Future of support for telephone rotary dial ?

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems
Date: 13 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#83 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#85 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#86 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#87 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#88 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#89 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#90 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#91 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#92 Enhanced Production Operating Systems

CMS had an UPDATE command to apply update changes to source files, creating a temporary work file. Endicott came out and started distributed development to update CP67 to support 370 virtual machines, including support for the unannounced 370 virtual memory (tables somewhat different from 360/67 tables; trivia: a co-worker at the science center had done RSCS/VNET, which became the basis for the internal network and the corporate-sponsored univ BITNET).

Thus was born the multi-level source update effort (originally all EXEC). The production system on the real 360/67 was CP67L (with lots of my changes); running in a 360/67 virtual machine was CP67H (modifications to provide 370 virtual machines); and running in a 370 virtual machine was CP67I (modifications to run with 370 instructions and virtual memory tables). This was regularly running a year before the first engineering 370 with virtual memory (a /145 in Endicott) was operational (in fact, CP67I was used to test its operation).

The reason that CP67H ran in a virtual machine ... rather than on the bare hardware ... was that the Cambridge system had a lot of staff & student users from local universities, and they wanted to make sure the unannounced 370 virtual memory details didn't leak. Later some San Jose engineers came over and provided 3330 & 2305 device support for what became CP67SJ ... running on lots of 370 machines long before VM370 was operational.
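
For anyone unfamiliar with the idea, a minimal hypothetical sketch of multi-level source updates (not the original EXEC/UPDATE implementation; the update format below is invented): each level's changes are applied to the output of the level below it, producing the temporary work file that actually gets assembled.

# Hypothetical sketch of multi-level source updates: each level is a list
# of (operation, line_number, text) applied to the output of the level
# below it.

def apply_update(source_lines, update):
    """Apply one update level; ops are applied against current numbering."""
    out = list(source_lines)
    # apply ops from the bottom up so earlier line numbers stay valid
    for op, lineno, text in sorted(update, key=lambda u: u[1], reverse=True):
        if op == "I":                 # insert text after line 'lineno'
            out.insert(lineno, text)
        elif op == "D":               # delete line 'lineno'
            del out[lineno - 1]
    return out

def build_work_file(base_source, levels):
    """Layer update levels (e.g. L, then H, then I) on the base source."""
    result = base_source
    for level in levels:
        result = apply_update(result, level)
    return result

# made-up example of CP67L-, then CP67H-style layering
base = ["LINE1", "LINE2", "LINE3"]
work = build_work_file(base, [[("I", 1, "L CHANGE")], [("D", 3, None)]])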

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet
https://www.garlic.com/~lynn/subnetwork.html#bitnet

Mid-80s, Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist

asked if I had a copy of the original multi-level source update implementation. Lucky for her, I was able to pull it off CSC backup tapes I had in the Almaden tape library ... a couple of months later Almaden had an operational problem with random tapes being mounted as scratch ... I lost a dozen tapes, including all my (triple-replicated) CSC backup tapes. Some archived past email w/Melinda:
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2006w.html#email850908

other email w/Melinda about presenting HSDT/NSF
https://www.garlic.com/~lynn/2006w.html#email850607

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems II

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems II
Date: 14 July 2022
Blog: LinkedIn
I had taken a two credit hr intro to fortran/computers; at the end of the semester I got a student job reimplementing 1401 MPIO for the 360/30 (the univ had a 709/1401 with the 1401 as unit record front-end for 709 tape->tape, and had been sold a 360/67 for TSS/360 ... it temporarily got a 360/30 pending availability of the 360/67; even tho the 360/30 had 1401 emulation mode, I guess my job was just part of getting 360 experience). The univ shut down the datacenter over the weekend and I had it dedicated, although 48hrs w/o sleep made Monday classes difficult. Within a year of the intro class, I was hired fulltime responsible for OS/360 (the 360/67 ran as a 360/65, TSS/360 never came to production quality) ... and continued to have my dedicated weekend time.

Student fortran jobs ran in less than a second on the 709, but started out at over a minute on OS/360. I installed HASP and cut the time in half. The first sysgen I did was MFT release 9.5 ... then for release 11, I took apart the stage2 sysgen deck, putting it back together for careful placement of datasets and PDS members (arm seek and PDS directory multi-track search optimization) ... cutting another 2/3rds to 12.9secs for student fortran. Never beat the 709 until installing Univ. of Waterloo WATFOR.

Three people came out from the science center to install CP67 (3rd installation, after CSC itself and MIT Lincoln Labs); it couldn't handle the OS/360 workload, so I mostly played with it on weekends ... rewriting lots of code. I had an OS/360 benchmark that ran 322sec on the bare machine; initially under CP67 it ran 856sec (CP67 CPU 534sec); after a couple of months I had it down to 435sec (CP67 CPU 113sec ... down from 534sec), part of a 1968 SHARE presentation:
https://www.garlic.com/~lynn/94.html#18

trivia: six months after CP67 was installed, CSC was having a one-week CP67 class at the Beverly Hills Hilton. I arrived Sunday night and got asked to teach the CP67 class ... the Friday before (two days earlier), the CP67 people had resigned to join a commercial online CP67 startup.

Then, before I graduated, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thought the Renton datacenter was possibly the largest in the world (a couple hundred million in computer systems), 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton manager and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarged the room for a 360/67 for me to play with when I wasn't doing other stuff). When I graduated, I left Boeing for the IBM CSC.

747#3 was flying the skies of Seattle getting FAA flight certification. Pictures of the 747 cabin were of a cabin mockup south of Boeing Field ... tours would claim the 747 would carry so many people that it would always have at least four jetways at the gate.

In the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM ... offline he had lots of stories. One was that he had been very vocal that the electronics across the trail wouldn't work ... so, possibly as punishment, he was put in command of "spook base" (about the same time I was at Boeing), refs:
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

A Boyd biography has "spook base" as a $2.5B (60s $$$) "windfall" for IBM (ten times Renton).

When Boyd passed in 1997, the USAF had pretty much disowned him and it was the Marines at Arlington (and his effects went to the Gray Research Center & Library in Quantico). There have continued to be Boyd conferences at Marine Corps Univ. in Quantico.

Chuck's tribute to John
http://www.usni.org/magazines/proceedings/1997-07/genghis-john
for those w/o subscription
http://radio-weblogs.com/0107127/stories/2002/12/23/genghisJohnChuckSpinneysBioOfJohnBoyd.html
John Boyd - USAF. The Fighter Pilot Who Changed the Art of Air Warfare
http://www.aviation-history.com/airmen/boyd.htm
"John Boyd's Art of War; Why our greatest military theorist only made colonel"
http://www.theamericanconservative.com/articles/john-boyds-art-of-war/
40 Years of the 'Fighter Mafia'
https://www.theamericanconservative.com/articles/40-years-of-the-fighter-mafia/
Fighter Mafia: Colonel John Boyd, The Brain Behind Fighter Dominance
https://www.avgeekery.com/fighter-mafia-colonel-john-boyd-the-brain-behind-fighter-dominance/
Updated version of Boyd's Aerial Attack Study
https://tacticalprofessor.wordpress.com/2018/04/27/updated-version-of-boyds-aerial-attack-study/
A New Conception of War
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html

re:
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#83 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#85 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#86 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#87 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#88 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#89 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#90 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#91 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#92 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems II

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems II
Date: 14 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II

After joining IBM, I was doing enhanced production operating systems. I continued with CP67, even after several people spun off from the science center to do VM370; in the CP67->VM370 morph, they dropped (SMP/multiprocessor support and a bunch of my stuff) and/or significantly simplified a lot of code. I started on migrating to a VM370 base (for the internal "CSC/VM"). I had done automated benchmarking for CP67 ... and starting to move it to VM370, the benchmarks were guaranteed to produce lots of VM370 crashes ... so it became important to move the CP67 kernel serialization and integrity code to VM370 (in order to stop the benchmarking tests from crashing). Some old email about making the (VM370 Release 2-based) CSC/VM available:
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

... as mentioned in the email, CSC/VM included SPM, which was a superset of the later combination of VMCF+IUCV+SMSG ... trivia: the author of REX(X) had done a 3270 client/server multi-user spacewar game using SPM (and since SPM was supported by RSCS/VNET, clients could be anywhere on the internal network). However, automated BOTs started appearing, beating human players ... the "server" then increased energy use non-linearly as the interval between moves decreased, somewhat leveling the playing field.
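
The leveling trick is easy to sketch; the exact function isn't given above, so the quadratic penalty below is just an assumed example, with made-up constants.

# Hypothetical sketch of the anti-BOT leveling: charge more energy per move
# as the interval between a client's moves shrinks below human reaction time.

HUMAN_INTERVAL = 0.25      # seconds; assumed typical human move interval
BASE_COST = 1.0            # energy units for a "human speed" move

def move_cost(seconds_since_last_move):
    """Non-linear energy cost: faster-than-human move rates are penalized
    quadratically, so a BOT moving 4x faster pays ~16x the energy."""
    if seconds_since_last_move >= HUMAN_INTERVAL:
        return BASE_COST
    ratio = HUMAN_INTERVAL / seconds_since_last_move
    return BASE_COST * ratio ** 2

# a BOT issuing moves every 1/16 sec pays 16x per move
assert move_cost(0.0625) == 16.0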

The (eventually world-wide) online sales & marketing support HONE systems were long-time customers back to CP67 days (providing CMS\APL-based sales/marketing applications), and the US HONE datacenters were consolidated in Palo Alto in the mid-70s (trivia: when facebook 1st moved into silicon valley, it was into a new bldg built next door to the old HONE datacenter). Consolidated US HONE had additional support for single-system image, cluster operation across a large disk farm ... with load-balancing and fall-over provided by a modified version of the performance predictor.

Science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark

Some amount of the VM370 group had been directed to FS activities and, short of enhancements for VM370 Release 3, they decided to pick up a few things from my CSC/VM, which were then included in Release 3 (serialization/integrity, and a greatly simplified version of my VM370+CMS shared segment changes, w/o my CMS paged-mapped filesystem).

Then, on a VM370 Release 3 base, I added hardware multiprocessor support to CSC/VM, initially for HONE, so they could add a 2nd processor to each of the cluster systems (allowing an increase to 16 processors).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
smp, muliprocessor, compare&swap, etc posts
https://www.garlic.com/~lynn/subtopic.html#smp

some recent posts mentioning SPM:
https://www.garlic.com/~lynn/2022e.html#2 IBM Games
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#81 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#33 CMSBACK & VMFPLC
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2021h.html#78 IBM Internal network
https://www.garlic.com/~lynn/2021c.html#11 Air Force thinking of a new F-16ish fighter
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use
https://www.garlic.com/~lynn/2020.html#46 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems II

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems II
Date: 14 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II

Note: during FS, internal politics was killing off 370 projects ... and the lack of new 370 products during the period is credited with giving clone 370 makers their market foothold. Then with the death of FS, there was a mad rush to get stuff back into the 370 product pipeline, kicking off the quick&dirty 3033 & 3081 in parallel; some more info:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

The 3081 was warmed-over FS but was only going to be multiprocessor: originally the two-processor 3081D, which apparently had less throughput than the 3033MP ... so the cache size was doubled for the 3081K, getting better than the 3033MP ... but the Amdahl single processor had better throughput ... and the Amdahl two-processor had better throughput than the later four-processor 3084.

A major problem was that (airline) ACP/TPF didn't have SMP support, so IBM feared that the whole airline market was going to move to Amdahl (single processor). Eventually IBM came out with the 3083 (a 3081 with one of the processors removed), motivated by the ACP/TPF airline market.

There was a similar issue at AT&T. For some reason AT&T Long Lines got a CSC/VM system (prior to my Release 3 SMP support) and propagated it around AT&T, with incremental changes as they moved to newer 370s. Sometime in 82 or 83(?), the IBM national AT&T account rep tracked me down and asked for help moving AT&T to a system that had multiprocessor support.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
multiprocessor and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

some recent posts mentioning 3083:
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022d.html#37 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022d.html#31 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#25 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#97 IBM 9020
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#45 Automated Benchmarking
https://www.garlic.com/~lynn/2021j.html#66 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#78 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#75 IBM ITPS
https://www.garlic.com/~lynn/2021g.html#90 Was E-mail a Mistake? The mathematics of distributed systems suggests that meetings might be better
https://www.garlic.com/~lynn/2021g.html#70 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2021f.html#39 'Bipartisanship' Is Dead in Washington
https://www.garlic.com/~lynn/2021d.html#44 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021c.html#66 ACP/TPF 3083
https://www.garlic.com/~lynn/2021c.html#39 WA State frets about Boeing brain drain, but it's already happening
https://www.garlic.com/~lynn/2021b.html#23 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#74 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems II

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems II
Date: 15 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#97 Enhanced Production Operating Systems II

Note that as an undergraduate, IBM would sometimes suggest changes that I might do for CP67; I didn't know about the gov. users at the time, but in retrospect some of the suggestions may have originated there. After I graduated and joined the science center, I was told about the gov. agencies ... in part because they wanted me to come down and teach (computer & security) classes. In one extended class (full large classroom in the basement), in the middle of the afternoon, half the class quietly gets up and walks out. I look quizzical and the guy in the front row says I can look at it one of two ways, 1) half the class got up to go hear the VP talk in the auditorium or 2) half the class stayed to listen to me (offline he also bragged that they knew where I was every day of my life back to birth, and even challenged me to name any date ... strange since I never worked for the gov. or had a clearance ... I guess they justified it because they ran so much of my code ... and it was before the Church Committee). Recently I was reading some books about Lansdale and for some reason there was a reference to the VP going across the river to give a talk in the agency auditorium.

After leaving IBM and getting involved in doing electronic commerce, I apparently got dragged into doing (X9) financial standards, involving security, crypto, etc ... and the security and crypto got me involved with gov agencies. I gave a talk at a NIST security conference about taking a $500 mil-spec chip and cost-reducing it by at least two orders of magnitude (to under $1), while increasing security.
http://csrc.nist.gov/nissc/1998/index.html
part of presentation
https://www.garlic.com/~lynn/nissc21.zip

A senior technical director to the agency DDI up at Ft. Meade was doing an assurance panel in the trusted computing track at IDF and asked me to participate ... giving a talk on the chip (and financial security); the guy running TPM was in the front row, so I said that it was nice to see TPM starting to look more like my chip; he responded that I didn't have a committee of 200 people helping me ... gone 404, but lives on at the wayback machine
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13
part of presentation
https://www.garlic.com/~lynn/iasrtalk.zip

NISSC & Intel IDF all referenced here:
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems

other agency ref gone 404, but lives on at wayback machine
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml
reference to other VM history
https://www.leeandmelindavarian.com/Melinda#VMHist

other trivia: about the same time I was teaching computer/security classes for gov. agencies, IBM got a new CSO (previously in gov. service, head of the presidential detail) and I was asked to run around with him some, talking about computer security (and a little bit of physical security rubbed off on me).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning Lansdale:
https://www.garlic.com/~lynn/2022d.html#30 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2021j.html#37 IBM Confidential
https://www.garlic.com/~lynn/2021d.html#84 Bizarre Career Events
https://www.garlic.com/~lynn/2019e.html#98 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#90 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019.html#87 LUsers
https://www.garlic.com/~lynn/2018e.html#9 Buying Victory: Money as a Weapon on the Battlefields of Today and Tomorrow
https://www.garlic.com/~lynn/2018d.html#101 The Persistent Myth of U.S. Precision Bombing
https://www.garlic.com/~lynn/2018d.html#0 The Road Not Taken: Edward Lansdale and the American Tragedy in Vietnam
https://www.garlic.com/~lynn/2018c.html#107 Post WW2 red hunt
https://www.garlic.com/~lynn/2013e.html#16 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013d.html#48 What Makes an Architecture Bizarre?

--
virtualization experience starting Jan1968, online at home since Mar1970

Enhanced Production Operating Systems II

From: Lynn Wheeler <lynn@garlic.com>
Subject: Enhanced Production Operating Systems II
Date: 15 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#97 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#98 Enhanced Production Operating Systems II

Note the cross-over with the upthread post about FS and the 3081 only being a multiprocessor machine. There were some unnatural things done to VM multiprocessor support in an attempt to improve ACP/TPF throughput (it only had single-processor support) running in a virtual machine (on 3081); however, they degraded throughput for every other VM370 multiprocessor customer. Attempting to mask some of the degraded throughput, they did some tweaks to (3270) terminal response. However, a very large gov. VM370 multiprocessor customer (back to CP67 days) was all high-speed ASCII glass teletypes ... and I got dragged into seeing if I could do anything to offset the changes made for ACP/TPF. I had done some tweaking of CMS console handling ... normal CMS wrote each line with a separate I/O, requiring dropping from queue and adding back to queue ... so something that wrote a full screen to an ASCII terminal could involve twenty SIOs, along with 20 queue drops & adds. I hacked CMS console I/O handling to write all pending lines in a single start I/O ... cutting 20 queue drops/adds to one (regardless of the real terminal type) ... a 1983 email references it cutting the overall avg queue drops/adds from 65/sec to 43/sec.
https://www.garlic.com/~lynn/2001f.html#email830420
in this post
https://www.garlic.com/~lynn/2001f.html#57
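
A minimal hypothetical sketch of the batching idea (illustrative only; the real change was inside CMS console I/O handling and CP queue handling, not anything like the code below):

# Hypothetical sketch: instead of one start-I/O (and one queue drop/add)
# per console line, accumulate pending lines and issue a single chained I/O.

class ConsoleBatcher:
    def __init__(self, start_io):
        self.start_io = start_io   # callable issuing one real I/O for a list of lines
        self.pending = []

    def write_line(self, line):
        """Queue a line instead of issuing an immediate I/O for it."""
        self.pending.append(line)

    def flush(self):
        """One start I/O (one queue drop/add) for all pending lines."""
        if self.pending:
            self.start_io(self.pending)   # e.g. command-chained writes
            self.pending = []

# writing a 20-line screen: 20 write_line() calls, one flush() -> one I/O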

Note that in the morph from CP67->VM370, the development group significantly dropped and/or simplified a lot of CP67 features ... but in a few cases took something that was enormously simple and made it enormously complex.

In the science center transition to VM370, I had added back a lot of the dropped/simplified CP67 features. However, here was a chance to add back one more. CP67 & VM370 would drop from queue anything that appeared to be in a long-wait terminal I/O ... based on the device type. In CP67 there was a count of pending real-device high-speed I/Os, incremented when one started, decremented when it finished ... if the count was non-zero, don't drop from queue when entering wait state. In the morph to VM370, they changed the logic so that on every entry to wait state it would scan the complete virtual device configuration looking for any active I/O for a "high-speed" virtual device. This was a lot more overhead. But things got worse when there was a mismatch, with a "slow-speed" virtual device mapped to a "high-speed" 3270 terminal device (where I/Os could complete in elapsed time similar to disk I/O). In this case, somebody at IBM had noticed that there were these really fast queue drops/adds that shouldn't be happening ... but didn't really realize why. I just put the CP67 high-speed count logic back in (eliminating the overhead of the virtual device configuration scan) ... also see the old email/post above.
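
A hypothetical sketch of the difference between the two approaches (structures and names invented, just to show the shape):

# Hypothetical sketch of the two queue-drop tests. CP67 style: keep a
# per-VM count of pending real high-speed I/Os; VM370 style: rescan the
# whole virtual device configuration on every entry to wait state.

def cp67_ok_to_drop(vm):
    # counter incremented at SIO to a high-speed real device,
    # decremented at the completion interrupt
    return vm.highspeed_pending_count == 0

def vm370_ok_to_drop(vm):
    # O(number of virtual devices) scan on every wait-state entry
    return not any(dev.io_active and dev.is_high_speed
                   for dev in vm.virtual_devices)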

note: the agency was also very active in SHARE (the IBM customer mainframe user group) and on VMSHARE in the 70s & 80s (TYMSHARE had made their CMS-based online computer conferencing system available to SHARE for free starting in Aug1976) ... archives here:
http://vm.marist.edu/~vmshare

Mutliprocessor/SMP posts
https://www.garlic.com/~lynn/subtopic.html#smp

past posts mentioning the virtual/real queue-drop mismatch
https://www.garlic.com/~lynn/2022d.html#31 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#94 Computer BUNCH
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#45 Automated Benchmarking
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#75 IBM ITPS
https://www.garlic.com/~lynn/2021g.html#90 Was E-mail a Mistake? The mathematics of distributed systems suggests that meetings might be better
https://www.garlic.com/~lynn/2021.html#75 Airline Reservation System
https://www.garlic.com/~lynn/2019c.html#46 IBM 9020
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2019b.html#22 Online Computer Conferencing
https://www.garlic.com/~lynn/2019b.html#4 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2018.html#86 Predicting the future in five years as seen from 1983
https://www.garlic.com/~lynn/2017k.html#39 IBM etc I/O channels?
https://www.garlic.com/~lynn/2017c.html#45 The ICL 2900
https://www.garlic.com/~lynn/2016.html#81 DEC and The Americans
https://www.garlic.com/~lynn/2015e.html#5 Remember 3277?
https://www.garlic.com/~lynn/2014d.html#58 The CIA's new "family jewels": Going back to Church?
https://www.garlic.com/~lynn/2014c.html#37 How many EBCDIC machines are still around?
https://www.garlic.com/~lynn/2013k.html#25 spacewar
https://www.garlic.com/~lynn/2011f.html#60 Dyadic vs AP: Was "CPU utilization/forecasting"
https://www.garlic.com/~lynn/2010q.html#18 Plug Your Data Leaks from the inside
https://www.garlic.com/~lynn/2010p.html#4 origin of 'fields'?
https://www.garlic.com/~lynn/2010e.html#31 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010d.html#14 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009s.html#0 tty
https://www.garlic.com/~lynn/2009p.html#37 Hillgang user group presentation yesterday
https://www.garlic.com/~lynn/2008d.html#42 VM/370 Release 6 Waterloo tape (CIA MODS)
https://www.garlic.com/~lynn/2006y.html#10 Why so little parallelism?

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Channel I/O

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Channel I/O
Date: 16 July 2022
Blog: LinkedIn
After transferring to research in silicon valley, I got to wander around both IBM and customer datacenters, including bldg14 (disk engineering) and bldg15 (disk product test) across the street. At the time they were doing prescheduled, 7x24, stand-alone machine testing. They said that they had recently tried MVS, but it had a 15min mean-time-between-failure (in that environment). I offered to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of on-demand, concurrent testing ... greatly improving productivity. I then wrote up an (internal) research report on the work and happened to mention the MVS 15min MTBF ... bringing down the wrath of the MVS organization on my head. Informally I was told they tried to have me separated from the IBM company; when that failed, they would make my time at IBM unpleasant in other ways (however, the joke was on them, since I was already being told I had no career, no promotions, no awards and no raises). A few years later, when 3380s were about to ship, FE had a regression test of 57 errors that were likely to occur; in all 57 cases, MVS would fail (requiring re-IPL) and in 2/3rds of the cases there was no indication of what caused the failure. I didn't feel badly.

Bldg15 got the 1st engineering 3033 outside POK (#3 or #4?) and, since testing only took a percent or two of the processor, we found a spare 3830 and a couple of strings of 3330 drives and set up a private online service (ran 3270 coax under the street and added it to the 3270 terminal switch on my desk). One Monday, I got an irate call asking what I had done to the 3033 system (significant degradation; they claimed they had done nothing). Eventually found that the 3830 controller had been replaced with an engineering 3880 controller. The 3830 had fast horizontal-microcode processing. The 3880 had a special hardware path for data transfer, but an extremely slow processor (JIB-prime) for everything else ... significantly driving up channel busy (and radically cutting the amount of concurrent activity). They managed to mask some of the degradation before customer ship.

Trout/3090 had designed the number of channels for target throughput based on the assumption that the 3880 was the same as the 3830 (but supporting the 3380 3mbyte/sec data rate). When they found out how bad 3880 channel busy really was, they realized they had to significantly increase the number of channels to achieve the target throughput. The increase in channels required an additional TCM ... and the 3090 group semi-facetiously said that they would bill the 3880 controller group for the increase in 3090 manufacturing cost. Marketing then respun the large increase in the number of channels (to compensate for the 3880 channel busy increase) as making it a great I/O machine.

In 1980, STL (since renamed SVL) was bursting at the seams and they were moving 300 people from the IMS group to an offsite bldg (with dataprocessing back to the STL datacenter). They had tried "remote 3270" but found the human factors totally unacceptable. I got con'ed into doing channel-extender support, allowing channel-attached 3270 controllers to be placed at the offsite bldg (with no difference in human factors between offsite and inside STL). The hardware vendor tried to get IBM to release my support, but there was a group in POK playing with some serial stuff that got it vetoed (afraid that if it was in the market, it would make it harder to get their stuff announced).

Unintended consequences: the STL 168s had (channel-attached) 3270 controllers spread across all channels, shared with disk controllers. Moving the 3270 controllers offsite, with a super fast channel-interface box on the 168 channel, channel busy was enormously reduced (for the same amount of 3270 terminal activity) compared to the directly channel-attached 3270 controllers, increasing system throughput by 10-15% (eliminating much of the channel interference with the disk controllers). There was some discussion of configuring all the 3270 channel-attached controllers for all the STL 168 systems similarly (even though they didn't need the channel-extender capability, the 10-15% in throughput would be welcome).

In 1988, the IBM branch asked me to help LLNL (national lab) get some serial stuff they were working with standardized ... which quickly became the fibre channel standard ("FCS", including some stuff that I had done in 1980) ... initially 1gbit/sec, full-duplex, 2gbit/sec aggregate, 200mbytes/sec. Then in 1990, the POK group got their stuff released (when it was already obsolete) with ES/9000 as ESCON (17mbytes/sec).

Then some POK engineers became involved in FCS and defined a heavyweight protocol that drastically reduces the throughput, eventually released as FICON. The most recent published numbers I can find are the z196 "Peak I/O" benchmark that got 2M IOPS using 104 FICON channels (running over 104 FCS). About the same time there was an FCS announced for E5-2600 blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON).
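
Back-of-the-envelope arithmetic on the figures quoted above:

# per-channel arithmetic from the published figures quoted above
z196_peak_iops = 2_000_000
ficon_channels = 104
iops_per_ficon = z196_peak_iops / ficon_channels        # ~19,231 IOPS per FICON

native_fcs_iops = 1_000_000                              # "over a million" claimed
print(iops_per_ficon, native_fcs_iops / iops_per_ficon)  # one FCS ~52x a FICON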

FCS
https://en.wikipedia.org/wiki/Fibre_Channel

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
ficon posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

The US's best stealth jets are pretty easy to spot on radar, but that doesn't make it any easier to stop them

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The US's best stealth jets are pretty easy to spot on radar, but that doesn't make it any easier to stop them
Date: 16 July 2022
Blog: Facebook
The US's best stealth jets are pretty easy to spot on radar, but that doesn't make it any easier to stop them
https://www.businessinsider.com/radars-can-see-best-stealth-jets-but-cant-stop-them-2022-7

... multi-frequency radar systems ... the current description is that targeting radar has trouble finding/identifying a baseball (or golfball) size image in the whole sky. It becomes easier if lower-frequency radar drastically reduces the region (for the high-frequency radar) to search. A 2011 online radar tutorial claimed that realtime stealth targeting required more computing power than was then available. A fall 2017 article described self-driving cars as having computing power that was 100 times the computing power the 2011 tutorial claimed was required for stealth targeting.

The F35 was originally designed as a bomb truck, assuming the F22 was flying cover to handle real threats. Compared to the original prototype, the F35's stealth characteristics were significantly compromised.
http://www.ausairpower.net/APA-JSF-Analysis.html
http://www.ausairpower.net/jsf.html
http://www.ausairpower.net/APA-2009-01.html

2011 RADAR tutorial
https://www.eetimes.com/document.asp?doc_id=1278838
https://www.eetimes.com/document.asp?doc_id=1278878
https://www.eetimes.com/document.asp?doc_id=1278931

It mentions the processing required to do advanced real time targeting (not available in 2011). In the spring of 2015, DOD put the latest computer technologies on the export restriction list. At the fall 2015 supercomputer conference, China demonstrated they were making their own advanced computer components (used in supercomputers, military radar and other applications). A YE2017 article references the latest generation of self-driving cars having more than 100 times the processing (mentioned in 2011) needed to do real time targeting (rather than just tracking) of stealth aircraft.

W/o the F22 to fly cover, one of the F35 strategies is massive numbers of "stand-off" F35s using advanced "over the horizon" missiles to attack targets (as opposed to drone attacks).

China Claims Its New 'Meter Wave Radar' Is The Perfect Counter To Stealth Aircraft
https://nationalinterest.org/blog/buzz/china-claims-its-new-meter-wave-radar-perfect-counter-stealth-aircraft-145712

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

some past posts
https://www.garlic.com/~lynn/2021e.html#35 US Stealth Fighter Jets Like F-35, F-22 Raptors 'No Longer Stealth' In-Front Of New Russian, Chinese Radars?
https://www.garlic.com/~lynn/2019e.html#53 Stealthy no more? A German radar vendor says it tracked the F-35 jet in 2018 -- from a pony farm
https://www.garlic.com/~lynn/2019d.html#104 F-35
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2018f.html#83 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018c.html#108 F-35
https://www.garlic.com/~lynn/2018c.html#63 The F-35 has a basic flaw that means an F-22 hybrid could outclass it -- and that's a big problem
https://www.garlic.com/~lynn/2018c.html#60 11 crazy up-close photos of the F-22 Raptor stealth fighter jet soaring through the air
https://www.garlic.com/~lynn/2018c.html#19 How China's New Stealth Fighter Could Soon Surpass the US F-22 Raptor
https://www.garlic.com/~lynn/2018c.html#14 Air Force Risks Losing Third of F-35s If Upkeep Costs Aren't Cut
https://www.garlic.com/~lynn/2018b.html#86 Lawmakers to Military: Don't Buy Another 'Money Pit' Like F-35
https://www.garlic.com/~lynn/2018b.html#39 Why China's New Supercomputer Is Only Technically the World's Fastest
https://www.garlic.com/~lynn/2017i.html#78 F-35 Multi-Role
https://www.garlic.com/~lynn/2017g.html#44 F-35
https://www.garlic.com/~lynn/2017c.html#15 China's claim it has 'quantum' radar may leave $17 billion F-35 naked
https://www.garlic.com/~lynn/2016h.html#93 F35 Program
https://www.garlic.com/~lynn/2016h.html#77 Test Pilot Admits the F-35 Can't Dogfight
https://www.garlic.com/~lynn/2016h.html#73 Note on dis-orientation
https://www.garlic.com/~lynn/2016h.html#40 The F-22 Raptor Is the World's Best Fighter (And It Has a Secret Weapon That Is Out in the Open)
https://www.garlic.com/~lynn/2016b.html#96 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#89 Computers anyone?
https://www.garlic.com/~lynn/2015f.html#46 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2015c.html#14 With the U.S. F-35 Grounded, Putin's New Jet Beats Us Hands-Down
https://www.garlic.com/~lynn/2015b.html#59 A-10
https://www.garlic.com/~lynn/2014j.html#43 Let's Face It--It's the Cyber Era and We're Cyber Dumb
https://www.garlic.com/~lynn/2014j.html#41 50th/60th anniversary of SABRE--real-time airline reservations computer system
https://www.garlic.com/~lynn/2014j.html#40 China's Fifth-Generation Fighter Could Be A Game Changer In An Increasingly Tense East Asia
https://www.garlic.com/~lynn/2014i.html#102 A-10 Warthog No Longer Suitable for Middle East Combat, Air Force Leader Says
https://www.garlic.com/~lynn/2014h.html#49 How Comp-Sci went from passing fad to must have major
https://www.garlic.com/~lynn/2014h.html#36 The Designer Of The F-15 Explains Just How Stupid The F-35 Is

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Channel I/O

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Channel I/O
Date: 16 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O

The downside of providing disk product test/engineering with their systems was that they started having a kneejerk reaction of blaming any problem on software and calling me ... as a result I increasingly had to play disk engineer shooting their problems. At one point I pointed out that they were doing something that violated channel architecture. They then had an escalation call with the POK channel engineers and required that I attend ... turns out I was right. After that they wanted me on all POK calls (somebody explained that most of the senior disk engineers who understood channels had left to go with 3rd party disk startups).

other trivia: I did some timing software to measure how fast channels & disk controllers handled channel commands (like how fast a seek-track CCW could be handled) and got various customers to test on a variety of processors and controllers (IBM and non-IBM). Most non-IBM, OEM disk controllers were faster than the 3830. Also, the 158 integrated channel was among the slowest. When Future System failed (it was supposed to replace all 370s), there was a mad rush to get stuff back into the 370 product pipelines ... including kicking off the quick&dirty 303x & 3081 efforts in parallel. For the 303x channel director, they took a 158 engine with just the integrated channel microcode. A 3031 was a 158 engine with just the 370 microcode, paired with a 2nd 158 engine with just the integrated channel microcode. A 3032 was a 168-3 reworked to use the 303x channel director for external channels. A 3033 started out as 168 logic remapped to 20% faster chips. The 3081 channels had similarly slow performance.
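
A minimal sketch of the measurement idea (in Python, purely illustrative; the original was 360/370 channel-program timing code, and "issue_cmd" here is a hypothetical stand-in for issuing one cheap channel command such as a seek to the current track): time a large number of cheap commands and divide to estimate per-command channel/controller overhead.

import time

def per_command_overhead(issue_cmd, iterations=10_000):
    # time many cheap commands and return the average seconds per command
    start = time.perf_counter()
    for _ in range(iterations):
        issue_cmd()
    return (time.perf_counter() - start) / iterations

# usage sketch, comparing two hypothetical controller paths:
# oem = per_command_overhead(issue_seek_on_oem_controller)
# ibm = per_command_overhead(issue_seek_on_3830)
# print(f"OEM {oem*1e6:.1f} us/CCW vs 3830 {ibm*1e6:.1f} us/CCW")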

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

Also, in 1975 Endicott had asked me to help with doing ECPS for virgil/tully (138/148) ... also available with 4300s ... old archived post (moving kernel code into native microcode gave about a 10:1 improvement):
https://www.garlic.com/~lynn/94.html#21
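
A hypothetical sketch (assumed path names, sizes, and space budget; not the actual ECPS analysis) of the kind of selection involved: take kernel execution profiles, pick the highest-use kernel paths first, and stop when the available microcode space is used up.

# (kernel path, % of total kernel CPU, approx bytes of 370 code) -- made-up numbers
profile = [
    ("dispatch",            12.0, 1200),
    ("free/fret storage",    9.5,  800),
    ("CCW translation",      8.0, 1500),
    ("page fault path",      5.5, 1100),
    ("dispatch queue scan",  4.0,  700),
]

def pick_for_microcode(profile, space_budget=6000):
    # greedy: highest-usage paths first, until the microcode space runs out
    chosen, used, covered = [], 0, 0.0
    for name, pct, size in sorted(profile, key=lambda p: p[1], reverse=True):
        if used + size <= space_budget:
            chosen.append(name)
            used += size
            covered += pct
    return chosen, used, covered

chosen, used, covered = pick_for_microcode(profile)
print(chosen, used, "bytes,", round(covered), "% of kernel time covered")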

In the early 80s, I got approval to give presentations on how ECPS was done at the local monthly BAYBUNCH user group meetings. After the meetings, we usually adjourned to local silicon valley watering holes ... and the Amdahl people grilled me for more ECPS details. They said they were in the process of implementing HYPERVISOR in Amdahl "MACROCODE" ... a 370-like instruction set that ran in microcode mode. MACROCODE was originally developed to greatly simplify and shorten the response to the plethora of trivial microcode changes that IBM had started making for the 3033 (changes required by IBM operating systems). For high-end machines, the microcode was "horizontal", which was very complex and time-consuming to program. While Amdahl was able to ship "HYPERVISOR" in the early 80s ... it took until 1988 for IBM to respond with PR/SM & LPAR on the 3090.

some recent posts mentioning macrocode
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021j.html#4 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#31 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2021e.html#67 Amdahl
https://www.garlic.com/~lynn/2021.html#54 IBM Quota
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers

--
virtualization experience starting Jan1968, online at home since Mar1970

John Boyd and IBM Wild Ducks

From: Lynn Wheeler <lynn@garlic.com>
Subject: John Boyd and IBM Wild Ducks
Date: 17 July 2022
Blog: LinkedIn

How to Stuff a Wild Duck

"We are convinced that any business needs its wild ducks. And in IBM we try not to tame them."
T.J. Watson, Jr.

"How To Stuff A Wild Duck", 1973, IBM poster
https://collection.cooperhewitt.org/objects/18618011/

IBM Wild Ducks
https://www.discerningreaders.com/ibm-wild-ducks-home-page.html

Note: IBM made 100 videos for its 100th anniversary ... however the one about "wild ducks" was about "customer" wild ducks; all traces of employee wild ducks appeared to have been expunged.

I was introduced to John Boyd (retired USAF) in the early 80s and used to sponsor his briefings at IBM. John's version of "wild ducks" (To Be or To Do):

"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question."

... snip ...

IBM Chairman Learson, trying to block the rise of the (old boy) careerists and bureaucrats destroying the Watson legacy:

Management Briefing
Number 1-72: January 18, 1972
ZZ04-1312

TO ALL IBM MANAGERS:

Once again, I'm writing you a Management Briefing on the subject of bureaucracy. Evidently the earlier ones haven't worked. So this time I'm taking a further step: I'm going directly to the individual employees in the company. You will be reading this poster and my comment on it in the forthcoming issue of THINK magazine. But I wanted each one of you to have an advance copy because rooting out bureaucracy rests principally with the way each of us runs his own shop.

We've got to make a dent in this problem. By the time the THINK piece comes out, I want the correction process already to have begun. And that job starts with you and with me.

Vin Learson


... text rendition of Learson's poster


+-----------------------------------------+
|           "BUSINESS ECOLOGY"            |
|                                         |
|                                         |
|            +---------------+            |
|            |  BUREAUCRACY  |            |
|            +---------------+            |
|                                         |
|           is your worst enemy           |
|              because it -               |
|                                         |
|      POISONS      the mind              |
|      STIFLES      the spirit            |
|      POLLUTES     self-motivation       |
|             and finally                 |
|      KILLS        the individual.       |
+-----------------------------------------+
"I'M Going To Do All I Can to Fight This Problem . . ."
by T. Vincent Learson, Chairman

... snip ...

... and then the 70s budding Future System disaster; from Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993 ....
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

more FS info
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

FS was completely different from 370 and was going to completely replace it, and internal politics were shutting down 370 efforts (the lack of new 370 products during FS is credited with giving the 370 clone makers their market foothold). I continued to work on 360/370 stuff all during FS, even periodically ridiculing what they were doing (which wasn't exactly a career-enhancing activity). When FS finally imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 efforts in parallel.

In the late 70s and early 80s, I was blamed for online computer conferencing (precursor to the IBM forums and modern social media) on the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s). It really took off in the spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem. Only about 300 directly participated, but claims were that upwards of 25,000 were reading. Six copies of approx. 300 pages were printed, along with an executive summary and a summary of the summary, packaged in Tandem 3-ring binders and sent to the executive committee (folklore is 5of6 wanted to fire me) ... from the summary of the summary:

• The perception of many technical people in IBM is that the company is rapidly heading for disaster. Furthermore, people fear that this movement will not be appreciated until it begins more directly to affect revenue, at which point recovery may be impossible

• Many technical people are extremely frustrated with their management and with the way things are going in IBM. To an increasing extent, people are reacting to this by leaving IBM. Most of the contributors to the present discussion would prefer to stay with IBM and see the problems rectified. However, there is increasing skepticism that correction is possible or likely, given the apparent lack of commitment by management to take action

• There is a widespread perception that IBM management has failed to understand how to manage technical people and high-technology development in an extremely competitive environment.


... snip ...

... from IBM Jargon
https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...

... but it takes another decade (1981-1992) ... IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company. The article has gone behind a paywall, but mostly lives free at the wayback machine.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

I had already left IBM, but got a call from the bowels of Armonk asking if I could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts. However, before we got started, the board brought in a new CEO who reversed the breakup.

For Boyd's first IBM briefing (in the early 80s), I tried to do it through plant site employee education. At first they agreed, but as I provided more information about how to prevail/win in competitive situations, they changed their mind. They said that IBM spends a great deal of money training managers on how to handle employees and it wouldn't be in IBM's best interest to expose general employees to Boyd; I should limit the audience to senior members of competitive analysis departments. The first briefing was in the San Jose Research bldg28 auditorium, open to all.

Note in 89/90, the commandant of the Marine Corps leveraged Boyd for a make-over of the corps (at a time when IBM was desperately in need of a make-over). When Boyd passed in 1997, the USAF had pretty much disowned him; it was the Marines at Arlington, and his effects went to the Gray Library and Research Center at Quantico (there continued to be Boyd conferences at Quantico).

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters. Besides wandering around IBM datacenters, I was still allowed to attend user group meetings and drop in on customers. The manager of one of the largest financial datacenters liked me to stop by and talk technology. At one point, the IBM branch manager horribly offended the customer ... and in retaliation they ordered an Amdahl system (a lone Amdahl system in a vast sea of "blue"). Amdahl had been selling into the tech/science/university market, but this would be the first for a commercial, "true blue" account. I was asked to go onsite at the customer for 6-12 months to help obfuscate why the customer was ordering an Amdahl system. I talked it over with the customer and then declined IBM's offer. I was then told that the branch manager was a good sailing buddy of the IBM CEO and if I didn't do this, I could forget having a career, promotions, and/or raises. It was one of many times I was told I had no career, promotions, and/or raises ... and was also reminded that in IBM, Business Ethics is an Oxymoron.

Chuck's tribute to John
http://www.usni.org/magazines/proceedings/1997-07/genghis-john
for those w/o subscription
http://radio-weblogs.com/0107127/stories/2002/12/23/genghisJohnChuckSpinneysBioOfJohnBoyd.html
John Boyd - USAF. The Fighter Pilot Who Changed the Art of Air Warfare
http://www.aviation-history.com/airmen/boyd.htm
"John Boyd's Art of War; Why our greatest military theorist only made colonel"
http://www.theamericanconservative.com/articles/john-boyds-art-of-war/
40 Years of the 'Fighter Mafia'
https://www.theamericanconservative.com/articles/40-years-of-the-fighter-mafia/
Fighter Mafia: Colonel John Boyd, The Brain Behind Fighter Dominance
https://www.avgeekery.com/fighter-mafia-colonel-john-boyd-the-brain-behind-fighter-dominance/
Updated version of Boyd's Aerial Attack Study
https://tacticalprofessor.wordpress.com/2018/04/27/updated-version-of-boyds-aerial-attack-study/
A New Conception of War
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/

I also had HSDT (T1 and faster computer links) starting in the early 80s and was working with the NSF director; we were supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cut the budget, some other things happened, and eventually an RFP was released (in part based on what we already had running); preliminary announce:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

Internal IBM politics prevented us from bidding on the RFP. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and Director of Research, copying the IBM CEO) with support from other gov. agencies, but that just made the internal politics worse (as did claims that what we already had running was at least 5yrs ahead of the winning bid; the RFP was awarded 24Nov87). The winning bid didn't even have T1 links, just 440kbit/sec ... possibly to make it look like it conformed, they ran T1 trunks with telco multiplexors carrying multiple 440kbit links per trunk. As the regional networks connected in, it became the NSFNET backbone, precursor to the modern internet:
https://www.technologyreview.com/s/401444/grid-computing/
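
For rough scale (assumed standard figures, not from the RFP documents): a T1 trunk carries about 1.544 Mbit/sec, so roughly three 440 kbit/sec links fit per multiplexed trunk.

T1_TRUNK = 1_544_000   # T1 trunk capacity, bits/sec
LINK     =   440_000   # awarded link speed, bits/sec

print(T1_TRUNK // LINK)   # -> 3 whole links per multiplexed trunk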

one of the first HSDT T1 satellite links was between Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston (hudson valley, east coast), which eventually had a whole boatload of Floating Point Systems boxes.
https://en.wikipedia.org/wiki/Floating_Point_Systems

The last product we did at IBM was HA/CMP.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

It started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) off (DEC) VAXCluster to RS/6000. However, as I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Ingres, Informix, Sybase, Oracle, which all had VAXCluster support in the same source base as their unix support ... lots of discussion on improving over VAXCluster and easing the VAXCluster RDBMS port to the HA/CMP unix base), it was renamed HA/CMP. Then cluster scale-up was transferred, announced as IBM Supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than four processors. We left IBM a few months later.

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

John Boyd and IBM Wild Ducks

From: Lynn Wheeler <lynn@garlic.com>
Subject: John Boyd and IBM Wild Ducks
Date: 18 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks

I was giving HSDT presentations at various universities, usually those with NSF supercomputers or with plans to get NSF supercomputers. One such pitch at Berkeley (up valley from SJR) led to being asked to help with the "Berkeley 10M" telescope effort. They were also working on transitioning from film to CCD, which would enable remote viewing and require high speed links. Some old archived email:
https://www.garlic.com/~lynn/2004h.html#email830804
https://www.garlic.com/~lynn/2004h.html#email830822
https://www.garlic.com/~lynn/2004h.html#email830830
https://www.garlic.com/~lynn/2004h.html#email841121

They were doing some testing at Lick Observatory (east of San Jose) and there were some technology visits there.
https://www.lickobservatory.org
Eventually they got an $80M grant from the Keck Foundation and it turns into the "Keck 10M"
https://www2.lbl.gov/Science-Articles/Archive/keck-telescope.html
https://keckobservatory.org/
more old email
https://www.garlic.com/~lynn/2004h.html#email860519

Some followup presentations when NSF gave UC $120M for a Berkeley supercomputer center. However, the UC Regents' master plan had UC San Diego getting the next new bldg, so it became the UC San Diego supercomputer center. More old email:
https://www.garlic.com/~lynn/2011b.html#email850312
https://www.garlic.com/~lynn/2011b.html#email850313
https://www.garlic.com/~lynn/2011b.html#email850314

I was exchanging some email w/Melinda on history of VM
https://www.leeandmelindavarian.com/Melinda#VMHist
and asked about giving an HSDT presentation at the pending Princeton Supercomputer center
https://www.garlic.com/~lynn/2011c.html#email860407

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

FedEx to Stop Using Mainframes, Close All Data Centers By 2024

From: Lynn Wheeler <lynn@garlic.com>
Subject: FedEx to Stop Using Mainframes, Close All Data Centers By 2024
Date: 18 July 2022
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022e.html#71 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#72 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#75 FedEx to Stop Using Mainframes, Close All Data Centers By 2024

After transferring to San Jose Research, I worked with Jim Gray and Vera Watson on the original SQL relational implementation, System/R, and was involved with the tech transfer to Endicott for SQL/DS ("under the radar" while the rest of the company was preoccupied with the next great DBMS, EAGLE). When EAGLE imploded, there was a request for how fast System/R could be ported to MVS; it was eventually released as DB2, originally for decision support *ONLY*. When Gray left IBM for Tandem, one of the things he palmed off on me was DBMS consulting with the IMS group.

The last product we did at IBM was HA/CMP. It started out as HA/6000 for the NYTimes to allow them to migrate their newspaper system (ATEX) from VAXCluster to RS/6000. I renamed it HA/CMP (High Availability Cluster Multi-Processing) after working with national labs on technical/scientific cluster scale-up and with RDBMS vendors (Ingres, Informix, Oracle, Sybase) that had VAXCluster support in the same source base as their unix support (lots of work on easing their VAXCluster RDBMS support over to a cluster unix base). Old post about the Jan1992 cluster scale-up meeting with the Oracle CEO: 16-way mid1992, 128-way ye1992.
https://www.garlic.com/~lynn/95.html#13

Within a few weeks of the Oracle CEO meeting, cluster scale-up was transferred, announced as IBM Supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than four processors. We left IBM a few months later. Likely contributing was the (mainframe) DB2 group complaining that if we were allowed to continue, we would be at least 5yrs ahead of them.

After leaving IBM, I was brought in as a consultant to a small client/server startup; two of the former Oracle people from the Ellison scale-up meeting were there, responsible for something called "commerce server", and they wanted to do payment transactions. The startup had also invented this technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had absolute authority over everything between the servers and the payment networks (but could only make recommendations on the client/server side).

Later Postel (Internet Standards editor)
https://en.wikipedia.org/wiki/Jon_Postel
sponsored my talk on why the internet isn't business critical dataprocessing, based on the compensating documentation, processes and software I had to do for electronic commerce.

Electronic commerce got me dragged into doing (X9) financial standards work, involving security, crypto, etc ... and the security and crypto got me involved with gov agencies. I gave a talk at a NIST security conference about taking a $500 mil-spec chip and cost-reducing it by at least two orders of magnitude (to under $1) while increasing security.
http://csrc.nist.gov/nissc/1998/index.html
part of presentation
https://www.garlic.com/~lynn/nissc21.zip

A senior technical director to the agency DDI up at Ft. Meade was doing an assurance panel in the trusted computing track at IDF and asked me to participate ... I gave a talk on the chip (and financial security); the guy running TPM was in the front row, so I said it was nice to see TPM starting to look more like my chip; he responded that I didn't have a committee of 200 people helping me ... the page has gone 404, but lives on at the wayback machine
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13
part of presentation
https://www.garlic.com/~lynn/iasrtalk.zip

other agency ref gone 404, but lives on at wayback machine
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml
reference to other VM history
https://www.leeandmelindavarian.com/Melinda#VMHist

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
electronic commerce and EC payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
some X9 standards work refs
https://www.garlic.com/~lynn/x959.html
X9.59 protocol posts
https://www.garlic.com/~lynn/subpubkey.html#x959
security proportional to risk posts
https://www.garlic.com/~lynn/submisc.html#security.proportional.to.risk

--
virtualization experience starting Jan1968, online at home since Mar1970

Price Wars

From: Lynn Wheeler <lynn@garlic.com>
Subject: Price Wars
Date: 18 July 2022
Blog: LinkedIn
Price Wars
https://www.amazon.com/Price-Wars-Commodities-Markets-Chaotic-ebook/dp/B093YSL569/
pg40/loc652-53:

"There were many things that led this to happen," Masters tells me. "First of all, you had the Commodities Futures Modernization Act."

pg40/loc656-57:

It was Brooksley Born, the chairwoman of the Commodities Futures Trading Commission, versus Alan Greenspan, the chairman of the Federal Reserve. The question: should "derivatives" be regulated?

... snip ...

Jan1999, I was asked to help try to prevent the coming economic mess (we failed). Then a decade later (Jan2009), I was asked to web'ize the Pecora Hearings (the 1930s congressional hearings into the '29 crash that resulted in jail sentences and Glass-Steagall) with lots of internal HTML links and URLs between what happened then and what happened this time (with comments that the new congress might have an appetite to do something). I worked on it for awhile and then got a call saying it wouldn't be needed after all (with comments that capitol hill was totally buried under enormous mountains of wallstreet cash).

Gramm, #2 on Time's list of those responsible for the economic mess (2001-2008):
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877330,00.html

Now better known for GLBA and the repeal of Glass-Steagall (enabling Too Big To Fail, used as an excuse for not holding TBTF accountable), but on the list for legislation blocking regulation of CDS gambling bets (derivatives). Born, CFTC chair, suggested regulating derivatives. Gramm's wife replaced Born (while Gramm got legislation passed blocking derivative regulation), then she resigned to join the Enron board (and audit committee).

http://www.nytimes.com/2008/11/17/business/17grammside.html

Enron was a major contributor to Mr. Gramm's political campaigns, and Mr. Gramm's wife, Wendy, served on the Enron board, which she joined after stepping down as chairwoman of the Commodity Futures Trading Commission.

... snip ...

https://web.archive.org/web/20080711114839/http://www.villagevoice.com/2002-01-15/news/phil-gramm-s-enron-favor/

A few days after she got the ball rolling on the exemption, Wendy Gramm resigned from the commission. Enron soon appointed her to its board of directors, where she served on the audit committee, which oversees the inner financial workings of the corporation. For this, the company paid her between $915,000 and $1.85 million in stocks and dividends, as much as $50,000 in annual salary, and $176,000 in attendance fees

... snip ...

slightly different description: "Dark Money: The Hidden History of the Billionaires Behind the Rise of the Radical Right"
https://www.amazon.com/Dark-Money-History-Billionaires-RadicalRight-ebook/dp/B0180SU4OA/
loc2953-55:

The most fateful Mercatus Center hire might have been Wendy Gramm, an economist and director at the giant Texas energy company Enron who was the wife of Senator Phil Gramm, the powerful Texas Republican. In the mid-1990s, she became the head of Mercatus's Regulatory Studies Program.

loc2955-57:

There, she pushed Congress to support what came to be known as the Enron Loophole, exempting the type of energy derivatives from which Enron profited from regulatory oversight. Both Enron and Koch Industries, which also was a major trader of derivatives, lobbied desperately for the loophole.

loc2958-59:

Some experts foresaw danger. In 1998, Brooksley Born, chair of the Commodity Futures Trading Commission, warned that the lucrative but risky derivatives market needed more government oversight.

loc2959-61:

But Senator Gramm, who chaired the Senate Banking Committee, ignored such warnings, crafting a deregulatory bill made to order for Enron and Koch, called the Commodity Futures Modernization Act.

... snip ...

Mercatus Center
https://en.wikipedia.org/wiki/Mercatus_Center

I had been told that some investment bankers had walked away "clean" from the 80s S&L crisis, were then running Internet IPO mills (invest a few million, hype, IPO for a couple billion, which then needed to fail to leave the field clear for the next round of IPOs), and were predicted to next get into securitized mortgages. I was to improve the integrity of mortgage supporting documents as a countermeasure; however, they then found they could pay the rating agencies for triple-A ratings ... and could start doing liar, no-documentation mortgages (no documentation, no documentation integrity), securitize them, pay for triple-A (even when the rating agencies knew they weren't worth triple-A, per the Oct2008 congressional hearings), and sell them into the bond market, doing over $27T 2001-2008. From the law of unintended consequences, the biggest fines from the economic mess were for the "robo-signing mills" fabricating the "missing" documents (the fines were transferred to agencies that were supposedly to aid the defrauded borrowers ... very little found its way to them ... some agencies were even headed by some of the same people involved in the economic mess).

Then they started creating securitized mortgages designed to fail (paying for triple-A, selling into the bond market) and taking out (CDS) gambling bets (derivatives) that they would fail. As the economy was failing, the SECTREAS convinced congress to appropriate $700B in TARP funds, supposedly to bail out the Too Big To Fail. However, the largest recipient of TARP funds was AIG (the largest holder of CDS gambling bets, which had been negotiating to pay off at 50cents on the dollar). The SECTREAS stepped in, saying they had to take TARP funds and pay off at 100cents on the dollar ... and the largest recipient of face-value payoffs was the institution formerly headed by the SECTREAS (a firm that was also one of the major speculators in the summer 2008 oil/gas price spike).

The real bailout of the Too Big To Fail was by the Federal Reserve (buying trillions in toxic assets at 98cents on the dollar and providing tens of trillions in ZIRP funds), which fought a legal battle to prevent the details being made public. When the Fed lost, the chairman held a press conference to say that he had believed the TBTF would use the money to help "main street"; when they didn't, he had no way to force them (but that didn't stop the flow of funds). Note the chairman had been partly selected because he was a student of the depression (where the Fed had tried something similar with the same results, so he should have had no expectation that it would be different this time).

Why are gas prices so high? These obscure traders are partly to blame
https://www.theguardian.com/environment/2022/apr/28/gas-prices-why-are-they-so-high-traders

"My instinct tells me that a very careful analysis of this market would show that the price is not reflective of supply chain problems, that there's just too much leeway for the big banks and the big producers to manipulate if no one is looking and watching what they're doing," says Greenberger, the former division director of the Commodity Futures Trading Commission (CFTC), the main regulator of US energy markets.

... snip ...

"GRIFTOPIA" had chapter on CFTC that used to require that commodity players had significant position because speculators were causing wild irrational price fluctuation (i.e. they profited by manipulating price, buy low sell high, then short sale on the way day ... including manipulating news to push price in the direction they wanted). But then CFTC sent (secret) letters to selected speculators allowing them play ... responsible for the huge oil&gas price hike summer of 2008.
https://en.wikipedia.org/wiki/Griftopia

Later a member of congress published the transactions for 2008, showing the speculators that were responsible for the huge price spike in the summer of 2008. Instead of vilifying the speculators responsible, the press somehow vilified the member of congress for violating corporate privacy (as if corporations were people; disinformation to distract from those responsible). (summer 2008) Oil settles at record high above $140:
https://money.cnn.com/2008/06/27/markets/oil/

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
S&L crisis
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
too big to fail (too big to prosecute, too big to jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
triple-A toxic CDOs posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
fed chairman posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman
ZIRP posts
https://www.garlic.com/~lynn/submisc.html#zirp
Pecora &/or Glass-Steagall posts
https://www.garlic.com/~lynn/submisc.html#Pecora&/orGlass-Steagall
ENRON posts
https://www.garlic.com/~lynn/submisc.html#enron
regulatory capture posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Price Wars

From: Lynn Wheeler <lynn@garlic.com>
Subject: Price Wars
Date: 18 July 2022
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2022e.html#106 Price Wars

Price Wars (Part II Wars)
https://www.amazon.com/Price-Wars-Commodities-Markets-Chaotic-ebook/dp/B093YSL569/
pg65/loc1050-51:

PART II WARS

pg67/loc1053-54:

3 Perception: Pricing ISIS in Iraq

pg83/loc1329-34:

From Shiller and Stiglitz I had learnt two truths about prices. On the one hand, prices narrate. They tell a story, a story that can excite and spread and become a self-fulfilling prophecy. In this view, prices are driven not by scientific rationality but emotional contagion. On the other hand, prices deceive. They can hide information, they can manipulate the unwitting and extract their wealth. In this view, prices are a carefully deployed rational weapon. Were these two perspectives compatible? I went back to my original model of financial markets: the cult. I found one in the US that combined prophecy and deception, and suggested how prices could manipulate and narrate at the same time.

... snip ...

In The World Crisis, Vol. 1, Churchill explains that the mess in the middle east started with the move from 13.5in to 15in naval guns (leading to the move from coal to oil):
https://www.amazon.com/Crisis-1911-1914-Winston-Churchill-Collection-ebook/dp/B07H18FWXR/
loc2012-14:

From the beginning there appeared a ship carrying ten 15-inch guns, and therefore at least 600 feet long with room inside her for engines which would drive her 21 knots and capacity to carry armour which on the armoured belt, the turrets and the conning tower would reach the thickness unprecedented in the British Service of 13 inches.

loc2087-89:

To build any large additional number of oil-burning ships meant basing our naval supremacy upon oil. But oil was not found in appreciable quantities in our islands. If we required it, we must carry it by sea in peace or war from distant countries.

loc2151-56:

This led to enormous expense and to tremendous opposition on the Naval Estimates. Yet it was absolutely impossible to turn back. We could only fight our way forward, and finally we found our way to the Anglo-Persian Oil agreement and contract, which for an initial investment of two millions of public money (subsequently increased to five millions) has not only secured to the Navy a very substantial proportion of its oil supply, but has led to the acquisition by the Government of a controlling share in oil properties and interests which are at present valued at scores of millions sterling, and also to very considerable economies, which are still continuing, in the purchase price of Admiralty oil.

... snip ...

When Iran's newly elected democratic government wanted to review the Anglo-Persian contract, the US arranged a coup and backed the Shah as a front:
https://unredacted.com/2018/03/19/cia-caught-between-operational-security-and-analytical-quality-in-1953-iran-coup-planning/
https://en.wikipedia.org/wiki/Kermit_Roosevelt,_Jr%2E
https://en.wikipedia.org/wiki/1953_Iranian_coup_d%27%C3%A9tat

... and Schwarzkopf (senior) trained the secret police to help keep the Shah in power
https://en.wikipedia.org/wiki/SAVAK
Savak Agent Describes How He Tortured Hundreds
https://www.nytimes.com/1979/06/18/archives/savak-agent-describes-how-he-tortured-hundreds-trial-is-in-a-mosque.html
The Iranian people eventually revolted against the horribly oppressive (US backed) autocratic government.

CIA Director Colby wouldn't approve the "Team B" analysis (which exaggerated USSR military capability), so Rumsfeld got Colby replaced with Bush, who would approve the "Team B" analysis (justifying a huge DOD spending increase). After getting Colby replaced, Rumsfeld resigned as white house chief of staff to become SECDEF (and was replaced by his assistant Cheney):
https://en.wikipedia.org/wiki/Team_B
Then in the 80s, former CIA director H.W. is VP; he and Rumsfeld are involved in supporting Iraq in the Iran/Iraq war
http://en.wikipedia.org/wiki/Iran%E2%80%93Iraq_War
including WMDs (note picture of Rumsfeld with Saddam)
http://en.wikipedia.org/wiki/United_States_support_for_Iraq_during_the_Iran%E2%80%93Iraq_war

VP and former CIA director repeatedly claims no knowledge of
http://en.wikipedia.org/wiki/Iran%E2%80%93Contra_affair

because he was the fulltime administration point person for deregulating the financial industry ... creating the S&L crisis
http://en.wikipedia.org/wiki/Savings_and_loan_crisis
along with other members of his family
http://en.wikipedia.org/wiki/Savings_and_loan_crisis#Silverado_Savings_and_Loan
and another
http://query.nytimes.com/gst/fullpage.html?res=9D0CE0D81E3BF937A25753C1A966958260

In the early 90s, H.W. is president and Cheney is SECDEF. A satellite photo recon analyst told the white house that Saddam was marshaling forces to invade Kuwait. The white house said that Saddam would do no such thing and proceeded to discredit the analyst. Later the analyst informed the white house that Saddam was marshaling forces to invade Saudi Arabia; now the white house had to choose between Saddam and the Saudis.
https://www.amazon.com/Long-Strange-Journey-Intelligence-ebook/dp/B004NNV5H2/

... roll forward ... Bush2 is president and presides over huge tax cuts, a huge increase in spending, an explosion in debt, the economic mess (70 times larger than his father's S&L crisis), and the forever wars; Cheney is VP, Rumsfeld is SECDEF, and one of the Team B members is deputy SECDEF (and a major architect of Iraq policy).
https://en.wikipedia.org/wiki/Paul_Wolfowitz

Before the Iraq invasion, the cousin of white house chief of staff Card was dealing with the Iraqis at the UN and was given evidence that the WMDs (tracing back to the US from the Iran/Iraq war) had been decommissioned. The cousin shared it with Card and others ... and was then locked up in a military hospital; the book was published in 2010 (4yrs before the decommissioned WMDs were declassified):
https://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/

A NY Times series from 2014: the decommissioned WMDs (tracing back to the US from the Iran/Iraq war) had been found early in the invasion, but the information was classified for a decade:
http://www.nytimes.com/interactive/2014/10/14/world/middleeast/us-casualties-of-iraq-chemical-weapons.html

Note the military-industrial complex had wanted a war so badly that corporate reps were telling former eastern bloc countries that if they voted for the IRAQ2 invasion in the UN, they would get membership in NATO and (directed appropriation) USAID (which can *ONLY* be used for the purchase of modern US arms, aka additional congressional gifts to the MIC not in the DOD budget). From the law of unintended consequences, the invaders were told to bypass ammo dumps while looking for WMDs; when they got around to going back, over a million metric tons had evaporated (showing up later in IEDs):
https://www.amazon.com/Prophets-War-Lockheed-Military-Industrial-ebook/dp/B0047T86BA/

... from the truth-is-stranger-than-fiction and law-of-unintended-consequences-that-come-back-to-bite-you departments, much of radical Islam & ISIS can be considered our own fault; VP Bush in the 80s:
https://www.amazon.com/Family-Secrets-Americas-Invisible-Government-ebook/dp/B003NSBMNA/
pg292/loc6057-59:

There was also a calculated decision to use the Saudis as surrogates in the cold war. The United States actually encouraged Saudi efforts to spread the extremist Wahhabi form of Islam as a way of stirring up large Muslim communities in Soviet-controlled countries. (It didn't hurt that Muslim Soviet Asia contained what were believed to be the world's largest undeveloped reserves of oil.)

... snip ...

Saudi radical extremist Islam/Wahhabism was loosed on the world ... bin Laden & 15of19 9/11 hijackers were Saudis (some claims that 95% of extremist Islamic world terrorism is Wahhabi related)
https://en.wikipedia.org/wiki/Wahhabism

Mattis, somewhat more PC (politically correct):
https://www.amazon.com/Call-Sign-Chaos-Learning-Lead-ebook/dp/B07SBRFVNH/
pg21/loc349-51:

Ayatollah Khomeini's revolutionary regime took hold in Iran by ousting the Shah and swearing hostility against the United States. That same year, the Soviet Union was pouring troops into Afghanistan to prop up a pro-Russian government that was opposed by Sunni Islamist fundamentalists and tribal factions. The United States was supporting Saudi Arabia's involvement in forming a counterweight to Soviet influence.

... snip ...

and internal CIA
https://www.amazon.com/Permanent-Record-Edward-Snowden-ebook/dp/B07STQPGH6/
pg133/loc1916-17:

But al-Qaeda did maintain unusually close ties with our allies the Saudis, a fact that the Bush White House worked suspiciously hard to suppress as we went to war with two other countries.

... snip ...

The Danger of Fibbing Our Way into War. Falsehoods and fat military budgets can make conflict more likely
https://web.archive.org/web/20200317032532/https://www.pogo.org/analysis/2020/01/the-danger-of-fibbing-our-way-into-war/
The Day I Realized I Would Never Find Weapons of Mass Destruction in Iraq
https://www.nytimes.com/2020/01/29/magazine/iraq-weapons-mass-destruction.html

The Deep State (US administration behind formation of ISIS)
https://www.amazon.com/Deep-State-Constitution-Shadow-Government-ebook/dp/B00W2ZKIQM/
pg190/loc3054-55:

In early 2001, just before George W. Bush's inauguration, the Heritage Foundation produced a policy document designed to help the incoming administration choose personnel

pg191/loc3057-58:

In this document the authors stated the following: "The Office of Presidential Personnel (OPP) must make appointment decisions based on loyalty first and expertise second,

pg191/loc3060-62:

Americans have paid a high price for our Leninist personnel policies, and not only in domestic matters. In important national security concerns such as staffing the Coalition Provisional Authority, a sort of viceroyalty to administer Iraq until a real Iraqi government could be formed, the same guiding principle of loyalty before competence applied.

... snip ...

... including kicking hundreds of thousands of former soldiers out on the streets, which helped create ISIS ... while bypassing the ammo dumps (looking for fictitious/fabricated WMDs) gave them over a million metric tons of munitions (for IEDs).

Military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
S&L crisis
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
Perpetual War posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

some past posts mentioning shiller
https://www.garlic.com/~lynn/2017k.html#51 Taxing Social Security Benefits
https://www.garlic.com/~lynn/2017.html#11 Attack SS Entitlements
https://www.garlic.com/~lynn/2014.html#69 Pensions, was Re: Royal Pardon For Turing
https://www.garlic.com/~lynn/2011p.html#67 The men who crashed the world
https://www.garlic.com/~lynn/2011k.html#78 China's yuan could challenge dollar role in a decade
https://www.garlic.com/~lynn/2011h.html#10 Home prices may drop another 25%, Shiller predicts
https://www.garlic.com/~lynn/2011h.html#7 Home prices may drop another 25%, Shiller predicts
https://www.garlic.com/~lynn/2011h.html#6 Home prices may drop another 25%, Shiller predicts
https://www.garlic.com/~lynn/2011h.html#5 Home prices may drop another 25%, Shiller predicts
https://www.garlic.com/~lynn/2008j.html#38 dollar coins
https://www.garlic.com/~lynn/2008d.html#0 Toyota Sales for 2007 May Surpass GM

--
virtualization experience starting Jan1968, online at home since Mar1970
