List of Archived Posts

2024 Newsgroup Postings (04/15 - 06/12)

Amdahl and IBM ACS
Disk & TCP/IP I/O
ReBoot Hill Revisited
ReBoot Hill Revisited
Cobol
Cobol
Testing
Testing
AI-controlled F-16
Boeing and the Dark Age of American Manufacturing
AI-controlled F-16
370 Multiprocessor
370 Multiprocessor
Boeing and the Dark Age of American Manufacturing
Bemer, ASCII, Brooks and Mythical Man Month
360&370 Unix (and other history)
CTSS, Multics, CP67/CMS
IBM Millicode
CP40/CMS
IBM Millicode
IBM Millicode
TDM Computer Links
FOILS
CP40/CMS
TDM Computer Links
Tymshare & Ann Hardy
The Last Thing This Supreme Court Could Do to Shock Us
PDP1 Spacewar
Wondering Why DEC Is The Most Popular
Wondering Why DEC Is The Most Popular
GML and W3C
HONE &/or APL
UNIX & IBM AIX
Old adage "Nobody ever got fired for buying IBM"
Old adage "Nobody ever got fired for buying IBM"
The man reinventing economics with chaos theory and complexity science
Old adage "Nobody ever got fired for buying IBM"
Planet Mainframe Profile
Joseph Stiglitz is still walking the road to freedom
Big oil spent decades sowing doubt about fossil fuel dangers, experts testify
CMS RED, XEDIT, IOS3270, FULIST, BROWSE
Congratulations Lynne
Netscape
TYMSHARE, VMSHARE, ADVENTURE
IBM Mainframe LAN Support
IBM Mainframe LAN Support
Big oil spent decades sowing doubt about fossil fuel dangers, experts testify
IBM Mainframe LAN Support
IBM Mainframe LAN Support
Left Unions Were Repressed Because They Threatened Capital
third system syndrome, interactive use, The Design of Design
third system syndrome, interactive use, The Design of Design
backward architecture, The Design of Design
IBM 3705 & 3725
IBM 3705 & 3725
backward architecture, The Design of Design
Token-Ring Again
IBM Mainframe, TCP/IP, Token-ring, Ethernet
IBM Mainframe, TCP/IP, Token-ring, Ethernet
IBM "Winchester" Disk
IBM "Winchester" Disk
IBM "Winchester" Disk
HTTP over TCP
UNIX 370
HTTP over TCP
More CPS
WEB Servers, Browsers and Electronic Commerce
IBM Mainframe Addressing
Berkeley 10M
IBM Token-Ring
IBM 3705 & 3725
IBM 3705 & 3725
IBM 3705 & 3725
Mainframe and Blade Servers
Mainframe and Blade Servers
Mainframe and Blade Servers
Inventing The Internet
Mainframe and Blade Servers
IBM Internal Network
Mainframe and Blade Servers
Inventing The Internet
Inventing The Internet
Inventing The Internet
Inventing The Internet
Hacker's Conference
New 9/11 Evidence Points to Deep Saudi Complicity
Inventing The Internet
Gordon Bell
Virtual Machines
Inventing The Internet
Gordon Bell
Gordon Bell
TCP Joke
ASCII/TTY33 Support
Virtual Memory Paging
Codd almighty! Has it been half a century of SQL already?
Online Library Catalog
Virtual Memory Paging
Virtual Memory Paging
Virtual Memory Paging
IBM 4300
architectural goals, Byte Addressability And Beyond
Virtual Memory Paging
CP67 & VM370 Source Maintenance
Virtual Memory Paging
Financial/ATM Processing
Financial/ATM Processing
architectural goals, Byte Addressability And Beyond
D-Day
Old adage "Nobody ever got fired for buying IBM"
Anyone here (on news.eternal-september.org)?
Anyone here (on news.eternal-september.org)?
Multithreading
CAI, IBM 1500
Disconnect Between Coursework And Real-World Computers
Puritans
IBM Mainframe System Meter
Disconnect Between Coursework And Real-World Computers
How the IRS went soft on billionaires and corporate tax cheats
Financial/ATM Processing
Disconnect Between Coursework And Real-World Computers

Amdahl and IBM ACS

From: Lynn Wheeler <lynn@garlic.com>
Subject: Amdahl and IBM ACS
Date: 15 Apr, 2024
Blog: Facebook
Note Amdahl wins the battle to make ACS 360-compatible ... folklore is that executives then shut down the operation because they were afraid that it would advance the state of the art too fast and IBM would lose control of the market ... shortly after, Amdahl leaves IBM. The following lists some ACS/360 features that show up more than 20yrs later in the 90s with ES/9000

https://people.computing.clemson.edu/~mark/acs_end.html
https://people.computing.clemson.edu/~mark/acs.html
https://people.computing.clemson.edu/~mark/acs_legacy.html

some recent posts mentioning Amdahl and end of ACS
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#91 7Apr1964 - 360 Announce
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2024.html#11 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#103 More IBM Downfall
https://www.garlic.com/~lynn/2023g.html#44 Amdahl CPUs
https://www.garlic.com/~lynn/2023g.html#23 Vintage 3081 and Water Cooling
https://www.garlic.com/~lynn/2023g.html#11 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#80 Vintage Mainframe 3081D
https://www.garlic.com/~lynn/2023f.html#72 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#69 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023d.html#94 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023d.html#63 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989

--
virtualization experience starting Jan1968, online at home since Mar1970

Disk & TCP/IP I/O

From: Lynn Wheeler <lynn@garlic.com>
Subject: Disk & TCP/IP I/O
Date: 15 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#115 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#116 Disk & TCP/IP I/O

135/145, 138/148, 4331/4341 were conventional microprocessors with microcode to emulate 370 instructions, avg 10 native instructions per 370 instruction. I got con'ed into helping with ECPS, originally for 138/148 ... old archive post with the initial analysis of kernel pathlengths for selecting what to microcode. I was told 138/148 had 6k bytes (of microcode space) available and that 370 kernel instructions would translate into native microcode on an approx. byte-for-byte basis (the highest-executing 6k bytes of 370 kernel pathlengths accounted for approx. 80% of kernel execution) ...
https://www.garlic.com/~lynn/94.html#21
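
A sketch of the flavor of that selection (illustrative only; the path names, byte sizes, and percentages below are invented, not the figures from the archived analysis): given kernel paths with measured execution percentages and rough instruction-byte sizes, greedily take the best payoff-per-byte paths until the ~6k-byte microcode budget is exhausted.

#include <stdio.h>
#include <stdlib.h>

/* illustrative ECPS-style selection: assumes ~byte-for-byte 370->microcode
   translation and a 6k-byte budget; the data below is made up */
struct path { const char *name; int bytes; double pct; };

static int by_density(const void *a, const void *b)
{
    const struct path *x = a, *y = b;
    double dx = x->pct / x->bytes, dy = y->pct / y->bytes;
    return (dy > dx) - (dy < dx);          /* sort descending %-per-byte */
}

int main(void)
{
    struct path paths[] = {
        { "dispatch",       900, 20.0 },
        { "page-fault",    1400, 18.0 },
        { "ccw-translate", 2200, 17.0 },
        { "free-storage",   700, 12.0 },
        { "vtime-update",   600, 10.0 },
        { "spool-io",      1800,  5.0 },
    };
    int n = (int)(sizeof paths / sizeof paths[0]), budget = 6144, used = 0;
    double covered = 0.0;

    qsort(paths, n, sizeof paths[0], by_density);
    for (int i = 0; i < n && used + paths[i].bytes <= budget; i++) {
        used += paths[i].bytes;
        covered += paths[i].pct;
        printf("microcode %-14s %5d bytes  %4.1f%% of kernel time\n",
               paths[i].name, paths[i].bytes, paths[i].pct);
    }
    printf("selected %d of %d bytes, covering ~%.0f%% of kernel execution\n",
           used, budget, covered);
    return 0;
}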

around 1980, there was an effort to move a variety of IBM internal microprocessors to 801/risc ... low&mid-range 370s (Iliad 801 for 4361&4381), s38->as/400, controllers, etc. I got roped into helping with a white paper showing that VLSI technology had advanced to the point that it was possible to implement nearly all 370 instructions directly in silicon ... as well as covering the other proposed 801 solutions; those 801 efforts floundered ... with some number of 801/RISC engineers leaving for RISC projects at other vendors.

Note 801/ROMP was supposed to be for the next generation displaywriter ... when that got canceled, they decided to pivot to the unix workstation market and got the company that had done the AT&T Unix port to the IBM/PC for PC/IX to do one for ROMP ... which becomes AIX (and the PC/RT). The follow-on chip set was RIOS for RS/6000. Then AIM is formed (Apple, IBM, Motorola) and the executive we reported to for HA/CMP went over to head up Somerset (single-chip power/pc effort), which included adopting some features from the Motorola 88k RISC processor.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

a tome about being frequently told that I had no career, no raises, no promotions, and about all the people that wanted to see me fired ... including 5 of 6 of the corporate executive committee ... being blamed for doing online computer conferencing in the late 70s and early 80s on the IBM internal network (larger than the arpanet/internet from just about the beginning until mid/late 80s).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

ReBoot Hill Revisited

From: Lynn Wheeler <lynn@garlic.com>
Subject: ReBoot Hill Revisited
Date: 16 Apr, 2024
Blog: Facebook
ReBoot Hill Revisited
https://planetmainframe.com/2016/03/reboot-hill-revisited/

Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
... further complicated by the failure of "Future System"
https://www.amazon.com/Computer-Wars-Future-Global-Technology/dp/0812923006/
"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

future system refs:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

... and 20yrs later, IBM has one of the largest losses in the history of US corporations and it looked like it might be the end; IBM being re-orged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get a call from the bowels of Armonk asking if we could help with the company breakup. Before we get started, the board brings in the former AMEX president as CEO, who (somewhat) reverses the breakup.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Mid-90s, the financial industry was expanding globally and spending billions on redoing batch cobol overnight settlement (some of it originating from the 60s) ... the combination of increased business and globalization shortening the overnight window meant settlement was not getting done in the time available. They were going to straight-through financial processing on large numbers of parallel "killer micros". Some of us tried to point out that the standard parallelization libraries being used had a hundred times the overhead of batch cobol ... and were ignored ... until some major pilots went down in throughput flames.
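
Rough arithmetic on that overhead claim: if each parallel unit of work carries roughly a hundred times the pathlength of the equivalent batch cobol, then on the order of a hundred "killer micros" are needed just to match the throughput of the single batch system, before any actual speedup.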

After the turn of the century I was helping somebody who had done a high-level financial processing language that translated specifications into (parallelizable) fine-grain SQL statements for execution. Also, in the late 90s, i86 processor makers had gone to a hardware layer that translated i86 into RISC micro-ops, largely negating the throughput difference between i86 and RISC.


1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     z900 processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC)

2003 max. configured z990, 32 processor aggregate 9BIPS (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

In the same period, major (non-mainframe) RDBMS vendors (including IBM) had done significant optimization work on parallelizing (non-mainframe) RDBMS cluster operation. In 2003, a six-system parallel RDBMS cluster was demo'ed, each system a four-Pentium4 multiprocessor (each Pentium4 the equivalent of a max-configured z990, each system the equivalent of four max-configured z990s, the six of them the equivalent of 24 max-configured z990s ... or 232.8BIPS, an aggregate more than the current max-configured z16).
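
Rough arithmetic behind that aggregate: 6 systems x 4 Pentium4/system x 9.7BIPS/Pentium4 = 232.8BIPS (i.e. 24 x 9.7BIPS, versus 24 max-configured z990 at 24 x 9BIPS = 216BIPS).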

Using the financial processing language, we implemented equivalent "straight-through" processing of several existing major production (overnight batch window) systems with throughput greatly exceeding any existing requirement. This was taken to major financial industry meetings, initially with great acceptance ... then a brick wall. Eventually we were told that executives still bore the scars of the 90s attempts, and it would be a long time before it was tried again.

some recent posts mentioning "straight-through" processing implementation
https://www.garlic.com/~lynn/2024.html#113 Cobol
https://www.garlic.com/~lynn/2023g.html#12 Vintage Future System
https://www.garlic.com/~lynn/2022g.html#69 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#3 Final Rules of Thumb on How Computing Affects Organizations and People
https://www.garlic.com/~lynn/2021k.html#123 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021g.html#18 IBM email migration disaster
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros

Turn of the century, IBM mainframe hardware sales had dropped to a few percent of total revenue (compared to over half in the 80s). In the z12 time-frame, it was down to a couple percent (and still dropping), but mainframe group was 25% of total revenue (and 40% of profit) ... nearly all software & services.

I/O trivia: in 1980 I was con'ed into doing channel-extender support for STL (since renamed SVL), which was moving 300 people from the IMS group to an offsite bldg with service back to the STL datacenter. They had tried "remote 3270", but found the human factors unacceptable. Channel-extender allowed placing channel-attached 3270 controllers at the offsite bldg with no perceptible difference in human factors between offsite and inside STL (although some tweaks with channel-extender increased system throughput by 10-15%, prompting the suggestion that all their systems should use channel-extender). Then some POK engineers playing with some serial stuff blocked the release of the support to customers.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

Later in 1988, the IBM branch office asks if I could help LLNL (national lab) get some serial stuff they were playing with, standardized. It quickly becomes the "fibre-channel" standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec. Then the POK stuff (after more than a decade) finally gets released with ES/9000 as ESCON (when it is already obsolete), 17mbytes/sec. Then some POK engineers get involved in FCS and define a heavyweight protocol that significantly cuts the native throughput, which eventually ships as FICON (running over FCS). The latest public benchmark I can find is z196 "Peak I/O" getting 2M IOPS with 104 FICON. About the same time, an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also, IBM pubs recommend limiting SAPs (system assist processors that actually do I/O) to 70% CPU ... which would be around 1.5M IOPS. Further complicating are CKD DASD, which haven't been made for decades, needing to be simulated on industry standard fixed-block disks.
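
Rough arithmetic on those figures: the 70% SAP recommendation applied to the 2M IOPS "Peak I/O" number is where the roughly 1.4M-1.5M IOPS comes from (2M x 0.70 = 1.4M), while two of the E5-2600 FCS at over 1M IOPS each already exceed what the 104 FICON deliver.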

FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon


z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS (1000MIPS/proc), Sep2019
z16, 200 processors, 222BIPS (1111MIPS/proc), Sep2022

2010 max configured z196, 80 processor aggregate 50BIPS
     (625MIPS/proc)
2010 E5-2600 server blade, 16 processor aggregate 500BIPS
     (31BIPS/proc)

2010 E5-2600 server blade was ten times the max configured z196 and is still more than twice the current max-configured z16 (a current generation server blade is closer to 40 times a max-configured z16)

reference to some discussion about performance technologies
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2022h.html#116 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022.html#84 Mainframe Benchmark
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021i.html#92 How IBM lost the cloud
https://www.garlic.com/~lynn/2019e.html#102 MIPS chart for all IBM hardware model
https://www.garlic.com/~lynn/2016f.html#91 ABO Automatic Binary Optimizer
https://www.garlic.com/~lynn/2016e.html#38 How the internet was invented
https://www.garlic.com/~lynn/2014m.html#164 Slushware
https://www.garlic.com/~lynn/2014l.html#90 What's the difference between doing performance in a mainframe environment versus doing in others
https://www.garlic.com/~lynn/2014l.html#56 This Chart From IBM Explains Why Cloud Computing Is Such A Game-Changer
https://www.garlic.com/~lynn/2014c.html#96 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2013i.html#33 DRAM is the new Bulk Core
https://www.garlic.com/~lynn/2006s.html#21 Very slow booting and running and brain-dead OS's?

--
virtualization experience starting Jan1968, online at home since Mar1970

ReBoot Hill Revisited

From: Lynn Wheeler <lynn@garlic.com>
Subject: ReBoot Hill Revisited
Date: 16 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited

Attractive Alternatives to Mainframes Are Breaking Their Decades-Old Hold on Wall Street
https://web.archive.org/web/20120125090143/http://www.wallstreetandtech.com/operations/197007742

... before we left IBM (before our ha/cmp cluster scale-up was transferred for announce as IBM supercomputer for technical/scientific *only* and we were told we couldn't work on anything with more than four processors), we did a number of calls on NYSE and SIAC ... part of it was their need for more processor power ... and HA/CMP would be capable of 128-processor RS/6000 clusters doing both technical/scientific as well as commercial RDBMS.


1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS (128*126MIPS = 16BIPS)

Hardware reliability had been increasing and service outages were increasingly shifting to environmental factors (earthquakes, hurricanes, floods), so we were doing replicated systems, and I had coined the terms disaster survivability and geographic survivability when out marketing. The IBM (rebranded) S/88 Product Administrator was taking us into their customers. They had also gotten me to write a section for the corporate continuous availability strategy document (but it got pulled when both Rochester/AS400 and POK/mainframe complained that they couldn't meet the objectives).

We had been brought into NYSE and SIAC; they had a datacenter very carefully located in NYC in a building that was supplied from multiple water, power, and telco sources that traveled different routes past the building. NYSE/SIAC was taken out when a transformer exploded in the basement, contaminating the bldg with PCBs.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Cobol

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cobol
Date: 17 Apr, 2024
Blog: Facebook
Turn of the century, I was brought into a large financial outsourcing datacenter that handled over half of all (issuing/consumer) credit card accounts in the US (real-time auths, statementing, call-centers, etc) ... it had 40+ max configured IBM mainframe systems (constant rolling upgrades, none older than 18months) all running the same 450K statement cobol application (the number needed to finish batch settlement in the overnight window). They had a large group supporting performance care and feeding for a couple of decades ... but possibly had gotten a little myopic.

I offered to use some different performance analysis techniques (from the IBM science center in the 70s) ... and was able to identify a 14% improvement (including finding a large complex operation that was using three times the expected processing; it turns out it was being invoked three different times instead of just once) ... which represented a savings of six max configured mainframes (at the time, the going rate was around $30M each). They had other datacenters that handled 70% of all acquiring (merchant) credit card processing.
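
Rough arithmetic: 14% of 40+ max-configured systems works out to roughly six systems; at around $30M apiece, that is on the order of $180M.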

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

past posts mentioning financial outsourcing and 450k statement cobol application handling over half of all issuing/consumer credit card
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024.html#113 Cobol
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2024.html#26 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2023g.html#87 Mainframe Performance Analysis
https://www.garlic.com/~lynn/2023g.html#50 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023c.html#99 Account Transaction Update
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#54 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#68 How Gerstner Rebuilt IBM
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021c.html#49 IBM CEO
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs
https://www.garlic.com/~lynn/2019e.html#155 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#11 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019b.html#62 Cobol
https://www.garlic.com/~lynn/2018f.html#13 IBM today
https://www.garlic.com/~lynn/2018d.html#43 How IBM Was Left Behind
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2017k.html#57 When did the home computer die?
https://www.garlic.com/~lynn/2017h.html#18 IBM RAS
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014b.html#83 CPU time
https://www.garlic.com/~lynn/2013h.html#42 The Mainframe is "Alive and Kicking"
https://www.garlic.com/~lynn/2013b.html#45 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2012i.html#25 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2011c.html#35 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009g.html#20 IBM forecasts 'new world order' for financial services
https://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2009e.html#76 Architectural Diversity
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?

--
virtualization experience starting Jan1968, online at home since Mar1970

Cobol

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cobol
Date: 17 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#4 Cobol

the financial services company had once been a unit of AMEX, but in 1992 it was spun off in the largest IPO up until that time ... the same time that IBM looked to be about at its end, having one of the largest losses in the history of US corporations and being reorged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the company breakup. Before we get started, the board brings in the former president of AMEX (whom the financial services company had previously reported to) as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Testing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Testing
Date: 17 Apr, 2024
Blog: Facebook
The IBM 23jun1969 unbundling announcement started to charge for (application) software (IBM managed to make the case that kernel software could still be free), system engineer (SE) services, maintenance, etc.

after graduation I joined the IBM science center and one of my hobbies was enhanced production operating systems for internal datacenters.

with the decision to add virtual memory to all 370s (basically MVT storage management was so bad that regions were specified four times larger than used and a 1mbyte 370/165 typically only ran four concurrent regions, insufficient to keep the system busy and justified; going to running MVT with a 16mbyte address space ... similar to running MVT in a 16mbyte virtual machine, aka VS2/SVS ... would allow the number of concurrently running regions to be increased by a factor of four, with little or no paging) ... the first thing was enhancing CP67 to optionally support 370 virtual machines with 370 virtual memory, and modifying a CP67 to run on the 370 virtual memory architecture. This was in regular production use for a year before the 1st engineering 370 with virtual memory was operational (in fact, the CP67-370 was used as part of validating the engineering 370). Then there was a decision to release a VM370 product, and in the morph from CP67->VM370, a lot of features were dropped or simplified.

I had also done an automated benchmarking process ... run a specified script giving the number of simulated users with specified execution profiles (as part of automated benchmarking I had also done the "autolog" command, which also came to be used for automating lots of standard production operation), with automated system reboot between each benchmark. With more internal datacenters installing VM370, in early 1974 I started migrating lots of CP67 features to VM370 Release2 ... initially I found the automated benchmarks were consistently crashing VM370 ... so the next thing I migrated was the CP67 kernel synchronization&serialization, in order to complete a full set of benchmarks w/o VM370 constantly crashing. Towards the end of 1974, I had a VM370 R2-based production "CSC/VM" (for internal datacenters).

Also in the period, IBM took a sharp swerve with the Future System effort ... which was completely different from 370 and was going to completely replace 370. Internal politics during the FS period were also killing off 370 efforts, and the lack of new IBM 370s during the period is credited with giving the clone 370 makers their market foothold. When FS finally implodes, there is a mad rush to get stuff back into the 370 product pipeline, including kicking off the quick&dirty 3033&3081 efforts in parallel. Some more detail:
http://www.jfsowa.com/computer/memo125.htm

With the demise of FS (and the rise of the 370 clone makers), it was decided to start the transition to kernel software charging ... beginning with new kernel code "add-ons" (transition complete in the 1st half of the 80s) ... and much of my internal "CSC/VM" was selected as the guinea pig (I also got to spend lots of time with business planners and lawyers on kernel software charging practices).

Part of preparing my release (with some focus on the dynamic adaptive resource manager & scheduler that I had done as an undergraduate) for kernel software add-on charging was 2000 automated validation benchmarks that took 3 months elapsed time to run. The science center had years of system activity monitoring data for a large number of different systems ... and created a multiple-dimension system activity specification (a uniform distribution of different combinations of number of users, amounts of real storage available, paging, working set sizes, file I/O, CPU intensity, etc), with several benchmarks outside normally observed activity ... for the 1st 1000 benchmarks.

Also done at the science center was an APL-based analytical system model. This was made available on the world-wide, online sales&marketing HONE system as the Performance Predictor; branch people could enter customer configuration and workload profile data and ask "what-if" questions about what happens with configuration and/or workload changes. The US HONE systems had been consolidated in silicon valley, resulting in the largest loosely-coupled, shared DASD complex with fall-over and load-balancing ... where a modified version of the APL-based model made the load-balancing decisions.

Another modified version of the APL-based model would predict the result of each of the 1st 1000 benchmarks and then check the prediction against the actual results (somewhat validating both the model and my dynamic adaptive implementation). The APL-based model was then modified to specify the benchmark profile for each of the 2nd 1000 benchmarks, looking at the results of all benchmarks run so far ... searching for possible anomalies.
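
A rough sketch of the shape of that predict/measure/compare loop (hypothetical throughout: the configuration dimensions echo the description above, but the predict and measure functions and the 10% anomaly threshold are stand-ins, not the CSC APL model):

#include <stdio.h>
#include <math.h>

/* hypothetical benchmark grid: walk combinations of users, real storage,
   working set and I/O rate, compare predicted vs measured throughput,
   and flag anomalies for closer study */
struct config { int users; int real_mb; int wset_kb; double io_rate; };

static double predict(struct config c)        /* stand-in for the APL model */
{
    double frames = (double)c.real_mb * 1024.0 / c.wset_kb;
    double active = frames < c.users ? frames : c.users;  /* memory limited? */
    return active / (1.0 + c.io_rate / 100.0);            /* toy formula */
}

static double measure(struct config c)        /* stand-in for a real run */
{
    /* pretend one corner of the space misbehaves */
    double skew = (c.users == 64 && c.wset_kb == 256) ? 0.80 : 0.97;
    return predict(c) * skew;
}

int main(void)
{
    int users[] = {16, 32, 64}, real_mb[] = {4, 8, 16}, wset_kb[] = {64, 256};
    double io[] = {10.0, 50.0};

    for (int u = 0; u < 3; u++)
      for (int r = 0; r < 3; r++)
        for (int w = 0; w < 2; w++)
          for (int i = 0; i < 2; i++) {
              struct config c = { users[u], real_mb[r], wset_kb[w], io[i] };
              double p = predict(c), m = measure(c);
              if (fabs(m - p) / p > 0.10)     /* >10% off: flag as anomaly */
                  printf("anomaly: %2d users %2dMB %3dKB io=%2.0f  pred=%5.1f meas=%5.1f\n",
                         c.users, c.real_mb, c.wset_kb, c.io_rate, p, m);
          }
    return 0;
}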

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management and scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging, page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock
HONE & APL posts
https://www.garlic.com/~lynn/subtopic.html#hone

some recent performance predictor specific posts
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024b.html#18 IBM 5100
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023g.html#43 Wheeler Scheduler
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#33 Copyright Software
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#7 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history

--
virtualization experience starting Jan1968, online at home since Mar1970

Testing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Testing
Date: 18 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#6 Testing

other trivia: the last product I did at IBM was HA/CMP. It started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I renamed it HA/CMP when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres ... who had both VAXCluster and UNIX support in the same source base). Lots of studies on why things fail. In part, commodity hardware was becoming increasingly reliable and service outages were starting to shift to other factors like earthquakes, floods, hurricanes, etc ... so we had to include replicated systems at different locations (less likely to be subject to common events) ... out marketing, I coined the terms disaster survivability and geographic survivability. The IBM S/88 product administrator started taking us around to their customers and also had me write a section for the corporate continuous availability strategy document (but it got pulled when both Rochester/AS400 and POK/mainframe complained they couldn't meet the objectives).

Early Jan1992, in a meeting with Oracle, IBM AWD/Hester told the Oracle CEO that IBM would have 16-processor HA/CMP clusters by mid92 and 128-processor HA/CMP clusters by ye92. I was then briefing IBM (gov) FSD about HA/CMP and they apparently told the Kingston supercomputer group that they were going with HA/CMP for gov. customers. Then at the end of Jan92, we were told that cluster scale-up was being transferred to Kingston for announce as an IBM supercomputer (for technical/scientific *ONLY*) and that we couldn't work on anything that had more than four processors (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

.. when I transferred to SJR in the 70s, I got to wander around IBM & non-IBM datacenters including disk engineering (bldg14) and disk product test (bldg15) across the street. They were doing prescheduled, around the clock, stand-alone mainframe testing (they said they had recently tried MVS, but MVS had a 15min mean-time-between-failures ... requiring manual re-ipl ... in that environment). I offered to rewrite the I/O supervisor to make it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing and improving productivity ... the downside was they would increasingly blame me for problems and I had to spend an increasing amount of time playing disk engineer, diagnosing their hardware problems. Engineering & Product Test were completely separated; the departments didn't report to common management until the executive level ... and members didn't have badge access to each others' machine rooms and bldgs (since I provided the mainframe systems for both bldgs, my badge was enabled for access to both; I assume that, not being in the disk division, I wasn't subject to the separation rules).

getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk

repost from over in another FACEBOOK group

The Birth OF SQL
https://www.youtube.com/watch?v=z8L202FlmD4&si=FHDLe1v_QZNUHZwM

.. when I transferred to SJR in the 70s, they were doing the original SQL/relational, "System/R", on a vm370 370/145 there ... I worked with Jim Gray and Vera Watson. There was some amount of conflict with STL and the mainstream DBMS "IMS" ... the company was working on the next great DBMS, "EAGLE" ... and we were able to do a tech transfer (under the "radar") to Endicott for SQL/DS. Then when "EAGLE" implodes, there is a request for how fast "System/R" could be ported from VM/370 to MVS ... which eventually ships as DB2, originally for decision-support *only*.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

AI-controlled F-16

From: Lynn Wheeler <lynn@garlic.com>
Subject: AI-controlled F-16
Date: 20 Apr, 2024
Blog: Facebook
Following AESA radar first flight on F-16, Aselsan eyes 5th-gen
https://breakingdefense.com/2024/03/following-aesa-radar-first-flight-on-f-16-aselsan-eyes-5th-gen-aircraft-integration/
US Air Force Secretary to fly in AI-piloted F16 to demonstrate safety
https://interestingengineering.com/military/usaf-to-fly-ai-controlled-f16
US Air Force Secretary to fly in AI-controlled F-16
https://www.theregister.com/2024/04/10/usaf_ai_f16_tests/
US Air Force says AI-controlled F-16 has fought humans
https://www.theregister.com/2024/04/18/darpa_f16_flight/

I was introduced to John Boyd in the early 80s and would sponsor his briefings. He was largely responsible for LWF ... he would say he used his E-M theory on the original F15 design (supposedly started out as F-111 follow-on with swing wing), showing that the weight of the pivot more than offset the advantage of swing wing.
https://en.wikipedia.org/wiki/Lightweight_Fighter_program
and then YF16 and YF17
https://en.wikipedia.org/wiki/General_Dynamics_F-16_Fighting_Falcon
https://en.wikipedia.org/wiki/General_Dynamics_F-16_Fighting_Falcon#Lightweight_Fighter_program
In the late 1960s, Boyd gathered a group of like-minded innovators who became known as the Fighter Mafia, and in 1969, they secured Department of Defense funding for General Dynamics and Northrop to study design concepts based on the theory.[13][14]

... snip ...

YF16 with relaxed stability requiring "fly-by-wire" that was fast enough for flight control surfaces
https://en.wikipedia.org/wiki/General_Dynamics_F-16_Fighting_Falcon#Relaxed_stability_and_fly-by-wire
https://en.wikipedia.org/wiki/Relaxed_stability
https://fightson.net/150/general-dynamics-f-16-fighting-falcon/
The F-16 is the first production fighter aircraft intentionally designed to be slightly aerodynamically unstable, also known as "relaxed static stability" (RSS), to improve manoeuvrability. Most aircraft are designed with positive static stability, which induces aircraft to return to straight and level flight attitude if the pilot releases the controls; this reduces manoeuvrability as the inherent stability has to be overcome. Aircraft with negative stability are designed to deviate from controlled flight and thus be more maneuverable. At supersonic speeds the F-16 gains stability (eventually positive) due to aerodynamic changes.

... snip ...

misc. other
http://www.aviation-history.com/airmen/boyd.htm
https://www.nytimes.com/2003/03/09/books/40-second-man.html
https://www.nytimes.com/1997/03/13/us/col-john-boyd-is-dead-at-70-advanced-air-combat-tactics.html
https://www.usni.org/magazines/proceedings/1997/july/genghis-john

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html

Around 2010, there were online social media claims that the F-35 was stealth and would replace F-15s, F-16s, F-18s, EA-18s, and A10s. Later in the decade, I found some analysis showing it was less stealthy than claimed and saw the claims changed to "low observable".
https://www.ausairpower.net/APA-2009-01.html
http://www.ausairpower.net/jsf.html
http://www.ausairpower.net/APA-JSF-Analysis.html

Then I found an online 2011 radar tutorial that made claims about the processing power needed for real-time recognition of low-observable F-35 radar signatures (more than was available at the time ... however, that fall, articles about self-driving cars appeared claiming that the processing power being used was 100 times the 2011 claims for real-time F-35 radar signature recognition). Then within a year, articles appeared announcing that new radar jamming pods were being delivered for EA-18s to handle the frequencies that could be used to target F-35s.

Posts mentioning F-35 "stealth" and 2011 radar tutorial
https://www.garlic.com/~lynn/2022f.html#9 China VSLI Foundry
https://www.garlic.com/~lynn/2022e.html#101 The US's best stealth jets are pretty easy to spot on radar, but that doesn't make it any easier to stop them
https://www.garlic.com/~lynn/2019e.html#53 Stealthy no more? A German radar vendor says it tracked the F-35 jet in 2018 -- from a pony farm
https://www.garlic.com/~lynn/2019d.html#104 F-35
https://www.garlic.com/~lynn/2018f.html#83 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018c.html#108 F-35
https://www.garlic.com/~lynn/2018c.html#60 11 crazy up-close photos of the F-22 Raptor stealth fighter jet soaring through the air
https://www.garlic.com/~lynn/2018b.html#86 Lawmakers to Military: Don't Buy Another 'Money Pit' Like F-35
https://www.garlic.com/~lynn/2017i.html#78 F-35 Multi-Role

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing and the Dark Age of American Manufacturing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing and the Dark Age of American Manufacturing
Date: 21 Apr, 2024
Blog: Facebook
Boeing and the Dark Age of American Manufacturing. Somewhere along the line, the plane maker lost interest in making its own planes. Can it rediscover its engineering soul?
https://www.theatlantic.com/ideas/archive/2024/04/boeing-corporate-america-manufacturing/678137/

I took a two-credit-hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in 360 assembler for the 360/30 ... the univ. was getting a 360/67 replacing the 709/1401; a 360/30 temporarily replaced the 1401 (getting the 360/30 for 360 experience) pending delivery of the 360/67. The 360/67 arrives within a year of my taking the intro class and I'm hired fulltime, responsible for os/360.

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services ... I think the Renton datacenter was possibly the largest in the world, with 360/65s arriving faster than they could be installed and boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the room for a 360/67 for me to play with when I'm not doing other stuff). 747#3 was flying the skies of Seattle getting FAA flt certification. There was also a disaster plan to replicate Renton up at the new 747 plant in Everett (in case Mt. Rainier heats up and the resulting mud slide takes out Renton). When I graduate, I join the IBM science center instead of staying with the Boeing CFO.

IBM science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

recent Boyd post
https://www.garlic.com/~lynn/2024c.html#8 AI-controlled F-16

Boyd told a story about being vocal that the electronics across the trail wouldn't work ... he then is put in command of "spook base" (about the same time I'm at Boeing).
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

A Boyd biography has "spook base" as a $2.5B windfall for IBM (ten times Renton).

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html

Did Stock Buybacks Knock the Bolts Out of Boeing?
https://lesleopold.substack.com/p/did-stock-buybacks-knock-the-bolts
Since 2013, the Boeing Corporation initiated seven annual stock buybacks. Much of Boeing's stock is owned by large investment firms which demand the company buy back its shares. When Boeing makes repurchases, the price of its stock is jacked up, which is a quick and easy way to move money into the investment firms' purse. Boeing's management also enjoys the boost in price, since nearly all of their executive compensation comes from stock incentives. When the stock goes up via repurchases, they get richer, even though Boeing isn't making any more money.

... snip ...

In 2016, one of "The Boeing Century" articles was about how the merger with MD had nearly taken down Boeing and may yet still (an infusion of military-industrial-complex culture into a commercial operation)
https://issuu.com/pnwmarketplace/docs/i20160708144953115

The Coming Boeing Bailout?
https://mattstoller.substack.com/p/the-coming-boeing-bailout
Unlike Boeing, McDonnell Douglas was run by financiers rather than engineers. And though Boeing was the buyer, McDonnell Douglas executives somehow took power in what analysts started calling a "reverse takeover." The joke in Seattle was, "McDonnell Douglas bought Boeing with Boeing's money."

... snip ...

Crash Course
https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution
Sorscher had spent the early aughts campaigning to preserve the company's estimable engineering legacy. He had mountains of evidence to support his position, mostly acquired via Boeing's 1997 acquisition of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft plant in Long Beach and a CEO who liked to use what he called the "Hollywood model" for dealing with engineers: Hire them for a few months when project deadlines are nigh, fire them when you need to make numbers. In 2000, Boeing's engineers staged a 40-day strike over the McDonnell deal's fallout; while they won major material concessions from management, they lost the culture war. They also inherited a notoriously dysfunctional product line from the corner-cutting market gurus at McDonnell.

... snip ...

Boeing's travails show what's wrong with modern capitalism. Deregulation means a company once run by engineers is now in the thrall of financiers and its stock remains high even as its planes fall from the sky
https://www.theguardian.com/commentisfree/2019/sep/11/boeing-capitalism-deregulation

stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buybacks
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

Recent posts mentioning Boeing CFO, Boeing Computer Services, Renton
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying

some posts mentioning M/D financiers taking over Boeing
https://www.garlic.com/~lynn/2024.html#56 Did Stock Buybacks Knock the Bolts Out of Boeing?
https://www.garlic.com/~lynn/2023g.html#104 More IBM Downfall
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021k.html#69 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#40 Boeing Built an Unsafe Plane, and Blamed the Pilots When It Crashed
https://www.garlic.com/~lynn/2021f.html#78 The Long-Forgotten Flight That Sent Boeing Off Course
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021e.html#87 Congress demands records from Boeing to investigate lapses in production quality
https://www.garlic.com/~lynn/2021b.html#70 Boeing CEO Said Board Moved Quickly on MAX Safety; New Details Suggest Otherwise
https://www.garlic.com/~lynn/2021b.html#40 IBM & Boeing run by Financiers
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2019e.html#153 At Boeing, C.E.O.'s Stumbles Deepen a Crisis
https://www.garlic.com/~lynn/2019e.html#151 OT: Boeing to temporarily halt manufacturing of 737 MAX
https://www.garlic.com/~lynn/2019e.html#39 Crash Course
https://www.garlic.com/~lynn/2019e.html#33 Boeing's travails show what's wrong with modern capitalism
https://www.garlic.com/~lynn/2019d.html#39 The Roots of Boeing's 737 Max Crisis: A Regulator Relaxes Its Oversight
https://www.garlic.com/~lynn/2019d.html#20 The Coming Boeing Bailout?

--
virtualization experience starting Jan1968, online at home since Mar1970

AI-controlled F-16

From: Lynn Wheeler <lynn@garlic.com>
Subject: AI-controlled F-16
Date: 21 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#8 AI-controlled F-16
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing

The USAF Pairs Piloted Jets With AI Drones. Has AI spawned the ultimate "loyal wingman"--or just the next smart weapon?
https://spectrum.ieee.org/military-drones-us-air-force

2021 post/article mentioning loyal wingman/Valkyrie
https://www.garlic.com/~lynn/2021j.html#67 A Mini F-35?: Don't Go Crazy Over the Air Force's Stealth XQ-58A Valkyrie
A Mini F-35?: Don't Go Crazy Over the Air Force's Stealth XQ-58A Valkyrie
https://nationalinterest.org/blog/buzz/mini-f-35-dont-go-crazy-over-air-forces-stealth-xq-58a-valkyrie-46527
While the Air Force refused to disclose specifics of the XQ-58A, the drone is billed as having long range and a "high subsonic" speed. It is designed to be "runway independent," which suggests it will be flown from rough airstrips and forward bases. Still more clues can be found in a $40.8 million Air Force contract awarded to Kratos in 2016 under the Low-Cost Attritable Strike Unmanned Aerial System Demonstration program. That contract called for a drone with a top speed of Mach 0.9 (691 miles per hour), a 1,500-mile combat radius carrying a 500-pound payload, the capability to carry two GBU-39 small diameter bombs, and costing $2 million apiece when in mass production (an F-35 costs around $100 million).

... snip ...

... at one point, the F-35 price was so unreasonable they started quoting the plane w/o the engine and a separate price for the engine.

I was introduced to John Boyd in the early 80s and would sponsor his briefings. One of Boyd's stories was being asked to review the USAF's newest air-to-air missile before Vietnam. They showed him a film where the missile hit flares on a drone every time. He asked them to rewind the film and then, just before the missile hits, had them stop the film and asked them what kind of guidance it used. They eventually say heat-seeking; he then asks them what kind of heat-seeking and gets them to eventually say "pin-point". He then asks them where the hottest part of a jet plane is. They answer the engine ... he says wrong, it is the plume some 30yds behind the plane ... aka the missile will be lucky to hit 10% of the time (they gather up all their material and leave). Roll forward to Vietnam and Boyd is proved correct. At some point the USAF commanding general in Vietnam has all the fighters grounded until the USAF missiles are replaced with Navy Sidewinders (that have better than twice the hit rate). The general lasts 3months before he is called on the carpet back in the Pentagon for violating a cardinal (USAF) Pentagon rule, cutting the (USAF) budget (by not using USAF missiles) and, what was much worse, increasing the Navy budget.

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Multiprocessor

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Multiprocessor
Date: 21 Apr, 2024
Blog: Facebook
Charlie had invented compare&swap when doing CP67 multiprocessor fine-grain locking support at the science center. When we tried to get the 370 architecture owners to include compare&swap for 370, they said that the POK favorite-son operating system owners (MVT, then SVS&MVS) said the (360) "test&set" instruction was more than sufficient; if compare&swap was to be justified, we had to come up with justifications that weren't multiprocessor specific; thus were born the examples for application multithreading/multiprogramming use (like DBMS).
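
A minimal sketch of that kind of non-multiprocessor-specific justification, written with C11 atomics rather than 370 assembler (the names here are mine and purely illustrative): several application threads update a shared total without holding a lock, retrying when another thread got there first.

#include <stdatomic.h>

/* shared value updated by many threads -- the sort of application
   multithreading/multiprogramming use (e.g. inside a DBMS) offered to
   justify compare&swap beyond multiprocessor kernel locking */
static _Atomic long shared_total = 0;

void add_to_total(long delta)
{
    long old = atomic_load(&shared_total);
    /* compare&swap loop: if another thread changed shared_total between
       the load and the swap, 'old' is refreshed and we try again */
    while (!atomic_compare_exchange_weak(&shared_total, &old, old + delta))
        ;   /* retry */
}

/* test&set, by contrast, only yields a simple busy-wait lock, so even
   single-processor multithreaded code ends up serializing around it */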

SMP, multiprocessor, tightly-coupled, and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
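
As an illustration, here is a minimal C11 sketch (my own, not IBM code and not the Principles of Operation examples) of the flavor of the non-multiprocessor-specific justification: application threads atomically updating shared state, the sort of thing a DBMS might do without holding a lock, and which also works for preempted threads on a single processor:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic long shared_counter = 0;

/* add to a counter shared by many threads; the compare-and-swap only
   succeeds if nobody updated the counter between the load and the swap */
void add_to_counter(long delta)
{
    long old = atomic_load(&shared_counter);
    while (!atomic_compare_exchange_weak(&shared_counter, &old, old + delta))
        ;   /* on failure "old" is refreshed with the current value; retry */
}

int main(void)
{
    add_to_counter(5);
    printf("%ld\n", atomic_load(&shared_counter));
    return 0;
}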

A decade ago, I was asked to track down the decision to add virtual memory to all 370s; basically MVT storage management was so bad that regions had to be specified four times larger than used, so a 1mbyte 370/165 typically ran only four concurrent regions ... insufficient to keep the system busy and justify its cost. Going to a 16mbyte virtual address space ("SVS", similar to running MVT in a CP67 16mbyte virtual machine) could increase the number of concurrently running regions by a factor of four, with little or no paging. The 370 virtual memory decision also resulted in doing VM370, and in the morph of CP67->VM370, they simplified and/or dropped lots of features (including multiprocessing support).

archived posts with pieces of email exchange
https://www.garlic.com/~lynn/2011d.html#73
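
The arithmetic behind the factor of four (a rough sketch with my own illustrative numbers, not from any IBM document):

#include <stdio.h>

int main(void)
{
    double real_kb      = 1024.0;  /* 370/165 real storage, ignoring fixed kernel */
    double specified_kb = 256.0;   /* typical region size as specified under MVT  */
    double touched_kb   = specified_kb / 4.0;   /* "four times larger than used"  */

    /* MVT: regions occupy real storage at their specified size */
    printf("MVT concurrent regions: ~%.0f\n", real_kb / specified_kb);
    /* SVS: regions defined at the specified size in a 16mbyte virtual
       address space, but only the touched pages occupy real storage */
    printf("SVS concurrent regions: ~%.0f\n", real_kb / touched_kb);
    return 0;
}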

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters (the online sales&marketing support US HONE systems were a long-time customer from CP67 days, eventually evolving into world-wide VM370-based HONE). As internal datacenters were migrating to VM370, in 1974 I started moving a lot of the missing CP67 features to a release2-based VM370 production "CSC/VM" ... which included kernel re-organization for multiprocessing ... but not the actual multiprocessor support.

The US HONE datacenters were consolidated in silicon valley with the largest loosely-coupled shared DASD configuration, including load-balancing and fall-over support. Then I added multiprocessor support to a Release3-based VM370 "CSC/VM", initially for US HONE so they could add a second processor to each system (eight tightly-coupled systems in a loosely-coupled configuration). I did some tricks with highly optimized multiprocessor pathlengths coupled with some processor cache affinity tricks (improving cache-hit ratio and processor throughput, offsetting the multiprocessor pathlengths) showing twice the throughput of a single processor (this was at the time when MVS documentation was giving MVS multiprocessor throughput as 1.2-1.5 times the throughput of a single processor).

CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone

trivia: when facebook 1st moves into silicon valley, it is into a new bldg built next door to the former US HONE datacenter.

other trivia: around 2010, I made some joke about "from the annals of releasing no software before its time" when z/VM finally released similar loosely-coupled support.

more trivia: after "future system" imploded (was going to replace all 370s and lack of new 370s during the period is credited with giving 370 clone makers their market foothold)
http://www.jfsowa.com/computer/memo125.htm
I got roped into helping with a 16-processor tightly-coupled, multiprocessor 370 ... and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system (MVS) had effective 16-processor support (with 2-processor getting only 1.2-1.5 times the throughput of a single processor, and if not careful, multiprocessor overhead growing non-linearly with the increase in processors). The head of POK then directs that some of us never visit POK again and that the 3033 processor engineers keep concentrated on 3033 (... and POK doesn't ship a 16-processor system until after the turn of the century).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Multiprocessor

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Multiprocessor
Date: 22 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#11 370 Multiprocessor

3033 started out as 168 logic remapped to 20% faster chips (with somewhat more circuits/chip) ... the 303x channel director was a 158 engine with just the integrated channel microcode (for six channels) and w/o the 370 microcode ... to get the full 16 channels would require three channel director boxes.

A 3031 was two 158 engines... one with only the 370 microcode and a 2nd with just the integrated channel microcode.

A 3032 was 168 using the channel director box for external channels.

Trivia: the (original) 168 external channels were actually faster than the 303x channel director box (i.e. 158 engine with just the integrated channel microcode)

final(?) trivia: compare-and-swap was chosen because "CAS" were Charlie's initials

360 had 2301&2303 "drums" ... 2305-1 & 2305-2 were fixed head disks. 2301 was similar to 2303 ... same capacity, but read/write with four heads in parallel ... 1/4 the number of tracks, each track 4 times larger, 4 times the transfer rate

2305-1: 5.4mbytes, avg rotational delay 2.5msecs, 3mbyte/sec transfer
2305-2: 11.2mbytes, avg rotational delay 5msecs, 1.5mbyte/sec transfer (most installed were 2305-2)

2305-1 had the same number of heads as 2305-2, but the heads were paired, offset 180 degrees, reading/writing simultaneously, transferring on a 2-byte channel. The start of a record only had to rotate an average of 1/4 revolution before coming under one of the (offset) heads of a pair.
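
A back-of-envelope check on those numbers (my arithmetic, assuming roughly a 10msec revolution implied by the 2305-2's 5msec average delay):

#include <stdio.h>

int main(void)
{
    double rev_ms           = 10.0;  /* one full rotation                    */
    double single_head_rate = 1.5;   /* mbyte/sec through one head (2305-2)  */

    printf("2305-2: avg delay %.1f msec, %.1f mbyte/sec\n",
           rev_ms / 2.0, single_head_rate);
    /* 2305-1: heads paired 180 degrees apart, transferring simultaneously */
    printf("2305-1: avg delay %.1f msec, %.1f mbyte/sec\n",
           rev_ms / 4.0, single_head_rate * 2.0);
    return 0;
}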

URL was still there in 2023 ... but is now gone ("404") ... easiest to just go to the wayback machine
https://web.archive.org/web/20230821125023/https://www.ibm.com/ibm/history/exhibits/storage/storage_2305.html

By 1980, there was no follow-on product. For internal datacenters, IBM then contracted with a vendor for what they called the "1655", electronic disks that would emulate a 2305 ... but had no rotational delay. One of the issues was that while IBM had fixed-block disks, the company favorite son batch operating system never supported anything other than CKD DASD ... so for their use it had to simulate an existing CKD 2305 running over 1.5mbyte/sec I/O channels. However, for other IBM systems that supported FBA ... 1655s could be configured as fixed-block disks running on 3mbyte/sec I/O channels ... similar to SSD ... but with standard electronic memory that wasn't persistent w/o power.

posts mentioning DASD, CKD, FBA, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

past posts mentioning 2301, 2305, and 1655
https://www.garlic.com/~lynn/2022e.html#41 Wall Street's Plot to Seize the White House
https://www.garlic.com/~lynn/2012c.html#1 Spontaneous conduction: The music man with no written plan
https://www.garlic.com/~lynn/2011c.html#48 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2010q.html#67 ibm 2321 (data cell)
https://www.garlic.com/~lynn/2008s.html#39 The Internet's 100 Oldest Dot-Com Domains
https://www.garlic.com/~lynn/2008n.html#93 How did http get a port number as low as 80?
https://www.garlic.com/~lynn/2004c.html#5 PSW Sampling
https://www.garlic.com/~lynn/2003p.html#46 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2003n.html#52 Call-gate-like mechanism
https://www.garlic.com/~lynn/2003n.html#50 Call-gate-like mechanism
https://www.garlic.com/~lynn/2003m.html#35 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003j.html#58 atomic memory-operation question
https://www.garlic.com/~lynn/2003j.html#6 A Dark Day
https://www.garlic.com/~lynn/2003j.html#5 A Dark Day
https://www.garlic.com/~lynn/2003h.html#14 IBM system 370
https://www.garlic.com/~lynn/2002n.html#74 Everything you wanted to know about z900 from IBM

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing and the Dark Age of American Manufacturing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing and the Dark Age of American Manufacturing
Date: 22 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing

Boeing's problems were as bad as you thought. Experts and whistleblowers testified before Congress today. The upshot? "It was all about money."
https://www.vox.com/money/2024/4/17/24133324/boeing-senate-hearings-whistleblower-sam-salehpour-congress
Boeing went under the magnifying glass at not one, but two Senate hearings today examining allegations of deep-seated safety issues plaguing the once-revered plane manufacturer. Witnesses, including two whistleblowers, painted a disturbing picture of a company that cut corners, ignored problems, and threatened employees who spoke up.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Bemer, ASCII, Brooks and Mythical Man Month

From: Lynn Wheeler <lynn@garlic.com>
Subject: Bemer, ASCII, Brooks and Mythical Man Month
Date: 24 Apr, 2024
Blog: Facebook
360s were supposed to be ASCII machines but the ASCII unit record gear wasn't ready ... so they were (supposedly) going to temporarily use the (old) BCD unit record gear with EBCDIC ... "the biggest computer goof ever"
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
Unfortunately, the software for the 360 was constructed by thousands of programmers, with great and unexpected difficulties, and with considerable lack of controls. As a result, the nearly $300 million worth of software (at first delivery!) was filled with coding that depended upon the EBCDIC representation to work, and would not work with any other! Dr. Frederick Brooks, one of the chief designers of the IBM 360, informed me that IBM indeed made an estimate of how much it would cost to provide a reworked set of software to run under ASCII. The figure was $5 million, actually negligible compared to the base cost. However, IBM (present-day note: Read "Learson") made the decision not to take that action, and from this time the worldwide position of IBM hardened to "any code as long as it is ours".

... snip ...

https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM

above attributes it to Learson ... however, it was also Learson that was trying to block the bureaucrats, careerists (and MBAs) from destroying the Watson Legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
So by the early 90s, it was looking like it was nearly over; in 1992 IBM has one of the largest losses in the history of US corporations and was being re-orged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex who (mostly) reverses the breakup (although it wasn't long before the disk division is gone).

posts mentioning ASCII & Mythical Man Month
https://www.garlic.com/~lynn/2022h.html#65 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022h.html#63 Computer History, OS/360, Fred Brooks, MMM
https://www.garlic.com/~lynn/2014g.html#99 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT

--
virtualization experience starting Jan1968, online at home since Mar1970

360&370 Unix (and other history)

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360&370 Unix (and other history)
Date: 24 Apr, 2024
Blog: Facebook
Trivia: Story was that both Amdahl & IBM field support claimed they wouldn't support customer machines w/o industrial strength EREP ... adding it to UNIX would have been several times the effort of just doing a direct UNIX port to 370. SSUP was a stripped down TSS/360 with just hardware and device support ... and EREP. Amdahl UTS and other IBM UNIX 370 efforts ran in VM/370 (leveraging its EREP).

possibly more than you asked for

Took two credit hr intro to fortran/computers and end of semester was hired to rewrite 1401 MPIO in assembler for 360/30. Univ replacing 709/1401 with a 360/67 for tss/360 ... temporarily the 1401 was replaced with 360/30 (pending availability of 360/67, 360/30 for starting to get familiar with 360, 360/30 also had microcode 1401 emulation). The univ shutdown datacenter on weekends and I would have it dedicated, although 48hrs w/o sleep made Monday classes hard. They gave me a bunch of hardware and software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc. and within a few weeks had a 2000 card assembler program.

Then within a year of intro class, the 360/67 comes in and I'm hired fulltime responsible for OS/360 (tss/360 never really came to production, so ran as 360/65, I continue to have my 48hr dedicated datacenter on weekends). Student fortran had run under a second on 709, initially on os/360 ran over a minute. I install HASP and it cuts the time in half. I then start redoing OS/360 STAGE2 SYSGEN, careful placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Never got better than 709 until I install Univ. of Waterloo WATFOR.

CSC had come out to install CP67/CMS (precursor to vm370, 3rd installation after CSC itself and MIT Lincoln Labs) and I mostly played with it in my weekend dedicated time. Early on the IBM TSS/360 SE was around for a time and we created synthetic benchmark of fortran edit, compile, & execute. Unmodified CP67/CMS ran 35 simulated users with better response and throughput than TSS/360 did with four simulated users.

Initially for CP67, I mostly worked on rewriting pathlengths for running os/360 in virtual machine. OS/360 test ran 322 secs on "bare machine", initially 856secs in virtual machine (CP67 CPU 534secs), after a few months, got CP67 CPU down to 113secs (from 534secs). I then redid I/O for paging (chained requests for optimized transfer per revolution) and for all disk optimized ordered arm seek; new optimized page replacement algorithm, and dynamic adaptive resource management and scheduling.
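
The arithmetic on those benchmark numbers (just restating the figures above):

#include <stdio.h>

int main(void)
{
    double bare        = 322.0;  /* OS/360 test on "bare machine" (secs)   */
    double initial     = 856.0;  /* same test in a CP67 virtual machine    */
    double cp67_before = 534.0;  /* CP67 CPU before the pathlength rewrite */
    double cp67_after  = 113.0;  /* CP67 CPU after the rewrite             */

    printf("initial CP67 overhead: %.0f secs (%.0f%% of the bare run)\n",
           initial - bare, (initial - bare) / bare * 100.0);
    printf("after rewrite: ~%.0f secs total, overhead cut %.0f%%\n",
           bare + cp67_after, (1.0 - cp67_after / cp67_before) * 100.0);
    return 0;
}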

CP67 came with 2741&1052 terminal support with automagic terminal type identification (using the SAD CCW to switch the port's terminal type scanner). The univ. had some number of TTY/ASCII terminals and I integrated ASCII terminal support with the automagic terminal type identification (trivia: the ASCII terminal type hardware support had come in a "HEATHKIT" box for install in the IBM telecommunication controller). I then wanted a single dialup telephone number ("hunt group") for all terminals. Didn't quite work: while the terminal type scanner could be switched dynamically, IBM had taken a short cut and hardwired the port line speed.

This kicks off a univ project to do a clone controller: build a channel interface board for an Interdata/3 programmed to simulate the IBM telecommunication controller (with the addition that it could do dynamic line speed). Later it was upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin-Elmer) sold it as a clone controller, and four of us are written up for (some part of) the clone controller business. Around the turn of the century I ran into a descendant at a large datacenter that was handling the majority of point-of-sale dialup credit card machines east of the Mississippi.

some more CSC & CP67/CMS history
http://www.leeandmelindavarian.com/Melinda#VMHist
http://www.leeandmelindavarian.com/Melinda/neuvm.pdf
http://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

plug compatible 360 controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

Then before I graduate I'm hired fulltime into small group in the Boeing CFO office to help with the formation of Boeing Computer Services ... I think Renton datacenter possibly largest in the world with 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between Renton director and CFO, who only had a 360/30 up at Boeing field for payroll, although they enlarge the room for a 360/67 for me to play with when I'm not doing other stuff. 747#3 was flying skies of Seattle getting FAA flt certification. There was also disaster plan to replicate Renton up at the new 747 plant in Everett (Mt. Rainier heats up and the resulting mud slide takes out Renton). When I graduate, I join IBM science center instead of staying with Boeing CFO.

Charlie had invented the compare&swap instruction (mnemonic chosen because "CAS" were his initials) when he was doing CP67 fine-grain multiprocessor locking at the science center. When we tried to get the 370 architecture owners to include compare&swap for 370, they said that the POK favorite son operating system owners (MVT, then SVS&MVS) said the (360) "test&set" instruction was more than sufficient; if compare&swap was to be justified, we had to come up with justifications that weren't multiprocessor specific; thus were born the examples for application multithreading/multiprogramming use (like DBMS).

A decade ago, I was asked to track down the decision to add virtual memory to all 370s; basically MVT storage management was so bad that regions had to be specified four times larger than used, so a 1mbyte 370/165 typically ran only four concurrent regions ... insufficient to keep the system busy and justify its cost. Going to a 16mbyte virtual address space ("SVS", similar to running MVT in a CP67 16mbyte virtual machine) could increase the number of concurrently running regions by a factor of four, with little or no paging. The 370 virtual memory decision also resulted in doing VM370, and in the morph of CP67->VM370, they simplified and/or dropped lots of features (including multiprocessing support).

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters (the online sales&marketing support US HONE systems were a long-time customer from CP67 days, eventually evolving into world-wide VM370-based HONE). As internal datacenters were migrating to VM370, in 1974 I started moving a lot of the missing CP67 features to a release2-based VM370 production "CSC/VM" ... which included kernel re-organization for multiprocessing ... but not the actual multiprocessor support. The US HONE datacenters were consolidated in silicon valley with the largest loosely-coupled shared DASD configuration, including load-balancing and fall-over support.

Then I added multiprocessor support to Release3-based VM370 "CSC/VM", initially for US HONE so they could add a second processor for eight tightly-coupled systems in a loosely-coupled, shared-DASD configuration. I did some tricks with highly optimized multiprocessor pathlengths coupled with some processor cache affinity tricks (improving cache-hit and processor throughput offsetting multiprocessor pathlengths) showing twice the throughput of a single processor (this was at the time when MVS documentation was giving MVS multiprocessor throughput as 1.2-1.5 times the throughput of a single processor).

trivia: when facebook 1st moves into silicon valley, it is into a new bldg built next door to the former US HONE datacenter.

other trivia: around 2010, I made some joke about "from the annals of releasing no software before its time" when z/VM finally released similar loosely-coupled support.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

I had joined IBM Science Center not long before "Future System" started (early 70s; completely different and going to completely replace 370; the lack of new 370s during the period is credited with giving the 370 clone makers their market foothold). I continued to work on 360&370 all during the Future System period ... even periodically ridiculing them (like speculating they didn't really know what they were doing, not exactly a career enhancing activity). more background:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

when FS finally implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel.
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive"

... snip ...

In the wake of the FS implosion, I was also roped into an effort to do a 16-processor, tightly-coupled, multiprocessor 370 and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips); everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system (MVS) has effective 16-processor support (goes along with documentation that 2-processor MVS only had 1.2-1.5 times the throughput of a single processor). Then some of us were invited to never visit POK again (and the 3033 processor engineers directed to concentrate on 3033 and no more distractions). trivia: POK doesn't ship a 16-processor machine until after the turn of the century.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

CTSS, Multicis, CP67/CMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: CTSS, Multicis, CP67/CMS
Date: 24 Apr, 2024
Blog: Facebook
Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and MULTICS
https://en.wikipedia.org/wiki/Multics
others went to the 4th flr and IBM Cambridge Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

trivia: I was an undergraduate and the univ hired me fulltime responsible for OS/360 (360/67 originally for tss/360, but was being run as 360/65). Then CSC came out to install CP67/CMS (3rd installation after CSC itself, and MIT Lincoln Labs). I mostly got to play with it during my 48hr weekend dedicated time (univ. shutdown the datacenter on weekends). CSC had 1052&2741 support, but the univ. had some number of TTY/ASCII terminals, so I added TTY/ASCII support ... which CSC picked up and distributed with standard CP67 (as well as lots of my other stuff). I had done a hack using one-byte values for TTY line input/output lengths. Tale of MIT Urban Lab running CP/67 (in the tech sq bldg across the quad from 545): somebody down at Harvard got an ascii device with 1200(?) char line length ... they modified the field for max lengths ... but didn't adjust my one-byte hack ... crashing the system 27 times in a single day.
https://www.multicians.org/thvv/360-67.html
But on that day, a user at Harvard School of Public Health had connected a plotter to a TTY line and was sending graphics to it, and every time he did, the whole system crashed. (It is a tribute to the CP/CMS recovery system that we could get 27 crashes in in a single day; recovery was fast and automatic, on the order of 4-5 minutes. Multics was also crashing quite often at that time, but each crash took an hour to recover because we salvaged the entire file system. This unfavorable comparison was one reason that the Multics team began development of the New Storage System.)

... snip ...
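
An illustrative sketch (not the actual CP67 code) of why a one-byte length field breaks when somebody raises the device line length:

#include <stdio.h>

int main(void)
{
    unsigned char max_len = 0;   /* the one-byte max-length value          */
    int requested = 1200;        /* longer line length for the new device  */

    max_len = (unsigned char)requested;   /* 1200 mod 256 = 176, silent wrap */
    printf("requested %d, one-byte field holds %u\n", requested, max_len);
    /* any buffer sizing or length check that trusted the one-byte value is
       now inconsistent with the real transfer length -> system crash       */
    return 0;
}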

I had done an automated benchmarking system where I could specify different configurations, types of workloads, number of users, etc ... and then reboot between benchmarks. When I 1st started the migration from CP67 to VM370, the 1st thing I did was the automated benchmarking ... but found that VM370 would crash several times before completing the standard set of benchmarks. As a result, the next thing I had to migrate to VM370 was the CP67 kernel serialization mechanism, so VM370 could finish a standard set of benchmarks.

There was some friendly rivalry between 4th and 5th flrs ... one area was federal gov. ... Multics had installation at USAFDS in the Pentagon
https://www.multicians.org/site-afdsc.html

In 2nd half of 70s, had transferred out to IBM Research in San Jose and in spring 1979 got a call that a couple people from USAFDS wanted to come out to talk about getting 20 VM/4341s ... however by the time they got around to coming out the following fall, it had increased to 210 VM/4341s.

a reference to "borrowing" mainframe EREP (by running under VM/370) instead of upgrading UNIX with it
https://www.garlic.com/~lynn/2024c.html#4 Bemer, ASCII, Brooks and Mythical Man Month
https://www.garlic.com/~lynn/2024c.html#5 360&370 Unix (and other history)

above also refs adding CP67 multiprocessing to VM370 ... but just before I did it, somehow AT&T Longlines was able to get a copy of my CSC/VM with full source ... and over the following years, migrated it to the newest processors and propagated it to multiple AT&T datacenters. Roll forward to the new IBM 3081, which was originally intended to be multiprocessor *only*, and the IBM AT&T corporate marketing rep tracks me down to help AT&T with this archaic CSC/VM system (afraid that AT&T would migrate everything to the latest Amdahl machines ... which had a faster single processor with almost the throughput of the two-processor 3081).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Millicode

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Millicode
Date: 24 Apr, 2024
Blog: Facebook
IBM Millicode
https://www.researchgate.net/publication/224103049_Millicode_in_an_IBM_zSeries_processor
https://public.dhe.ibm.com/eserver/zseries/zos/racf/pdf/ny_metro_naspa_2012_10_what_and_why_of_system_z_millicode.pdf

IBM high-end machines are horizontal microcode which is really difficult and time-consuming to program. After Future System implosion
http://www.jfsowa.com/computer/memo125.htm

Endicott cons me into helping with ECPS microcode assist for the 138/148 (low&mid range 370) that were vertical microcode ... basically microprocessor machine language. Then in the early 80s, I got permission to give ECPS presentations at user group meetings, including the monthly BAYBUNCH hosted by Stanford SLAC. Afterwards the Amdahl people would grill me for more information. They said that they had developed "MACROCODE" (370-like instructions running in microcode mode for their high-end horizontal microcode machine) during IBM's 3033 period, to quickly respond to the trivial new (horizontal) microcode functions IBM was shipping that were required for MVS to run. At the time they were in the process of implementing "HYPERVISOR" (subset of virtual machine functions running w/o VM370). IBM wasn't able to respond with LPAR&PR/SM until nearly the end of the decade, for 3090.

Similar, but different: late last century, the i86 vendors went to a hardware layer that translated i86 into RISC micro-ops for actual execution ... largely negating the throughput advantage of RISC processors (the MIPS/BIPS figures below are from an industry standard benchmark program that counts the number of iterations compared to a 1MIP reference platform, not actual instruction counts).


1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     IBM z900 mainframe processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC 440)

2003 max. configured IBM mainframe z990, 32 processor aggregate 9BIPS
    (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

2010 max configure IBM mainframe z196, 80 processor aggregate 50BIPS
     (625MIPS/proc)
2010 E5-2600 XEON server blade, 16 processor aggregate 500BIPS
     (31BIPS/proc)
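
Recomputing the per-processor figures quoted above from the aggregate numbers:

#include <stdio.h>

int main(void)
{
    printf("z990: %.0f MIPS/proc\n", 9000.0 / 32);        /* 2003, 9BIPS/32  -> ~281  */
    printf("z196: %.0f MIPS/proc\n", 50000.0 / 80);       /* 2010, 50BIPS/80 -> 625   */
    printf("E5-2600 blade: %.2f BIPS/proc\n", 500.0 / 16);/* 2010, 500BIPS/16 -> 31.25 */
    return 0;
}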

360/370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

CP40/CMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP40/CMS
Date: 25 Apr, 2024
Blog: Facebook
IBM CP-40
https://en.m.wikipedia.org/wiki/IBM_CP-40

Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and MULTICS
https://en.wikipedia.org/wiki/Multics
others went to the 4th flr and IBM Cambridge Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

paper about CP40/CMS ... some amount taken from CTSS
https://www.garlic.com/~lynn/cp40seas1982.txt
http://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

science center wanted 360/50 to modify with virtual memory, but all the spare 360/50s were going to FAA ATC project ... and so they had to settle for a 360/40. When 360/67 becomes available standard with virtual memory, CP40 morphs into CP67

some more details (univ. I was at, becomes 3rd installation, after CSC itself, and MIT Lincoln Labs)
https://www.garlic.com/~lynn/2024c.html#16 CTSS, Multicis, CP67/CMS
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#39 Tonight's tradeoff
https://www.garlic.com/~lynn/2024.html#49 Card Sequence Numbers
https://www.garlic.com/~lynn/2024.html#40 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#31 MIT Area Computing
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally

last product we did at IBM was HA/CMP ... it originally was HA/6000, for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000; I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had both VAXCluster and Unix in the same source base. I did an enhanced distributed lock manager with VAXCluster API semantics to simplify their HA/CMP support. Disclaimer: When I transferred to IBM Research, I got roped into doing some work with Jim Gray and Vera Watson on the original SQL/relational implementation ("System/R") and then helping with tech transfer to Endicott for SQL/DS ... "under the radar", while the corporation was preoccupied with the next great DBMS, "EAGLE". Then when "EAGLE" implodes, there was a request for how fast System/R could be ported to MVS ... which eventually ships as DB2, originally for decision-support only.

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

Part of HA/CMP was studying how things fail ... and at one point I was brought in to the latest ATC modernization effort. Turns out it involved fault-tolerant triple-redundant hardware with guidelines that since all failures would be masked ... the software didn't have to worry about such things. However, it turns out that there were some "business/operational rules" that could have failures ... and the software effort had to be reset to handle non-hardware related failures. We then got into the habit of dropping in on a staff person in the office of the IBM FSD President.

First part of Jan1992, had an Oracle meeting where IBM AWD/Hester told the Oracle CEO that we would have 16-processor clusters by mid92 and 128-processor clusters by ye92 ... and during Jan1992 was keeping FSD apprised of HA/CMP status and work with national labs. Apparently during Jan, FSD told the Kingston supercomputer project that FSD was going with HA/CMP for gov. accounts. Then end of Jan, cluster scale-up was transferred to Kingston for announce as IBM supercomputer (for technical/scientific *only*) and we were told that we couldn't work on anything with more than four processors ... we leave IBM a few months later.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

more trivia: never dealt with Fox while in IBM; FAA ATC, The Brawl in IBM 1964
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514
Two mid air collisions 1956 and 1960 make this FAA procurement special. The computer selected will be in the critical loop of making sure that there are no more mid-air collisions. Many in IBM want to not bid. A marketing manager with but 7 years in IBM and less than one year as a manager is the proposal manager. IBM is in midstep in coming up with the new line of computers - the 360. Chaos sucks into the fray many executives- especially the next chairman, and also the IBM president. A fire house in Poughkeepsie N Y is home to the technical and marketing team for 60 very cold and long days. Finance and legal get into the fray after that.

... snip ...

Executive Qualities
https://www.amazon.com/Executive-Qualities-Joseph-M-Fox/dp/1453788794
After 20 years in IBM, 7 as a divisional Vice President, Joe Fox had his standard management presentation -to IBM and CIA groups - published in 1976 -entitled EXECUTIVE QUALITIES. It had 9 printings and was translated into Spanish -and has been offered continuously for sale as a used book on Amazon.com. It is now reprinted -verbatim- and available from Createspace, Inc - for $15 per copy. The book presents a total of 22 traits and qualities and their role in real life situations- and their resolution- encountered during Mr. Fox's 20 years with IBM and with major computer customers, both government and commercial. The presentation and the book followed a focus and use of quotations to Identify and characterize the role of the traits and qualities. Over 400 quotations enliven the text - and synthesize many complex ideas.

... snip ...

... but after leaving IBM, had a project with Fox and his company that also had some other former FSD FAA people.

other trivia: doing HA/CMP we started out reporting to executive, who later went over to head up Somerset ... single RISC chip design effort for AIM (apple, ibm, motorola), some amount of motorola 88k RISC features incorporated into power/pc.

trivia: CPS (run under OS/360 ... similar to APL\360, CPS included microcode assist on the 360/50) was handled by Boston Programming Center which was on 3rd flr, below Cambridge Scientific Center on 4th flr (and Multics on 5th flr). With the decision to do CP67->VM/370 some of the science center people went to the 3rd flr taking over the Boston Programming Center for the VM/370 development group. When the development group outgrew their half of the 3rd flr (there was a gov. agency that the bldg register listed as law firm in the other half), they moved out to the empty SBC bldg at Burlington mall (off 128, SBC had been spun off to another computer company in a legal matter).

Note: after the Future System implosion and the mad rush to get stuff back into the 370 product pipelines (including kicking off the quick and dirty 3033&3081 efforts in parallel),
http://www.jfsowa.com/computer/memo125.htm

the head of POK also managed to convince corporate to kill the vm370 product, shutdown the development group and transfer all the people to POK for MVS/XA (presumably claiming that otherwise MVS/XA wouldn't be able to ship on time in the 80s). Eventually, Endicott managed to save the VM/370 product mission (for low-end and mid-range), but had to recreate a development group from scratch.

they weren't going to tell the people about the shutdown until the very last minute, to minimize the number that might be able to escape into the boston area ... however the information managed to leak and several managed to escape (including to the infant DEC VMS effort; joke was that the head of POK was a major contributor to VMS). They did a hunt for the source of the leak; fortunately for me, nobody gave up the source.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Millicode

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Millicode
Date: 25 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode

In 1980 I was con'ed into doing channel-extender support for STL (since renamed SVL), which was moving 300 people from the IMS DBMS group to an offsite bldg with service back to the STL datacenter. They had tried "remote 3270", but found the human factors unacceptable. Channel-extender allowed placing channel-attached 3270 controllers at the offsite bldg with no perceptible difference in human factors between offsite and inside STL. In fact, some tweaks with channel-extender increased system throughput by 10-15%, prompting the suggestion that all their systems should use channel-extender: they had spread 3270 controllers across all the channels with DASD, and the "slow" 3270 controller channel busy was interfering with DASD I/O; the channel-extender boxes were much faster and reduced channel busy for the same amount of 3270 transfer.

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

Then some POK engineers playing with some serial stuff blocked the release of the support to customers. Later, in 1988, the IBM branch office asks if I could help LLNL (national lab) get some serial stuff they were playing with standardized. It quickly becomes the "fibre-channel" standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec. Then the POK stuff (after more than a decade) finally gets released with ES/9000 as ESCON (when it is already obsolete), 17mbytes/sec. Then some POK engineers get involved in FCS and define a heavy weight protocol that significantly cuts the native throughput, which eventually ships as FICON (running over FCS). The latest public benchmark I can find is z196 "Peak I/O" getting 2M IOPS with 104 FICON. About the same time, an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs recommend limiting SAPs (system assist processors that actually do I/O) to 70% CPU ... which would be around 1.5M IOPS. Further complicating things are CKD DASD, which haven't been made for decades, needing to be simulated on industry standard fixed-block disks.
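
The arithmetic behind that comparison (just restating the figures above; the 70% SAP cap lands a little below the ~1.5M quoted):

#include <stdio.h>

int main(void)
{
    double z196_peak_iops = 2.0e6;   /* z196 "Peak I/O" with 104 FICON        */
    int    ficon_count    = 104;
    double fcs_iops       = 1.0e6;   /* one FCS on an E5-2600 server blade    */
    double sap_cap        = 0.70;    /* IBM guideline: keep SAPs at <=70% CPU */

    printf("per FICON:   ~%.0f IOPS\n", z196_peak_iops / ficon_count);
    printf("two FCS:     %.1fM IOPS vs 104 FICON at %.1fM\n",
           2.0 * fcs_iops / 1e6, z196_peak_iops / 1e6);
    printf("70%% SAP cap: ~%.1fM IOPS\n", z196_peak_iops * sap_cap / 1e6);
    return 0;
}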

FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

trivia: channel attached 3272/3277 had .086sec hardware response ... this was in the days of studies showing improved productivity with quarter second response, so to get interactive .25sec, system response had to be no more than .164sec (several of my internal enhanced systems were getting .11sec interactive system response). For the 3278, they moved lots of electronics back into the controller, so protocol chatter drove the hardware response to .3-.5sec (somewhat dependent on the amount of data), making quarter second impossible. A complaint to the 3278 product administrator got a response that the 3278 wasn't for interactive computing but "data entry" (aka electronic keypunch). Later, IBM/PC 3277 emulation cards had 4-5 times the upload/download throughput of 3278 cards. Note MVS/TSO users never noticed, since their system response was rarely even 1sec (so any change from 3272/3277 to 3274/3278 wasn't noticed).
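
The quarter-second budget spelled out (same figures as above):

#include <stdio.h>

int main(void)
{
    double target  = 0.25;    /* quarter-second interactive response goal */
    double hw_3277 = 0.086;   /* channel-attached 3272/3277 hardware time */
    double hw_3278 = 0.30;    /* 3274/3278 hardware time, best case       */

    printf("3277: system response must be <= %.3f sec\n", target - hw_3277);
    printf("3278: hardware alone is %.2f sec -- budget already blown\n", hw_3278);
    return 0;
}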

other trivia: When I transfer to San Jose Research, I get to wander around (IBM and non-IBM) datacenters in silicon valley, including disk engineering (bldg14) and disk product test (bldg15) across the street. They were running prescheduled, around the clock, stand-alone mainframe testing. They mentioned that they had recently tried MVS, but it had a 15min mean-time-between-failure (in that environment). I offer to rewrite the I/O supervisor to make it bullet proof and never fail, enabling any amount of on-demand, concurrent testing, greatly improving productivity (downside was they started blaming me for any problems, and I had to spend increasing amounts of time playing disk engineer shooting hardware issues). The engineers were complaining that bean-counting/accountants had forced the 3880 to have an inexpensive, slow microprocessor (compared to the 3830; the 3880 had a special hardware path for 3380 3mbyte/sec transfers, but everything else was much slower, significantly increasing channel busy).

Roll forward to 3090, which had initially configured the number of channels to achieve target throughput, assuming the 3880 was the same as the 3830 but with the addition of 3mbyte/sec transfers. When they found out how bad it really was, they realized they would have to significantly increase the number of channels (to achieve target throughput), which required an additional TCM (the 3090 group semi-facetiously claimed they would bill the 3880 group for the increase in 3090 manufacturing cost). Eventually marketing respun the significant increase in the number of channels as the 3090 being a wonderful I/O machine (rather than a countermeasure to the 3880 channel busy increase).

I wrote (IBM internal) research report about work for disk division and happened to mention the MVS 15min MTBF ... bringing down the wrath of the MVS organization on my head.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Millicode

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Millicode
Date: 25 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode

Shortly after joining IBM ... I got roped into helping on a project for multithreading the 370/195. The 195 had a 64-instruction pipeline and supported out-of-order execution .... but didn't have speculative execution or branch prediction, so conditional branches drained the pipeline ... and most codes ran the 195 at half throughput. Multi-threading is mentioned in this webpage about the end of ACS/360
https://people.computing.clemson.edu/~mark/acs_end.html

aka Amdahl had won the battle to make ACS 360 compatible ... but then (folklore) executives were worried that it would advance the state-of-the-art too fast and IBM would lose control of the market ... and killed the project (Amdahl leaves IBM shortly later).

195 multithreading would simulate two-processor multiprocessing (two instruction streams, two sets of registers, etc) ... two instruction streams, each running the processor at half throughput ... would (possibly) result in keeping the 195 fully busy ... modulo that the MVT 65/MP support was at least as bad as the MVS two-processor support (only 1.2-1.5 times the throughput of a single processor). Then the decision was made to add virtual memory to all 370s (as a countermeasure to the bad/poor MVT storage management) and it was decided to stop all new work on the 370/195 (considered too much effort to add virtual memory to the 195).

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

archived post with pieces of email exchange about decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73

... trivia: the original 3380 had 20 track spacings between each data track; they then cut the spacings in half for double the tracks (& capacity) and then cut the spacing again for triple the tracks (& capacity). The father of 801/RISC wanted me to help him with a "wide disk head" .... disks are formatted with 16 closely spaced data tracks with a servo track between; a "wide" disk head would transmit 16 data tracks in parallel, following servo tracks on each side. The problem was that that is a 50mbyte/sec transfer rate and IBM (mainframe) channels were still 3mbytes/sec. It wasn't until a couple years later that I was involved with "FCS" and could do 100mbyte/sec concurrently in each direction ... but that was getting FCS for RS/6000 (it wasn't until much later for IBM mainframe).
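
The bandwidth arithmetic for the wide head (my assumption here is a per-track rate close to the 3380's 3mbyte/sec):

#include <stdio.h>

int main(void)
{
    double per_track_mbs = 3.0;   /* roughly one 3380 data track          */
    int    tracks        = 16;    /* data tracks transferred in parallel  */

    printf("wide head: ~%.0f mbyte/sec, vs 3 mbyte/sec mainframe channels\n",
           per_track_mbs * tracks);   /* ~50 mbyte/sec, as in the text */
    return 0;
}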

posts mentioning getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

TDM Computer Links

From: Lynn Wheeler <lynn@garlic.com>
Subject: TDM Computer Links
Date: 25 Apr, 2024
Blog: Facebook
I was blamed for online computer conferencing in the late 70s and early 80s on the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s) ... folklore is that when the corporate executive committee was told, 5of6 wanted to fire me. One of the outcomes was officially sanctioned and moderated online forums. Early 80s, I got the HSDT project ... T1 and faster computer links (both terrestrial and satellite/TDMA&broadcast). Mid-80s, HSDT was having some custom hardware built on the other side of the Pacific. On the Friday before leaving for a visit, I got an email announcement from the communication group about a new online forum about computer links, with these definitions:

low-speed: 9.6kbits/sec,
medium speed: 19.2kbits/sec,
high-speed: 56kbits/sec,
very high-speed: 1.5mbits/sec

Monday morning, on the wall of a conference room on the other side of the Pacific, there were these definitions:


low-speed: <20mbits/sec,
medium speed: 100mbits/sec,
high-speed: 200mbits-300mbits/sec,
very high-speed: >600mbits/sec

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

FOILS

From: Lynn Wheeler <lynn@garlic.com>
Subject: FOILS
Date: 25 Apr, 2024
Blog: Facebook
Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and MULTICS
https://en.wikipedia.org/wiki/Multics
others went to the 4th flr and IBM Cambridge Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

CTSS RUNOFF
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
was redone for CP67/CMS as "SCRIPT"

GML was invented in 1969 at the science center ("G", "M", "L" are the initials of the 3 inventors' last names) and GML tag processing was added to SCRIPT ... ref by one of the GML inventors:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

Edson was responsible for the CP67 wide-area network which grows into the corporate network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s) ... also used for the corporate sponsored univ. BITNET
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed internet) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

... and back to "foils", from IBM Jargon:
foil - n. Viewgraph, transparency, viewfoil - a thin sheet or leaf of transparent plastic material used for overhead projection of illustrations (visual aids). Only the term Foil is widely used in IBM. It is the most popular of the three presentation media (slides, foils, and flipcharts) except at Corporate HQ, where even in the 1980s flipcharts are favoured. In Poughkeepsie, social status is gained by owning one of the new, very compact, and very expensive foil projectors that make it easier to hold meetings almost anywhere and at any time. The origins of this word have been obscured by the use of lower case. The original usage was FOIL which, of course, was an acronym. Further research has discovered that the acronym originally stood for Foil Over Incandescent Light. This therefore seems to be IBM's first attempt at a recursive language.
... snip ...

Overhead projector
https://en.wikipedia.org/wiki/Overhead_projector
Transparency (projection)
https://en.wikipedia.org/wiki/Transparency_(projection)


:frontm.
:titlep.
:title.GML for Foils
:date.August 24, 1984
:author.xxx1
:author.xxx2
:author.xxx3
:author.xxx4
:address.
:aline.T.J. Watson Research Center
:aline.P.O. Box 218
:aline.Yorktown Heights, New York
:aline.&rbl.
:aline.San Jose Research Lab
:aline.5600 Cottle Road
:aline.San Jose, California
:eaddress.
:etitlep.
:logo.
:preface.
:p.This manual describes a method of producing foils automatically using DCF Release 3 or SCRIPT3I. The foil package will run with the following GML implementations:
:ul.
:li.ISIL 3.0
:li.GML Starter Set, Release 3
:eul.
:note.This package is an :q.export:eq. version of the foil support available at Yorktown and San Jose Research as part of our floor GML. Yorktown users should contact xxx4 for local documentation. Documentation for San Jose users is available in the document stockroom.
.*
:p.Any editor can be used to create the foils. Preliminary proofing can be done at the terminal with final output to one of the printers supported by the various implementations:
:ul compact.
:li.APS-5
:li.4250
:li.Sherpa
:li.Phoenix
:li.6670
:li.3800
:li.1403
:eul.
:note.:hp2.The FOIL package is distributed and maintained only through the IBMTEXT conference disk. This project is not part of our real job. We will enhance it and fix bona fide bugs as time permits. Please report bugs only via FOIL BUGS on the IBMTEXT disk.:ehp2.
... snip ...

... trivia: 6670 was sort of IBM Copier3 with computer link. San Jose Research then modified 6670 for all-points-addressable (6670APA and later added postscript engine) which becomes Sherpa

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet/earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
virtualization experience starting Jan1968, online at home since Mar1970

CP40/CMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP40/CMS
Date: 26 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS

... little drift ... Learson tried (and failed) to stop the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

20yrs later, it appeared to be nearly the end of IBM ... IBM has one of the largest losses in the history of US corporations and was being reorganized into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board hires the former president of AMEX as CEO, who (somewhat) reverses the breakup.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner

for other drift, a series of "z/VM 50th" postings (50 yrs since VM/370 1972)
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-7-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50-part-8-lynn-wheeler/

--
virtualization experience starting Jan1968, online at home since Mar1970

TDM Computer Links

From: Lynn Wheeler <lynn@garlic.com>
Subject: TDM Computer Links
Date: 26 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#21 TDM Computer Links

communication group ... i.e. SNA communication products division

the communication group mainframe products were cap'ed at 56kbit/sec links ... although they had support for "fat pipes" that could treat multiple parallel links as a single logical link. About the same time as the announcement of the new communication link forum ... they prepared an analysis for the corporate executive committee claiming that customers weren't looking for T1 support until sometime in the 90s. They surveyed "fat pipe" users, showing that use of "fat pipes" for more than six parallel (56kbit) links had dropped to zero. What they didn't know (or didn't want to tell the corporate executive committee) was that the telco tariff for a T1 link was about the same as for six 56kbit links. A trivial HSDT survey found 200 customers that had gone to full T1 with non-IBM controllers and software.
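
The tariff arithmetic behind that survey result (rough numbers; exact tariffs varied, the ratios are what the text describes):

#include <stdio.h>

int main(void)
{
    double t1_kbits   = 1544.0;  /* full T1                               */
    double link_kbits = 56.0;    /* single leased 56kbit link             */
    double t1_price   = 6.0;     /* T1 tariff about the same as ...       */
    double link_price = 1.0;     /* ... six 56kbit links                  */

    printf("T1 capacity = ~%.0f x 56kbit links\n", t1_kbits / link_kbits);
    printf("T1 price    = ~%.0f x a 56kbit link\n", t1_price / link_price);
    /* past ~6 parallel links it was cheaper to buy a full T1 -- which the
       IBM products couldn't drive -- hence zero "fat pipes" beyond six   */
    return 0;
}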

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

some recent posts mentioning "fat pipe"
https://www.garlic.com/~lynn/2024b.html#112 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#83 SNA/VTAM
https://www.garlic.com/~lynn/2024.html#70 IBM AIX

posts mentioning when I was an undergraduate in the 60s and the univ hires me fulltime responsible for os/360
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021h.html#65 CSC, Virtual Machines, Internet

I'm not sure when I became aware of the name Grace Hopper. While I was at the univ, the library had gotten an ONR (office of naval research)
https://www.nre.navy.mil/

grant to do an online catalog ... and they used some of the money to get an IBM 2321 (datacell). Other trivia: the library online catalog was also selected as a betatest site for the original CICS program product ... and CICS support was added to my tasks. First problem was CICS wouldn't come up. Eventually figured out that the CICS code had some undocumented hardcoded BDAM options and the library had built the BDAM files with a different set of options.

cics & bdam posts
https://www.garlic.com/~lynn/submain.html#cics

some recent posts mentioning ONR grant, univ library online catalog, cics betatest
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024.html#69 NIH National Library Of Medicine
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#24 Video terminals
https://www.garlic.com/~lynn/2023d.html#7 Ingenious librarians
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023.html#108 IBM CICS

--
virtualization experience starting Jan1968, online at home since Mar1970

Tymshare & Ann Hardy

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Tymshare & Ann Hardy
Date: 27 Apr, 2024
Blog: Facebook
Tymshare & Ann Hardy
https://medium.com/chmcore/someone-elses-computer-the-prehistory-of-cloud-computing-bca25645f89
Ann Hardy is a crucial figure in the story of Tymshare and time-sharing. She began programming in the 1950s, developing software for the IBM Stretch supercomputer. Frustrated at the lack of opportunity and pay inequality for women at IBM -- at one point she discovered she was paid less than half of what the lowest-paid man reporting to her was paid -- Hardy left to study at the University of California, Berkeley, and then joined the Lawrence Livermore National Laboratory in 1962. At the lab, one of her projects involved an early and surprisingly successful time-sharing operating system.

... snip ...

If Discrimination, Then Branch: Ann Hardy's Contributions to Computing
https://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/

Much more Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167
Ann rose up to become Vice President of the Integrated Systems Division at Tymshare, from 1976 to 1984, which did online airline reservations, home banking, and other applications. When Tymshare was acquired by McDonnell-Douglas in 1984, Ann's position as a female VP became untenable, and was eased out of the company by being encouraged to spin out Gnosis, a secure, capabilities-based operating system developed at Tymshare. Ann founded Key Logic, with funding from Gene Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl mainframes. After closing Key Logic, Ann became a consultant, leading to her cofounding Agorics with members of Ted Nelson's Xanadu project.

... snip ...

Gnosis/KeyKOS trivia: After M/D bought Tymshare, I was brought in to review Gnosis as part of the spinoff to Key Logic (note following mentions Augment and Doug Engelbart while at Tymshare)
http://cap-lore.com/CapTheory/upenn/Gnosis/Gnosis.html

The GNOSIS write-up also mentions the SHARE LSRAD study. I had scanned my copy for putting up on bitsavers
http://www.bitsavers.org/pdf/ibm/share/The_LSRAD_Report_Dec79.pdf
... trivia: note the year it was published; the gov. had increased the duration of copyright, so I had to spend some time finding somebody in SHARE who would approve putting it up on bitsavers

In 1976, Tymshare also started offering their CMS-based online computer conferencing system to the (IBM mainframe) user group, SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE, archives here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE for a monthly tape dump of all VMSHARE (and later also PCSHARE) files for putting up on the internal network and systems. On one visit to TYMSHARE they demo'ed a new game (ADVENTURE) that somebody had found on the Stanford SAIL PDP10 system and ported to VM370/CMS ... I got a copy and started making it available on internal networks/systems as well.

virtual machine based commercial online companies
https://www.garlic.com/~lynn/submain.html#online

Posts mentioning GNOSIS and/or Tymshare:
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#37 Online Forums and Information
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#97 Fortran
https://www.garlic.com/~lynn/2023b.html#35 When Computer Coding Was a 'Woman's' Job
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022g.html#92 TYMSHARE
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021j.html#71 book review: Broad Band: The Untold Story of the Women Who Made the Internet
https://www.garlic.com/~lynn/2021h.html#98 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2019d.html#27 Someone Else's Computer: The Prehistory of Cloud Computing

--
virtualization experience starting Jan1968, online at home since Mar1970

The Last Thing This Supreme Court Could Do to Shock Us

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Last Thing This Supreme Court Could Do to Shock Us
Date: 27 Apr, 2024
Blog: Facebook
The Last Thing This Supreme Court Could Do to Shock Us. There will be no more self-soothing after this.
https://slate.com/news-and-politics/2024/04/supreme-court-immunity-arguments-which-way-now.html
For three long years, Supreme Court watchers mollified themselves (and others) with vague promises that when the rubber hit the road, even the ultraconservative Federalist Society justices of the Roberts court would put democracy before party whenever they were finally confronted with the legal effort to hold Donald Trump accountable for Jan. 6.

... snip ...

... "fake news" dates back to at least founding of the country, both Jefferson and Burr biographies, Hamilton and Federalists are portrayed as masters of "fake news". Also portrayed that Hamilton believed himself to be an honorable man, but also that in political and other conflicts, he apparently believed that the ends justified the means. Jefferson constantly battling for separation of church & state and individual freedom, Thomas Jefferson: The Art of Power,
https://www.amazon.com/Thomas-Jefferson-Power-Jon-Meacham-ebook/dp/B0089EHKE8/
loc6457-59:
For Federalists, Jefferson was a dangerous infidel. The Gazette of the United States told voters to choose GOD AND A RELIGIOUS PRESIDENT or impiously declare for "JEFFERSON-AND NO GOD."

... snip ...

.... Jefferson targeted as the prime mover behind the separation of church and state. Also Hamilton/Federalists wanting supreme monarch (above the law) loc5584-88:
The battles seemed endless, victory elusive. James Monroe fed Jefferson's worries, saying he was concerned that America was being "torn to pieces as we are, by a malignant monarchy faction." 34 A rumor reached Jefferson that Alexander Hamilton and the Federalists Rufus King and William Smith "had secured an asylum to themselves in England" should the Jefferson faction prevail in the government.

... snip ...

posts mention Federalist Society and/or Heritage Foundation
https://www.garlic.com/~lynn/2023d.html#99 Right-Wing Think Tank's Climate 'Battle Plan' Wages 'War Against Our Children's Future'
https://www.garlic.com/~lynn/2023d.html#41 The Architect of the Radical Right
https://www.garlic.com/~lynn/2023c.html#51 What is the Federalist Society and What Do They Want From Our Courts?
https://www.garlic.com/~lynn/2022g.html#37 GOP unveils 'Commitment to America'
https://www.garlic.com/~lynn/2022g.html#14 It Didn't Start with Trump: The Decades-Long Saga of How the GOP Went Crazy
https://www.garlic.com/~lynn/2022d.html#4 Alito's Plan to Repeal Roe--and Other 20th Century Civil Rights
https://www.garlic.com/~lynn/2022c.html#118 The Death of Neoliberalism Has Been Greatly Exaggerated
https://www.garlic.com/~lynn/2022.html#107 The Cult of Trump is actually comprised of MANY other Christian cults
https://www.garlic.com/~lynn/2021f.html#63 'A perfect storm': Airmen, F-22s struggle at Eglin nearly three years after Hurricane Michael
https://www.garlic.com/~lynn/2021e.html#88 The Bunker: More Rot in the Ranks
https://www.garlic.com/~lynn/2020.html#6 Onward, Christian fascists
https://www.garlic.com/~lynn/2020.html#5 Book: Kochland : the secret history of Koch Industries and corporate power in America
https://www.garlic.com/~lynn/2020.html#4 Bots Are Destroying Political Discourse As We Know It
https://www.garlic.com/~lynn/2020.html#3 Meet the Economist Behind the One Percent's Stealth Takeover of America
https://www.garlic.com/~lynn/2019e.html#127 The Barr Presidency
https://www.garlic.com/~lynn/2019d.html#97 David Koch Was the Ultimate Climate Change Denier
https://www.garlic.com/~lynn/2019c.html#66 The Forever War Is So Normalized That Opposing It Is "Isolationism"
https://www.garlic.com/~lynn/2019.html#34 The Rise of Leninist Personnel Policies
https://www.garlic.com/~lynn/2012c.html#56 Update on the F35 Debate
https://www.garlic.com/~lynn/2012b.html#75 The Winds of Reform
https://www.garlic.com/~lynn/2012.html#41 The Heritage Foundation, Then and Now

--
virtualization experience starting Jan1968, online at home since Mar1970

PDP1 Spacewar

From: Lynn Wheeler <lynn@garlic.com>
Subject: PDP1 Spacewar
Date: 27 Apr, 2024
Blog: Facebook
In the 60s, the person responsible for the internal network ported PDP1 Spacewar
https://www.computerhistory.org/pdp-1/08ec3f1cf55d5bffeb31ff6e3741058a/
https://en.wikipedia.org/wiki/Spacewar%21
to CSC's 2250M4 (which included an 1130)
https://en.wikipedia.org/wiki/IBM_2250
i.e. had 1130 as controller
http://www.ibm1130.net/functional/DisplayUnit.html

I would bring my kids in on weekends and they would play

other drift, one of the inventors of GML at science center in 1969
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

... then science center "wide area network" morphs into the corporate network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s), technology also used for corporate sponsored univ BITNET
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
bitnet/earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

past posts specifically mentioning pdp1 and 1130/2250 spacewar
https://www.garlic.com/~lynn/2024.html#31 MIT Area Computing
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#52 IBM Vintage 1130
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2022g.html#23 IBM APL
https://www.garlic.com/~lynn/2022f.html#118 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022.html#63 Calma, 3277GA, 2250-4
https://www.garlic.com/~lynn/2021k.html#47 IBM CSC, CMS\APL, IBM 2250, IBM 3277GA
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use
https://www.garlic.com/~lynn/2018f.html#72 Jean Sammet — Designer of COBOL – A Computer of One's Own – Medium
https://www.garlic.com/~lynn/2018f.html#59 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2014j.html#103 ? How programs in c language drew graphics directly to screen in old days without X or Framebuffer?
https://www.garlic.com/~lynn/2014g.html#77 Spacewar Oral History Research Project
https://www.garlic.com/~lynn/2013g.html#72 DEC and the Bell System?
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012f.html#6 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011o.html#21 The "IBM Displays" Memory Lane (Was: TSO SCREENSIZE)
https://www.garlic.com/~lynn/2011n.html#9 Colossal Cave Adventure
https://www.garlic.com/~lynn/2011g.html#45 My first mainframe experience
https://www.garlic.com/~lynn/2010d.html#74 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2004f.html#32 Usenet invented 30 years ago by a Swede?
https://www.garlic.com/~lynn/2004d.html#45 who were the original fortran installations?
https://www.garlic.com/~lynn/2003m.html#14 Seven of Nine
https://www.garlic.com/~lynn/2003f.html#39 1130 Games WAS Re: Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003d.html#38 The PDP-1 - games machine?
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2001f.html#13 5-player Spacewar?
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information

--
virtualization experience starting Jan1968, online at home since Mar1970

Wondering Why DEC Is The Most Popular

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wondering Why DEC Is  The Most Popular ...
Newsgroups: alt.folklore.computers
Date: Mon, 29 Apr 2024 12:39:41 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Looking at the software-docs collection at Bitsavers <http://bitsavers.trailing-edge.com/pdf/>, there is over half a terabyte of files there.

Inside IBM: Lessons of a Corporate Culture in Action
https://www.amazon.com/Inside-IBM-Lessons-Corporate-Culture-ebook/dp/B0C8BV1HM3/

Inside IBM: Lessons of a Corporate Culture in Action
https://www.jstor.org/stable/10.7312/cort21300
CHAPTER 11 GRAY LITERATURE IN IBM'S INFORMATION ECOSYSTEM (pp. 317-358)
https://www.jstor.org/stable/10.7312/cort21300.15
It was said within IBM in the 1970s and 1980s that the company was the world's second-largest publisher after the U.S. Government Printing Office (GPO), as measured by the number of pages printed. It might have been an urban myth because there are no extant statistics to document how much IBM published, but a look at a KWIC (Key Word in Context) index of its publications from that period reveals it occupied four to five linear feet. Each page in it had two columns of brief citations printed in font sizes normally reserved for endnotes in academic publications.
... I remember hearing the claim in the 80s ... however it was the total number of pages printed ... as opposed to the total number of pages from unique documents

--
virtualization experience starting Jan1968, online at home since Mar1970

Wondering Why DEC Is The Most Popular

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wondering Why DEC Is  The Most Popular ...
Newsgroups: alt.folklore.computers
Date: Mon, 29 Apr 2024 13:34:24 -1000
re:
https://www.garlic.com/~lynn/2024c.html#28 Wondering Why DEC Is The Most Popular ...

note VAX sold into the same mid-range market as IBM 4300s, and in about the same numbers for small-number orders ... however some large corporations had multi-hundred vm/4300s orders for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). IBM was expecting that 4361/4381 order volume would continue like the 4331/4341 orders ... however as can be seen in the VAX numbers, by the mid-80s the mid-range market was starting to move to workstations and large PC servers.

a.f.c. repost from 2002:
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

more drift ... from a 1988 IDC report:


            VAX INVENTORY
            -------------
SYSTEM       US       NON-US    TOTAL
--------- --------- --------- ---------
11/725         950       550     1,500
11/730       4,100     2,950     7,050
11/750      12,230     9,370    21,600
11/780      14,280     9,660    23,940
11/782         190       120       310
11/785       2,460     1,590     4,050
MVI          1,840       960     2,800
MVII        41,000    23,900    64,900
82XX         2,800     1,870     4,670
83XX           900       600     1,500
85XX         1,200       905     2,105
86XX         2,360     1,240     3,600
8700           400       270       670
8800           300       200       500
--------  --------  --------
TOTAL       85,010    54,185   139,195

              VAX SHIPMENTS
              -------------
                                           NO. OF VAX
YEAR         US       NON-US    TOTAL    MODELS SHIPPED
--------- --------- --------- ---------  --------------
1978          312        78       390          1
1979          627       313       940          1
1980        1,512     1,038     2,550          2
1981        1,979     1,726     3,705          2
1982        4,129     2,794     6,923          4
1983        6,178     4,384    10,562          5
1984       11,703     8,227    19,930          7
1985       17,600     7,300    24,900          8
1986       19,190    12,840    32,030         12
1987       21,780    15,485    37,265         12
--------  --------  --------
TOTAL       85,010    54,185   139,195

                 VAX SHIPMENTS - NON US
                 ----------------------
             1978-
SYSTEM       1984      1985      1986      1987     TOTAL
--------   --------  --------  --------  --------  --------
11/725         450       100         0         0       550
11/730       2,350       600         0         0     2,950
11/750       7,040     1,700       430       200     9,370
11/780       7,700     1,500       270       190     9,660
11/782         120         0         0         0       120
11/785          40     1,100       350       100     1,590
MVI            860       100         0         0       960
MVII             0     1,900    10,000    12,000    23,900
82XX             0         0       725     1,145     1,870
83XX             0         0       200       400       600
85XX             0         0       305       600       905
86XX             0       300       470       470     1,240
8700             0         0        60       210       270
8800             0         0        30       170       200
--------  --------  --------  --------  --------
TOTAL       18,560     7,300    12,840    15,485    54,185

                        VAX SHIPMENTS - US
                        ------------------
             1978-
SYSTEM       1984      1985      1986      1987     TOTAL
--------   --------  --------  --------  --------  --------
11/725         650       300         0         0       950
11/730       3,200       900         0         0     4,100
11/750       9,300     2,200       560       170    12,230
11/780      11,500     2,200       400       180    14,280
11/782         190         0         0         0       190
11/785         260     1,600       500       100     2,460
MVI          1,340       500         0         0     1,840
MVII             0     9,000    15,000    17,000    41,000
82XX             0         0     1,150     1,650     2,800
83XX             0         0       300       600       900
85XX             0         0       420       780     1,200
86XX             0       900       730       730     2,360
8700             0         0        80       320       400
8800             0         0        50       250       300
--------  --------  --------  --------  --------
TOTAL       26,440    17,600    19,190    21,780    85,010

                 VAX SHIPMENTS - WORLD-WIDE
                 --------------------------
             1978-
SYSTEM       1984      1985      1986      1987     TOTAL
--------   --------  --------  --------  --------  --------
11/725       1,100       400         0         0     1,500
11/730       5,550     1,500         0         0     7,050
11/750      16,340     3,900       990       370    21,600
11/780      19,200     3,700       670       370    23,940
11/782         310         0         0         0       310
11/785         300     2,700       850       200     4,050
MVI          2,200       600         0         0     2,800
MVII             0    10,900    25,000    29,000    64,900
82XX             0         0     1,875     2,795     4,670
83XX             0         0       500     1,000     1,500
85XX             0         0       725     1,380     2,105
86XX             0     1,200     1,200     1,200     3,600
8700             0         0       140       530       670
8800             0         0        80       420       500
--------  --------  --------  --------  --------
TOTAL       45,000    24,900    32,030    37,265   139,195

... also 1988

 6,500 clusters installed, from 14,000 DEC VAX sites:

Percentage of VAX processors clustered

15% - 1985
21% - 1986
26% - 1987


...

IBM favorite son batch system (MVS) looked at the size of the distributed vm/4341 market and wanted some of the business ... however it required non-datacenter hardware ... and MVS was CKD DASD only, never getting around to supporting FBA (fixed-block) disk ... and the only new CKD DASD was large datacenter 3880/3380 (note there has been no CKD DASD made for decades, all being simulated on industry standard fixed-block disks). Eventually IBM came up with CKD simulation for the 3370 FBA as 3375 ... but didn't do MVS much good. MVS was still scores of staff per system, and the distributed computing market was scores of systems per staff.
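
As an aside on the CKD-simulated-on-fixed-block point above, a minimal sketch (in Python) of the general idea; the geometry, block size, and mapping are illustrative assumptions, not the actual 3375 or current DASD emulation:

# Illustrative only: assumed geometry and block size, not the actual 3375
# (or modern DASD) emulation.
BLOCK_SIZE       = 512           # fixed-block size on the underlying disk
TRACK_CAPACITY   = 47_476        # assumed max bytes per emulated track
HEADS_PER_CYL    = 15            # assumed emulated geometry
BLOCKS_PER_TRACK = -(-TRACK_CAPACITY // BLOCK_SIZE)   # ceiling division

def track_to_block(cyl, head):
    """First fixed-block number backing emulated track (cyl, head); each
    emulated track simply gets a fixed extent of blocks, and the controller
    lays the variable-length CKD records out inside that extent."""
    return (cyl * HEADS_PER_CYL + head) * BLOCKS_PER_TRACK

def byte_to_block(cyl, head, offset):
    """(block number, offset within block) for a byte of a track image."""
    base = track_to_block(cyl, head)
    return base + offset // BLOCK_SIZE, offset % BLOCK_SIZE

print("blocks per emulated track:", BLOCKS_PER_TRACK)
print("track (cyl=100, head=3) starts at block", track_to_block(100, 3))
print("byte 10,000 of that track lives at", byte_to_block(100, 3, 10_000))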

posts mentioning DASD, CKD, FBA, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

GML and W3C

From: Lynn Wheeler <lynn@garlic.com>
Subject: GML and W3C
Date: 30 Apr, 2024
Blog: Facebook
Note GML was invented in 1969 at IBM science center in tech sq ... old reference by one of the GML inventors about CSC "wide area network" ...
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

... Edson was responsible for science center "wide area network" which morphs into the corporate network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s), technology also used for corporate sponsored univ BITNET
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

A decade after GML was invented, it morphs into ISO standard SGML, and after another decade morphs into HTML at CERN; later the W3C offices were a block or two from tech sq.

Science Center also noted for virtual machines, 1st CP40 (on a 360/40 with virtual memory hardware modifications), which morphs into CP67 when 360/67s become available (standard with virtual memory) ... then VM370 (when the decision was made to add virtual memory to all 370s). First webserver in the US was on the Stanford SLAC VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
gml, sgml, html posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

HONE &/or APL

From: Lynn Wheeler <lynn@garlic.com>
Subject: HONE &/or APL
Date: 30 Apr, 2024
Blog: Facebook
23jun1969 unbundling announcement, IBM started charging for (application) software, SE services, maintenance, etc. SE training used to include trainees working as part of a large group at the customer datacenter ... however they couldn't figure out how *not* to charge for SE trainee time. This kicked off the US CP67 "HONE" datacenters, where US branch offices had online connections to HONE datacenters and could practice with guest operating systems in virtual machines.

The science center had also ported APL\360 to CMS for CMS\APL, increasing workspaces from 16kbytes (sometimes 32kb) to large virtual memory. Storage management had to be redone; every time APL executed an assignment, it allocated a new storage location, quickly touching every storage location in the workspace ... causing page thrashing in a large demand-paged virtual memory. Also did an API for system services (like file i/o). The combination enabled a lot of real-world applications, and HONE started to offer CMS\APL-based sales&marketing support applications ... which came to dominate all HONE activity. HONE moved to VM370 and HONE-clones started sprouting up all over the world ... and then all the US HONE datacenters were consolidated in Palo Alto (when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next door to the former US HONE consolidated datacenter). World-wide HONE became the largest user of APL.
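
A minimal sketch (Python, purely illustrative; the page, workspace and value sizes are assumptions, not CMS\APL internals) of why allocating a new location on every assignment ends up referencing every page of a large workspace, while reusing storage in place only ever touches the pages holding the value:

# Illustrative only: not APL\360/CMS\APL code; sizes are made-up assumptions.
PAGE = 4096                     # assumed page size (bytes)
WORKSPACE = 16 * 1024 * 1024    # assumed large virtual-memory workspace
VALUE = 32 * 1024               # assumed size of the value assigned each time

def pages(lo, hi):
    """Set of page numbers covering the byte range [lo, hi)."""
    return set(range(lo // PAGE, (hi + PAGE - 1) // PAGE))

def allocate_on_assignment(n):
    """Bump-allocate a fresh region for every assignment, wrapping when the
    workspace end is reached -- so repeated assignments cycle through (and
    touch) every page of the workspace."""
    touched, cursor = set(), 0
    for _ in range(n):
        if cursor + VALUE > WORKSPACE:   # wrap: storage "reclaimed"
            cursor = 0
        touched |= pages(cursor, cursor + VALUE)
        cursor += VALUE
    return touched

def assign_in_place(n):
    """Reuse the same region for every assignment: only the pages actually
    holding the value are ever referenced."""
    touched = set()
    for _ in range(n):
        touched |= pages(0, VALUE)
    return touched

print("pages touched, allocate-on-assignment:", len(allocate_on_assignment(10000)))
print("pages touched, assign-in-place:       ", len(assign_in_place(10000)))
# The first approaches every page of the workspace (page thrashing under
# demand paging); the second stays at VALUE/PAGE pages.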

trivia: when I 1st joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters and HONE was a long time customer. In the morph from CP67->VM370 they simplified and/or dropped lots of stuff (including multiprocessor support). In 1974, I started migrating lots of stuff from CP67->VM/370 and soon had a VM/370 release2 based production CSC/VM ... that included kernel re-org for multiprocessor operation (but not multiprocessor support itself). Consolidated US HONE VM370 was initially enhanced to the largest 370 "loosely-coupled", shared DASD operation with fall-over and load balancing across the complex. I then added multiprocessor support to a release3-based CSC/VM, initially so US HONE could add a 2nd processor to each of the eight systems (for 16 processors total).

IBM 23jun69 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundling
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE &/or APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

UNIX & IBM AIX

From: Lynn Wheeler <lynn@garlic.com>
Subject: UNIX & IBM AIX
Date: 30 Apr, 2024
Blog: Facebook
trivia: feb post in facebook private group
https://www.garlic.com/~lynn/2024.html#103 Multicians

Chandersekaran sent out a request (copying you) asking for somebody to teach CP internals which found its way to me ... my reply (from long ago ... nearly 40yrs ago ... and far away):
https://www.garlic.com/~lynn/2024.html#email851114
https://www.garlic.com/~lynn/2024.html#email851114b
https://www.garlic.com/~lynn/2024.html#email851114c
also
https://www.garlic.com/~lynn/2024b.html#email851114
https://www.garlic.com/~lynn/2011b.html#email851114

As per above, internal IBM politics shut down the effort with NSF & the supercomputer centers.

Note that the 801/RISC ROMP chip was supposed to be for the displaywriter follow-on. When that got canceled, the Austin group decided to pivot to the unix workstation market and hired the company that had done PC/IX to do a port for ROMP ... which becomes "AIX" for the PC/RT.

The IBM Palo Alto group was doing a port of UCB BSD to the mainframe (VM/370) with mods to do forking. The Palo Alto group was then redirected to do BSD for the PC/RT instead ... which was released as "AOS".

trivia: in spring of 1982, I had sponsored an IBM adtech conference and had some of the UNIX projects present ... old archived post
https://www.garlic.com/~lynn/96.html#4a

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Old adage "Nobody ever got fired for buying IBM"

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Old adage "Nobody ever got fired for buying IBM"
Date: 01 May, 2024
Blog: Facebook
Early 80s, a co-worker at San Jose Research left IBM and was doing lots of consulting in silicon valley ... including for the senior VP of engineering at a large chip shop. He did a port of the AT&T mainframe C-compiler to CMS, doing lots of bug fixes and enhancing code optimization. He then ported a lot of UCB BSD chip apps to CMS. One day an IBM marketing rep came through and asked him what he was doing. He said ethernet support, so their SGI graphical workstations could use VM/CMS for backend processing. He was then told that he should be doing token-ring support instead, or otherwise the shop might not find their mainframe service as timely as in the past. I then got an hour phone call filled with 4-letter words. The next morning the senior VP of engineering had a press conference saying that they were replacing all their IBM mainframes with SUN servers. IBM then had some number of task forces to analyze why silicon valley wasn't using IBM mainframes, but they weren't allowed to consider IBM marketing and token-ring.

Late 80s, a disk division senior engineer got a talk scheduled at the communication group's annual, world-wide, internal conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing mainframe datacenters to more distributed-computing-friendly platforms. The disk division had come up with a number of solutions, but they were constantly being vetoed by the communication group. The communication group had corporate strategic responsibility for everything that crossed the datacenter walls and was fiercely fighting off client/server and distributed computing. The disk division software VP's partial countermeasure was investing in distributed computing startups that would use IBM disks, and he would ask us to periodically stop by his investments to see if we could offer any help.

A few short years later, IBM has one of the largest losses in the history of US corporations and was being re-orged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the company breakup. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)

some other background: 1972, (CEO) Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying the watson culture/legacy (two decades later, IBM has its enormous loss and is being prepared for breakup).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Turn of the century, IBM mainframe hardware revenue was a few percent of IBM revenue and dropping. z12 era, IBM mainframe hardware revenue was a couple percent of IBM revenue and still dropping ... but the mainframe group was 25% of IBM revenue (and 40% of profit), nearly all software and services.

trivia: 1st part of 90s, IBM was "divesting" (not breakup) lots of stuff and was spinning off lots of its chip design software to a major chip design software vendor. The problem was that the industry standard platform was SUN. I got a contract to port a Pascal/VS 50,000 statement chip design application to SUN (pascal; in retrospect it would have been easier to rewrite in "C"). I started to think that SUN Pascal had never been used for anything other than educational purposes. SUN hdqtrs was just up the road so it was easy to drop in ... but they had outsourced their Pascal to an operation on the opposite side of the world (in this case it was really rocket science, I have a bill cap from a place called "space city") ... so problems took a minimum of 24hr turn-around.

CPD having corporate strategic responsibility for everything crossing datacenter wall posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

posts mentioning porting VLSI chip app to SUN
https://www.garlic.com/~lynn/2024.html#8 Niklaus Wirth 15feb1934 - 1jan2024
https://www.garlic.com/~lynn/2024.html#3 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023c.html#98 Fortran
https://www.garlic.com/~lynn/2023c.html#75 IBM Los Gatos Lab
https://www.garlic.com/~lynn/2022h.html#40 Mainframe Development Language
https://www.garlic.com/~lynn/2022g.html#6 "In Defense of ALGOL"
https://www.garlic.com/~lynn/2022f.html#22 STL & other San Jose facilities
https://www.garlic.com/~lynn/2022f.html#13 COBOL and tricks
https://www.garlic.com/~lynn/2021j.html#24 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#47 vs/pascal
https://www.garlic.com/~lynn/2021g.html#31 IBM Programming Projects
https://www.garlic.com/~lynn/2021c.html#95 What's Fortran?!?!
https://www.garlic.com/~lynn/2021.html#41 CADAM & Catia
https://www.garlic.com/~lynn/2021.html#37 IBM HA/CMP Product
https://www.garlic.com/~lynn/2017g.html#43 The most important invention from every state
https://www.garlic.com/~lynn/2014b.html#4 IBM Plans Big Spending for the Cloud ($1.2B)
https://www.garlic.com/~lynn/2004f.html#42 Infiniband - practicalities for small clusters

--
virtualization experience starting Jan1968, online at home since Mar1970

Old adage "Nobody ever got fired for buying IBM"

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Old adage "Nobody ever got fired for buying IBM"
Date: 01 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#33 Old adage "Nobody ever got fired for buying IBM"

There have been a number of articles about how cache-miss/memory latency, when measured in count of processor cycles, is similar to 60s disk I/O latency when measured in count of 60s processor cycles (memory is the new disk). This mentions that the justification to add virtual memory to all 370s was because they couldn't get enough MVT regions running concurrently ... overlapping processor use while waiting for disk I/O ... to get throughput up to justify the 370/165. some past posts mentioning the issues
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2023g.html#85 Vintage DASD
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2022h.html#116 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2019e.html#102 MIPS chart for all IBM hardware model
https://www.garlic.com/~lynn/2018f.html#12 IBM mainframe today
https://www.garlic.com/~lynn/2017h.html#61 computer component reliability, 1951

The equivalent for cache miss is out-of-order execution (branch prediction, speculative execution, etc) ... being able to execute other instructions while (preceding) instruction(s) wait on memory.
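
A back-of-envelope sketch of the analogy; the processor speeds and latencies below are illustrative assumptions, not figures from the above posts:

# Back-of-envelope only: all speeds and latencies are assumed, not measured.
def latency_in_instruction_times(instructions_per_sec, latency_sec):
    """How many instruction-times elapse while waiting on one access."""
    return instructions_per_sec * latency_sec

# assumed 1960s system: ~0.5 MIPS processor, ~25 ms average disk access
sixties_disk = latency_in_instruction_times(0.5e6, 25e-3)
# assumed current system: ~5 GHz core, ~100 ns cache miss all the way to memory
todays_miss  = latency_in_instruction_times(5e9, 100e-9)

print(f"1960s disk I/O     ~ {sixties_disk:,.0f} instruction-times")
print(f"today, memory miss ~ {todays_miss:,.0f} instruction-times")
# Both are large enough that throughput depends on overlapping other work:
# more concurrently-running MVT regions then, out-of-order execution now.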

Late last century, the i86 vendors went to a hardware layer that translated i86 instructions into RISC micro-ops for actual execution ... largely negating the throughput advantage of RISC processors (MIPS below from the industry standard benchmark program that counts number of iterations compared to a 1MIP reference platform).


1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     IBM z900 mainframe processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC 440)

2003 max. configured IBM mainframe z990, 32 processor aggregate 9BIPS
     (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

2010 max configured IBM mainframe z196, 80 processor aggregate 50BIPS
     (625MIPS/proc)
2010 E5-2600 XEON server blade, 16 processor aggregate 500BIPS
     (31BIPS/proc)

max configured z196 went for $30M, IBM base list price for an E5-2600 blade was $1815. This century large cloud operations have been claiming that they assemble their own server blades for 1/3rd the price of brand name server blades ($603, or $1.2/BIPS compared to z196 $600,000/BIPS). Then there was press that i86 server chip makers were shipping at least half their product directly to cloud operations, and IBM sells off its i86 server business
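
The arithmetic behind the price/BIPS comparison, using only the numbers quoted above:

# Arithmetic for the price/BIPS comparison; all inputs are from the text above.
z196_price  = 30_000_000    # max-configured z196
z196_bips   = 50            # aggregate BIPS
blade_list  = 1815          # IBM base list price, E5-2600 server blade
blade_bips  = 500           # aggregate BIPS
blade_cloud = blade_list / 3   # cloud operators' claimed ~1/3rd-of-list assembly cost

print(f"z196:  ${z196_price / z196_bips:,.0f} per BIPS")      # $600,000/BIPS
print(f"blade: ${blade_cloud / blade_bips:,.2f} per BIPS")    # ~$1.2/BIPS
print(f"ratio: {(z196_price / z196_bips) / (blade_cloud / blade_bips):,.0f} to 1")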

a large cloud operation will have a dozen or more megadatacenters around the world, each megadatacenter with half a million or more blade servers, each blade server with ten times (or more) the processing of a max configured mainframe.

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

trivia: 1980, STL (since renamed SVL) was bursting at the seams and was moving 300 people from the IMS group to an offsite bldg (with dataprocessing back to the STL machine room). They had tried "remote 3270" but found the human factors unacceptable. I get con'ed into doing channel extender support, placing channel-attached 3270 controllers at the offsite bldg with no perceptible human factors difference between offsite and in STL. There was a desire to make the support available to customers, but a group in POK playing with some serial stuff was afraid it would make it harder to get their stuff released, and got it vetoed. Also STL had been placing 3270 controllers on system channels shared with DASD ... placing 3270 controllers on channel-extenders (which had much lower channel busy) improved system throughput by 10-15% (there was some discussion of using channel-extenders for all 3270 controllers, on all systems).
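
A rough illustration of the channel-busy effect; the utilization numbers are assumed purely for illustration and the 1/(1-rho) factor is just a back-of-envelope queueing approximation, not an actual STL measurement:

# Assumed utilizations, purely for illustration; 1/(1-rho) is only a crude
# single-queue approximation of how response inflates with channel busy.
def relative_response(channel_busy):
    return 1.0 / (1.0 - channel_busy)

dasd_only     = 0.30                # assumed channel busy from DASD alone
with_3270_cu  = dasd_only + 0.10    # assumed extra busy from local 3270 controllers

before = relative_response(with_3270_cu)   # 3270 controllers share the channel
after  = relative_response(dasd_only)      # 3270 controllers moved to channel-extender

print(f"relative DASD response, sharing with 3270 controllers: {before:.2f}")
print(f"relative DASD response, 3270 controllers moved off:    {after:.2f}")
print(f"improvement: {(before - after) / before:.0%}")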

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

1988, an IBM branch office asked me if I could help LLNL (national lab) standardize some serial stuff that they were playing with, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec. Then POK gets their serial stuff released with ES/9000 as ESCON (when it is already obsolete), 17mbyte/sec. Later some POK engineers become involved with FCS and define a heavy-weight protocol that significantly reduces native throughput, which eventually ships as FICON. The latest public FICON benchmark I can find is z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time an FCS was announced for E5-2600 blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs recommend that SAPs (system assist processors that do the actual I/O) be held to 70% CPU ... which would be more like 1.5M IOPS. Also no CKD DASD has been made for decades, all being simulated on industry standard fixed-block disks.
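
The arithmetic behind the FICON/FCS comparison, using only the numbers quoted in the paragraph above:

# Arithmetic for the FICON/FCS comparison; all inputs are from the text above.
z196_peak_iops = 2_000_000   # z196 "Peak I/O" benchmark
ficon_channels = 104
fcs_iops       = 1_000_000   # single FCS claimed for E5-2600 blades ("over a million")
sap_cap        = 0.70        # recommended SAP (I/O processor) utilization cap

print(f"per FICON:           {z196_peak_iops / ficon_channels:,.0f} IOPS")
print(f"two FCS:             {2 * fcs_iops:,} IOPS vs {z196_peak_iops:,} over 104 FICON")
print(f"z196 at 70% SAP cap: {z196_peak_iops * sap_cap:,.0f} IOPS")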

FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
CKD DASD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

The man reinventing economics with chaos theory and complexity science

From: Lynn Wheeler <lynn@garlic.com>
Subject: The man reinventing economics with chaos theory and complexity science
Date: 01 May, 2024
Blog: Facebook
The man reinventing economics with chaos theory and complexity science. Traditional economics makes ludicrous assumptions and poor predictions. Now an alternative approach using big data and psychological insights is proving far more accurate
https://www.newscientist.com/article/mg26234870-200-the-man-reinventing-economics-with-chaos-theory-and-complexity-science/

Fecalnomics
https://www.counterpunch.org/2021/02/17/fecalnomics/
Fecalnomics is the study of poor decision-making. The concept of "fecalnomics" originated with a review I wrote of the book, Thinking: Fast and Slow, in which Nobel economist Daniel Kahneman shows how monkeys throwing feces are more accurate than human stock pickers over the long toss.

... snip ...

something of takeoff on Freakonomics
https://en.wikipedia.org/wiki/Freakonomics
http://freakonomics.com/
https://www.amazon.com/Freakonomics-Rev-Ed-Economist-Everything-ebook/dp/B000MAH66Y/

The (MIS)Behavior Of Markets (Mandelbrot & Hudson)
https://www.amazon.com/The-Misbehavior-Markets-Turbulence-ebook/dp/B004PYDBEO
although
https://en.wikipedia.org/wiki/Benoit_Mandelbrot
from above: Mandelbrot left IBM in 1987, after 35 years and 12 days, when IBM decided to end pure research

Mandelbrot's description of the period from the 60s through the last decade was that they continued using the same computations even when repeatedly shown to be wrong. Some of Mandelbrot's references are similar to this (by a Nobel prize winner in economics): Thinking Fast and Slow
https://www.amazon.com/Thinking-Fast-and-Slow-ebook/dp/B00555X8OA
pg212/loc3854-60:
"Since then, my questions about the stock market have hardened into a larger puzzle: a major industry appears to be built largely on an illusion of skill. Billions of shares are traded every day, with many people buying each stock and others selling it to them"

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity

--
virtualization experience starting Jan1968, online at home since Mar1970

Old adage "Nobody ever got fired for buying IBM"

From: Lynn Wheeler <lynn@garlic.com>
Subject: Old adage "Nobody ever got fired for buying IBM"
Date: 02 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#33 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#34 Old adage "Nobody ever got fired for buying IBM"

Note linux took over large cloud megadatacenters and cluster supercomputing (big overlap in paradigm and technology) because they needed full unrestricted source to adapt to the huge change to mega-cluster operation (somewhat later, as mega-cluster operation started to mature and settle down a little, some of the proprietary software vendors tried to emulate it).

.. a large cloud operation will have a dozen or more megadatacenters around the world, each with half a million or more server blades, each blade 10-40 times the processing power of a max configured IBM mainframe. Cloud operations had so radically reduced their system costs ... that power and cooling was increasingly becoming the major cost. For on-demand interactive, peak requirements can be ten times (or more) avg use ... so they require enormous over-provisioning ... and they put enormous pressure on chip makers so that power (& cooling) drops to zero when idle, but is "instant on" when needed for on-demand interactive.

More than a decade ago there were articles where it was possible to use a credit card at a large cloud operator to (remotely) spin up a "supercomputer" (ranking in the top 40 in the world) for a couple of "off-shift" hrs. A typical megadatacenter will have something like 70-80 total staff (enormous automation).

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

Large corporations with tens of thousands of 3270s could get an IBM/PC with 3270 emulation, at lower cost, that did both mainframe terminal emulation and some local processing in a single desktop footprint.

.... some history of PC market
http://arstechnica.com/articles/culture/total-share.ars
http://arstechnica.com/articles/culture/total-share.ars/3
http://arstechnica.com/articles/culture/total-share.ars/4
http://arstechnica.com/articles/culture/total-share.ars/5

My brother was a regional Apple marketing rep (largest physical area CONUS) ... and when he came into town for hdqtrs meetings, I could be invited to business dinners ... I would get to argue MAC design with the Apple people (even before the MAC was announced) ... they were pretty much immune to the argument about having a single (business) desktop footprint.

... other trivia: he figured out how to remotely dial into the IBM S/38 that ran the business, to track manufacturing and delivery schedules

--
virtualization experience starting Jan1968, online at home since Mar1970

Planet Mainframe Profile

From: Lynn Wheeler <lynn@garlic.com>
Subject: Planet Mainframe Profile
Date: 02 May, 2024
Blog: Facebook
I was told they were going to do this, but it just appeared
https://planetmainframe.com/influential-mainframers-2024/lynn-wheeler/
used same picture

lhw picture in article

... they may have picked up "construction" ref from one of my archived posts
https://www.garlic.com/~lynn/2024b.html#44

then there is also mainframe hall of fame
https://www.enterprisesystemsmedia.com/mainframehalloffame
and knights of VM
http://mvmua.org/knights.html
Greater IBM Connections Member Profile 4/2/2009, gone 404
https://www.garlic.com/~lynn/ibmconnect.html
mar2005, systems mag, a little garbled, at wayback
https://web.archive.org/web/20190524015712/http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/

other IBM history
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

past posts mentioning systems mag article
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023f.html#3 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#107 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2022h.html#61 Retirement
https://www.garlic.com/~lynn/2022e.html#17 VM Workshop
https://www.garlic.com/~lynn/2022c.html#40 After IBM
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021h.html#105 Mainframe Hall of Fame
https://www.garlic.com/~lynn/2021e.html#24 IBM Internal Network
https://www.garlic.com/~lynn/2019b.html#4 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2018e.html#22 Manned Orbiting Laboratory Declassified: Inside a US Military Space Station
https://www.garlic.com/~lynn/2017g.html#8 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#105 The IBM 7094 and CTSS
https://www.garlic.com/~lynn/2016c.html#61 Can commodity hardware actually emulate the power of a mainframe?
https://www.garlic.com/~lynn/2016c.html#25 Globalization Worker Negotiation
https://www.garlic.com/~lynn/2015g.html#80 Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?
https://www.garlic.com/~lynn/2014d.html#42 Computer museums
https://www.garlic.com/~lynn/2013l.html#60 Retirement Heist
https://www.garlic.com/~lynn/2013k.html#29 The agency problem and how to create a criminogenic environment
https://www.garlic.com/~lynn/2013k.html#28 Flag bloat
https://www.garlic.com/~lynn/2013k.html#2 IBM Relevancy in the IT World
https://www.garlic.com/~lynn/2013h.html#87 IBM going ahead with more U.S. job cuts today
https://www.garlic.com/~lynn/2013h.html#77 IBM going ahead with more U.S. job cuts today
https://www.garlic.com/~lynn/2013f.html#61 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013f.html#49 As an IBM'er just like the Marines only a few good men and women make the cut,
https://www.garlic.com/~lynn/2013e.html#79 As an IBM'er just like the Marines only a few good men and women make the cut,
https://www.garlic.com/~lynn/2013.html#74 mainframe "selling" points
https://www.garlic.com/~lynn/2012p.html#60 Today in TIME Tech History: Piston-less Power (1959), IBM's Decline (1992), TiVo (1998) and More
https://www.garlic.com/~lynn/2012o.html#32 Does the IBM System z Mainframe rely on Obscurity or is it Security by Design?
https://www.garlic.com/~lynn/2012k.html#34 History--punched card transmission over telegraph lines
https://www.garlic.com/~lynn/2012g.html#87 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012g.html#82 How do you feel about the fact that today India has more IBM employees than US?
https://www.garlic.com/~lynn/2012.html#57 The Myth of Work-Life Balance
https://www.garlic.com/~lynn/2011p.html#12 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2011c.html#68 IBM and the Computer Revolution
https://www.garlic.com/~lynn/2010q.html#60 I actually miss working at IBM
https://www.garlic.com/~lynn/2010q.html#30 IBM Historic computing
https://www.garlic.com/~lynn/2010o.html#62 They always think we don't understand
https://www.garlic.com/~lynn/2010l.html#36 Great things happened in 1973
https://www.garlic.com/~lynn/2008p.html#53 Query: Mainframers look forward and back
https://www.garlic.com/~lynn/2008j.html#28 We're losing the battle
https://www.garlic.com/~lynn/2008b.html#66 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2008b.html#65 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2006q.html#26 garlic.com
https://www.garlic.com/~lynn/2006i.html#11 Google is full
https://www.garlic.com/~lynn/2006c.html#43 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005h.html#19 Blowing My Own Horn
https://www.garlic.com/~lynn/2005e.html#14 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005e.html#9 Making History

--
virtualization experience starting Jan1968, online at home since Mar1970

Joseph Stiglitz is still walking the road to freedom

From: Lynn Wheeler <lynn@garlic.com>
Subject: Joseph Stiglitz is still walking the road to freedom
Date: 03 May, 2024
Blog: Facebook
Joseph Stiglitz is still walking the road to freedom. The veteran economist warned in 2003 of the problems that led to the 2008 crash. Today we are in a better place, he says, but dangers still lurk
https://www.thetimes.co.uk/article/joseph-stiglitz-on-the-threat-of-fake-capitalism-and-freedom-rhetoric-dmd9f2bcp

... Jan1999 I was asked to help prevent the coming economic mess. I was told that some investment bankers had "walked away clean" from the S&L Crisis, were then running "IPO Mills" (invest a few million, hype, IPO for a couple billion, needed to fail to leave the field clear for the next round), and were predicted next to get into securitized mortgages. I worked on improving the integrity of securitized mortgage supporting documents. They were then paying rating agencies for triple-A ratings when the agencies knew they weren't worth triple-A (from Oct2008 congressional hearings) ... enabling no-documentation, liar loans/mortgages, 2001-2008 selling over $27T into the bond market.

They then find they can build securitized mortgages designed to fail (creating an enormous market/demand for bad mortgages), pay for triple-A, sell into the bond market, and take out CDS gambling bets that they would fail. The largest holder of the CDS gambling bets was AIG, which was negotiating to pay off at 50 cents on the dollar when the SECTREAS steps in and has them sign a document that they couldn't sue those making the gambling bets and take TARP funds to pay off at 100 cents on the dollar. The largest recipient of TARP funds was AIG, and the largest recipient of face-value payoffs was the firm previously headed by SECTREAS.

Jan2009, I was asked to HTML'ize the Pecora Hearings (30s senate hearings into the '29 crash) with lots of internal HREFs and URLs comparing what happened this time and what happened then (some comments that the new congress might have an appetite to do something). I worked on it for a while and then got a call saying it wouldn't be needed after all (comments that capitol hill was totally buried under enormous mountains of wallstreet cash).

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
regulatory "capture" posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture
too-big-to-fail, too-big-to-prosecute, too-big-to-jail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
glass-steagall and/or pecora hearing posts
https://www.garlic.com/~lynn/submisc.html#Pecora&/orGlass-Steagall
S&L crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Big oil spent decades sowing doubt about fossil fuel dangers, experts testify

From: Lynn Wheeler <lynn@garlic.com>
Subject: Big oil spent decades sowing doubt about fossil fuel dangers, experts testify
Date: 04 May, 2024
Blog: Facebook
Big oil spent decades sowing doubt about fossil fuel dangers, experts testify | Oil and gas companies
https://www.theguardian.com/us-news/2024/may/01/big-oil-danger-disinformation-fossil-fuels
US Senate hearing reviewed report showing sector's shift from climate denial to 'deception, disinformation and doublespeak'

... snip ...

Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming
https://en.wikipedia.org/wiki/Merchants_of_Doubt
Merchants of Doubt
https://www.merchantsofdoubt.org/
Merchants of Doubt
https://www.amazon.com/Merchants-Doubt-Handful-Scientists-Obscured/dp/1608193942
https://www.amazon.com/Merchants-Doubt-Handful-Scientists-Obscured/dp/1596916109

... also ... Confessions of an Economic Hit Man
https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit_Man

Merchants of Doubt posts
https://www.garlic.com/~lynn/submisc.html#merchants.of.doubt
Capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
Griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia

posts specifically mentioning "big oil"
https://www.garlic.com/~lynn/2023e.html#96 Fracking Fallout: Is America's Drinking Water Safe?
https://www.garlic.com/~lynn/2023c.html#81 $209bn a year is what fossil fuel firms owe in climate reparations
https://www.garlic.com/~lynn/2023.html#35 Revealed: Exxon Made "Breathtakingly" Accurate Climate Predictions in 1970's and 80's
https://www.garlic.com/~lynn/2022g.html#89 Five fundamental reasons for high oil volatility
https://www.garlic.com/~lynn/2022g.html#21 'Wildfire of disinformation': how Chevron exploits a news desert
https://www.garlic.com/~lynn/2022f.html#16 The audacious PR plot that seeded doubt about climate change
https://www.garlic.com/~lynn/2022e.html#69 India Will Not Lift Windfall Tax On Oil Firms Until Crude Drops By $40
https://www.garlic.com/~lynn/2022d.html#96 Goldman Sachs predicts $140 oil as gas prices spike near $5 a gallon
https://www.garlic.com/~lynn/2022c.html#117 Documentary Explores How Big Oil Stalled Climate Action for Decades
https://www.garlic.com/~lynn/2021i.html#28 Big oil's 'wokewashing' is the new climate science denialism
https://www.garlic.com/~lynn/2021g.html#72 It's Time to Call Out Big Oil for What It Really Is
https://www.garlic.com/~lynn/2021g.html#16 Big oil and gas kept a dirty secret for decades
https://www.garlic.com/~lynn/2021g.html#13 NYT Ignores Two-Year House Arrest of Lawyer Who Took on Big Oil
https://www.garlic.com/~lynn/2021g.html#3 Big oil and gas kept a dirty secret for decades
https://www.garlic.com/~lynn/2021e.html#77 How climate change skepticism held a government captive
https://www.garlic.com/~lynn/2018d.html#112 NASA chief says he changed mind about climate change because he 'read a lot'
https://www.garlic.com/~lynn/2014m.html#27 LEO
https://www.garlic.com/~lynn/2013e.html#43 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012e.html#30 Senators Who Voted Against Ending Big Oil Tax Breaks Received Millions From Big Oil
https://www.garlic.com/~lynn/2012d.html#61 Why Republicans Aren't Mentioning the Real Cause of Rising Prices at the Gas Pump
https://www.garlic.com/~lynn/2007s.html#67 Newsweek article--baby boomers and computers

--
virtualization experience starting Jan1968, online at home since Mar1970

CMS RED, XEDIT, IOS3270, FULIST, BROWSE

From: Lynn Wheeler <lynn@garlic.com>
Subject: CMS RED, XEDIT, IOS3270, FULIST, BROWSE
Date: 04 May, 2024
Blog: Facebook
Old archived post
https://www.garlic.com/~lynn/2006u.html#26
with some email about "RED" and "XEDIT" editors:
https://www.garlic.com/~lynn/2006u.html#email790606
https://www.garlic.com/~lynn/2006u.html#email800311
https://www.garlic.com/~lynn/2006u.html#email800312
https://www.garlic.com/~lynn/2006u.html#email800429
https://www.garlic.com/~lynn/2006u.html#email800501

Part of the discussion with Endicott was about them releasing RED (instead of XEDIT) because RED was much more mature, had more feature/function, and was faster. Endicott's retort was that it was the RED author's fault that RED was so much better than XEDIT ... and so it should be his responsibility to bring XEDIT up to RED's level. Note: after Future System imploded, the head of POK managed to convince corporate to kill the VM370/CMS product, shut down the development group and move all the people to POK for MVS/XA (or I guess the claim might have been that otherwise MVS/XA wouldn't ship on time; Endicott eventually manages to save the VM370/CMS product mission, but had to recreate a development group from scratch).

Another part was a discussion about reworking RED for "R/O" shared segments. I had done a page-mapped filesystem for CP67 and could load a (1mbyte 360/67) SHARED SEGMENT directly from a CMS module in the filesystem. Then when CP67 was modified to run on 370s (well before VM370), I modified it for 370 64kbyte shared segments. In 1974, I moved a lot of CP67 feature/function to VM370/CMS Release 2 (including full CMS page-mapped filesystem and shared segment support) as DCSS. Then a very small subset of the shared segment support was added to VM370 Release 3 (w/o the CMS page-mapped filesystem support) ... aka up to then, VM370 shared segments were only available via the "IPL" command.
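
Not CP/CMS code, but a small sketch of the analogous idea on a current system: mapping a module file read-only so its pages are demand-paged directly from the filesystem and shared across address spaces (the filename is hypothetical):

# Not CP/CMS code: a POSIX-style analogy using Python's mmap. The module
# filename is hypothetical; the 64kbyte size echoes the 370 shared-segment
# size mentioned above.
import mmap, os

MODULE = "shared_module.bin"                 # hypothetical module file
if not os.path.exists(MODULE):               # create a stand-in for the demo
    with open(MODULE, "wb") as f:
        f.write(b"\0" * 64 * 1024)

with open(MODULE, "rb") as f:
    # Map read-only: pages come in on demand as they are touched, and every
    # process mapping the same file shares the same physical page frames --
    # the "R/O shared segment direct from the filesystem" effect.
    seg = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    print("mapped", len(seg), "bytes read-only")
    print("first bytes:", seg[:8])
    seg.close()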

CMS IOS3270, FULIST and BROWSE came from a sysprog at the EU Uithoorn "HONE" datacenter; old archived email with Theo about FULIST
https://www.garlic.com/~lynn/2001f.html#email781010
https://www.garlic.com/~lynn/2001f.html#email781011

trivia: one of my hobbies after joining IBM was enhanced operating systems for internal datacenters and HONE was a long-time customer. HONE was originally CP67 for US branch office SEs to dial in and practice guest operating system skills running in virtual machines. The science center had also done a port of APL\360 to CP67/CMS for CMS\APL with lots of improvements ... and HONE started offering CMS\APL-based sales&marketing support applications, which came to dominate all HONE activity (with guest operating system use just dwindling away). US HONE moved from enhanced CP67/CMS to enhanced VM370/CMS and all datacenters were consolidated in silicon valley ... as well as HONE datacenter clones cropping up all over the world (I had been asked to do initial installs of a couple of the clones). The early morph of CP67->VM370 simplified and/or dropped a lot of feature/function (including multiprocessor support). For my release 2 work, I did the kernel reorg needed by multiprocessor support, but not the actual multiprocessor support itself. Then for a VM370R3 "CSC/VM", I added multiprocessor support, initially so consolidated US HONE could add a 2nd processor to each system.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE and/or APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
cms paged-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
posts discussing adcons in shared segments
https://www.garlic.com/~lynn/submain.html#adcon
SMP, multiprocessor, tightly-coupled, and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Congratulations Lynne

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Congratulations Lynne
Newsgroups: alt.folklore.computers
Date: Sat, 04 May 2024 16:33:17 -1000
Iron Spring Software <Peter_Flass@Yahoo.com> writes:
2024 Influential Mainframers - Lynn Wheeler
https://planetmainframe.com/influential-mainframers-2024/

"Lynn Wheeler has significantly shaped the world of mainframe computing, most notably through his enhancements to z/VM's CP and CMS, including the creation of the "Wheeler Scheduler." His pioneering work earned him a spot in the founding class of the Knights of VM, highlighting his influence in the mainframe community." ...


and
https://www.garlic.com/~lynn/2024c.html#37 Planet Mainframe Profile

... misc. other in addition to knights of VM
http://mvmua.org/knights.html
Greater IBM Connections Member Profile 4/2/2009, gone 404
https://www.garlic.com/~lynn/ibmconnect.html
mar2005, systems mag, a little garbled, at wayback
https://web.archive.org/web/20190524015712/http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/

--
virtualization experience starting Jan1968, online at home since Mar1970

Netscape

From: Lynn Wheeler <lynn@garlic.com>
Subject: Netscape
Date: 05 May, 2024
Blog: Facebook
Late 80s, HA/6000 was originally approved for NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000; I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres, which had VAXcluster and unix support in the same source base). Early Jan1992, in a meeting with Oracle, AWD/Hester tells the Oracle CEO that we would have HA/CMP 16-processor clusters mid92 and 128-processor clusters ye92. Then late Jan92, cluster scale-up is transferred to be announced as IBM Supercomputer (technical/scientific *ONLY*) and we were told that we couldn't work on anything with more than four processors. We leave IBM a few months later.

Not long later I'm brought in as consultant into a small client/server startup that had been formed by some people from NCSA
http://www.ncsa.illinois.edu/enabling/mosaic
... two of the former Oracle people (that were in the cluster scale-up Oracle CEO meeting) are there responsible for something called "commerce server" and want to do payment transactions on the server; the startup had also done some technology called "SSL" they want to use, the result frequently now called "electronic commerce". I had complete authority for everything between the webservers and the financial industry payment networks. Note NCSA complains about their use of the name ... and they have to change it (trivia: what silicon valley company provided the new name?).

other trivia: early 80s, I had the HSDT project, T1 and faster computer links (both terrestrial and satellite) and was working with the NSF director; was supposed to get $20M to interconnect the NSF Supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released. from 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87), as regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

TYMSHARE, VMSHARE, ADVENTURE

From: Lynn Wheeler <lynn@garlic.com>
Subject: TYMSHARE, VMSHARE, ADVENTURE
Date: 05 May, 2024
Blog: Facebook
I would periodically drop in on Tymshare and/or see them at the monthly user group meetings hosted by Stanford SLAC. In Aug1976, TYMSHARE started offering their VM370/CMS based online computer conferencing system to the (IBM mainframe) SHARE user group as "VMSHARE" ... archives here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE for a monthly tape dump of all VMSHARE (and later also PCSHARE) files for putting up on internal networks and systems. On one visit to TYMSHARE they demo'ed a new game (ADVENTURE) that somebody had found on the Stanford SAIL PDP10 system and ported to VM370/CMS ... I get a copy and started making it (also) available on internal networks/systems. I would send source to anybody that could demonstrate they got all the points. Relatively shortly, versions with lots more points appear as well as PLI versions.

posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

posts mentioning tymshare, vmshare, and adventure
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
https://www.garlic.com/~lynn/2018f.html#111 Online Timsharing
https://www.garlic.com/~lynn/2017j.html#26 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017h.html#11 The original Adventure / Adventureland game?
https://www.garlic.com/~lynn/2017f.html#67 Explore the groundbreaking Colossal Cave Adventure, 41 years on
https://www.garlic.com/~lynn/2017d.html#100 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016e.html#103 August 12, 1981, IBM Introduces Personal Computer
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011g.html#49 My first mainframe experience
https://www.garlic.com/~lynn/2011f.html#75 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#57 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2008s.html#12 New machine code
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
https://www.garlic.com/~lynn/2006n.html#3 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2005u.html#25 Fast action games on System/360+?
https://www.garlic.com/~lynn/2005k.html#18 Question about Dungeon game on the PDP

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe LAN Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe LAN Support
Date: 05 May, 2024
Blog: Facebook
There was a bus&tag interface card done for the pc/at in the mid-80s (referred to as PCCA, aka PC channel attach) used for a number of internal mainframe things. The IBM communication group was fighting off client/server, distributed computing and release of mainframe tcp/ip support. The release of TCP/IP got approved and the communication group changed their strategy, saying that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor, and the PCCA/8232 that had been expected to be priced at $5k was $40k.

I then did the software changes to support RFC1044 and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained channel throughput using only a modest amount of 4341 CPU (something like 500 times improvement in bytes moved per instruction executed). Of course it wasn't the 8232, it was a non-IBM channel-attached router supporting up to 16 LAN interfaces, multiple T1&T3 telco interfaces, and FDDI (about the same price as the 8232); then later also supporting RS/6000 SLA ... an incompatible, enhanced, faster, full-duplex modification of ESCON. The engineer responsible for SLA then wanted to do an 800mbit/sec version but I managed to con him into joining the FCS standards committee instead (in 1988, the IBM branch office had con'ed me into helping LLNL standardize some serial stuff they had been playing with, which quickly becomes FCS, initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate; when ESCON ships it was already obsolete).

The communication group telco products had been capped at 56kbit/sec and they prepared a report for the corporate executive committee that customers wouldn't be interested in T1 until at least later in the 90s. VTAM had fat-pipe support treating multiple parallel 56kbit links as a single logical link ... and they showed the number of customers with fat-pipe had dropped to zero by seven parallel links (what they didn't know, or didn't want to tell the executive committee, was that the typical telco tariff for a T1 was about the same as for between five and seven 56kbit links). I had the HSDT project from the early 80s, T1 and faster computer links (both terrestrial and satellite), and a trivial customer survey found 200 customers with T1 links (with non-IBM software and hardware).
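
A back-of-envelope illustration of the fat-pipe arithmetic (my own sketch in C; the only assumption beyond the 56kbit links above is US T1 = 1.544mbit/sec):

#include <stdio.h>

int main(void)
{
    const double link56 = 56.0;    /* kbit/sec per parallel 56kbit link        */
    const double t1     = 1544.0;  /* kbit/sec, assuming US T1 = 1.544mbit/sec */

    for (int n = 1; n <= 7; n++)
        printf("%d x 56kbit fat pipe = %4.0f kbit/sec (%4.1f%% of a T1)\n",
               n, n * link56, 100.0 * n * link56 / t1);
    return 0;
}

i.e. even at seven parallel links (roughly T1 tariff), a fat pipe delivers only about a quarter of a T1's bandwidth.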

The communication group did finally come out with the 3737 in the later 80s; it had a boat load of M68k processors and memory that simulated a CTCA-attached VTAM to the local mainframe VTAM ... immediately acking receipt of RUs, spoofing the host VTAM, in order to keep the traffic flowing ... and then using non-SNA to the remote 3737 over the T1 link ... peaking out at about 2mbit/sec even on a short-haul terrestrial T1 link (US full-duplex T1 is 3mbit/sec aggregate, EU full-duplex T1 4mbit/sec aggregate).

trivia: AWD (workstation division) for the PC/RT (which had a pc/at 16bit bus) had done their own 4mbit t/r card. However for the RS/6000 with microchannel, corporate told AWD that they couldn't do their own microchannel cards but had to use the (communication group heavily performance kneecapped) PS2 microchannel cards. A simple example was the PS2 microchannel 16mbit t/r card having lower card throughput than the PC/RT 4mbit t/r card (making a RS/6000 16mbit t/r server slower than a PC/RT 4mbit t/r server). Note: the heavy performance kneecapping of microchannel cards wasn't limited to the t/r cards.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

some recent posts mentioning 3737
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2024b.html#56 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023d.html#120 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023c.html#57 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023b.html#77 IBM HSDT Technology
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#53 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#103 IBM ROLM
https://www.garlic.com/~lynn/2023.html#95 IBM San Jose
https://www.garlic.com/~lynn/2022e.html#33 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022c.html#80 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2021j.html#103 Who Knew ?
https://www.garlic.com/~lynn/2021j.html#32 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#31 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#16 IBM SNA ARB
https://www.garlic.com/~lynn/2021h.html#109 The Age of Battleships Is Dead and Long Gone
https://www.garlic.com/~lynn/2021h.html#49 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021d.html#14 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#97 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#83 IBM SNA/VTAM (& HSDT)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe LAN Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe LAN Support
Date: 06 May, 2024
Blog: Facebook
re: https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support

A couple issues with PCCA/8232 .... instead of releasing it with TCP/IP router support ... it was released as a LAN/MAC bridge ... which meant all the IP->LAN/MAC work had to be done back in the host IP code. The story I heard about the 8232 being $40k (instead of $5k) was that the communication group contrived that the forecast was just for the AT&T UNIX + TSS/370 SSUP market, which was just internal AT&T ... 14 sales total ... so all the upfront fixed IBM product costs were spread across just 14 units (instead of the large number of VM370 TCP/IP installations ... note the VM370 TCP/IP was also made available for MVS by implementing VM370 diagnose instruction function simulation ... since it was already slow for VM370 ... the extra overhead contributed to MVS complaints about how slow it was). Some part of my getting VM370 TCP/IP running at sustained channel throughput using only a modest amount of 4341 CPU was supporting a (non-IBM) router box, aka RFC1044 (rather than a LAN bridge box).

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

PCCA &/or 8232 posts
https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2021i.html#73 IBM MYTE
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2013m.html#9 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013i.html#62 Making mainframe technology hip again
https://www.garlic.com/~lynn/2013g.html#17 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2010n.html#27 z/OS, TCP/IP, and OSA
https://www.garlic.com/~lynn/2010c.html#25 Processes' memory
https://www.garlic.com/~lynn/2010c.html#24 Processes' memory
https://www.garlic.com/~lynn/2008l.html#20 IBM-MAIN longevity
https://www.garlic.com/~lynn/2006n.html#18 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2005u.html#49 Channel Distances
https://www.garlic.com/~lynn/2005t.html#48 FULIST
https://www.garlic.com/~lynn/2005t.html#45 FULIST
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2005r.html#17 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#2 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003d.html#37 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#35 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#33 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003c.html#77 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003.html#67 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2002q.html#27 Beyond 8+3
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/99.html#36 why is there an "@" key?

--
virtualization experience starting Jan1968, online at home since Mar1970

Big oil spent decades sowing doubt about fossil fuel dangers, experts testify

From: Lynn Wheeler <lynn@garlic.com>
Subject: Big oil spent decades sowing doubt about fossil fuel dangers, experts testify
Date: 06 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#39 Big oil spent decades sowing doubt about fossil fuel dangers, experts testify

An Oil Price-Fixing Conspiracy Caused 27% of All Inflation Increases in 2021. The FTC just found evidence that American oil companies colluded with the Saudi government to hike gas prices, costing the average family $3,000 last year. The question is, what can we do about it?
https://www.thebignewsletter.com/p/an-oil-price-fixing-conspiracy-caused

... griftopia ... commodity market secret letters allowing speculators to play
http://www.amazon.com/Griftopia-Machines-Vampire-Breaking-America-ebook/dp/B003F3FJS2/
... commodity markets used to require players to have significant holdings ... because speculators resulted in wild, irrational price swings (betting on how prices would move and manipulating news to push prices in the direction bet on, both up and down).

There were articles about US speculators being behind the enormous oil (& gas) price spike of summer 2008. Then a member of congress released the speculation transactions that identified the corporations responsible for the enormous oil (& gas) price spikes/swings. For some reason, the press then pilloried&vilified the member of congress for violating corporation privacy (& exposing the corporations preying on the US public, rather than trying to hold the speculators accountable).

griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe LAN Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe LAN Support
Date: 06 May, 2024
Blog: Facebook
re: https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support https://www.garlic.com/~lynn/2024c.html#45 IBM Mainframe LAN Support

almaden research was heavily provisioned with cat4 presumably for 16mbit token-ring ... but found 10mbit ethernet over the cat4 had higher aggregate bandwidth and lower latency. that is besides the fact that $69 10mbit ethernet cards had much higher card throughput (capable of 8.5mbit) than the $800 (heavily performance kneecapped) 16mbit token-ring cards

also for 300 machines .... the price difference between the high performance ethernet cards and the $800 (kneecapped) token-ring cards ... could cover five high-end TCP/IP routers, each with 16 ethernet interfaces (80 total), IBM channel interfaces and other features ... even being able to spread the 300 machines across the 80 networks (about four machines/network) ... while traditional SNA token-ring would tend to have all 300 sharing a single LAN.
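
back-of-envelope version of that arithmetic (my own sketch; the card prices, router count and interfaces-per-router are the ones quoted above):

#include <stdio.h>

int main(void)
{
    const int machines     = 300;
    const int tr_card      = 800;  /* 16mbit token-ring card, price quoted above */
    const int enet_card    = 69;   /* 10mbit ethernet card, price quoted above   */
    const int routers      = 5;
    const int nets_per_rtr = 16;   /* ethernet interfaces per router             */

    int savings = machines * (tr_card - enet_card);
    int nets    = routers * nets_per_rtr;
    printf("card-cost difference for %d machines: $%d\n", machines, savings);
    printf("%d networks -> %.1f machines per network\n",
           nets, (double)machines / nets);
    return 0;
}

i.e. roughly $219K in card-cost difference, and roughly four machines per ethernet segment instead of 300 on one shared LAN.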

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning Almaden token-ring versus ethernet
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#83 IBM's Near Demise
https://www.garlic.com/~lynn/2022b.html#84 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2014h.html#88 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2013m.html#7 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2011h.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe LAN Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe LAN Support
Date: 06 May, 2024
Blog: Facebook
re: https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support https://www.garlic.com/~lynn/2024c.html#45 IBM Mainframe LAN Support https://www.garlic.com/~lynn/2024c.html#47 IBM Mainframe LAN Support

... from long ago and far away, I don't remember the (physical) internals of the Cambridge box ... although requirements were a PC/AT for each "cambridge channel attach box" and it supported standard 3088/trouter CTCA mode (but not "waitread" or 3088*/spider mode). I had the HSDT project starting in the early 80s, T1 and faster computer links (both terrestrial and satellite). With regard to part of the following, Boulder had developed a channel emulation card (I think as part of hardware/software testing of the 3800(?) printer).
Date: 08/21/85 12:19:55
From: wheeler

re: boulder channel attach versis pcca;

problem with YKT pcca is that it is 360 ctca ... not even 3088(/trouter). For HSDT you need two things a) dual 370 subchannel addresses, one for input/reads -- the other for output/writes and b) "waitread" function. Standard vendor box (for almost decade) has a "waitread" operation and 3088*(/spider) now has a similar function. Problem on existing CTC/3088(/trouter) is that input operation requires that operating system wait for an attention interrupt, operating system fields the attention and then schedules a read operation. Waitread allows an outstanding read operation to be alwas be pending on the input channel. W/o waitread, the "latency" of the software in fielding the attention interrupt and scheduling the read operation takes longer than anything else.

Boulder channel attach is cheap (if you already have a 3088*/spider) and they are ready to ship now. Software can be developed on that basis pending upgrading the PCCA to 3088*(/spider) mode.


... snip ... top of post, old email index
Date: 08/21/85 16:43:08
From: wheeler

re: channel attach cards; cc: hsdt; there have been some comments about the "performance" of the various PC channel attach cards. One of the areas was the YKT PCCA card is suppose to be "good" (better than the others) is in hardware latency to start data transfer. However, the effective thru-put of a card will also be dependent on its total operation characteristic. Protocols that use the old 360 CTCA protocol, with a single subchannel address, have effectively a very long start-up latency for incoming data ... because there is first an attention interrupt that has to be presented to the operating system, the operating system has to field the interrupt and then put up a read. Such software "start-up latency" appears to be much longer than any of the various "hardware" latencies.

To address this problem you need support for both (a) pairs of read/write subchannel addresses and (b) the equivalent of the standard vendor box waitread CCW ... or the special 3088*/spider CCW op. The YKT PCCA uses the old 360 CTCA protocol; single subchannel address, no special op. The Cambridge channel attach supposedly supports 3088/trouter mode ... but doesn't have 3088* support. It would look like the PC 370 channel simulators can be attached to a 3088* and you get the support very inexpensively (assuming that you already have a 3088*).

For software development, 3088*/spider to PC via PC channel simulator would appear to be the best bet ... pending the availability of a PC control unit simulator that attaches to a 370 channel (i.e. upgrade YKT PCCA to 3088*/spider mode).

... snip ... top of post, old email index
Date: 08/29/85 17:53:38
From: wheeler
To: cambridge

can i get (over network) copies of the cambridge control unit documents? Can I be added to the computer conference on the same? Any schedule on plans for supporting a 3088* interface to the host???


... snip ... top of post, old email index
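
For anybody not following the CTCA/waitread discussion in the old email above, a rough sketch (modern pseudo-C of my own, nothing to do with actual 370 channel programming) of the latency difference between the old single-subchannel CTCA protocol and the "waitread"/dual-subchannel approach:

/* stand-ins, not real channel-program operations */
static void wait_for_attention(void) { /* device presents attention interrupt       */ }
static void issue_read(void)         { /* OS builds & starts a read channel program */ }
static void transfer_data(void)      { /* data moves across the channel             */ }

/* old 360 CTCA, single subchannel: every incoming transfer pays the
   software latency of fielding the attention and then scheduling a read */
void ctca_receive(void)
{
    wait_for_attention();
    issue_read();
    transfer_data();
}

/* waitread / dual subchannel: a read is always outstanding on the input
   subchannel, so arriving data goes straight into the posted buffer */
void waitread_receive(void)
{
    transfer_data();   /* completes against the read posted earlier */
    issue_read();      /* immediately re-post for the next transfer */
}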

--
virtualization experience starting Jan1968, online at home since Mar1970

Left Unions Were Repressed Because They Threatened Capital

From: Lynn Wheeler <lynn@garlic.com>
Subject: Left Unions Were Repressed Because They Threatened Capital
Date: 07 May, 2024
Blog: Facebook
Left Unions Were Repressed Because They Threatened Capital. During the 20th century's two red scares in US and Canada, Wobblies and Communist-aligned unions faced fierce repression from employers and government. They were targeted because they were seen as posing a real threat to the capitalist social order.
https://portside.org/2024-05-05/left-unions-were-repressed-because-they-threatened-capital

Capitalism and social democracy ... have pros & cons and can be used for checks & balances ... example, On War
https://www.amazon.com/War-beautifully-reproduced-illustrated-introduction-ebook/dp/B00G3DFLY8/
loc394-95:
As long as the Socialists only threatened capital they were not seriously interfered with, for the Government knew quite well that the undisputed sway of the employer was not for the ultimate good of the State.

... snip ...

i.e. the government needed the general population standard of living sufficient that soldiers were willing to fight to preserve their way of life. The capitalists' tendency was to reduce the worker standard of living to the lowest possible ... below what the government needed for soldier motivation ... and the government therefore needed socialists as a counterbalance to the capitalists in raising the general population standard of living. Saw this fought out in the 30s, American Fascists opposing all of FDR's "New Deal". The Coming of American Fascism, 1920-1940
https://historynewsnetwork.org/article/172004
The truth, then, is that Long and Coughlin, together with the influential Communist Party and other leftist organizations, helped save the New Deal from becoming genuinely fascist, from devolving into the dictatorial rule of big business. The pressures towards fascism remained, as reactionary sectors of business began to have significant victories against the Second New Deal starting in the late 1930s. But the genuine power that organized labor had achieved by then kept the U.S. from sliding into all-out fascism (in the Marxist sense) in the following decades.

... snip ...

aka "Coming of America Fascism" shows socialists countered the "New Deal" becoming fascist ... which had been the objective of the capitalists ... and possibly contributed to forcing them further into the Nazi/fascist camp. When The Bankers Plotted To Overthrow FDR
https://www.npr.org/2012/02/12/145472726/when-the-bankers-plotted-to-overthrow-fdr
The Plots Against the President: FDR, A Nation in Crisis, and the Rise of the American Right
https://www.amazon.com/Plots-Against-President-Nation-American-ebook/dp/B07N4BLR77/

June1940, Germany had a victory celebration at the NYC Waldorf-Astoria with major industrialists. Lots of them were there to hear how to do business with the Nazis
https://www.amazon.com/Man-Called-Intrepid-Incredible-Narrative-ebook/dp/B00V9QVE5O/
loc1925-29:
One prominent figure at the German victory celebration was Torkild Rieber, of Texaco, whose tankers eluded the British blockade. The company had already been warned, at Roosevelt's instigation, about violations of the Neutrality Law. But Rieber had set up an elaborate scheme for shipping oil and petroleum products through neutral ports in South America.

... snip ...

Later there was a somewhat-replay of the 1940 celebration, a conference of 5000 industrialists and corporations from across the US at the Waldorf-Astoria; in part because they had gotten such a bad reputation for the depression and for supporting Nazis/fascism, and attempting to refurbish their horribly corrupt and venal image, they approved a major propaganda campaign to equate Capitalism with Christianity.
https://www.amazon.com/One-Nation-Under-God-Corporate-ebook/dp/B00PWX7R56/
part of the result by the early 50s was adding "under god" to the pledge of allegiance. slightly cleaned up version
https://en.wikipedia.org/wiki/Pledge_of_Allegiance

Corporatism is an American, Bipartisan Scourge. Matt Stoller's Goliath recalls when workers' rights became 'consumer advocacy,' and we all lost the language of anti-monopoly.
https://www.theamericanconservative.com/articles/corporatism-is-an-american-bipartisan-scourge/
Stoller also delves into the secret production compacts between American and Nazi producers delivering a timeless lesson that corporate giants will nearly always pursue profit above morality in their dealings with authoritarian regimes.

... snip ...

Goliath: The 100-Year War Between Monopoly Power and Democracy
https://www.amazon.com/Goliath-Monopolies-Secretly-Took-World-ebook/dp/B07GNSSTGJ/

Gangsters of Capitalism
https://www.amazon.com/Gangsters-Capitalism-Smedley-Breaking-Americas-ebook/dp/B092T8KT1N/
Smedley Butler was the most celebrated warfighter of his time. Bestselling books were written about him. Hollywood adored him. Wherever the flag went, "The Fighting Quaker" went--serving in nearly every major overseas conflict from the Spanish War of 1898 until the eve of World War II. From his first days as a 16-year-old recruit at the newly seized Guantanamo Bay, he blazed a path for empire: helping annex the Philippines and the land for the Panama Canal, leading troops in China (twice), and helping invade and occupy Nicaragua, Puerto Rico, Haiti, Mexico, and more. Yet in retirement, Butler turned into a warrior against war, imperialism, and big business, declaring: "I was a racketeer for capitalism."

... snip ...

Smedley Butler
https://en.wikipedia.org/wiki/Smedley_Butler
Business Plot
https://en.wikipedia.org/wiki/Business_Plot
War Is a Racket
https://en.wikipedia.org/wiki/War_Is_a_Racket
Confessions of an Economic Hit Man
https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit_Man
War profiteering
https://en.wikipedia.org/wiki/War_profiteering

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war

--
virtualization experience starting Jan1968, online at home since Mar1970

third system syndrome, interactive use, The Design of Design

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: third system syndrome, interactive use, The Design of Design
Newsgroups: comp.arch
Date: Wed, 08 May 2024 07:58:52 -1000
John Levine <johnl@taugh.com> writes:
According to MitchAlsup1 <mitchalsup@aol.com>: TSS was a disaster due to an extreme case of second system syndrome, but Michigan's MTS and IBM skunkworks CP/67 worked great.

TSS at CMU was extensively rewritten in assembly and became quite tolerable--hosting 30+ interactive jobs along with a background batch processing system. When I arrived in Sept 1975 it was quite unstable with up times less than 1 hour. 2 years later it would run for weeks at a time without going down.

For reasons I do not want to try to guess, AT&T did the software development for the 5ESS phone switches in a Unix system that sat on top of TSS. After IBM cancelled TSS, AT&T continued to use it as some sort of special order thing. At IBM there were only a handful of programmers working on it, by that time all quite experienced, and I hear that they also got rid of a lot of cruft and made it much faster and more reliable.

At the same time, IBM turned the skunkworks CP/67 into VM/370 with a much larger staff, leading to predictable consequences.


TSS/360 was decommitted and the group reduced from 1100 people to 20. The morph of TSS/360 to TSS/370 was much better (with only 20 people).

Both Amdahl and IBM hardware field support claimed they wouldn't support 370 machines w/o industrial strength EREP. The effort to add industrial strength EREP to UNIX was many times the effort of just doing the 370 port, so they did a stripped-down TSS/370 with just the hardware layer and EREP (called SSUP) with UNIX built on top. IBM AIX/370 and Amdahl UTS were run in VM/370 virtual machines ... leveraging VM/370 industrial strength EREP.

CP/40 was done on a 360/40 with virtual memory hardware mods; it morphs into CP/67 when the 360/67 (standard with virtual memory) became available. The group had 11 people (1/100th of TSS/360's).

When I graduate and join IBM, one of my hobbies was enhanced production operating systems for internal datacenters. With the decision to add virtual memory to all 370s, it was decided to do VM/370 and some of the science center people move to the 3rd flr, taking over the IBM Boston Programming Center for the VM/370 group. The group was expanding to 200+ and outgrew the 3rd flr, moving to the vacant IBM SBS bldg out at Burlington Mall (off rt128).

Note the morph of CP67->VM370 dropped and/or simplified a bunch of features (including multiprocessor support). In 1974, I started migrating a bunch of CP67 stuff to VM370 R2. I had also done an automated benchmarking system and it was the 1st thing I migrated ... however, VM370 couldn't complete a full set of benchmarks w/o crashing ... so the next thing I had to migrate was the CP67 kernel synchronization & serialization function ... in order for VM370 to complete the benchmark series. Then I started migrating a bunch of my enhancements.

For some reason AT&T longlines got an early version of my production VM370 CSC/VM (before the multiprocessor support) ... and over the years moved it to latest IBM 370s and propagated around to other locations. Then comes the early 80s when next new IBM was 3081 ... which was originally a multiprocessor only machine. The IBM corporate marketing rep for AT&T tracks me down to ask for help with retrofitting multiprocessor support to old CSC/VM ... concern was that all those AT&T machines would migrate to the latest Amdahl single processor (which had about the same processing as aggregate of the 3081 two processor).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement and page transfer
https://www.garlic.com/~lynn/subtopic.html#clock
HONE & APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor, tightly-coupled, compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

some cp/40 history
https://www.garlic.com/~lynn/cp40seas1982.txt
melinda's history website
http://www.leeandmelindavarian.com/Melinda#VMHist
http://www.leeandmelindavarian.com/Melinda/neuvm.pdf
http://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

--
virtualization experience starting Jan1968, online at home since Mar1970

third system syndrome, interactive use, The Design of Design

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: third system syndrome, interactive use, The Design of Design
Newsgroups: comp.arch
Date: Wed, 08 May 2024 15:10:20 -1000
EricP <ThatWouldBeTelling@thevillage.com> writes:
Lynn Wheeler wrote: For some reason AT&T longlines got an early version of my production VM370 CSC/VM (before the multiprocessor support) ... and over the years moved it to latest IBM 370s and propagated around to other locations. Then comes the early 80s when next new IBM was 3081 ... which was originally a multiprocessor only machine. The IBM corporate marketing rep for AT&T tracks me down to ask for help with retrofitting multiprocessor support to old CSC/VM ... concern was that all those AT&T machines would migrate to the latest Amdahl single processor (which had about the same processing as aggregate of the 3081 two processor).

Regarding retrofitting multiprocessor support to old CSC/VM, by which I take it you mean adding SMP support to a uni-processor OS, do you remember what changes that entailed? Presumably a lot more than acquiring one big spinlock every time the OS was entered. That seems like a lot of work for one person.


re:
https://www.garlic.com/~lynn/2024c.html#50 third system syndrome, interactive use, The Design of Design

Charlie had invented compare&swap (the name chosen for his initials, CAS) when he was doing fine-grain CP/67 multiprocessor locking at the science center ... when it was presented to the 370 architecture owners for adding to 370, they said that the POK favorite son operating system (OS/360 MVT/MVS) people claimed 360/67 test&set was sufficient (i.e. they had a big kernel spin-lock) ... this also accounted for MVS documentation saying that two-processor support only had 1.2-1.5 times the throughput of a single processor.
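
Rough modern analogy of the difference (my own sketch using C11 atomics as a stand-in, not the actual 360/370 instructions or CP67 code): a single test&set style kernel spin-lock versus fine-grain compare&swap locking per structure.

#include <stdatomic.h>
#include <stdbool.h>

/* one big kernel spin-lock, test&set style: every processor entering the
   kernel serializes here (the "test&set is sufficient" approach) */
static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;

void kernel_enter(void) { while (atomic_flag_test_and_set(&kernel_lock)) ; }
void kernel_exit(void)  { atomic_flag_clear(&kernel_lock); }

/* fine-grain compare&swap style: each structure carries its own lock word,
   so unrelated work on other processors isn't blocked */
struct queue { _Atomic int lock; /* ... queue fields ... */ };

bool queue_trylock(struct queue *q)
{
    int expected = 0;
    return atomic_compare_exchange_strong(&q->lock, &expected, 1);
}

void queue_unlock(struct queue *q)
{
    atomic_store(&q->lock, 0);
}

With the single big lock, all kernel work serializes (hence only 1.2-1.5x throughput for two processors); with per-structure compare&swap, two processors only contend when they touch the same structure.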

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

I had initially done the multiprocessor kernel re-org for the VM/370 Release 2 based CSC/VM ... but not the actual multiprocessor support. The internal world-wide sales&marketing support HONE systems were a long-time customer for my enhanced CSC/VMs and then the US HONE datacenters were consolidated in silicon valley (trivia: when facebook 1st moves into silicon valley, it was into a new bldg built next door to the former US HONE consolidated datacenter). They had added "loosely-coupled" shared DASD support to a complex of eight large systems with load-balancing and fall-over. I then added SMP, tightly-coupled, multiprocessor support to the VM/370 Release 3 based CSC/VM so they could add a 2nd processor to each system (for 16 processors total). Their two-processor systems were getting twice the throughput of a single processor ... a combination of very low overhead SMP, tightly-coupled, multiprocessor locking support and a hack for cache affinity that improved the cache hit ratio (with the faster processing offsetting the multiprocessor overhead).

The VM/370 SMP, tightly-coupled, multiprocessor locking was a rather modest amount of work ... compared to all the other stuff I was doing.

SMP, multiprocessor, tightly-coupled, compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE postings
https://www.garlic.com/~lynn/subtopic.html#hone

trivia: The Future System stuff (to replace all 370) was going on during much of this period. When FS implodes there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 and 3081 efforts in parallel
http://www.jfsowa.com/computer/memo125.htm

about the same time, I'm roped into helping with a 16-processor tightly-coupled 370 effort and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips) ... everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system had (effective) 16-processor support (aka their spin-lock; POK doesn't ship a 16-processor SMP until after the turn of the century). Then the head of POK invites some of us to never visit POK again. The head of POK also manages to convince corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (supposedly otherwise they wouldn't be able to ship MVS/XA on time) ... Endicott eventually manages to save the VM370 product mission for the low&midrange ... but has to recreate a VM370 development group from scratch.

I then transfer out to the west coast and get to wander around (both IBM & non-IBM) datacenters in silicon valley, including disk engineering (bldg14) and disk product test (bldg15) across the street. At the time they are running prescheduled, 7x24, stand-alone testing ... and had recently tried MVS but it had 15min mean-time-between-failure (in that environment of lots of faulty hardware). I offer to rewrite the I/O supervisor to make it bullet-proof and never fail so they can have any amount of on-demand testing, greatly improving productivity (downside: any time they have problems, they imply it's my software and I have to spend increasing time playing disk engineer diagnosing their hardware problems). I do an (internal only) San Jose Research report on the I/O integrity work and happen to mention the MVS 15min MTBF, bringing down the wrath of the MVS organization on my head.

posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

backward architecture, The Design of Design

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: backward architecture, The Design of Design
Newsgroups: comp.arch
Date: Wed, 08 May 2024 15:43:52 -1000
John Levine <johnl@taugh.com> writes:
implementations of each one. So when they went to S/370, there was the 370/115, /125, /135, /138, /145, /148, /158, and /168 which were upward and downward compatible as were the 303x and 434x series. The /155 and /165 were originally missing the paging hardware but later could be field upgraded.

shortly after joining IBM I get con'ed into helping the 370/195 group add multi-threading; the 195 pipeline didn't have branch prediction, speculative execution, etc ... so conditional branches drained the pipeline ... and most codes only ran at half rate. Multi-threading would simulate two-processor operation ... and two i-streams each running at half rate might keep aggregate 195 throughput much higher (modulo the OS360 MVT multiprocessor support only having 1.2-1.5 times the throughput of a single processor).
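
Toy arithmetic for that argument (my own illustration; the 0.5-of-peak figure is just the "half rate" above, and the 1.2-1.5 factor is the MVT two-processor number):

#include <stdio.h>

int main(void)
{
    const double one_stream = 0.5;              /* "half rate" from pipeline drains   */
    const double hw_cap     = 2 * one_stream;   /* two i-streams, hardware potential  */
    const double mvt_lo     = 1.2 * one_stream; /* delivered with MVT 2-CPU support   */
    const double mvt_hi     = 1.5 * one_stream;

    printf("one i-stream:                 %.0f%% of peak\n", one_stream * 100);
    printf("two i-streams, hardware:  up to %.0f%% of peak\n", hw_cap * 100);
    printf("two i-streams under MVT:  ~%.0f%%-%.0f%% of peak\n",
           mvt_lo * 100, mvt_hi * 100);
    return 0;
}

i.e. the hardware could approach full utilization, but the MVT SMP software factor would cap the delivered gain well below that.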

SMP, multiprocessor, tightly-coupled, compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

a little over a decade ago I was asked to track down the decision to add virtual memory to all 370s and found the staff member to the executive making the decision. Basically OS/360 MVT storage management was so bad that execution regions had to be specified four times larger than used; as a result a 1mbyte 370/165 normally would only run four regions concurrently, insufficient to keep the system busy and justified. Mapping MVT to a 16mbyte virtual address space (aka VS2/SVS) would allow increasing the number of concurrently running regions by a factor of four (with little or no paging), keeping 165 systems busy ... overlapping execution with disk I/O.
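
The arithmetic behind that, as a sketch (the 64kbyte region working set is a hypothetical figure of mine; only the 4x over-specification and the 1mbyte 165 real storage come from the above):

#include <stdio.h>

int main(void)
{
    const int real_kb  = 1024;  /* 1mbyte 370/165 real storage             */
    const int used_kb  = 64;    /* hypothetical per-region working set     */
    const int overspec = 4;     /* regions specified 4x larger than used   */

    int mvt_regions = real_kb / (used_kb * overspec); /* real storage must hold the specified size */
    int svs_regions = real_kb / used_kb;              /* real storage only backs pages actually used */
    printf("MVT, real storage:     %2d concurrent regions\n", mvt_regions);
    printf("VS2/SVS, 16mb virtual: %2d concurrent regions (~4x)\n", svs_regions);
    return 0;
}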

archived posts with pieces of email with staff to executive making virtual memory decision
https://www.garlic.com/~lynn/2011d.html#73

I had gotten into something of a dustup with VS2/SVS, claiming their page replacement algorithm was making poor choices ... they eventually fell back to the position that since they were expecting nearly negligible paging rates, it wouldn't make any difference.

Along the way, the 370/165 engineers said that if they had to retrofit the full 370 virtual memory architecture ... it would slip the announcement date by six months ... so the decision was made to drop features ... and all the other systems had to retrench to the 165 subset ... and any software dependent on the dropped features had to be reworked. For VM/370, they were planning on using R/O shared segment protection (one of the features dropped for the 370/165) for sharing CMS pages ... and so had to substitute a real kludge. Also the 370/195 multi-threading was canceled ... since it was deemed too difficult to upgrade the 195 for virtual memory.

Amdahl had won the battle to make ACS 360-compatible ... folklore is that executives then killed ACS/360 because it would advance the state-of-the-art too fast and IBM could lose control of the market (Amdahl leaves IBM shortly later)
https://people.computing.clemson.edu/~mark/acs_end.html
... above also has multi-threading reference.

a couple old posts mentioning virtual memory and having to do 370/165 subset
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2008h.html#96 Old hardware

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3705 & 3725

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3705 & 3725
Date: 09 May, 2024
Blog: Facebook
Early on, there was an attempt by some of the science center people to get CPD to use the (Series/1) Peachtree processor for the 3705 (much better than the UC processor). Mid-80s, some branch and Boca people rope me into working on putting out an NCP/VTAM implementation, done on Series/1 clusters by one of the baby bells, as a "type-1" product (with a rapid effort to move it to RS/6000/RIOS). About the same time, I reported to the same executive as the person that created (AWP164) what became APPN ... I would kid him about coming over to work on real networking (I also had the HSDT project, T1 and faster computer links, working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers) ... since the SNA people weren't going to appreciate what he was doing (when it came time to announce APPN, CPD non-concurred ... after some escalation, the APPN announcement letter was carefully rewritten to not imply any relationship between APPN and SNA).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

I did a presentation to the communication group SNA ARB in Raleigh comparing the Series/1 implementation to the 3725. Then I got a constant barrage of attacks that the comparison wasn't valid ... but they could never say why. The Series/1 numbers were taken from a real live configuration at the baby bell ... and the 3725 numbers were taken from the communication group's HONE 3725 configurator (used by sales&marketing to configure and order 3725s). Several IBMers that had long experience with communication group IBM politics attempted to erect all sorts of countermeasures. What the communication group did next to torpedo the product can only be described as truth is stranger than fiction. Archived post with part of my 86 presentation at the Raleigh SNA ARB meeting:
https://www.garlic.com/~lynn/99.html#67
part of presentation by one of the baby bells at 86 IBM COMMON user group conference
https://www.garlic.com/~lynn/99.html#70

sales&marketing support HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone

Note the Series/1 configuration had no single point of failure. Also the IMS DBMS group wanted it for "IMS hot standby" ... VTAM session establishment was a really heavyweight operation. While IMS hot standby could fall over in a couple of minutes, it could take VTAM over an hour to get a large terminal network back up and fully operational ... even on a large 3090. The Series/1 could support "shadow sessions" ... where the "hot standby" machine already had all the VTAM sessions waiting to go.

trivia: late 80s, a senior disk engineer got a talk scheduled at the internal, annual communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing sales falling, with data fleeing datacenters to more distributed-computing-friendly platforms. The disk division had come up with a number of solutions, but they were all being vetoed by the communication group. The communication group had a stranglehold on datacenters with their corporate strategic ownership of everything that crossed datacenter walls, and was fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm. The communication group stranglehold wasn't just disks, and a couple years later IBM had one of the largest losses in the history of US corporations and was being reorganized into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the company breakup. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)

posts about Communication group taking down the disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal
AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

more trivia: I had taken a two-credit-hr intro to fortran/computers and at the end of the semester was hired to re-implement 1401 MPIO on a 360/30. The univ. was getting a 360/67 for tss/360 to replace its 709/1401 ... and temporarily got a 360/30 to replace the 1401 pending arrival of the 360/67 (sort of part of getting 360 experience). The univ. shut down the datacenter on weekends and I would have the place dedicated (although 48hrs w/o sleep made monday morning classes hard). I was given a lot of hardware & software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... and within a few weeks had a 2000-card assembler program. The 360/67 arrived within a year of my taking the intro class and I was hired fulltime responsible for os/360 (tss/360 never came to fruition).

Later, the science center came out to install CP/67 (3rd installation after cambridge itself and MIT lincoln labs). I mostly played with it in my weekend dedicated time ... rewriting lots of code improving os/360 running in a virtual machine (at the start an os/360 test jobstream ran 322secs on the bare machine and 856secs in a virtual machine, CP/67 CPU 534secs). Within six months I had CP67 CPU down to 113secs (from 534secs).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

CP67 had 1052 and 2741 terminal support with automagic terminal type identification (using the SAD CCW to switch the terminal-type port scanner for a line). The univ. had gotten some number of TTY ASCII terminals and I added ASCII support (integrated with the automagic terminal type identification; trivia: when the ASCII terminal-type port scanner arrived for the controller, it came in a HEATHKIT box). I then wanted to do a single dialup number for all terminal types ("hunt group") ... it almost worked, except IBM had taken a short cut and hard-wired the line speed to each port. This kicked off a clone controller project: build a channel interface board for an Interdata/3 (which had a 360-like instruction set) programmed to simulate the IBM controller, with the addition that it supported automatic terminal line speed. Later it was enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces ... and four of us get written up as responsible for (some part of) the IBM clone controller business. Interdata was selling the boxes as clone controllers, later with the PE logo after Perkin-Elmer buys Interdata.
https://en.wikipedia.org/wiki/Interdata
Interdata bought by Perkin-Elmer
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
later spun off as Concurrent
https://en.wikipedia.org/wiki/Concurrent_Computer_Corporation

Around the turn of the century, I visited a datacenter that had a descendant of the box handling nearly all credit-card dial-up terminal transactions east of the Mississippi.

plug compatible 360 controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

a little ASCII topic drift: 360 was originally supposed to be an ASCII machine, but the ASCII unit record gear wasn't ready, so "temporarily" it was going to use BCD gear as EBCDIC ... biggest computer goof ever:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3705 & 3725

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3705 & 3725
Date: 09 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725

The communication group was also trying to block release of mainframe TCP/IP support. When that got reversed, they changed their tactic and said that it had to be released through them since they had corporate responsibility for everything that crossed datacenter walls; what ships gets aggregate 44kbytes/sec using nearly a full 3090 processor. I then add RFC1044 support and in some tuning tests at Cray Research between a Cray and an IBM 4341 got sustained channel throughput using only a modest amount of 4341 CPU (something like a 500 times increase in bytes moved per instruction executed).

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

Early 90s, communication group hires silicon valley contractor to implement TCP/IP support directly in VTAM. What he demonstrates gets higher throughput than LU6.2. He is then told that everybody knows that a "proper" TCP/IP implementation is much slower than LU6.2 and they would only be paying for a "proper" implementation.

NSF "$20m" trivia: congress cuts the budget, some other things happen and finally an RFP was released (in part based on what we already had running). from 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet. I didn't take the following offer:

Date: 4 January 1988, 14:12:35 EST
To: distribution
Subject: NSFNET Technical Review Board Kickoff Meeting 1/7/88

On November 24th, 1987 the National Science Foundation announced that MERIT, supported by IBM and MCI was selected to develop and operate the evolving NSF Network and gateways integrating 12 regional networks. The Computing Systems Department at IBM Research will design and develop many of the key software components for this project including the Nodal Switching System, the Network Management applications for NETVIEW and some of the Information Services Tools.

I am asking you to participate on an IBM NSFNET Technical Review Board. The purpose of this Board is to both review the technical direction of the work undertaken by IBM in support of the NSF Network, and ensure that this work is proceeding in the right direction. Your participation will also ensure that the work complements our strategic products and provides benefits to your organization. The NSFNET project provides us with an opportunity to assume leadership in national networking, and your participation on this Board will help achieve this goal.


... snip ... top of post, old email index, NSFNET email

... somebody had been collecting executive misinformation email (not only forcing the internal network to SNA, but claiming that SNA could be used for NSFnet) and it was forwarded to us ... old post with email heavily clipped and redacted (to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

Also late 80s, got the HA/6000 project, originally for the NYTimes to move their newspaper system (ATEX) from VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres, which had VAXCluster support in the same source base as Unix). Early jan1992, Oracle meeting, AWD/Hester tells the Oracle CEO that IBM would have 16 processor clusters mid92 and 128 processor clusters ye92.

We had been studying service outages ... commodity hardware was getting much more reliable and outages were increasingly environmental (floods, earthquakes, hurricanes, etc), and out marketing I coined the terms disaster survivability and geographic survivability (in contrast to disaster/recovery). The S/88 Product Administrator started taking us around to their customers and got me to write a section for the corporate continuous availability strategy document (however, it got pulled when both Rochester/AS400 and POK/mainframe complained that they couldn't meet the requirements).

Also, end of jan1992, cluster scale-up was transferred for announce as "IBM Supercomputer" (for technical/scientific "only") and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

backward architecture, The Design of Design

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: backward architecture, The Design of Design
Newsgroups: comp.arch
Date: Thu, 09 May 2024 17:45:06 -1000
Lynn Wheeler <lynn@garlic.com> writes:
a little over a decade ago I was asked to track down the decision to add virtual memory to all 370s and found a staff member to the executive making the decision. Basically, OS/360 MVT storage management was so bad that execution regions had to be specified four times larger than used; as a result, a 1mbyte 370/165 normally would only run four regions concurrently, insufficient to keep the system busy and justified. Mapping MVT to a 16mbyte virtual address space (aka VS2/SVS) would allow increasing the number of concurrently running regions by a factor of four (with little or no paging), keeping 165 systems busy ... overlapping execution with disk I/O.

re:
https://www.garlic.com/~lynn/2024c.html#50 third system syndrome, interactive use, The Design of Design
https://www.garlic.com/~lynn/2024c.html#51 third system syndrome, interactive use, The Design of Design
https://www.garlic.com/~lynn/2024c.html#52 backward architecture, The Design of Design

In some sense IBM CKD DASD was a technology trade-off: using disk & channel capacity to search for information because real memory was too limited to keep track of it. By the mid-70s that trade-off was starting to invert. In the early 80s, I was also pontificating that since the mid-60s 360, relative system disk throughput had declined by an order of magnitude ... disks had gotten 3-5 times faster while systems had gotten 40-50 times faster. A disk division executive took exception to my statements and assigned the division performance group to refute them. After a couple weeks, they came back and explained that I had understated the problem. They then respun the analysis into recommendations for optimizing disk configurations for system throughput ... which was presented at IBM mainframe user groups.
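
The order-of-magnitude claim falls straight out of those two numbers (illustrative arithmetic only):

  # relative system disk throughput since mid-60s 360
  disk_speedup   = 4     # disks got ~3-5 times faster
  system_speedup = 45    # systems got ~40-50 times faster
  print(f"relative disk throughput: {disk_speedup/system_speedup:.2f} (roughly an order of magnitude decline)")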

Now the MVT->VS2/SVS was actually capped at 15 concurrently executing regions because it was (still) using the 4-bit storage protect keys to keep the regions separate (in a single 16mbyte virtual address space) ... which prompted SVS->MVS with a different virtual address space for each executing region. However, OS/360 history was heavily pointer-passing APIs ... and to facilitate kernel calls, an 8mbyte image of the MVS kernel was mapped into each 16mbyte application address space (so kernel code could easily fetch/store application data). Also, for MVS, MVT subsystems were given their own virtual address spaces ... so for passing API parameters and returning information, a one-segment common segment area (CSA) was (also) mapped into every 16mbyte virtual address space (leaving 7mbytes for applications). However, the CSA space requirement is somewhat proportional to the number of subsystems and the number of concurrently running applications ... and CSA quickly becomes a multiple-segment area, the "Common System Area" ... and by the late 70s and 3033, it was frequently 5-6mbytes (leaving 2-3mbytes for applications) and threatening to become 8mbytes (leaving zero).
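
A small sketch of the squeeze (the 16mbyte/8mbyte/CSA sizes are the ones in the paragraph above; the layout arithmetic is just illustrative):

  # MVS 24-bit (16mbyte) virtual address space squeeze
  ADDRESS_SPACE = 16   # mbytes, 24-bit addressing
  KERNEL_IMAGE  = 8    # mbytes of MVS kernel mapped into every address space

  def application_mbytes(csa_mbytes):
      # space left for the application after the kernel image and CSA
      return ADDRESS_SPACE - KERNEL_IMAGE - csa_mbytes

  for csa in (1, 5, 6, 8):   # CSA growing from one segment toward 8mbytes
      print(f"CSA {csa}mbyte -> {application_mbytes(csa)}mbyte left for application")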

That was part of the mad rush to get to 370/XA (31-bit) and MVS/XA (while separate virtual address spaces theoretically allowed for a large number of concurrently executing programs, overlapping execution with waiting on disk I/O, the CSA kludge had severely capped it).

There were a number of 3033 temporary hacks. One was retrofitting part of the 370/XA access registers to the 3033 as "dual-address space": a called subsystem in its own address space could have a secondary address space pointing to the calling application's address space ... so CSA wasn't required for passing & returning API information. They also took two "unused" bits from the page table entry as a prefix to the real page number ... while all instructions could still only specify 24-bit real & virtual addresses (16mbytes), it was possible to have virtual->real mapping of up to 64mbytes for execution (attaching more than 16mbytes of real storage to a 3033).
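
The two "unused" page-table bits work out like this (a sketch of the arithmetic, assuming 4kbyte pages):

  # 370 24-bit addressing with 4kbyte pages
  page_size     = 4096
  frame_bits    = 24 - 12          # 12-bit page frame number -> 16mbytes
  extended_bits = frame_bits + 2   # prefix the two "unused" bits -> 14-bit frame number
  print(f"24-bit:  {(2**frame_bits) * page_size // 2**20} mbytes of real storage")
  print(f"+2 bits: {(2**extended_bits) * page_size // 2**20} mbytes mappable via the page tables")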

a couple posts mentioning justification for adding virtual memory to all 370s, CSA, common segment/system area, and 16Aug1984, SHARE 63, B874 presentation
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2019b.html#94 MVS Boney Fingers
https://www.garlic.com/~lynn/2009k.html#52 Hercules; more information requested

part of email exchange about 370 virtual memory justification
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

--
virtualization experience starting Jan1968, online at home since Mar1970

Token-Ring Again

From: Lynn Wheeler <lynn@garlic.com>
Subject: Token-Ring Again
Date: 09 May, 2024
Blog: Facebook
IBM AWD (workstation) had done their own (AT-bus) 4mbit token-ring card for the PC/RT. However, for RS/6000 and microchannel, AWD was told that they could only use standard PS2 microchannel cards (which were heavily performance-kneecapped by the communication group); a typical example was that the $800 microchannel 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card (and also significantly lower throughput than the $69 10mbit ethernet card that benchmarked at 8.5mbits/sec).

The new Almaden Research building was heavily provisioned with CAT4, presumably for 16mbit token-ring ... but they found 10mbit ethernet had higher aggregate bandwidth and lower latency, besides the $69 10mbit ethernet card having much higher card throughput than the $800 (heavily performance-kneecapped) 16mbit token-ring cards.

A 300-station configuration was $20,700 for $69 high-performance 10mbit ethernet cards compared to $240,000 for $800 16mbit t/r cards (with significantly lower throughput), which tended to be configured on a single LAN. It was possible to get high-performance TCP/IP routers with mainframe (not just IBM) channel interfaces, up to 16 Ethernet LANs, FDDI, T1&T3 telco interfaces (and later an RS/6000 SLA interface) for $40,000. The difference in card costs could buy five such high-performance routers, each with channel interfaces, for an aggregate of 80 Ethernet LAN interfaces (possibly configuring the 300 RS/6000s about four per LAN).
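
Spelling out the card-cost arithmetic from the paragraph above:

  stations      = 300
  ethernet_card = 69        # $ high-performance 10mbit ethernet card
  tr_card       = 800       # $ 16mbit token-ring microchannel card
  router        = 40_000    # $ high-performance TCP/IP router, 16 ethernet interfaces each

  ethernet_total = stations * ethernet_card   # $20,700
  tr_total       = stations * tr_card         # $240,000
  difference     = tr_total - ethernet_total  # $219,300
  routers        = difference // router       # 5 routers
  lans           = routers * 16               # 80 ethernet LANs
  print(f"${ethernet_total:,} vs ${tr_total:,}; difference ${difference:,} buys "
        f"{routers} routers, {lans} LAN interfaces (~{stations/lans:.1f} stations/LAN)")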

Later, RS/6000 SLA was sort of an incompatible, enhanced, faster, full-duplex modification of ESCON. The engineer responsible for SLA then wanted to do an 800mbit/sec version, but I managed to con him into joining the FCS standards committee instead (in 1988, the IBM branch office had con'ed me into helping LLNL standardize some serial stuff they had been playing with, which quickly becomes FCS, initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate; when ESCON eventually shipped, it was already obsolete).

My wife had been asked to be co-author of an IBM response to a gov. request for high-security, distributed, campus-like operation, where she included 3-tier architecture in the response. We were then out making customer executive presentations for TCP/IP, Ethernet, high-performance, high-throughput, 3-tier, secure operation ... and getting lots of grief and attacks (involving all sorts of misinformation) from the communication group and token-ring forces.

fibre-channel standards (FCS) and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
3 tier, middle layer, saa posts
https://www.garlic.com/~lynn/subnetwork.html#3tier

recent token-ring posts
https://www.garlic.com/~lynn/2024c.html#47 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024c.html#33 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#47 OS2
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024b.html#0 Assembler language and code optimization
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe, TCP/IP, Token-ring, Ethernet

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe, TCP/IP, Token-ring, Ethernet
Date: 10 May, 2024
Blog: Facebook
... maybe too much information

1st half 80s token-ring, ethernet & IBM mainframe
https://www.garlic.com/~lynn/2024c.html#33 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe

Then the communication group was fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm) and opposing the release of mainframe tcp/ip support. That was reversed and the communication group changed their strategy, saying that since they had corporate strategic responsibility for everything that crosses datacenter walls, TCP/IP had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did the changes to support RFC1044 and in some tuning tests at Cray Research, between a Cray and a 4341, got sustained channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

Part of the difference: TCP/IP shipped with support for the IBM 8232 LAN "bridge" (PCCA, a rack PC/AT with channel interface) ... so IP->LAN/MAC processing had to be done in the mainframe. RFC1044 support included high-performance, channel-attached TCP/IP router support (for both IBM and non-IBM channels) ... allowing some of the processing to be offloaded to the router. A high-performance TCP/IP router, for about the same price as the 8232, supported: (not just IBM) mainframe channel interfaces, up to 16 Ethernet LANs, T1&T3 telco interfaces, FDDI LAN, and later RS/6000 SLA and FCS. RS/6000 SLA was sort of an enhanced ESCON: incompatible, faster throughput, full-duplex with concurrent transfers in both directions. The engineer then wants to do an 800mbit version, but I convince him to join the FCS committee instead (in 1988, the IBM branch office asks me to help LLNL get some serial stuff they were playing with standardized, which quickly becomes the fibre-channel standard (FCS, including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, 200mbytes/sec aggregate; when ESCON was eventually released, it was already obsolete).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

Mainframe TCP/IP started out on VM370, implemented in VS/PASCAL ... it got ported to MVS by simulating some of the VM370 "diagnose" instructions. Early 90s, the communication group hires a silicon valley contractor to implement TCP/IP support directly in VTAM. What he demonstrates gets much higher throughput than LU6.2. He is then told that "everybody knows" a "proper" TCP/IP implementation is much slower than LU6.2 and they would only be paying for a "proper" implementation.

trivia: in the late 80s there was an analysis of UNIX TCP/IP implementations compared to VTAM LU62. UNIX TCP/IP had 5k instruction pathlength and five buffer copies; VTAM LU62 had 160k pathlength and 15 buffer copies.

RFC1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044

from post in another group ... IBM making sure that OSI stays "aligned" with SNA:
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."

... snip ...

some references to co-worker at science center, we then transfer out to IBM SJR in 1977
https://www.garlic.com/~lynn/2024c.html#30 GML and W3C
https://www.garlic.com/~lynn/2024c.html#27 PDP1 Spacewar
https://www.garlic.com/~lynn/2024c.html#22 FOILS
https://www.garlic.com/~lynn/2024b.html#104 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#82 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#32 HA/CMP
https://www.garlic.com/~lynn/2024.html#110 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2024.html#55 EARN 40th Anniversary Conference
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#31 MIT Area Computing

The co-worker was responsible for the cambridge CP/67 wide-area network, which morphs into the internal corporate network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s) and was also used for the corporate-sponsored univ BITNET (also larger than the internet for a time) ... Ed took his ideas about internetworking to DARPA in 1975
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

GML was also invented at the science center in 1969; it morphs into the ISO standard SGML a decade later, and after another decade morphs into HTML at CERN. The first webserver in the states is on the Stanford SLAC VM370 system (CP67, done at the science center, morphs into VM370 after the decision to add virtual memory to all 370s).
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML invented at science center 1969, SGML, HTML, etc
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

also references working with NSF director for NSF Supercomputer center interconnect, NSFnet, precursor to modern internet
https://www.garlic.com/~lynn/2024c.html#54 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#42 Netscape
https://www.garlic.com/~lynn/2024c.html#22 FOILS
https://www.garlic.com/~lynn/2024b.html#112 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#32 HA/CMP
https://www.garlic.com/~lynn/2024b.html#10 Some NSFNET, Internet, and other networking background
https://www.garlic.com/~lynn/2024.html#119 Transfer SJR to YKT
https://www.garlic.com/~lynn/2024.html#111 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2024.html#57 EARN 40th Anniversary Conference
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe

28Mar1986 Preliminary Announcement:
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

... funding new software at NCSA begat MOSAIC & HTTP
http://www.ncsa.illinois.edu/enabling/mosaic

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

some recent posts mentioning wide-area network
https://www.garlic.com/~lynn/2024c.html#30 GML and W3C
https://www.garlic.com/~lynn/2024c.html#27 PDP1 Spacewar
https://www.garlic.com/~lynn/2024c.html#22 FOILS
https://www.garlic.com/~lynn/2024b.html#109 IBM->SMTP/822 conversion
https://www.garlic.com/~lynn/2024b.html#104 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#89 Dialed in - a history of BBSing
https://www.garlic.com/~lynn/2024b.html#86 Vintage BITNET
https://www.garlic.com/~lynn/2024b.html#82 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#32 HA/CMP
https://www.garlic.com/~lynn/2024.html#110 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2024.html#65 IBM Mainframes and Education Infrastructure
https://www.garlic.com/~lynn/2024.html#55 EARN 40th Anniversary Conference
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#31 MIT Area Computing
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2023g.html#24 Vintage ARPANET/Internet
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023f.html#4 GML/SGML separating content and format
https://www.garlic.com/~lynn/2022d.html#72 WAIS. Z39.50
https://www.garlic.com/~lynn/2021c.html#68 Online History

past posts mentioning IBM RS/6000 "SLA"
https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024b.html#52 IBM Token-Ring
https://www.garlic.com/~lynn/2023g.html#76 Another IBM Downturn
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023e.html#78 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023c.html#92 TCP/IP, Internet, Ethernet, 3Tier
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#19 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022b.html#85 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#66 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2017d.html#31 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016b.html#74 Fibre Channel is still alive and kicking
https://www.garlic.com/~lynn/2016b.html#69 Fibre Channel is still alive and kicking
https://www.garlic.com/~lynn/2014h.html#87 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2013g.html#41 A History Of Mainframe Computing
https://www.garlic.com/~lynn/2013g.html#2 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2012k.html#69 ESCON
https://www.garlic.com/~lynn/2011p.html#40 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011g.html#74 We list every company in the world that has a mainframe computer
https://www.garlic.com/~lynn/2010h.html#63 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2010f.html#7 What was the historical price of a P/390?
https://www.garlic.com/~lynn/2009s.html#36 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#32 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#8 Union Pacific Railroad ditches its mainframe for SOA
https://www.garlic.com/~lynn/2009q.html#32 Mainframe running 1,500 Linux servers?
https://www.garlic.com/~lynn/2009p.html#85 Anyone going to Supercomputers '09 in Portland?
https://www.garlic.com/~lynn/2009j.html#64 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009j.html#59 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2008q.html#60 Mainframe files under AIX etc
https://www.garlic.com/~lynn/2007o.html#54 mainframe performance, was Is a RISC chip more expensive?
https://www.garlic.com/~lynn/2006x.html#13 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#11 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006p.html#46 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006m.html#52 TCP/IP and connecting z to alternate platforms
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2005v.html#0 DMV systems?
https://www.garlic.com/~lynn/2005l.html#26 ESCON to FICON conversion
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2004n.html#45 Shipwrecks
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2000f.html#31 OT?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe, TCP/IP, Token-ring, Ethernet

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe, TCP/IP, Token-ring, Ethernet
Date: 12 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#57 IBM Mainframe, TCP/IP, Token-ring, Ethernet

The HSDT project, starting early 80s, had T1 and faster computer links (both satellite and terrestrial) ... and very early had dynamic adaptive rate-based pacing.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

As an aside, 1988 ACM SIGCOMM had two papers ... 1) one about Ethernet throughput tests of a 30-station network, where effective throughput only dropped from 8.5mbits to 8mbits when all the device drivers were put into a low-level loop constantly transmitting minimum-sized packets, and 2) one showing TCP "slow-start" (window) congestion control to be non-stable in a large network (multiple intermediate routers) with bursty traffic.

I was on Chesson's XTP TAB and wrote dynamic adaptive rate-based pacing into the XTP specification
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
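
A minimal sketch of the rate-based idea (not the actual XTP mechanism; the adjustment constants are made up): instead of opening and closing a window, the sender spaces packets by an inter-packet interval and adapts that interval to congestion feedback.

  # rate-based pacing sketch: adapt the inter-packet send interval, not a window
  class RatePacer:
      def __init__(self, rate_bytes_sec, packet_size):
          self.packet_size = packet_size
          self.interval = packet_size / rate_bytes_sec  # seconds between sends

      def on_feedback(self, congested):
          # illustrative multiplicative adjustment; real schemes differ
          self.interval *= 1.5 if congested else 0.95

      def next_send_time(self, last_send_time):
          return last_send_time + self.interval

  p = RatePacer(rate_bytes_sec=1_000_000, packet_size=1500)
  print(f"initial interval {p.interval*1000:.2f}ms between packets")
  p.on_feedback(congested=True)
  print(f"after congestion feedback {p.interval*1000:.2f}ms")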

3 tier, middle layer, saa posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

posts mentioning 1988 ACM SIGCOMM papers (ethernet throughput and/or slow-start non-stable)
https://www.garlic.com/~lynn/2024b.html#56 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023b.html#53 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#19 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#84 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021b.html#45 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#17 IBM Kneecapping products
https://www.garlic.com/~lynn/2019.html#74 21 random but totally appropriate ways to celebrate the World Wide Web's 30th birthday
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring
https://www.garlic.com/~lynn/2017k.html#18 THE IBM PC THAT BROKE IBM
https://www.garlic.com/~lynn/2017d.html#29 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2017d.html#28 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2015h.html#108 25 Years: How the Web began
https://www.garlic.com/~lynn/2015d.html#41 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014m.html#128 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013m.html#30 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013m.html#18 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013i.html#83 Metcalfe's Law: How Ethernet Beat IBM and Changed the World
https://www.garlic.com/~lynn/2013i.html#46 OT: "Highway Patrol" back on TV
https://www.garlic.com/~lynn/2013b.html#32 Ethernet at 40: Its daddy reveals its turbulent youth
https://www.garlic.com/~lynn/2012g.html#39 Van Jacobson Denies Averting 1980s Internet Meltdown
https://www.garlic.com/~lynn/2010b.html#68 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009m.html#83 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2009m.html#80 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2005q.html#22 tcp-ip concept
https://www.garlic.com/~lynn/2005q.html#18 Ethernet, Aloha and CSMA/CD -
https://www.garlic.com/~lynn/2004e.html#17 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2003k.html#57 Window field in TCP header goes small
https://www.garlic.com/~lynn/2003j.html#46 Fast TCP
https://www.garlic.com/~lynn/2002q.html#41 ibm time machine in new york times?
https://www.garlic.com/~lynn/2002q.html#40 ibm time machine in new york times?
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2000f.html#39 Ethernet efficiency (was Re: Ms employees begging for food)
https://www.garlic.com/~lynn/2000f.html#38 Ethernet efficiency (was Re: Ms employees begging for food)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "Winchester" Disk

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "Winchester" Disk
Date: 12 May, 2024
Blog: Facebook
1973: "Winchester" pioneers key HDD technology. IBM 3340 employs new low-cost, low-load, landing read/write heads
https://www.computerhistory.org/storageengine/winchester-pioneers-key-hdd-technology/
Derived from the original specification of a system having two spindles each with a disk capacity of 30 MB, Haughton is reported to have said: "If it's a 30-30, then it must be a Winchester" after the .30-30 Winchester rifle cartridge. With a load of less than 20 grams, the ferrite read/write head patented by team member Mike Warner started and stopped in contact with the disk on a dedicated landing zone but flew over the disk on an air bearing 18 microinches thick between the magnetic head and spinning disk.

... snip ...

.... other trivia: when I transferred to SJR, I got to wander around IBM & non-IBM datacenters in silicon valley ... including disk engineering (bldg14) and disk product test (bldg15) across the street. They were running prescheduled, around-the-clock, stand-alone mainframe testing. They mentioned that they had recently tried MVS, but it had a 15min mean-time-between-failure (in that environment). I offer to rewrite the I/O supervisor to make it bullet-proof and never fail, enabling any amount of on-demand, concurrent testing, greatly improving productivity (downside was they started blaming me for any problems, and I had to spend increasing amounts of time playing disk engineer, shooting their hardware issues).

Bldg15 would get very early engineering processor systems for disk I/O testing ... when they got a 3033, disk I/O testing only took a percent or two of the processing, so we scrounged a string of 3330s and a 3830 controller and set up our own private online service. At the time there was an air-bearing simulation ... part of thin-film head design
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

being run on research's 370/195, but only able to get a few turn-arounds a month. We set him up on the bldg15 3033 and even though it only had approx. half the processing throughput (of the 370/195), he was able to get several turn-arounds a day.

posts mentioning getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

some recent posts mentioning air-bearing simulation
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#25 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#9 3880 DASD Controller
https://www.garlic.com/~lynn/2022c.html#74 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#73 IBM Disks
https://www.garlic.com/~lynn/2022.html#64 370/195
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#28 IBM Cottle Plant Site
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "Winchester" Disk

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "Winchester" Disk
Date: 12 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#59 IBM "Winchester" Disk

The last product I did at IBM started out as HA/6000, for the NYTimes to port their newspaper system (ATEX) from VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres, which had VAXCluster support in the same source base as Unix). Early jan1992, Oracle meeting, AWD/Hester tells the Oracle CEO that IBM would have 16 processor clusters mid92 and 128 processor clusters ye92.

We had been studying service outages ... commodity hardware was getting much more reliable and outages were increasingly environmental (floods, earthquakes, hurricanes, etc), and out marketing I coined the terms "disaster survivable" and "geographic survivable" (in contrast to disaster/recovery). The S/88 Product Administrator started taking us around to their customers and got me to write a section for the corporate continuous availability strategy document (however, it got pulled when both Rochester/AS400 and POK/mainframe complained that they couldn't meet the requirements). Also, end of jan1992, cluster scale-up was transferred for announce as "IBM Supercomputer" (for technical/scientific "only") and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).

The Hursley 9333 was really great price/performance ... basically 80mbit full-duplex serial copper with SCSI commands at the opposite end ... and it had much higher throughput than identical SCSI disks attached via SCSI. Note: in 1988, the IBM branch office asked me to help LLNL get some serial stuff they were playing with standardized, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate. I was hoping to get the Hursley 9333 enhanced into interoperable, fractional-speed FCS ... but we had left IBM and it becomes SSA instead:
https://en.wikipedia.org/wiki/Serial_Storage_Architecture

Note: by the time ESCON ships (17mbytes/sec), it was already obsolete. Then some POK engineers become involved with FCS and define a heavyweight protocol (that significantly cuts the native throughput), which eventually is released as FICON.
https://en.wikipedia.org/wiki/Fibre_Channel
https://en.wikipedia.org/wiki/FICON

The latest public benchmark I've found is the z196 "Peak I/O" that got 2M IOPS using 104 FICON. About the same time, an FCS was announced for E5-2600 server blades claiming over one million IOPS, two such FCS having higher throughput than 104 FICON. Also note: no CKD DASD have been manufactured for decades, all being simulated on industry fixed-block disks. Also, IBM pubs have recommended that SAPs (system assist processors, which actually do the I/O) be kept to a max of 70% CPU (which would have been more like 1.5M IOPS).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
CKD DASD, FBA, fixed-block, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd

some recent posts mentioning "SSA"
https://www.garlic.com/~lynn/2024.html#82 Benchmarks
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#79 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#78 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#97 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2022e.html#47 Best dumb terminal for serial connections
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2021k.html#127 SSA
https://www.garlic.com/~lynn/2021g.html#1 IBM ESCON Experience

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "Winchester" Disk

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "Winchester" Disk
Date: 12 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#59 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024c.html#60 IBM "Winchester" Disk

Internally, a CKD 1.5mbyte/sec 2305 emulation was listed as "1655" ... later, shops running operating systems with FBA (fixed-block architecture) support could get them as 3mbyte/sec FBA SSDs. The folklore is that POK's "VULCAN" (electronic paging device) got canceled when they were told that IBM was selling every memory chip it was making as processor memory (at higher markup than electronic disk).

I had earlier run afoul of "VULCAN" when I was trying to get the 3350FH shipped with "multiple exposure" (multiple subchannel addresses, like the 2305) so fixed-head CCWs could transfer data overlapped with arm motion & rotation. VULCAN got it vetoed because they thought its use for paging could impact their market. By the time VULCAN was canceled, it was too late to resurrect 3350FH multiple exposure.

getting to play disk engineer in bldg14 and bldg15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
CKD DASD, FBA, fixed-block, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

some recent 1655 posts
https://www.garlic.com/~lynn/2024c.html#12 370 Multiprocessor
https://www.garlic.com/~lynn/2024.html#28 IBM Disks and Drums
https://www.garlic.com/~lynn/2023g.html#84 Vintage DASD
https://www.garlic.com/~lynn/2023f.html#49 IBM 3350FH, Vulcan, 1655
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2022d.html#46 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2021j.html#65 IBM DASD
https://www.garlic.com/~lynn/2021f.html#75 Mainframe disks

--
virtualization experience starting Jan1968, online at home since Mar1970

HTTP over TCP

From: Lynn Wheeler <lynn@garlic.com>
Subject: HTTP over TCP
Date: 12 May, 2024
Blog: Facebook
Not long after leaving IBM, I was brought in as a consultant to a small client/server startup. Two former Oracle employees (that we had been working with on IBM's HA/CMP) were there, responsible for something called "commerce server", and wanted to do payment transactions; the startup had also done this technology called "SSL" that they wanted to use. The result is now frequently called "electronic commerce" (I would have authority for everything between the webservers and the financial industry payment networks).

As webserver load was starting to increase ... platforms started seeing 95% CPU load running the FINWAIT list ... and it was at least six months before vendors started shipping fixes. The TCP assumption had been that sessions were relatively long-lived and so the FINWAIT list would be quite short, and implementations just did a linear search for a session in the process of being closed ... but HTTP&HTTPS were quick in&out, and as load went up, FINWAIT lists were reaching thousands of entries (and servers started spending all their time running the FINWAIT list, checking whether each incoming packet was part of a session in the process of being closed). The startup did get a large Sequent server ... Sequent had "fixed" the FINWAIT problem some time before in Dynix.
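
A toy illustration of why the linear FINWAIT scan melted down under short-lived HTTP connections (assumed data structures, not any particular vendor's stack): every incoming packet had to be checked against the whole closing-connection list, so per-packet cost grew with the length of that list, while a keyed lookup stays roughly constant.

  # toy model: per-packet cost of finding a connection in the closing state
  import random, time

  def make_conns(n):
      # stand-in for (address, port) connection identifiers
      return [(random.randrange(1 << 32), random.randrange(65536)) for _ in range(n)]

  finwait_list = make_conns(10_000)   # thousands of quick in&out HTTP closes
  finwait_set  = set(finwait_list)    # same entries, hashed
  probe = finwait_list[-1]            # worst case for the linear scan

  t0 = time.perf_counter()
  for _ in range(1_000):
      _ = probe in finwait_list       # linear search per incoming packet
  t1 = time.perf_counter()
  for _ in range(1_000):
      _ = probe in finwait_set        # hashed lookup per incoming packet
  t2 = time.perf_counter()
  print(f"linear scan {t1 - t0:.3f}s vs hashed lookup {t2 - t1:.6f}s for 1,000 packets")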

I had also been a member of Chesson's XTP TAB, where XTP had a minimum three-packet exchange for a reliable session, compared to TCP's minimum seven-packet exchange ... tried to get them interested. Later, Postel sponsored my talk on "Why Internet Isn't Business Critical Dataprocessing", based on the stuff I had to do for "electronic commerce".

The other problem was that many places started moving from flat-file webservers to RDBMS ... which resulted in a lot of exploits. The problem was that RDBMS webserver maintenance was a lot more labor-intensive. The sequence was: systems were taken offline and attack countermeasures turned off for performing maintenance ... then, because time was running out, in the rush to get the servers back online, turning the countermeasures back on was frequently overlooked.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
"electronic commerce" & payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

some recent posts mentioning FINWAIT
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#18 MOSAIC
https://www.garlic.com/~lynn/2023d.html#57 How the Net Was Won
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2022f.html#27 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2021k.html#80 OSI Model
https://www.garlic.com/~lynn/2021h.html#86 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021f.html#29 Quic gives the internet's data transmission foundation a needed speedup

posts mentioning talk on Why Internet Isn't Business Critical Dataprocessing
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#37 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#94 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022f.html#46 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#105 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021j.html#10 System Availability
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019b.html#100 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017j.html#31 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017g.html#81 Running unsupported is dangerous was Re: AW: Re: LE strikes again
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2017e.html#90 Ransomware on Mainframe application ?
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017e.html#11 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017d.html#92 Old hardware
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2009o.html#62 TV Big Bang 10/12/09
https://www.garlic.com/~lynn/2006k.html#9 Arpa address
https://www.garlic.com/~lynn/2005h.html#16 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005d.html#42 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/aadsm28.htm#69 VCs have a self-destruction gene, let's tweak it

--
virtualization experience starting Jan1968, online at home since Mar1970

UNIX 370

From: Lynn Wheeler <lynn@garlic.com>
Subject: UNIX 370
Date: 13 May, 2024
Blog: Facebook
I was told by both IBM & Amdahl field support that they wouldn't support a machine w/o industrial-strength EREP ... while a direct UNIX port to 370 was straightforward, adding an industrial-strength EREP would have been many times the effort of a simple port. IBM did a stripped-down TSS/370 as SSUP (or SSS/370) for hardware support & EREP, with UNIX built on top. IBM AIX/370 and Amdahl UTS were run in VM/370 virtual machines, leveraging VM/370's EREP.

There was an unsuccessful attempt to find somebody in IBM to hire the student who had done an early unix port to 370 ... so when he graduated, Amdahl hired him for what became UTS.

past posts mentioning TSS/370 for Unix, IBM AIX/370 and Amdahl Unix
https://www.garlic.com/~lynn/2024c.html#50 third system syndrome, interactive use, The Design of Design
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#85 IBM AIX
https://www.garlic.com/~lynn/2024.html#15 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2022c.html#42 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#1 On why it's CR+LF and not LF+CR [ASR33]
https://www.garlic.com/~lynn/2021k.html#64 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021k.html#63 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021e.html#83 Amdahl
https://www.garlic.com/~lynn/2020.html#33 IBM TSS
https://www.garlic.com/~lynn/2019d.html#121 IBM Acronyms
https://www.garlic.com/~lynn/2018d.html#93 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017j.html#66 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017g.html#102 SEX
https://www.garlic.com/~lynn/2017d.html#80 Mainframe operating systems?
https://www.garlic.com/~lynn/2017d.html#76 Mainframe operating systems?
https://www.garlic.com/~lynn/2014j.html#17 The SDS 92, its place in history?
https://www.garlic.com/~lynn/2014f.html#74 Is end of mainframe near ?
https://www.garlic.com/~lynn/2013n.html#92 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013n.html#24 Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2012o.html#34 Regarding Time Sharing
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012f.html#78 What are you experiences with Amdahl Computers and Plug-Compatibles?
https://www.garlic.com/~lynn/2012e.html#21 A z/OS Redbook Corrected - just about!
https://www.garlic.com/~lynn/2012.html#67 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011f.html#85 SV: USS vs USS
https://www.garlic.com/~lynn/2010o.html#0 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2010i.html#44 someone smarter than Dave Cutler

--
virtualization experience starting Jan1968, online at home since Mar1970

HTTP over TCP

From: Lynn Wheeler <lynn@garlic.com>
Subject: HTTP over TCP
Date: 14 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP

Not so much internet, but Sun. HA/CMP started out as HA/6000, for the NYTimes to migrate their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had VAXCluster support in the same source base as Unix. Early Jan1992 meeting with Oracle, AWD/Hester tells Ellison that we would have 16-processor clusters by mid92 and 128-processor clusters by ye92. I was keeping FSD (IBM gov.) apprised of the work with national labs, and they must have told the IBM supercomputer group that FSD had decided to go with HA/CMP, because by end of JAN92, HA/CMP cluster scale-up was transferred for announce as IBM supercomputer (for technical/scientific only) and we were told that we couldn't do anything with more than four processors (we leave IBM a few months later).

Not long after starting work on "electronic commerce", I was also brought into a financial services outsourcing group that had done the first magstripe merchant stored-value cards in the US, all implemented on an "HA/SUN" platform, which had an outage during the early pilot period (one pilot was a national gas station chain). The configuration had backup/redundant hardware ... but there was a disk outage that lost the RDBMS of current value in the card accounts, and it was found that, because of a glitch during (hardware) service/maintenance a few weeks prior, mirroring to the backup RDBMS hadn't been re-activated. At the initial review meeting, the SUN executive responsible for HA/SUN gave an introduction that sounded almost exactly like the marketing pitch I used to give for HA/CMP. In any case, to get the pilot back on track, all the card accounts were refreshed with their original values.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

More CPS

From: Lynn Wheeler <lynn@garlic.com>
Subject: More CPS
Date: 14 May, 2024
Blog: Facebook
Some of the MIT 7094/CTSS folks went to the 5th flr to do MULTICS
https://en.wikipedia.org/wiki/Multics
... others went to the IBM Science Center on the 4th flr, did virtual machines, CP40/CMS, CP67/CMS, bunch of online apps, invented GML in 1969, etc
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
https://en.wikipedia.org/wiki/History_of_CP/CMS

... and the IBM Boston Programming Center, which did CPS, was on the 3rd flr
https://en.wikipedia.org/wiki/Conversational_Programming_System
... although a lot was subcontracted out to Allen-Babcock (including the CPS microcode assist for the 360/50)
https://www.bitsavers.org/pdf/allen-babcock/cps/
https://www.bitsavers.org/pdf/allen-babcock/cps/CPS_Progress_Report_may66.pdf

When the decision was made to add virtual memory to all 370s, we started on a modification to CP/67 (CP67H) to support 370 virtual machines (simulating the architecture differences) and a version of CP/67 that ran on 370 (CP67I, which was in regular use a year before the 1st engineering 370 machine with virtual memory was operational; later "CP370" was in wide use on real 370s inside IBM). A decision was made to do an official product, morphing CP67->VM370 (dropping and/or simplifying lots of features), and some of the people moved to the 3rd flr to take over the IBM Boston Programming Center ... becoming the VM370 Development group ... when they outgrew the 3rd flr, they moved out to the empty IBM SBC bldg at Burlington Mall on rt128.

After the FS failure and the mad rush to get stuff back into the 370 product pipelines (including kicking off the quick&dirty 3033&3081 in parallel) ... more
http://www.jfsowa.com/computer/memo125.htm
the head of POK convinced corporate to kill the vm370 product, shutdown the development group, and move all the people to POK for MVS/XA ... supposedly otherwise MVS/XA wouldn't be able to ship on time (endicott managed to save the VM370 product mission, but had to reconstitute a development group from scratch). They hadn't planned on telling the group about the shutdown & move until the very last minute, to minimize the number that might escape into the Boston area. The shutdown managed to leak, and several did escape (joke was that the head of POK was a major contributor to the infant DEC VMS effort).

trivia: some of the former BPC people did get CPS running on CMS.

other trivia: CP40/CMS was done on a 360/40 with virtual memory hardware mods. They had originally wanted a 360/50, but all the spare 50s were going to the FAA ATC effort, so they had to settle for a 360/40 ... then when the 360/67 becomes available with virtual memory standard, it morphs into CP67/CMS.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some cp/40 history
https://www.garlic.com/~lynn/cp40seas1982.txt
melinda's history website
http://www.leeandmelindavarian.com/Melinda#VMHist
http://www.leeandmelindavarian.com/Melinda/neuvm.pdf
http://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

BPC responsible for CPS was on the 3rd flr ... but then taken over for the VM370 development group ... and 3rd, 4th and 5th flrs all doing online interactive

posts mentioning CPS, BPC, 3rd flr, 360/50, allen-babcock
https://www.garlic.com/~lynn/2024b.html#5 Vintage REXX
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2016d.html#34 The Network Nation, Revised Edition
https://www.garlic.com/~lynn/2014e.html#74 Another Golden Anniversary - Dartmouth BASIC
https://www.garlic.com/~lynn/2013l.html#28 World's worst programming environment?
https://www.garlic.com/~lynn/2012n.html#26 Is there a correspondence between 64-bit IBM mainframes and PoOps editions levels?
https://www.garlic.com/~lynn/2012e.html#100 Indirect Bit
https://www.garlic.com/~lynn/2010e.html#14 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2008s.html#71 Is SUN going to become x86'ed ??

--
virtualization experience starting Jan1968, online at home since Mar1970

WEB Servers, Browsers and Electronic Commerce

From: Lynn Wheeler <lynn@garlic.com>
Subject: WEB Servers, Browsers and Electronic Commerce
Date: 14 May, 2024
Blog: Facebook
The first webserver in the US was on the Stanford SLAC VM370 system:
https://ahro.slac.stanford.edu/wwwslac-exhibit

After leaving IBM, was brought in as consultant to a small client/server startup (some founders were from NCSA; NCSA complained about the use of "MOSAIC" and they got their new name from a local silicon valley internet company). Two former Oracle people (that we had worked with at IBM on HA/CMP) were there, responsible for something called "commerce server", and wanted to do credit card transactions; the startup had also invented this technology called "SSL" they wanted to use. The result is now frequently called "electronic commerce"; I would have responsibility for the implementation of webservers to the payment networks.

A few years later, a major web hosting company mentioned that they hosted ten porn webservers that had much higher use than those in the monthly top-listed webservers (not interested in competing for top webserver). They also mentioned that the porn webservers had almost zero credit card fraud, while software & game webservers were experiencing 30-50% credit card fraud (operators constantly being threatened with being cut off from accepting credit cards).

Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
Electronic Commerce Payment Gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

NSF OASC Mar86 Preliminary Announce, new software, NCSA, and MOSAIC posts
https://www.garlic.com/~lynn/2024c.html#57 IBM Mainframe, TCP/IP, Token-ring, Ethernet
https://www.garlic.com/~lynn/2024c.html#42 Netscape
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2023g.html#67 Waiting for the reference to Algores creation documents/where to find- what to ask for
https://www.garlic.com/~lynn/2023f.html#18 MOSAIC
https://www.garlic.com/~lynn/2023e.html#61 Early Internet
https://www.garlic.com/~lynn/2023c.html#29 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022b.html#109 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe Addressing

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe Addressing
Date: 15 May, 2024
Blog: Facebook
trivia: 360/67 supported 32bit (virtual) addressing.

For 370 ... MVS was getting really bloated and there was a mad rush to get to 308x and MVS/XA. In the meantime there were hacks of partial retrofits to 3033; a subset of XA access registers was retrofitted to 3033 as "dual-address space" mode. The transition from MVT&SVS to MVS gave each region/application its own 16mbyte address space. However, the OS360 heritage was heavily pointer-passing APIs, and so they mapped an 8mbyte image of the MVS kernel into each address space (leaving eight). Then they created the Common Segment Area (CSA) mapped into every address space ... leaving 7mbytes. However, CSA requirements are somewhat proportional to the number of subsystems and concurrently executing programs and quickly exploded to 5-6mbytes (leaving 2-3mbytes), threatening to become 8mbytes (leaving zero for programs).

Each subsystem was moved to its own address space ... however it was getting parameter lists from the application space ... so they allocated space in CSA that both could access. Dual-address space mode let a semi-privileged subsystem access the application address space directly.

They also took two unused bits in the page table entry to prefix the (real) page number, allowing up to 64mbytes of real storage ... instructions still only supported 24bit addressing (but a virtual 24bit address could be mapped into a real 26bit address, or 64mbytes). CCW I/O addresses were also 24bit (with the CCW op-code in the high-order byte), but IDALs were introduced with 370 and the CCW address could be a pointer to an IDAL, which was a full word (so 31bit was possible).
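
As a rough illustration of the arithmetic behind those limits (my own sketch, not from the original discussion; assumes 4K pages):

# rough arithmetic sketch of the 370 addressing limits described above
PAGE_SIZE = 4096                          # 4K pages
virtual_limit = 2 ** 24                   # 24-bit instruction addressing: 16MB
real_limit = (2 ** (12 + 2)) * PAGE_SIZE  # 12-bit frame number plus 2 "unused" PTE bits: 2^26 = 64MB
idal_limit = 2 ** 31                      # full-word IDAL entry: 31-bit data addresses possible
print(virtual_limit // 2**20, "MB virtual")   # 16 MB
print(real_limit // 2**20, "MB real")         # 64 MB
print(idal_limit // 2**30, "GB via IDAL")     # 2 GB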

posts mentioning 3033 dual-address space and/or >16mbyte line
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2016b.html#35 Qbasic
https://www.garlic.com/~lynn/2015b.html#46 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2014f.html#22 Complete 360 and 370 systems found
https://www.garlic.com/~lynn/2010c.html#41 Happy DEC-10 Day
https://www.garlic.com/~lynn/2007g.html#59 IBM to the PCM market(the sky is falling!!!the sky is falling!!)

some discussion of MVS "common segment area" bloating to "common system area" and threatening to become 8mbytes
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#36 "The Big One" (IBM 3033)
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia

--
virtualization experience starting Jan1968, online at home since Mar1970

Berkeley 10M

From: Lynn Wheeler <lynn@garlic.com>
Subject: Berkeley 10M
Date: 15 May, 2024
Blog: Facebook
Hawaii trivia: Early 80s, got the HSDT project, T1 and faster computer links (both terrestrial and satellite). Although I was in SJR, I also had (more) office and lab space out at the Los Gatos lab. LSG had a T3 collins digital radio back to the main plant site. Set up an early T1 circuit from LSG to the plant site, through the IBM T3 TDMA satellite service, to Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston, that had a whole boatload of Floating Point Systems boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems
some of which had 40mbyte/sec disk arrays.

Was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers ... then congress cut the budget, some other things happened and eventually an RFP was released. Along the way, I was asked to talk to UCB; NSF thought they were giving UC a grant for a UCB supercomputer center, but folklore is that the regents' master building plan had UCSD getting the next new bldg, and it became the UCSD supercomputer center instead.

Working with the IBM UCB account team, in 1983 I was also asked if I would talk to the Berkeley "10M telescope" people and had a number of meetings with them and visits/tours of some testing being done at Lick Observatory (east of San Jose). The 10M effort was also working on the transition from film to CCD ... the plans were to put it on a mountain in Hawaii, and they wanted to do remote observing from the mainland. CCDs were still fairly primitive ... but starting to get better; in any case it looked like remote viewing would start out requiring around 800kbits/sec. Along the way, they got grants from the Keck Foundation ... and it morphs into the Keck Observatory.
https://en.wikipedia.org/wiki/W._M._Keck_Observatory
https://www.keckobservatory.org/

some archived (alt.folklore.computer) posts with old 10m email
https://www.garlic.com/~lynn/2007c.html#email830803b
https://www.garlic.com/~lynn/2007c.html#email830804c
https://www.garlic.com/~lynn/2004h.html#email830804
https://www.garlic.com/~lynn/2004h.html#email830822
https://www.garlic.com/~lynn/2004h.html#email830830
https://www.garlic.com/~lynn/2004h.html#email841121
https://www.garlic.com/~lynn/2004h.html#email860519

other trivia: 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87)

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 16 May, 2024
Blog: Facebook
IBM AWD (workstation) had done their own (PC/AT bus) 4mbit token-ring card for the PC/RT. However, for RS/6000 and microchannel, Corporate told AWD that they couldn't do their own microchannel cards, but had to use standard PS2 microchannel cards (which were heavily performance kneecapped by the communication group). A typical example was that the 16mbit T/R microchannel card had lower card throughput than the PC/RT 4mbit token-ring card (and the $800 microchannel 16mbit token-ring card also had significantly lower throughput than the $69 10mbit ethernet card that benchmarked at 8.5mbits/sec).

The new Almaden research bldg was heavily provisioned with IBM cabling, presumably for 16mbit token-ring ... but they found 10mbit ethernet (using the same cabling) had higher aggregate bandwidth and lower latency; that is besides the $69 10mbit ethernet card having much higher card throughput than the $800 (heavily performance kneecapped) 16mbit token-ring cards.

A 300 station configuration was $20,700 for the $69 high-performance 10mbit ethernet cards compared to $240,000 for the $800 16mbit t/r cards (with significantly lower throughput), which tended to be configured on a single LAN. It was possible to get high-performance TCP/IP routers that could have mainframe (not just IBM) channel interfaces, up to 16 Ethernet LANs, FDDI, and T1&T3 telco interfaces (and later RS/6000 SLA interface followed by FCS) for $40,000. The difference between the card costs could get five such high-performance routers, each with channel interfaces, for an aggregate of 80 Ethernet LAN interfaces (possibly configuring 300 RS/6000s, four per LAN); a rough check of the arithmetic is sketched below.
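
A quick back-of-envelope check of that comparison (my own sketch, using only the prices quoted above):

# back-of-envelope check of the 300-station cost comparison above
stations = 300
ethernet_card, tr_card, router = 69, 800, 40000    # $ prices quoted above
ethernet_total = stations * ethernet_card           # 20,700
tr_total = stations * tr_card                       # 240,000
difference = tr_total - ethernet_total              # 219,300
routers = difference // router                      # 5 high-performance routers
ethernet_interfaces = routers * 16                  # 80 Ethernet LAN interfaces
lans_needed = stations // 4                         # 75 LANs at four RS/6000s per LAN
print(ethernet_total, tr_total, difference, routers, ethernet_interfaces, lans_needed)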

One article in ACM SIGCOMM 1988 had some Ethernet analysis: on a 30 station 10mbit ethernet, individual cards could get 8.5mbits/sec, and in a test with all 30 stations in a low-level device driver loop constantly transmitting minimum-sized packets, aggregate effective throughput only dropped from 8.5mbits/sec to 8.0mbits/sec.

Later, RS/6000 SLA was sort of an incompatible, enhanced, faster, full-duplex modification of ESCON. The engineer responsible for SLA then wanted to do an 800mbit/sec version, but we managed to con him into joining the FCS standards committee instead (in 1988, the IBM branch office had con'ed me into helping LLNL standardize some serial stuff they had been playing with, which quickly becomes FCS, initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate; when ESCON eventually shipped it was already obsolete).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

recent posts mentioning token-ring and ethernet
https://www.garlic.com/~lynn/2024c.html#57 IBM Mainframe, TCP/IP, Token-ring, Ethernet
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2024c.html#47 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024c.html#33 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#47 OS2
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024b.html#0 Assembler language and code optimization
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#103 More IBM Downfall
https://www.garlic.com/~lynn/2023g.html#76 Another IBM Downturn
https://www.garlic.com/~lynn/2023g.html#62 Silicon Valley Mainframes
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023e.html#30 Apple Versus IBM
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023d.html#27 IBM 3278
https://www.garlic.com/~lynn/2023c.html#92 TCP/IP, Internet, Ethernet, 3Tier
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#84 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#83 IBM's Near Demise
https://www.garlic.com/~lynn/2023b.html#50 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#34 Online Terminals
https://www.garlic.com/~lynn/2023b.html#13 IBM/PC
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2023.html#77 IBM/PC and Microchannel
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022h.html#40 Mainframe Development Language
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#107 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#19 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022f.html#18 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#77 IBM Quota
https://www.garlic.com/~lynn/2022e.html#24 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#7 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#125 Google Cloud
https://www.garlic.com/~lynn/2022b.html#84 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021k.html#74 IBM2Dos
https://www.garlic.com/~lynn/2021k.html#28 APL
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#49 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#36 Programming Languages in IBM
https://www.garlic.com/~lynn/2021j.html#2 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#73 IBM MYTE
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021h.html#69 IBM Graphical Workstation
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021d.html#42 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021d.html#18 The Rise of the Internet
https://www.garlic.com/~lynn/2021d.html#15 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021c.html#85 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021b.html#46 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#45 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#17 IBM Kneecapping products
https://www.garlic.com/~lynn/2021.html#77 IBM Tokenring

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3705 & 3725

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3705 & 3725
Date: 17 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#54 IBM 3705 & 3725

when Jim left SJR for Tandem, he foisted some stuff off on me: supporting the BofA System/R joint study (System/R was the original SQL/relational implementation and BofA was getting 60 vm/4341s for distributed operation) and DBMS consulting for IMS.

I was also blamed for online computer conferencing on the IBM internal network in the late 70s and early 80s ... it really took off spring81 after I distributed a trip report of a visit to Jim at Tandem (only about 300 participated, but claims are that something like 25,000 were reading) ... and after the corporate executive committee was told, folklore is that 5of6 wanted to fire me.

After Tandem, Jim 1st went to the DEC DBMS group before MS ... and he and I got into a little tiff at ACM SIGOPS ... I was doing HA/CMP with commodity hardware (and replacing DEC VAXClusters) and he was claiming that it needed higher quality hardware. Later at MS, he was on stage with the MS CEO talking about HA/MS with commodity hardware.

original sql/relational (System/R) posts
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3705 & 3725

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3705 & 3725
Date: 17 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#54 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#70 IBM 3705 & 3725

Starting early 80s, also had the HSDT effort, T1 and faster computer links (both terrestrial and satellite) ... also working with the NSF director and supposed to get $20M to interconnect the NSF supercomputer centers (see upthread comments in this post). HSDT was also getting a transponder on SBS4 ... which was going up on STS41D, and we were down at Kennedy for the launch.
https://www.nasa.gov/mission/sts-41d/

It got delayed ... so we figured we had enough time to take a side-trip to Raleigh. We had also recently had meetings with GM/EDS ... who told us they were moving off SNA ... which we mentioned in the Raleigh meeting ... they argued it for a while and then left the room. When they came back they said, yes, GM/EDS had decided to move off SNA ... but it didn't make any difference since they had already spent that year's budget on CPD/SNA gear.

... the above NASA article gives "SBS-4" as "Small Business Systems" ... it was really an IBM company, "Satellite Business Systems" (although jointly owned, 1/3rd with COMSAT and 1/3rd with a large insurance company).
https://en.wikipedia.org/wiki/Satellite_Business_Systems

... a little drift ... got the HA/CMP effort late 80s, and in 1990 was also asked to be an IBM rep to the auto "C4 Taskforce" ... which was for a make-over of US auto manufacturing ... and since they were planning on heavily leveraging technology, they asked high-tech companies to participate. This dates back to congress putting quotas on foreign automakers ... greatly increasing US auto maker business (they were supposed to use the greatly increased profits to remake themselves and become more competitive ... however they just pocketed the money). The auto making business regularly took 7-8yrs elapsed from design to rolling off the line (they tended to have two efforts in parallel offset 3-4yrs to make it look faster). We were told the foreign makers, who had been making inexpensive entry autos, decided that at the quota levels they could sell that many high-end autos ... so they totally switched their product line, also cutting the elapsed product time to 3-4yrs from 7-8yrs (the combination of the foreign quota limit and the move to an expensive product line allowed the US makers to further increase their prices). Roll forward to 1990: the foreign makers were in the process of cutting it in half again (18-24months), while the US was still at 7-8yrs (giving foreign makers competitive advantages, able to more quickly respond to changes in customer preferences and changes in technology). Offline, I would ask the IBM POK "brethren" how they could contribute, since they suffered many of the same shortcomings.

They had also spun off their parts business, which aggravated the 7-8yr cycle ... after 7-8yrs, there were cases where parts in the original design were no longer available and they had expensive redesigns based on currently available parts.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
auto c4 taskforce posts
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3705 & 3725

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3705 & 3725
Date: 18 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#54 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#70 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#71 IBM 3705 & 3725

and
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

note, one of the countermeasures for the S1 NCP/VTAM was to get the (claimed/supposedly) largest 3725 customer to totally fund making it a type1 product with no strings attached ... they claimed that with it as a type1 product, they would totally recoup the full funding of the effort within 9 months.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe and Blade Servers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe and Blade Servers
Date: 18 May, 2024
Blog: Facebook
The 3880 controller had a special hardware path for 3mbyte/sec transfer but an extremely slow processor for everything else (much slower than the 3830). To try and mask how slow it was, they would present end-of-operation early and assume they could finish up everything before the operating system got around to trying to redrive with a new operation. MVS was so slow that early testing worked every time ... however, I would blow it out of the water: every redrive operation was met with controller busy (CC=1 SM+BUSY), requiring requeueing the operation and then waiting for the CUE interrupt to try again (resulting in further 3880 slowdown & overhead; I've claimed that a major motivation for the 370/XA I/O changes was as countermeasure to the enormous MVS pathlength from interrupt to device redrive). In any case, they had to do a lot more 3880 tweaks to try and mask the problems before shipping to customers.

The 3090 people had designed the number of channels assuming the 3880 was the same as the 3830 controller, but with (3380) 3mbyte/sec transfer. When they found how bad the 3880 channel busy actually was, they realized they had to significantly increase the number of channels (to compensate for the significant 3880 channel busy) ... which then required an extra TCM (the 3090 office semi-facetiously said that they would bill the 3880 organization for the increase in 3090 manufacturing cost). Eventually marketing respun the significant increase in 3090 channels as being a wonderful I/O machine (rather than just required to offset the increased 3880 channel busy overhead).

1988, the IBM branch office asks me to help LLNL (national lab) get some serial stuff they were playing with standardized, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec. Then when ESCON was announced with ES/9000, it was already obsolete (17mbytes/sec).

Later some POK engineers become involved in FCS and define a heavy-weight protocol that radically reduces the native throughput. The latest public FICON benchmark I can find is z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time an FCS was announced for the E5-2600 server blade claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also, IBM pubs recommend that SAPs (system assist processors that do actual I/O) be held to 70% CPU ... which would be more like 1.5M IOPS. Also, no CKD DASD have been made for decades, all being simulated on industry standard fixed-block disks.

https://en.wikipedia.org/wiki/Fibre_Channel
https://en.wikipedia.org/wiki/FICON
post from last year
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/

late last century, the i86 vendors went to a hardware layer that translated i86 into RISC micro-ops for actual execution ... largely negating the throughput advantage of RISC processors (the MIPS/BIPS figures below are from an industry standard benchmark program that counts the number of iterations compared to a 1MIP reference platform).


1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     IBM z900 mainframe processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC 440)

2003 max. configured IBM mainframe z990, 32 processor aggregate 9BIPS
     (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

2010 max configure IBM mainframe z196, 80 processor aggregate 50BIPS
     (625MIPS/proc)
2010 E5-2600 XEON server blade, 16 processor aggregate 500BIPS
     (31BIPS/proc)

A max-configured z196 went for $30M; the IBM base list price for an E5-2600 blade was $1815. This century, large cloud operations have been claiming that they assemble their own server blades for 1/3rd the price of brand name server blades ($603, or $1.2/BIPS compared to the z196's $600,000/BIPS). Then there was press that i86 server chip makers were shipping at least half their product directly to cloud operations, and IBM sold off its i86 server business.
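
A quick sketch of the price/throughput arithmetic (my own illustration, using only the figures quoted above):

# rough $/BIPS comparison using the figures quoted above
z196_price, z196_bips = 30000000, 50     # max-configured z196: $30M, 50BIPS
blade_price, blade_bips = 603, 500       # cloud self-assembled E5-2600 blade: ~$603, 500BIPS
print(z196_price / z196_bips)            # 600000.0  ($ per BIPS)
print(blade_price / blade_bips)          # 1.206     (~$1.2 per BIPS)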


z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
z12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS (1000MIPS/proc), Sep2019
z16, 200 processors, 222BIPS (1111MIPS/proc), Sep2022

aka: 12yrs after the z196, a max configured z16 has approx 4.5 times the processing (of the z196), while server blade processing has increased 10-20 times

A large cloud operation can have scores of megadatacenters around the world, each megadatacenter with half-million or more server blades (each server blade 10-20 times the processing of max-configured mainframe) with enormous automation, 70-80 staff per megadatacenter.

posts mentioning getting to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning CKD DASD, FBA, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd
FICON and FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe and Blade Servers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe and Blade Servers
Date: 19 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers

Early last decade there were articles that a credit card could be used with a cloud operation to spin up a supercomputer (one that ranked in the world's top 100) for a few hrs (there is big overlap between megadatacenter operation and cluster supercomputing).

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

speed matching buffer .... "calypso" was a hack of the 3880 controller to connect 3380 3mbyte/sec drives to 1.5mbyte/sec channels; it required the ECKD channel commands (originally done for calypso) .... it had lots of bugs and customer severity-one situations in MVS data management

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

post mentioning calypso, ECKD, speed-matching
https://www.garlic.com/~lynn/2023d.html#117 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2018.html#81 CKD details
https://www.garlic.com/~lynn/2015g.html#15 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2015f.html#89 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2015f.html#86 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2012o.html#64 Random thoughts: Low power, High performance
https://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2010h.html#30 45 years of Mainframe
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2009p.html#11 Secret Service plans IT reboot
https://www.garlic.com/~lynn/2009k.html#44 Z/VM support for FBA devices was Re: z/OS support of HMC's 3270 emulation?
https://www.garlic.com/~lynn/2007e.html#40 FBA rant

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe and Blade Servers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe and Blade Servers
Date: 19 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#74 Mainframe and Blade Servers

note, when I originally transferred to SJR, I got to wander around IBM (and non-IBM) datacenters in silicon valley, including bldg14 (disk engineering) and bldg15 (disk product test) across the street. They were doing 7x24, pre-scheduled, stand-alone testing and had mentioned that they had tried MVS, but it had 15min MTBF (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. Downside: they got into the habit of blaming me for problems, and I had to spend increasing amounts of time diagnosing their hardware problems. Later I wrote an "internal" research report on the I/O reliability work and happened to mention the MVS 15min MTBF, bringing down the wrath of the MVS organization on my head.

Later when 3380s were getting close to ship, FE had a test of 57 simulated errors they thought likely to occur; in all 57 cases, MVS was still crashing (requiring manual re-ipl), and in 2/3rds of the cases there was no indication of what caused the crash (and I didn't feel bad).

Note the original 3380s had 20 track spacing between data tracks. Later they cut the spacing in half, doubling the number of cyls, and then cut it again for triple the cyls. However, during the early pre/post ship of 3380 and calypso (speed matching), they had done something similar for the 3350, doubling the number of cylinders ... but MVS data management was so bogged down in fixing 3380 & calypso problems, there was no chance of getting them to support double-density 3350.

getting to play disk engineer in bldg14/bldg15
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Inventing The Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Inventing The Internet
Date: 19 May, 2024
Blog: Facebook
article about Al Gore and "inventing the internet" (i.e. his father had been involved in passing the interstate highway system)
https://en.wikipedia.org/wiki/Information_Superhighway
and it was supposed to be a similar gov service
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/

I was attending NII meetings at LLNL
https://en.wikipedia.org/wiki/National_Information_Infrastructure
It proposed to build communications networks, interactive services, interoperable computer hardware and software, computers, databases, and consumer electronics in order to put vast amounts of information available to both public and private sectors.[2] NII was to have included more than just the physical facilities (more than the cameras, scanners, keyboards, telephones, fax machines, computers, switches, compact disks, video and audio tape, cable, wire, satellites, optical fiber transmission lines, microwave nets, switches, televisions, monitors, and printers) used to transmit, store, process, and display voice, data, and images; it was also to encompass a wide range of interactive functions, user-tailored services, and multimedia databases that were interconnected in a technology-neutral manner that will favor no one industry over any other.[3]

... snip ...

... note vendors were being asked to provide technology for the testbed on their own nickel ... they somewhat recovered when Singapore invited all the (US) testbed participants to do one in a (fully paid for) testbed in Singapore.

This NII was starting to show up in the 2nd half of the 80s with the NSF supercomputer interconnect (morphing into the NSFNET backbone, precursor to the modern internet). Fiber-optic was enormously increasing available bandwidth, but telcos still had a copper-wire billing bandwidth mentality. Early 80s, I had the HSDT effort (T1 and faster computer links) and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running) ... Preliminary announcement (Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87)
https://www.technologyreview.com/s/401444/grid-computing/

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

posts mentioning NII & LLNL
https://www.garlic.com/~lynn/2023g.html#67 Waiting for the reference to Algores creation documents/where to find- what to ask for
https://www.garlic.com/~lynn/2023g.html#25 Vintage Cray
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023b.html#100 5G Hype Cycle
https://www.garlic.com/~lynn/2023.html#1 IMS & DB2
https://www.garlic.com/~lynn/2022h.html#12 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#3 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#9 Cloud Timesharing
https://www.garlic.com/~lynn/2022.html#112 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2022.html#16 IBM Clone Controllers
https://www.garlic.com/~lynn/2021k.html#109 Network Systems
https://www.garlic.com/~lynn/2021k.html#84 Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#83 Internet Old Farts
https://www.garlic.com/~lynn/2021h.html#25 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2019.html#75 21 random but totally appropriate ways to celebrate the World Wide Web's 30th birthday
https://www.garlic.com/~lynn/2017d.html#73 US NII
https://www.garlic.com/~lynn/2011o.html#89 What is Cloud Computing?
https://www.garlic.com/~lynn/2011o.html#63 Intel's 1 teraflop chip
https://www.garlic.com/~lynn/2011n.html#60 Two studies of the concentration of power -- government and industry

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe and Blade Servers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe and Blade Servers
Date: 19 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#74 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#75 Mainframe and Blade Servers

In the wake of the Future System implosion (FS was going to completely replace 360&370, and internal politics was killing off 370 efforts)
http://www.jfsowa.com/computer/memo125.htm
there was a mad rush to get stuff back into the 370 production pipelines, including kicking off the quick&dirty 3033 and 3081 efforts in parallel. I was con'ed into helping with a 16 processor SMP and we con'ed the 3033 processor engineers into helping in their spare time (lot more interesting than remapping 168 logic to 20% faster chips) ... everybody thought it was great, until somebody tells the head of POK that it could be decades before the POK favorite son operating system (MVS) had (effective) 16 processor support (at the time MVS documentation stated that two-processor support only had 1.2-1.5 times the throughput of a single processor).

I had recently done VM370 release3 two-processor support, originally for the US sales&marketing HONE system so they could add a 2nd processor to each system of an eight-system loosely-coupled operation (16 processors total) ... and each two-processor system was getting twice the throughput of a single processor. Then some of us were invited to never visit POK again (POK doesn't ship a 16 processor SMP system until after the turn of the century, more than two decades later), and I then transfer to SJR. The head of POK also convinced corporate to kill the VM/370 product, shutdown the development group, and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM/370 product for the mid-range, but had to recreate a development group from scratch).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor, tightly-coupled, and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Internal Network

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Internal Network
Date: 20 May, 2024
Blog: Facebook
There is the famous case of STL (now SVL) and Hursley trying to set up a 56kbit double-hop satellite link so computing resources in each other's datacenter could be used offshift. They 1st put up the link between two VM370 VNET/RSCS systems and data flowed fine. Then an executive (heavily indoctrinated in MVS/JES2) insisted the link be moved between two MVS/JES2 systems and nothing worked. They then moved the link back to the two VNET/RSCS systems and data flowed just fine. The executive then said that it was obvious that VNET/RSCS was so dumb that it didn't know the link didn't work (even though data was flowing fine). The (real) problem was that the round-trip double-hop transmission delay exceeded the SNA time-out value.
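
For a sense of scale, a minimal sketch of the propagation arithmetic (my own illustration; the post doesn't give the actual SNA time-out value, and slant range plus ground delays add a bit more):

# rough propagation-delay arithmetic for a double-hop geosynchronous satellite link
C = 299792.458          # km/sec, speed of light
GEO_ALTITUDE = 35786.0  # km, geosynchronous altitude (slant range is somewhat longer)
one_hop = 2 * GEO_ALTITUDE / C     # up + down through one satellite: ~0.24 sec
one_way = 2 * one_hop              # double hop, one direction: ~0.48 sec
round_trip = 2 * one_way           # ~0.95 sec before any queueing/processing
print(round(one_hop, 3), round(one_way, 3), round(round_trip, 3))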

Part of the issue was that Ed had done an elegant VNET/RSCS layered architecture where it was even possible to do link drivers that simulated the JES2 protocol
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/

... while JES2 mixed network fields with job control fields (the JES2 network code having come from MVT/HASP source code that still contained "TUCC" in cols 68-71) and 1) used empty slots in the 255-entry pseudo device table, typically 160-180 entries, 2) traffic with origin and/or destination values that weren't in the local table got trashed (restricting MVS/JES2 to boundary nodes; by the early 70s, the internal network was already well over 255 nodes), and 3) because of the intermixed network and job control fields, slight field changes between releases meant traffic between systems at different releases tended to result in MVS system crashes.

Because of #3, a library of special VNET/RSCS JES2 drivers evolved that could reorganize header fields to the format required by the directly connected MVS/JES2 system (as a countermeasure to frequent MVS crashes). There was one incident where files from a recently modified MVS/JES2 system in San Jose were crashing MVS/JES2 systems in Hursley, and the Hursley VNET/RSCS group was blamed (because there wasn't yet an update for the VNET/RSCS JES2 drivers).

By the time there was update for JES2 to handle up to 999 network nodes, the internal network had already passed 1000.

I had written an edit macro that would apply information from the weekly status updates (adds/deletes) to the network configuration files. An old archived post contains a few weekly status update examples and a summary of the corporate locations that added one or more new nodes during 1983. The science center wide-area network, morphing into the internal network (non-SNA), was larger than arpanet/internet from just about the beginning until sometime mid/late 80s.
https://www.garlic.com/~lynn/2006k.html#8

A major reason for internet passing internal network was TCP/IP on PCs and workstations ... while the communication group was fiercely fighting off client/server and distributed computing ... and PCs/workstations were (mostly) restricted to terminal emulation.

the technology was also used for the corporate-sponsored univ BITNET
https://en.wikipedia.org/wiki/BITNET
... which was also larger than internet for a time

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HASP, ASP, JES2, JES3, NJI, NJE posts
https://www.garlic.com/~lynn/submain.html#hasp
communication group fighting client/server and distributed computing posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

some posts mentioning Science Center "wide-area" network (morphing into the internal network
https://www.garlic.com/~lynn/2024c.html#57 IBM Mainframe, TCP/IP, Token-ring, Ethernet
https://www.garlic.com/~lynn/2024c.html#22 FOILS
https://www.garlic.com/~lynn/2024b.html#109 IBM->SMTP/822 conversion
https://www.garlic.com/~lynn/2024b.html#104 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#86 Vintage BITNET
https://www.garlic.com/~lynn/2024b.html#82 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024.html#110 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2024.html#65 IBM Mainframes and Education Infrastructure
https://www.garlic.com/~lynn/2023g.html#24 Vintage ARPANET/Internet

1000th node globe

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe and Blade Servers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe and Blade Servers
Date: 20 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#74 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#75 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#77 Mainframe and Blade Servers

Late 80s got the HA/6000 effort, originally for NYTimes to move their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I rename it HA/CMP after starting to do technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had both VAXCluster and Unix support in the same source base. Hardware reliability was increasing and service outages were starting to shift to environmental causes: earthquakes, hurricanes, floods, etc ... so besides scale-up, the effort also extended to multiple physical locations. Out marketing, I coined the terms disaster survivability and geographic survivability; the IBM S/88 Product Administrator started taking us around to their customers and also got me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain they couldn't meet the requirements).

Early Jan1992, in a meeting with the Oracle CEO, AWD/Hester says that we would have 16 processor HA/CMP mid92 and 128 processor HA/CMP ye92. During Jan, I was keeping IBM FSD apprised of our cluster scale-up work with the national labs ... and possibly FSD informed the Kingston supercomputer group ... because by the end of Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).

Trivia: my wife had been in the GBURG JES group and was one of the catchers for ASP/JES3 and co-author of "JESUS" (JES Unified System, all the features of JES2 & JES3 that the respective customers couldn't live w/o), which never came to fruition. She was then con'ed into going to POK to be responsible for loosely-coupled architecture, where she did the "Peer-Coupled Shared Data Architecture". However, she didn't remain long: 1) repeated battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake (except for IMS Hot Standby) until much later with SYSPLEX and Parallel SYSPLEX. She has a story about asking Vern Watts who he would ask for permission to do IMS "hot standby"; he replied nobody, he would just tell them when it was all done.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
peer-coupled shared data posts
https://www.garlic.com/~lynn/submain.html#shareddata
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

some posts mentioning the VTAM/NCP simulation implemented on Series/1; IMS wanted to have "shadow" sessions on the hot-standby (because VTAM could otherwise take over an hour to get everything back up on the hot-standby)
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#51 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022c.html#79 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022b.html#102 370/158 Integrated Channel
https://www.garlic.com/~lynn/2021k.html#115 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2021b.html#72 IMS Stories
https://www.garlic.com/~lynn/2019d.html#114 IBM HONE
https://www.garlic.com/~lynn/2018e.html#94 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2018e.html#2 Frank Heart Dies at 89
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1
https://www.garlic.com/~lynn/2016e.html#85 Honeywell 200
https://www.garlic.com/~lynn/2013l.html#46 Teletypewriter Model 33
https://www.garlic.com/~lynn/2013j.html#67 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2013i.html#37 The Subroutine Call
https://www.garlic.com/~lynn/2010f.html#2 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2009l.html#66 ACP, One of the Oldest Open Source Apps

--
virtualization experience starting Jan1968, online at home since Mar1970

Inventing The Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Inventing The Internet
Date: 21 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#75 Inventing The Internet

didn't take the following offer:

Date: 4 January 1988, 14:12:35 EST
To: distribution
Subject: NSFNET Technical Review Board Kickoff Meeting 1/7/88

On November 24th, 1987 the National Science Foundation announced that MERIT, supported by IBM and MCI was selected to develop and operate the evolving NSF Network and gateways integrating 12 regional networks. The Computing Systems Department at IBM Research will design and develop many of the key software components for this project including the Nodal Switching System, the Network Management applications for NETVIEW and some of the Information Services Tools.

I am asking you to participate on an IBM NSFNET Technical Review Board. The purpose of this Board is to both review the technical direction of the work undertaken by IBM in support of the NSF Network, and ensure that this work is proceeding in the right direction. Your participation will also ensure that the work complements our strategic products and provides benefits to your organization. The NSFNET project provides us with an opportunity to assume leadership in national networking, and your participation on this Board will help achieve this goal.


... snip ... top of post, old email index, NSFNET email

... somebody had been collecting executive misinformation email (not only forcing the internal network to SNA, but claiming that SNA could be used for NSFnet) and it was forwarded to us ... old post with email heavily clipped and redacted (to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109

I was asked to be the red team for the T3 proposal (the blue team was a couple dozen people from a half dozen labs around the world). I presented 1st; then, 5mins into the blue team presentation, the executive running the review pounds on the table and says he would lay down in front of a garbage truck before he lets anything but the blue team proposal go forward. I (and a few others) get up and leave.

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Inventing The Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Inventing The Internet
Date: 21 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#75 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#80 Inventing The Internet

I had got the HSDT project in the early 80s, T1 and faster computer links (both terrestrial and satellite). Quickly had a number of T1 links in operation, including over some T3 satellite trunks. Then when SBS4 goes up on STS-41D in Aug1984, we get a dedicated transponder and our own custom designed TDMA system. Initially three earth stations: IBM Yorktown Research (on the east coast), IBM Los Gatos lab (on the west coast), and IBM Austin AWD (workstation division). On the west coast, there were a couple of custom chip design logic simulators (that did logic simulation 20,000-50,000 times faster than mainframe) ... and Austin claimed that being able to transmit chip designs over HSDT to the west coast helped bring the RIOS (RS/6000) chip set in a year early.

... note the PC/RTs weren't really T1 routers; they had 440kbit/sec links, but to sort of look like T1, they put in T1 trunks with telco multiplexers ... I would periodically ridicule them that they could call it T5, since some of the T1 trunks might in turn run over T5 trunks.

In 1988, the IBM branch office had asked if I could help LLNL get some serial stuff they were playing with, standardized ... which quickly becomes fibre-channel standard (FCS, including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec. Then IBM mainframe gets their serial stuff released with ES/9000 as ESCON when it is already obsolete.

Also got project to do HA/6000, initially for NYTimes to move their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that have DEC VAXcluster support in same source base with UNIX.

Early Jan1992 we have a meeting with the Oracle CEO, where AWD/Hester tells them that we would have 16processor clusters by mid92 and 128processor clusters by YE1992. During Jan I update IBM FSD (federal system division) about the national lab status and they must have told the Kingston supercomputing group, because by the end of Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we were told we can't do anything with more than four processors (we leave IBM a few months later).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
FICON and FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

some posts mentioning SBS4
https://www.garlic.com/~lynn/2024c.html#71 IBM 3705 & 3725
https://www.garlic.com/~lynn/2023c.html#0 STS-41-D and SBS-4
https://www.garlic.com/~lynn/2018b.html#13 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017h.html#110 private thread drift--Re: Demolishing the Tile Turtle
https://www.garlic.com/~lynn/2014i.html#49 Sale receipt--obligatory?
https://www.garlic.com/~lynn/2014b.html#67 Royal Pardon For Turing
https://www.garlic.com/~lynn/2013i.html#0 By Any Other Name
https://www.garlic.com/~lynn/2011g.html#20 TELSTAR satellite experiment
https://www.garlic.com/~lynn/2010o.html#51 The Credit Card Criminals Are Getting Crafty
https://www.garlic.com/~lynn/2010k.html#12 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010i.html#69 Favourite computer history books?
https://www.garlic.com/~lynn/2010i.html#27 Favourite computer history books?
https://www.garlic.com/~lynn/2010c.html#58 watches
https://www.garlic.com/~lynn/2010c.html#57 watches
https://www.garlic.com/~lynn/2009q.html#0 Anyone going to Supercomputers '09 in Portland?
https://www.garlic.com/~lynn/2009o.html#36 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2008l.html#64 Blinkylights
https://www.garlic.com/~lynn/2007p.html#61 Damn
https://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006m.html#16 Why I use a Mac, anno 2006
https://www.garlic.com/~lynn/2006k.html#55 5963 (computer grade dual triode) production dates?
https://www.garlic.com/~lynn/2006d.html#24 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006.html#26 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2005q.html#17 Ethernet, Aloha and CSMA/CD -
https://www.garlic.com/~lynn/2005h.html#21 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2003j.html#76 1950s AT&T/IBM lack of collaboration?
https://www.garlic.com/~lynn/2002p.html#28 Western Union data communications?
https://www.garlic.com/~lynn/aadsm18.htm#51 link-layer encryptors for Ethernet?

--
virtualization experience starting Jan1968, online at home since Mar1970

Inventing The Internet

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Inventing The Internet
Date: 21 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#75 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#80 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#81 Inventing The Internet

other trivia: late 70s and early 80s, I was blamed for online computer conferencing on the IBM internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s). Fall of 1980, Jim Gray leaves San Jose Research for Tandem foisting off some DBMS stuff on me. Spring of 1981, I distribute a trip report of visit to Jim at Tandem and activity really picks up, only about 300 actively participating but claims that 25,000 were reading ... also folklore is that when corporate executive committee was told, 5of6 wanted to fire me. Possibly for various transgressions, I'm transferred from San Jose Research to Yorktown Research, left to live in San Jose but having to commute to Yorktown a couple times a month.

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

Not long after leaving IBM, major payment processor has me go into a small client/server startup (some of the founders had come from NCSA), two former Oracle employees (that I had worked with on cluster scale-up and had been in the Jan92 Oracle CEO meeting) were there responsible for something called "commerce server" and wanted to do payment transactions, the startup had also invented this technology they call "SSL", result is now frequently called "electronic commerce". I have responsibility for everything between webservers and payment networks. I then do a "Why Internet Isn't Business Critical Dataprocessing" talk based on work I had to do for electronic commerce (which Postel sponsored at ISI/USC). I'm then asked to be a member of the financial industry standards body (ANSI X9).

webserver gateway to payment network posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

posts mentioning Postel sponsoring Internet Isn't Business Critical Dataprocessing talk
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#37 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022f.html#46 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#105 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021j.html#10 System Availability
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017j.html#31 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017e.html#11 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017d.html#92 Old hardware
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable

--
virtualization experience starting Jan1968, online at home since Mar1970

Inventing The Internet

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Inventing The Internet
Date: 22 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#75 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#80 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#81 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet

when IBM Jargon was young and "Tandem Memos" was new ...
https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... and
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Hacker's Conference

From: Lynn Wheeler <lynn@garlic.com>
Subject: Hacker's Conference
Date: 22 May, 2024
Blog: Facebook
... annual hacker's conference in the 80s was up in santa cruz mountains ... one year CBS 60mins wanted to do segment ... there was lengthy negotiations that they wouldn't represent us as the bad guys ... however the segment opened with statement that a secret group in the santa cruz mountains was plotting to take over the world. next year, conference tshirt was CBS logo with bar across it.

past posts mentioning hackers conference
https://www.garlic.com/~lynn/2023f.html#96 Conferences
https://www.garlic.com/~lynn/2021c.html#89 Silicon Valley
https://www.garlic.com/~lynn/2018d.html#71 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017g.html#92 In Silicon Valley, dropping in at the GooglePlex, tech museums and the Jobs garage
https://www.garlic.com/~lynn/2016.html#34 Shout out to Grace Hopper (State of the Union)
https://www.garlic.com/~lynn/2014m.html#22 Whole Earth
https://www.garlic.com/~lynn/2014g.html#108 Xanadu, The World's Most Delayed Software, Is Finally Released After 54 Years In The Making
https://www.garlic.com/~lynn/2014e.html#48 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2014d.html#44 [CM] Ten recollections about the early WWW and Internet
https://www.garlic.com/~lynn/2014b.html#95 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014b.html#12 Mac at 30: A love/hate relationship from the support front
https://www.garlic.com/~lynn/2012k.html#6 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2011j.html#53 Steve Jobs, the Whole Earth Catalog, & The WELL
https://www.garlic.com/~lynn/2010o.html#62 They always think we don't understand
https://www.garlic.com/~lynn/2010i.html#9 Favourite computer history books?

--
virtualization experience starting Jan1968, online at home since Mar1970

New 9/11 Evidence Points to Deep Saudi Complicity

From: Lynn Wheeler <lynn@garlic.com>
Subject: New 9/11 Evidence Points to Deep Saudi Complicity
Date: 23 May, 2024
Blog: Facebook
New 9/11 Evidence Points to Deep Saudi Complicity. Two decades of U.S. policy appear to be rooted in a mistaken understanding of what happened that day.
https://www.theatlantic.com/ideas/archive/2024/05/september-11-attacks-saudi-arabia-lawsuit/678430/
For more than two decades, through two wars and domestic upheaval, the idea that al-Qaeda acted alone on 9/11 has been the basis of U.S. policy. A blue-ribbon commission concluded that Osama bin Laden had pioneered a new kind of terrorist group--combining superior technological know-how, extensive resources, and a worldwide network so well coordinated that it could carry out operations of unprecedented magnitude. This vanguard of jihad, it seemed, was the first nonstate actor that rivaled nation-states in the damage it could wreak.

... snip ... well ...

... note that 9/11 victims were prohibited from suing Saudis for responsibility. That was lifted fall 2013 and they were allowed to sue for 9/11 responsibility: 9/11 Families 'Ecstatic' They Can Finally Sue Saudi Arabia
https://abcnews.go.com/US/911-families-ecstatic-finally-sue-saudi-arabia/story?id=21290177
Democratic senators increase pressure to declassify 9/11 documents related to Saudi role in attacks (2021)
https://thehill.com/policy/national-security/566547-democratic-senators-increase-pressure-to-declassify-9-11-documents/

Before the Iraq invasion, the cousin of white house chief of staff Card ... was dealing with the Iraqis at the UN and was given evidence that WMDs (tracing back to US in the Iran/Iraq war) had been decommissioned. The cousin shared it with (cousin, white house chief of staff) Card and others ... and was then locked up in a military hospital; the book was published in 2010 (4yrs before the decommissioned WMDs were declassified)
https://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/

NY Times series from 2014: the decommissioned WMDs (tracing back to US from the Iran/Iraq war) had been found early in the invasion, but the information was classified for a decade
http://www.nytimes.com/interactive/2014/10/14/world/middleeast/us-casualties-of-iraq-chemical-weapons.html

note the military-industrial complex had wanted a war so badly that corporate reps were telling former eastern bloc countries that if they voted for the IRAQ2 invasion in the UN, they would get membership in NATO and (directed appropriation) USAID (which can *ONLY* be used for purchase of modern US arms, aka additional congressional gifts to the MIC not in the DOD budget). From the law of unintended consequences, the invaders were told to bypass ammo dumps looking for WMDs; when they got around to going back, over a million metric tons had evaporated (showing up later in IEDs)
https://www.amazon.com/Prophets-War-Lockheed-Military-Industrial-ebook/dp/B0047T86BA/

... from truth is stranger than fiction and law of unintended consequences that come back to bite you, much of the radical Islam & ISIS can be considered our own fault, VP Bush in the 80s
https://www.amazon.com/Family-Secrets-Americas-Invisible-Government-ebook/dp/B003NSBMNA/
pg292/loc6057-59:
There was also a calculated decision to use the Saudis as surrogates in the cold war. The United States actually encouraged Saudi efforts to spread the extremist Wahhabi form of Islam as a way of stirring up large Muslim communities in Soviet-controlled countries. (It didn't hurt that Muslim Soviet Asia contained what were believed to be the world's largest undeveloped reserves of oil.)

... snip ...

Saudi radical extremist Islam/Wahhabi loosed on the world ... bin Laden & 15of16 9/11 hijackers were Saudis (some claims that 95% of extreme Islam world terrorism is Wahhabi related)
https://en.wikipedia.org/wiki/Wahhabism

Mattis somewhat more PC (politically correct)
https://www.amazon.com/Call-Sign-Chaos-Learning-Lead-ebook/dp/B07SBRFVNH/
pg21/loc349-51:
Ayatollah Khomeini's revolutionary regime took hold in Iran by ousting the Shah and swearing hostility against the United States. That same year, the Soviet Union was pouring troops into Afghanistan to prop up a pro-Russian government that was opposed by Sunni Islamist fundamentalists and tribal factions. The United States was supporting Saudi Arabia's involvement in forming a counterweight to Soviet influence.

... snip ...

and internal CIA
https://www.amazon.com/Permanent-Record-Edward-Snowden-ebook/dp/B07STQPGH6/
pg133/loc1916-17:
But al-Qaeda did maintain unusually close ties with our allies the Saudis, a fact that the Bush White House worked suspiciously hard to suppress as we went to war with two other countries.

... snip ...

The Danger of Fibbing Our Way into War. Falsehoods and fat military budgets can make conflict more likely
https://www.pogo.org/analysis/2020/01/the-danger-of-fibbing-our-way-into-war/
The Day I Realized I Would Never Find Weapons of Mass Destruction in Iraq
https://www.nytimes.com/2020/01/29/magazine/iraq-weapons-mass-destruction.html

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmd

posts mentioning 9/11 victims allowed to sue saudis
https://www.garlic.com/~lynn/2021i.html#50 FBI releases first secret 9/11 file
https://www.garlic.com/~lynn/2021g.html#99 Democratic senators increase pressure to declassify 9/11 documents related to Saudi role in attacks
https://www.garlic.com/~lynn/2021e.html#42 The Blind Strategist: John Boyd and the American Art of War
https://www.garlic.com/~lynn/2021c.html#26 Fighting to Go Home: Operation Desert Storm, 30 Years Later
https://www.garlic.com/~lynn/2020.html#22 The Saudi Connection: Inside the 9/11 Case That Divided the F.B.I
https://www.garlic.com/~lynn/2019e.html#143 "Undeniable Evidence": Explosive Classified Docs Reveal Afghan War Mass Deception
https://www.garlic.com/~lynn/2019e.html#114 Post 9/11 wars have cost American taxpayers $6.4 trillion, study finds
https://www.garlic.com/~lynn/2019e.html#105 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#85 Just and Unjust Wars
https://www.garlic.com/~lynn/2019e.html#70 Since 2001 We Have Spent $32 Million Per Hour on War
https://www.garlic.com/~lynn/2019e.html#67 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019e.html#58 Homeland Security Dept. Affirms Threat of White Supremacy After Years of Prodding
https://www.garlic.com/~lynn/2019e.html#26 Radical Muslim
https://www.garlic.com/~lynn/2018b.html#65 Doubts about the HR departments that require knowledge of technology that does not exist
https://www.garlic.com/~lynn/2016f.html#24 Frieden calculator
https://www.garlic.com/~lynn/2016e.html#39 JOINT INQUIRY INTO INTELLIGENCE COMMUNITY ACTIVITIES BEFORE AND AFTER THE TERRORIST ATTACKS OF SEPTEMBER 11, 2001
https://www.garlic.com/~lynn/2016c.html#93 Qbasic
https://www.garlic.com/~lynn/2016c.html#87 Top secret "28 pages" may hold clues about Saudi support for 9/11 hijackers
https://www.garlic.com/~lynn/2016c.html#50 Iraqi WMDs
https://www.garlic.com/~lynn/2016.html#72 Thanks Obama
https://www.garlic.com/~lynn/2015g.html#12 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015d.html#54 The Jeb Bush Adviser Who Should Scare You
https://www.garlic.com/~lynn/2015b.html#78 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015b.html#73 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015b.html#27 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2015.html#72 George W. Bush: Still the worst; A new study ranks Bush near the very bottom in history
https://www.garlic.com/~lynn/2015.html#64 IBM Data Processing Center and Pi
https://www.garlic.com/~lynn/2014i.html#51 How Comp-Sci went from passing fad to must have major
https://www.garlic.com/~lynn/2014d.html#89 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2014d.html#38 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014d.html#14 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014d.html#4 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014c.html#103 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014c.html#99 Reducing Army Size
https://www.garlic.com/~lynn/2014.html#42 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014.html#13 Al-Qaeda-linked force captures Fallujah amid rise in violence in Iraq
https://www.garlic.com/~lynn/2014.html#11 NSA seeks to build quantum computer that could crack most types of encryption
https://www.garlic.com/~lynn/2013o.html#83 NSA surveillance played little role in foiling terror plots, experts say
https://www.garlic.com/~lynn/2013o.html#51 U.S. Sidelined as Iraq Becomes Bloodier

--
virtualization experience starting Jan1968, online at home since Mar1970

Inventing The Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Inventing The Internet
Date: 24 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#76 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#80 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#81 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#83 Inventing The Internet

we regularly had Friday after work get togethers ... from long ago and far away.

Date: FRI, 05/15/87 15:15:21 PDT
From: WHEELER @ ALMVMA
Subject: friday; cc: friday; misc. piece of news:

nsf is starting up new program to involve/fund industry in research activities & ibm says that they will participate, may see the rebirth of hsdt yet (at least if NSF director & gordon bell have their way).


... snip ... top of post, old email index, NSFNET email

At the time Gordon Bell was down at NSF and showed up at some of the HSDT presentations. Of course rebirth of HSDT proposal was just wishful thinking in the light of all the IBM opposition.

email an hour earlier
https://www.garlic.com/~lynn/2006s.html#email870515

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Gordon Bell

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Gordon Bell
Date: 25 May, 2024
Blog: Facebook
After implosion of IBM's future system project
http://www.jfsowa.com/computer/memo125.htm

there was mad rush to get stuff back into the 370 product pipelines (during FS, internal politics was shutting down 370 projects), including kicking off quick&dirty 3033&3081 efforts in parallel. Also the head of POK managed to convince corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA. They weren't planning on telling the people until just before shutdown and move, however the info managed to leak early and some managed to escape into the Boston area (including joke that head of POK was major contributor to the infant VAX/VMS effort) ... Endicott managed to save the VM370 product effort, but had to recreate a development group from scratch.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

Note VM/4300s sold into the same mid-range market as VAX/VMS and in about the same numbers for small unit orders. The big difference was large companies ordering hundreds of VM/4300s at a time for distributing out into departmental areas (sort of the leading edge of the coming distributed computing tsunami). Old post with decade of VAX numbers, sliced&diced by year, model, US/non-US (by mid-80s, mid-range was starting to move to large PC&workstation servers).
https://www.garlic.com/~lynn/2002f.html#0

and from a recent thread over in (facebook public) internet group mentions when we were working with NSF on what was going to become the modern internet (with lots of resistance from IBM), Gordon was down at NSF with NSF Director:
https://www.garlic.com/~lynn/2024c.html#76 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#80 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#81 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#83 Inventing The Internet
and
https://www.garlic.com/~lynn/2024c.html#86 Inventing The Internet
with old HSDT/NSFNET email mentioning Gordon Bell
https://www.garlic.com/~lynn/2024c.html#email870515

At the time Gordon Bell was down at NSF and showed up at some of the HSDT presentations. Of course rebirth of HSDT proposal was just wishful thinking in the light of all the IBM opposition. Previous archived post with email from an hr earlier
https://www.garlic.com/~lynn/2006s.html#email870515

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Machines

From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Machines
Date: 26 May, 2024
Blog: Facebook
Science Center wanted to get a 360/50 to modify with virtual memory to implement virtual machines ... but had to settle for 360/40 (apparently all the spare 360/50s were going to the FAA ATC project). CP40/CMS then morphs into CP67/CMS when they get 360/67 standard with virtual memory. When it was decided to make virtual memory available on all 370s, CP67/CMS was modified to run on 370s ... before VM370/CMS was available. In the morph of CP67/CMS to VM370/CMS lots of features were simplified or dropped (like multiprocessor support) ... but most were eventually added back. Melinda's history has more on CP40 and CP67:
http://www.leeandmelindavarian.com/Melinda#VMHist

During the Future System period (FS was totally different from 370 and was going to completely replace 370s; also, the lack of new 370s during the FS period is credited with giving clone 370 system makers their market foothold), internal 370 efforts were being shut down. When FS finally implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off quick&dirty 3033&3081 efforts.
http://www.jfsowa.com/computer/memo125.htm

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

The head of POK lab also convinces corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (the Endicott lab eventually saves the VM370 product mission but has to recreate a VM370 development group from scratch).

misc. other from two years ago:
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-7-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50-part-8-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

... a little over a decade ago, I was asked to track down the decision to make virtual memory available on all 370s; found somebody that had been staff to the executive making the decision. Basically, MVT storage management was so bad, region sizes had to be specified four times larger than used. As a result a 1mbyte 370/165 typically would only run four concurrent regions, insufficient to keep it busy and justified. Running MVT in a 16mbyte virtual address space could increase the number of concurrently running regions by a factor of four with little or no paging (aka VS2/SVS; similar to running MVT in a CP67 16mbyte virtual machine). Archived post with pieces of email exchange
https://www.garlic.com/~lynn/2011d.html#73

CP67 was 1st modified to have an option for 370 virtual machines ... with 370 architecture simulation (CP67H). Then CP67H was modified to run on the 370 architecture (CP67I). A year before an engineering 370 was operational with virtual memory, CP67L running on a real 360/67, with CP67H running in a 360/67 virtual machine, with CP67I running in a 370 virtual machine, was in regular operation. The extra layer of CP67H running in a 360/67 virtual machine was because the science center system regularly had (non-IBM) staff, profs, and students from Boston area univs using it, and the extra layer of isolation prevented the unannounced 370 virtual memory architecture from leaking. CP67I was then used for IPL to validate the 1st operational engineering 370/145 with virtual memory. Then some DASD engineers came out from San Jose to add 3330 & 2305 device support for CP67SJ ... which saw wide use internally within IBM on 370s.

posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning cp67 l, h, i, sj
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2013.html#71 New HD
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2010e.html#23 Item on TPF
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2007i.html#16 when was MMU virtualization first considered practical?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?

--
virtualization experience starting Jan1968, online at home since Mar1970

Inventing The Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Inventing The Internet
Date: 26 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#76 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#80 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#81 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#83 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#86 Inventing The Internet

Note: after his term as NSF director in the 80s, we would periodically visit him at the "Council on Competitiveness" (I referred to it as K-street lobbying, but it was more like H-street)
https://compete.org/

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

one of the things they were pushing for was commercializing government technology ... and getting anti-consortium/monopoly laws relaxed for government agencies commercializing government technology ... Some archived posts mentioning "Council on Competitiveness"
https://www.garlic.com/~lynn/2023g.html#37 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2016b.html#102 Qbasic
https://www.garlic.com/~lynn/2007t.html#15 Newsweek article--baby boomers and computers

--
virtualization experience starting Jan1968, online at home since Mar1970

Gordon Bell

From: Lynn Wheeler <lynn@garlic.com>
Subject: Gordon Bell
Date: 27 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#87 Gordon Bell

Bertram ... also after FS implosion, I got roped into helping with a 16 processor SMP 370 and we con the 3033 processor engineers into helping with it in their spare time (lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system (MVS) had (effective) 16processor SMP support (POK doesn't ship a 16 processor machine until after the turn of the century, over two decades later) and some of us were invited to never visit POK again (and the 3033 processor engineers instructed to stay heads down on 3033). Note in this time frame MVS pubs stated that MVS two processor SMP had 1.2-1.5 times the throughput of a single processor (and MVS SMP overhead increasing non-linearly as the number of processors increased).

trivia: one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters and the world-wide, online sales&marketing support HONE systems were a long time customer. In the morph of CP67->VM370 they simplify and/or drop a lot of features (including multiprocessor support). In 1974, I start migrating a bunch of stuff to a VM370R2-based CSC/VM for internal datacenters (including HONE) ... which included reorganization of kernel for multiprocessor operation (but not the actual SMP support).

US HONE had consolidated all their HONE datacenters up in palo alto (trivia: when FACEBOOK 1st moves into silicon valley, it is into a new bldg built next door to the former US HONE datacenter) ... consolidating all the systems into a large loosely-coupled, single-system-image, shared dasd operation with fall-over and load balancing across the complex. I then add multiprocessor support to a VM370R3-based CSC/VM system, initially for US HONE so they can add a 2nd processor to each system (16 processor complex, largest in the world) ... and with various sleight of hand, each system was getting twice the throughput of a single processor system (rather than 1.2-1.5).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP posts
https://www.garlic.com/~lynn/subtopic.html#smp
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

Late 70s & early 80s, POK executives were browbeating internal datacenters that they had to move off VM370 to MVS ... because VM370 would no longer be supported on POK machines. US HONE complained so bitterly about one such visit that the executive had to come back and explain that HONE had obviously misunderstood what he said.

--
virtualization experience starting Jan1968, online at home since Mar1970

Gordon Bell

From: Lynn Wheeler <lynn@garlic.com>
Subject: Gordon Bell
Date: 27 May, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#87 Gordon Bell
https://www.garlic.com/~lynn/2024c.html#90 Gordon Bell

Note a little over a decade ago, a customer asks if I could track down the decision to add virtual memory to all 370s. I found somebody that had been staff to the executive making the decision; basically MVT storage management was so bad that region sizes tended to be specified four times larger than used, resulting in a 1mbyte 370/165 typically doing only four concurrent regions, insufficient to keep the system busy and justified. Going to a single 16mbyte virtual address space allowed the number of concurrently executing regions to be increased by a factor of four with little or no paging (sort of like running MVT in a CP67 16mbyte virtual machine).

archived posts with pieces of email exchange about decision to add virtual memory to all 370s (also maintains list of all my other posts that references this post)
https://www.garlic.com/~lynn/2011d.html#73

Note transition from MVT to VS2/SVS ... was being able to get (up to) 15 regions using 4bit storage protect keys in single address space .... then in transition from VS2/SVS to VS2/MVS ... inter-region isolation was by using separate address space for each region. However OS/360 heritage was heavily pointer-passing APIs and for kernel call APIs they map an 8mbyte image of the kernel into every application 16mbyte virtual address space, leaving 8mbytes. Then because subsystems were mapped into their own separate virtual address space, a one mbyte segment ("CSA", common segment area) was mapped into every virtual address space, leaving 7mbytes. However the common area requirements were somewhat proportional to the number of concurrently executing "applications/regions" and number of subsystems, by 3033, it was typically running 5-6mbytes (and had become "common system area" CSA), leaving 2-3mbytes (and threatening to become 8mbytes leaving zero mbytes), contributing to the MVS mad rush to get to 370/XA, MVS/XA, and 31-bit addressing (some amount of 370/XA had been tailored specifically to address MVS issues).
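
To make the arithmetic above concrete, here is a tiny sketch (illustrative only, using the round numbers from this post, not anything taken from an actual MVS system) of how common-area growth squeezes what is left for an application region in a 16mbyte address space:

/* Sketch (illustrative numbers from the text above): how MVS common-area
 * growth squeezed the space left for an application region in a 16MB
 * (24-bit) virtual address space.  Not any IBM code, just the arithmetic. */
#include <stdio.h>

int main(void)
{
    int total_mb  = 16;   /* 24-bit addressing: 16MB virtual address space    */
    int kernel_mb = 8;    /* kernel image mapped into every address space     */
    int csa_mb;           /* common segment/system area, grows with workload  */

    for (csa_mb = 1; csa_mb <= 8; csa_mb++)
        printf("CSA %dMB -> %2dMB left for the application region\n",
               csa_mb, total_mb - kernel_mb - csa_mb);
    return 0;
}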

Note in the early transition from VS2/SVS to VS2/MVS (in part because of the bloated MVS throughput overhead), customers weren't moving as planned and so there was a sales incentive to get customers to move; the situation became known with the SHARE "boney fingers" song
http://www.mxg.com/thebuttonman/boney.asp

Then after MVS/XA was available in early 80s, similar to customers not moving to MVS (boney fingers reference), POK was having problems with getting customers to move to MVS/XA. Amdahl was having better success since they had HYPERVISOR (multiple domain facility, virtual machine subset done in microcode) that could run MVS and MVS/XA concurrently on the same machine (but POK had killed any VM/370 for 370/XA). Now there had been the VMTOOL (& SIE, note SIE was necessary for 370/XA virtual machine operation on 3081, but 3081 lacked the necessary microcode space, so had to do microcode "paging" seriously affecting performance) virtual machine subset, done in POK supporting MVS/XA development ... but MVS/XA development didn't require the features and performance for VM370-like production use. To compete with Amdahl, eventually this is shipped as VM/MA (migration aid) and VM/SF (system facility) for running MVS & MVS/XA concurrently to aid in migration. IBM doesn't ship LPAR & PR/SM HYPERVISOR competitor until almost a decade later, late in the 3090 product life.

Internally, a system programmer in IBM Rochester did enhanced (Endicott's) VM370 with full 370/XA support ... and then POK had a proposal for a few hundred person group in Kingston to bring the VMTOOL features, performance, and throughput up to VM/370 level, as a counter to Endicott's VM370 370/XA (from IBM Rochester).

posts mentioning MVS "boney fingers"
https://www.garlic.com/~lynn/2024b.html#90 IBM User Group Share
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2023e.html#66 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2022h.html#39 IBM Teddy Bear
https://www.garlic.com/~lynn/2022f.html#41 MVS
https://www.garlic.com/~lynn/2022f.html#34 Vintage Computing
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report
https://www.garlic.com/~lynn/2021k.html#81 IBM Fridays
https://www.garlic.com/~lynn/2021.html#25 IBM Acronyms
https://www.garlic.com/~lynn/2019b.html#92 MVS Boney Fingers
https://www.garlic.com/~lynn/2014f.html#56 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2009q.html#14 Electric Light Orchestra IBM song, in 1981?
https://www.garlic.com/~lynn/2009q.html#11 Electric Light Orchestra IBM song, in 1981?
https://www.garlic.com/~lynn/99.html#117 OS390 bundling and version numbers -Reply

Posts mentioning CSA and migrating to 370/XA
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021i.html#17 Versatile Cache from IBM
https://www.garlic.com/~lynn/2020.html#36 IBM S/360 - 370
https://www.garlic.com/~lynn/2019d.html#115 Assembler :- PC Instruction
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2014k.html#36 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2013m.html#71 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2010p.html#21 Dataspaces or 64 bit storage

--
virtualization experience starting Jan1968, online at home since Mar1970

TCP Joke

From: Lynn Wheeler <lynn@garlic.com>
Subject: TCP Joke
Date: 28 May, 2024
Blog: Facebook
2nd half of 80s, I was on (Greg Chesson's) XTP technical advisory board ... where we defined a reliable session done in a minimum of three packet exchanges (compared to TCP's seven packet minimum exchange). There were some gov. entities involved so we took it to (ISO chartered for layer 3&4 protocols) ANSI X3S3.3 for standardization as "HSP" (high speed protocol). Eventually they told us that ISO requirements were that they could only do standards for protocols that conform to the OSI model. XTP/HSP violated that because it 1) supported an internetwork layer (which doesn't exist in the OSI model), 2) bypassed the layer 3/4 interface, and 3) went directly to the LAN MAC interface, which doesn't exist in the OSI model, sitting somewhere in the middle of layer 3.

XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

Also, 88 ACM SIGCOMM had a paper on why (TCP) slow-start (congestion control) wasn't stable (in a large multiple-router network). Basically, returning ACKs (in a windowing algorithm) can bunch up at an intermediate router and then arrive in a burst ... resulting in a burst of back-to-back packets being transmitted, overwhelming some intermediate router. As an aside, the HSDT project started in the early 80s had T1 and faster computer links (both terrestrial and satellite) and one of the 1st things we did was rate-based congestion control (as a fix for the window-based issues). I also wrote "rate-based" pacing into the XTP specification.
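
As a rough illustration of the difference, here is a minimal sketch of the rate-based idea: the sender paces packets to a target rate instead of letting a bunched-up burst of returning ACKs clock out a burst of back-to-back packets. The packet size and rate numbers are made up for illustration; this is not the XTP or HSDT implementation.

/* Minimal sketch of rate-based pacing: instead of letting a burst of
 * returning ACKs trigger a burst of back-to-back packets, the sender
 * spaces transmissions to a target rate.  Purely illustrative -- not
 * the XTP or HSDT code, just the shape of the idea. */
#include <stdio.h>

#define PKT_BITS  (1500 * 8)      /* packet size in bits (assumed)      */
#define RATE_BPS  (1.5e6)         /* target rate, roughly T1 (assumed)  */

int main(void)
{
    double interval = PKT_BITS / RATE_BPS;   /* seconds between packets */
    double next_send = 0.0;
    int pkt;

    for (pkt = 0; pkt < 5; pkt++) {
        printf("packet %d scheduled at t=%.4fs\n", pkt, next_send);
        next_send += interval;    /* pace to the rate, regardless of
                                     when ACKs happen to arrive         */
    }
    return 0;
}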

Not long after leaving IBM in the 90s, was brought into small client/server startup as consultant. Two former Oracle employees (that we had worked with on HA/CMP cluster scale-up) were there responsible for something they called "commerce server" and they wanted to do payment transactions on servers; the startup had also invented this technology called "SSL" they wanted to use, the result is frequently called "electronic commerce".

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

As their (payment and non-payment) servers were being deployed, they were finding many customer server CPUs became bogged down in loops. Turns out the TCP close had a FINWAIT list of sessions in the process of being closed ... which had a linear scan for each arriving packet (to see if there were dangling packets for sessions being closed) ... the assumption was the lists would be very short, but using TCP for HTTP/HTTPS, the FINWAIT lists were exploding to thousands of entries. I had responsibility for everything between "electronic commerce" servers and payment networks and did a talk on "Why The Internet Wasn't Business Critical Dataprocessing" ... based on work I had to do for "electronic commerce" (trivia: Postel sponsored the talk at ISI/USC).
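
A minimal sketch of why that linear scan hurt, assuming nothing about the vendor's actual TCP stack beyond what is described above (a list of closing connections scanned once per arriving packet); the hash-table remark at the end is the generic fix, not necessarily the one that was actually shipped.

/* Sketch of the FINWAIT problem described above: a per-packet linear scan
 * over the list of closing connections is fine when that list has a
 * handful of entries, but with HTTP opening/closing a TCP connection per
 * request the list grows to thousands and the scan dominates CPU.
 * Illustrative only -- not the actual vendor TCP stack. */
#include <stdio.h>
#include <stdlib.h>

struct conn { unsigned key; struct conn *next; };

/* O(n) per packet: the original assumption was that n stays tiny */
static int scan_list(struct conn *head, unsigned key)
{
    int probes = 0;
    for (; head; head = head->next) {
        probes++;
        if (head->key == key) break;
    }
    return probes;
}

int main(void)
{
    int n = 10000, i;
    struct conn *head = NULL;

    for (i = 0; i < n; i++) {                 /* build a long FINWAIT list */
        struct conn *c = malloc(sizeof *c);
        c->key = (unsigned)i; c->next = head; head = c;
    }
    printf("probes for a miss with %d closing connections: %d\n",
           n, scan_list(head, 0xffffffffu));
    printf("a hash table keyed by the connection 4-tuple would make this O(1)\n");
    return 0;
}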

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
electronic commerce payment network posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

a few recent posts mentioning FINWAIT
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#18 MOSAIC
https://www.garlic.com/~lynn/2023d.html#57 How the Net Was Won
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2022f.html#27 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2021k.html#80 OSI Model
https://www.garlic.com/~lynn/2021h.html#86 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021f.html#29 Quic gives the internet's data transmission foundation a needed speedup

--
virtualization experience starting Jan1968, online at home since Mar1970

ASCII/TTY33 Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: ASCII/TTY33 Support
Date: 29 May, 2024
Blog: Facebook
At the end of the semester taking two credit hr intro to fortran/computers, I was hired to re-implement 1401 MPIO in assembler for 360/30. Univ was replacing 709/1401 with a 360/67 for tss/360 ... temporarily the 1401 was replaced with a 360/30 (pending availability of the 360/67; the 360/30 was for starting to get familiar with 360, and it also had microcode 1401 emulation). The univ shutdown the datacenter on weekends and I would have it dedicated, although 48hrs w/o sleep made Monday classes hard. They gave me a bunch of hardware and software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc. and within a few weeks had a 2000 card assembler program. Then within a year of the intro class, the 360/67 comes in and I'm hired fulltime responsible for OS/360 (tss/360 never really came to production, so it ran as a 360/65; I continue to have my 48hr dedicated datacenter on weekends). 709 tape->tape ran student fortran in less than a second. Initially OS/360 MFT ran student fortran in over a minute. I install HASP and it cuts the time in half. I then start redoing STAGE2 SYSGEN to carefully order datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. It never beats 709 until I install Univ of Waterloo WATFOR.

Three people from the science center came out to install CP67/CMS (3rd install after cambridge itself and MIT Lincoln Labs) and I mostly get to play with it in my weekend dedicated time. The 360/67 telecommunication controller had come with port scanners for 1052 and 2741 terminals (which cp67 supported with automagic terminal type recognition, switching the port scanner type with the "SAD" CCW), but then some in the univ got a few TTY33s and at least one TTY35 (ASCII/TTY port scanner upgrade arrived in a box from Heathkit) and I added TTY terminal support and upgraded the automagic terminal type recognition to include TTY/ASCII (fast TTY33 touch typing required a lot more finger strength than a 2741).
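
Purely as an illustration of the control flow (the real mechanism was re-issuing the SAD CCW to switch the port scanner type; the function and type names below are made-up stand-ins, not CP67 code), automagic terminal type recognition amounts to trying each supported type in turn until the line answers sensibly:

/* Sketch of "automagic" terminal type recognition: on a new dial-in
 * connection, try each supported terminal type in turn (on the real
 * hardware this meant re-issuing the SAD CCW to switch the port scanner)
 * until the line answers sensibly.  All functions here are stand-ins. */
#include <stdio.h>

enum term_type { T1052, T2741, TTY, T_UNKNOWN };

/* stand-in: pretend the probe succeeds only for TTY on this line */
static int probe_line(enum term_type t) { return t == TTY; }

static enum term_type identify_terminal(void)
{
    enum term_type candidates[] = { T1052, T2741, TTY };
    int i;
    for (i = 0; i < 3; i++) {
        /* switch the port scanner to this type, then probe the line */
        if (probe_line(candidates[i]))
            return candidates[i];
    }
    return T_UNKNOWN;
}

int main(void)
{
    static const char *names[] = { "1052", "2741", "TTY/ASCII", "unknown" };
    printf("line identified as: %s\n", names[identify_terminal()]);
    return 0;
}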

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

I then wanted to have a single dialin phone number for all terminal types ... "hunt group" https://en.wikipedia.org/wiki/Line_hunting which didn't quite work since IBM had taken a short cut and hard wired the line speed for each port. This kicked off a clone controller project: build a channel interface board for an Interdata/3 (which had a 360-like instruction set) programmed to simulate the IBM controller, with the addition that it supported automatic terminal line speed. Later it was enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces ... and four of us get written up as responsible for (some part of) the IBM clone controller business. Interdata was selling the boxes as clone controllers, later with PE logo after Perkin-Elmer buys Interdata.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
later spun off as Concurrent
https://en.wikipedia.org/wiki/Concurrent_Computer_Corporation

360 plug compatible controller
https://www.garlic.com/~lynn/submain.html#360pcm

IBM picked up my TTY/ASCII support and shipped in CP67 ... I had done a hack in TTY support with one-byte line lengths ... and the MIT Urban Lab modified their CP67 (tech sq bldg across quad from 545) for somebody down at Harvard that got an ASCII device with 1200(?) char length ... but didn't adjust my one-byte hack ... crashing system 27 times in single day.
http://www.multicians.org/thvv/360-67.html
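
For anyone wondering why a one-byte line length is a problem for a ~1200 character device: the value simply wraps modulo 256. A trivial sketch of the failure mode (just the arithmetic, not the original CP67 code):

/* Illustration of the one-byte line-length shortcut mentioned above:
 * a length that fits terminals like the TTY33 silently wraps when a
 * device with ~1200-character lines shows up.  Not the original code,
 * just the arithmetic failure mode. */
#include <stdio.h>

int main(void)
{
    unsigned char len;            /* one-byte line length field        */
    int requested = 1200;         /* longer-line ASCII device          */

    len = (unsigned char)requested;    /* wraps modulo 256             */
    printf("requested %d, stored %u -- buffer sizing is now wrong\n",
           requested, len);
    return 0;
}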

ASCII trivia: 360s were supposed to be ASCII machines, but the ASCII unit record gear wasn't ready yet, so they were going to "temporarily" support EBCDIC using old BCD unit record gear, Biggest Computer Goof ever:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
also
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM

above mentions it was Learson ... another Learson reference, trying (and failing) to block the bureaucrats, careerists, and MBAs from destroying Watson culture and legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
aka two decades later, IBM has one of the largest losses in the history of US companies and was in the process of being reorg'ed into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Before I had graduated, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing in an independent business unit). I think the Renton datacenter was the largest in the world (couple hundred million in 360 gear), 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (somebody jokes that Boeing was bringing in 360/65s like other companies bring in keypunch machines). Lots of politics between the Renton director and the Boeing CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff). At the time 747#3 was flying skies of Seattle getting FAA flt certification. When I graduate, I join IBM Science Center (instead of staying with Boeing CFO).

some recent posts mentioning 1401 MPIO, interdata, and boeing cfo
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Memory Paging

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Memory Paging
Date: 01 Jun, 2024
Blog: Facebook
I took two credit hr intro to computers/fortran and at end of semester was hired to rewrite 1401 MPIO (unit record front end for 709) in assembler for 360/30. Univ was getting 360/67 for TSS/360 replacing 709/1401 and pending availability of 360/67, got a 360/30 temporarily replacing 1401 (for getting 360 experience). The univ. shutdown the datacenter over the weekend and I would have it dedicated, although 48hrs w/o sleep made monday classes hard. They gave me a bunch of hardware & software manuals and I got to design my own monitor, device drivers, interrupt handler, error recovery, storage management, etc ... within a few weeks, I had a 2000 card program. Within a year of intro class, 360/67 arrives and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition and ran as 360/65). 709 ran student fortran tape->tape in less than a second; initially on os/360 they ran over a minute. I install HASP which cuts the time in half. I then redo STAGE2 SYSGEN, carefully placing datasets and PDS members to optimize arm seek and multi-track search cutting another 2/3rds to 12.9secs. Student fortran never got better than 709 until I install Univ. of Waterloo WATFOR

Jan1968, CSC comes out to install CP67/CMS (precursor to VM370/CMS, 3rd install after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my dedicated weekend time. I start out rewriting CP67 pathlengths to improve OS/360 running in a virtual machine; the benchmark ran 322secs stand-alone, initially in a virtual machine 858secs, CP67 CPU 534secs ... after 6months got CP67 CPU down to 113secs for the benchmark. Then I start on optimized I/O ... ordered seek for movable arm DASD and chained page requests maximizing transfers/rotation when no arm motion required (got fixed head 2301 drum from about 70/sec to peak of 270/sec). I then rewrite paging, page replacement algorithm, dynamic adaptive resource management, scheduling, dispatching, etc.
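
A minimal sketch of the ordered-seek idea (illustrative only, not the CP67 code): sort the queued requests by cylinder so the arm sweeps instead of thrashing back and forth, with requests for the same position being candidates for chaining into a single channel program.

/* Sketch of ordered-seek queueing for movable-arm DASD: instead of
 * servicing page/IO requests FIFO, order the queue by cylinder so the
 * arm sweeps in one direction (and requests for the same cylinder can
 * be chained into one channel program).  Illustrative only. */
#include <stdio.h>
#include <stdlib.h>

static int by_cylinder(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int reqs[] = { 180, 3, 97, 4, 96, 181, 3 };   /* requested cylinders */
    int n = (int)(sizeof reqs / sizeof reqs[0]), i;

    qsort(reqs, n, sizeof reqs[0], by_cylinder);  /* one-sweep ordering  */
    printf("service order:");
    for (i = 0; i < n; i++)
        printf(" %d", reqs[i]);                   /* adjacent requests on
                                                     cyl 3 could be chained */
    printf("\n");
    return 0;
}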

In late 70s I'm working with Jim Gray and Vera Watson on the original SQL/relational implementation (System/R) at San Jose Research and in fall of 1980 Jim Gray leaves IBM for TANDEM and palms off some stuff on me. A year later, at the Dec81 ACM SIGOPS meeting, Jim asked me to help a TANDEM co-worker get his Stanford PHD that heavily involved GLOBAL LRU (and the "local LRU" forces from 60s academic work were heavily lobbying Stanford to not award a PHD for anything involving GLOBAL LRU). Jim knew I had detailed stats on the CP67 Cambridge/Grenoble global/local LRU comparison (showing global significantly outperformed local). Early 70s, IBM Grenoble Science Center had a 1mbyte 360/67 (155 4k pageable pages) running 35 CMS users and had modified "standard" CP67 with a working set dispatcher and local LRU page replacement ... corresponding to the 60s academic papers. I was then at Cambridge which had a 768kbyte 360/67 (104 4k pageable pages, only 2/3rds the number of Grenoble) and running 80 CMS users, similar kind of workloads, similar response, better throughput (with twice as many users) running my "standard" CP67 that I had originally done as undergraduate in the 60s. I had loads of Cambridge benchmarking data, in addition to the Grenoble APR73 CACM article and lots of other detailed performance data from Grenoble.
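
For readers unfamiliar with the global/local distinction: in global replacement, any frame in real storage is a candidate victim regardless of which user owns it, while local replacement only steals from the faulting user's own partition of frames. A minimal clock-style sketch of the global case (illustrative only, not the CP67 replacement code):

/* Minimal sketch of global (clock-style) page replacement: a single pool
 * of real-storage frames with reference bits, scanned round-robin; any
 * user's page can be chosen.  "Local LRU" would instead replace only
 * within the faulting user's own partition of frames.  Illustrative. */
#include <stdio.h>

#define FRAMES 8

static int ref[FRAMES];        /* hardware reference bit per frame        */
static int owner[FRAMES];      /* which user currently owns the frame     */
static int hand = 0;           /* clock hand */

/* pick a victim frame from the *global* pool */
static int clock_replace(void)
{
    for (;;) {
        if (ref[hand] == 0) {              /* not recently referenced */
            int victim = hand;
            hand = (hand + 1) % FRAMES;
            return victim;
        }
        ref[hand] = 0;                     /* give it a second chance */
        hand = (hand + 1) % FRAMES;
    }
}

int main(void)
{
    int i, v;
    for (i = 0; i < FRAMES; i++) { ref[i] = (i % 3 != 0); owner[i] = i % 2; }

    v = clock_replace();
    printf("victim frame %d (owned by user %d)\n", v, owner[v]);
    return 0;
}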

Late 70s and early 80s, I had (also) been blamed for online computer conferencing on the IBM internal network which really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray, only about 300 actively participated but claims that 25,000 were reading (folklore that when corporate executive committee was told, 5of6 wanted to fire me). IBM blocked me from responding to Jim's request for local/global paging info for nearly a year, until fall of 1982 (I hoped that they believed it was punishment for online computer conferencing and not that they were meddling in an academic dispute).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some refs:
L. Belady, The IBM History of Memory Management Technology, IBM Journal of R&D, V35N5
R. Carr, Virtual Memory Management, Stanford University, STAN-CS-81-873 (1981)
R. Carr and J. Hennessy, WSClock, A Simple and Effective Algorithm for Virtual Memory Management, ACM SIGOPS, v15n5, 1981
P. Denning, Working sets past and present, IEEE Trans Softw Eng, SE6, jan80
J. Rodriquez-Rosell, The design, implementation, and evaluation of a working set dispatcher, CACM16, APR73
D. Hatfield and J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971

--
virtualization experience starting Jan1968, online at home since Mar1970

Codd almighty! Has it been half a century of SQL already?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Codd almighty! Has it been half a century of SQL already?
Date: 02 Jun, 2024
Blog: Facebook
Codd almighty! Has it been half a century of SQL already? The Reg talks to Donald Chamberlin, Michael Stonebraker and more about the legendary programming language
https://www.theregister.com/2024/05/31/fifty_years_of_sql/
System R was IBM's attempt to demonstrate that the relational model could be executed practically by the computers available at the time, but because Codd's paper had been published in a journal, the IBM team were not the only ones working on the problem. Up the coast near San Francisco, another team at University of California Berkeley was working on a similar project.

... snip ...

... past posts mentioning System/R
https://www.garlic.com/~lynn/submain.html#systemr
--
virtualization experience starting Jan1968, online at home since Mar1970

Online Library Catalog

From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Library Catalog
Date: 02 Jun, 2024
Blog: Facebook
Within a year of taking a two-credit-hour intro to computers/fortran, I was hired fulltime responsible for OS/360. Then the univ library got an ONR (Office of Naval Research) grant to do an online catalog, part of the money going for an IBM 2321 (datacell)
https://en.wikipedia.org/wiki/IBM_2321_Data_Cell

The effort was also selected as a beta-test site for the original IBM CICS product ... and supporting CICS was added to my tasks. One of the first problems was that CICS wouldn't come up. Turns out it had undocumented, hard-coded BDAM dataset options and the library had created the datasets with a different set of options.
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm

In the 90s, I was brought into NIH's NLM; two of the original NLM developers from the 60s were still there and we talked about the early days of OS/360.
https://www.nlm.nih.gov/

CICS/BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

a few posts mentioning univ. library online catalog
https://www.garlic.com/~lynn/2024.html#69 NIH National Library Of Medicine
https://www.garlic.com/~lynn/2023d.html#7 Ingenious librarians
https://www.garlic.com/~lynn/2022c.html#39 After IBM
https://www.garlic.com/~lynn/2022.html#38 IBM CICS
https://www.garlic.com/~lynn/2019c.html#28 CICS Turns 50 Monday, July 8
https://www.garlic.com/~lynn/2018c.html#13 Graph database on z/OS?
https://www.garlic.com/~lynn/2017i.html#4 EasyLink email ad
https://www.garlic.com/~lynn/2017f.html#34 The head of the Census Bureau just quit, and the consequences are huge
https://www.garlic.com/~lynn/2016d.html#36 The Network Nation, Revised Edition
https://www.garlic.com/~lynn/2010j.html#73 IBM 3670 Brokerage Communications System

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Memory Paging

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Memory Paging
Date: 02 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging

Note some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS. Others went to the IBM science center on the 4th flr and did virtual machines ... CP40 on a 360/40 modified with virtual memory hardware, which morphs into CP/67 (precursor to VM370) when the 360/67, standard with virtual memory, becomes available. The official virtual memory IBM product for the 360/67 was TSS/360, but it was an enormous, bloated monster that had trouble coming to fruition, and CP67 became the most common use of 360/67 virtual memory. CMS had a standard 360 I/O filesystem ... and in the early 70s, I decided I could modify CMS to have a page-mapped filesystem, which turned out to have at least three times the throughput of the standard CMS filesystem (and scaled up better; some of the effort was part of the friendly rivalry between the Science Center on the 4th flr and Multics on the 5th flr) ... I would claim that I learned what not to do for a page-mapped filesystem from TSS/360. This was at the time that IBM was focused on replacing 370 with Future System (completely different than 370), which also had a page-mapped filesystem somewhat adapted from TSS/360
http://www.jfsowa.com/computer/memo125.htm
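
A rough modern-day analogy of the difference (not the CMS, TSS/360, or FS implementations; the file name and block size below are hypothetical): a conventional filesystem issues an explicit read per block, while a page-mapped file is mapped once into the address space and brought in by the paging machinery on reference.

import mmap, os

PATH = "sample.dat"          # hypothetical file for the example
BLOCK = 4096                 # 4k "page"/block size

with open(PATH, "wb") as f:  # create a small test file
    f.write(os.urandom(BLOCK * 16))

# conventional access: one explicit read call per block
total = 0
with open(PATH, "rb") as f:
    while (blk := f.read(BLOCK)):
        total += blk[0]

# page-mapped access: map once, then ordinary memory references;
# blocks are fetched by the paging subsystem on first touch
with open(PATH, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        total2 = sum(m[i] for i in range(0, len(m), BLOCK))

print(total, total2)   # same bytes reached by two different paths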

I would periodically ridicule the FS activity (which wasn't exactly a career-enhancing activity) ... drawing an analogy with a long-running cult film that was playing down at Central Sq. The downside was that after FS imploded, there was growing opinion that page-mapped filesystems weren't a good idea (in general, as opposed to the TSS360&FS implementations specifically).

some articles note that cache-miss/memory latency ... when measured in terms of processor cycles, is similar to 60s disk latencies when measured in terms of 60s processor cycles ... aka memory is the new disk.

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
memory mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
paging posts
https://www.garlic.com/~lynn/subtopic.html#wsclock

a recent thread in a public mainframe group discusses the migration of OS/360 to virtual memory (SVS, MVS, MVS/XA)
https://www.garlic.com/~lynn/2024c.html#87
https://www.garlic.com/~lynn/2024c.html#90
https://www.garlic.com/~lynn/2024c.html#91

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Memory Paging

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Memory Paging
Date: 02 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#97 Virtual Memory Paging

Six copies of about three hundred pages were printed, along with an executive summary and a summary of the summary ... placed in "TANDEM" 3-ring binders and sent to the corporate executive committee.

One of the outcomes was a task force study, and the authors of the 1978 book "The Network Nation" were brought in as consultants. Also official software and officially sanctioned moderated conferences ... and a researcher was paid to study how I communicated ... sat in the back of my office for nine months taking notes on face-to-face and telephone conversations, along with getting copies of all my incoming and outgoing email and logs of all instant messages. The material was used for reports, papers, conference talks, books and a Stanford PhD (joint between language and computer AI; Winograd was the advisor on the AI side). From when IBM Jargon was young and "Tandem Memos" was new ...
https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... and
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Memory Paging

From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Memory Paging
Date: 02 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#97 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#98 Virtual Memory Paging

other drift: I was introduced to Boyd in the early 80s and used to sponsor his briefings ... as well as attending Boyd conferences at Quantico

Updated version of Boyd's Aerial Attack Study
https://tacticalprofessor.wordpress.com/2018/04/27/updated-version-of-boyds-aerial-attack-study/
Fighter Mafia: Colonel John Boyd, The Brain Behind Fighter Dominance
https://www.avgeekery.com/fighter-mafia-colonel-john-boyd-the-brain-behind-fighter-dominance/

Boyd redid the F15 design (originally a swing-wing follow-on to the F111) ... he eliminated the swing-wing and cut the weight nearly in half. He was then behind the YF16&YF17 (which became the F16 & F18) and helped with the A10. A New Conception of War
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/
PDF->kindle, loc1783-88:
Boyd's collaboration with associate Pierre Sprey on the development of the A-10 close air support (CAS) aircraft sparked his exploration of history. The project was Sprey's, with Sprey consulting Boyd on performance analysis, E-M Theory, and views on warfare in general. When designing the A-10, Sprey had to determine what aircraft features provided the firepower and loiter time required by ground forces, while also granting survivability against the enemy ground fire that would inevitably be directed against it. The German Wehrmacht had pioneered both the design and employment of dedicated CAS aircraft in World War II.

... snip ...

Trivia: In 89/90 the Commandant of the Marine Corps leverages Boyd for a make-over of the corps ... at a time when IBM was desperately in need of a make-over (when Boyd passed in 1997, the USAF had pretty much disowned him; it was the Marines at Arlington, and his effects went to Quantico). John Boyd and IBM Wild Ducks
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Boyd posts
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4300

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4300
Date: 03 Jun, 2024
Blog: Facebook
Within a year of taking intro to fortran/computers (univ. 709/1401), I was hired full time responsible for OS/360 (the 360/67 was supposedly for tss/360, replacing the 709/1401, but ran as a 360/65) ... then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit); I think the Renton datacenter was possibly the largest in the world (a couple hundred million in IBM 360s), 360/65s arriving faster than they could be installed (boxes constantly staged in the hallways around the machine room). Then when I graduate, I join the IBM Cambridge Science Center and work on CP67 ... one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters ... including the online sales&marketing support HONE systems (initially US, but then starting to spring up all over the world).

After decision to add virtual memory to all 370s, there is effort to morph CP67->VM370, simplifying and/or dropping lots of CP67 (including multiprocessor support). In 1974, I start adding a lot of CP67 stuff back into VM370R2 for CSC/VM. The US HONE systems were also consolidated in Palo Alto (trivia: when FACEBOOK 1st moves into silicon valley it is into new bldg built next door to former US consolidated HONE datacenter), with loosely-coupled, shared DASD complex with load-balancing and fall-over across the complex. Then in 1975, I also add tightly-coupled multiprocessor support into VM370R3-based, initially for HONE so they can add a 2nd processor to each system (for 16 total processors). Endicott also cons me into helping with ECPS microcode assist for 138/148 ... request was to identify the 6kbytes of highest executed kernel code for microcoding (running ten times faster) ... archived post with analysis (6kbytes 79.55% of kernel execution)
https://www.garlic.com/~lynn/94.html#21
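
A minimal sketch of that kind of selection process (the routine names, sizes, and fractions below are hypothetical, not the original analysis in the archived post): rank kernel paths by measured execution time and keep adding the hottest ones until the 6kbyte microcode budget is exhausted, then report what fraction of kernel time is covered.

# (routine name, size in bytes, fraction of kernel execution time) -- made-up data
profile = [
    ("dispatch",   900, 0.22), ("freestor",  700, 0.18),
    ("untrans",   1100, 0.15), ("pagefault", 1300, 0.12),
    ("ios_start",  800, 0.08), ("vtime",      600, 0.05),
    ("spool",     2000, 0.04), ("misc",      9000, 0.16),
]

BUDGET = 6 * 1024          # 6kbytes of microcode space
chosen, used, covered = [], 0, 0.0
for name, size, frac in sorted(profile, key=lambda r: r[2], reverse=True):
    if used + size <= BUDGET:          # take the hottest paths that still fit
        chosen.append(name)
        used += size
        covered += frac

print(chosen, used, f"{covered:.0%} of kernel execution")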

This was as Future System (originally planned to completely replace 370) was imploding and there was a mad rush to get stuff back into the 370 product pipelines ... including kicking off the quick&dirty 3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
I was also con'ed into helping with a 16-processor tightly-coupled multiprocessor, and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168-3 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite-son operating system (MVS) had (effective) 16-processor support (POK doesn't ship a 16-processor machine until after the turn of the century; at the time, MVS documentation claimed that a 2-processor had 1.2-1.5 times the throughput of a single processor, while I was getting twice the throughput of each single processor at HONE).

The head of POK was also convincing corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA. Endicott manages to save the VM370 product mission for the mid-range (138/148, 4300s), but Endicott has to recreate a development group from scratch.

I transfer out to San Jose Research and get to wander around datacenters in silicon valley, including disk engineering & product test across the street. They were running around-the-clock, prescheduled, stand-alone mainframe testing, having said they had tried MVS, but it had 15min mean-time-between-failure in that environment. I offer to rewrite the I/O supervisor, making it bullet-proof and never-fail, so they can do any amount of ondemand, concurrent testing (greatly improving productivity). Product test (in bldg15) got early engineering models, the first 3033 outside the processor engineers, followed by an engineering 4341 (and I have my systems running on all the processors).

The branch office hears that I have more access to early machines than processor development back east ... and Jan79 asked me to do a 4341 benchmark for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). A small cluster of 4341s had higher throughput than a 3033, was much cheaper, smaller footprint, and much less power&cooling (folklore is that POK felt the 4341 so threatened 3033 that they managed to have the allocation of a critical 4341 manufacturing component cut). Note 4341s sold into the same mid-range market as DEC VAX and in about the same numbers for small unit orders; the big difference was large corporations ordering hundreds at a time for placing out in (non-datacenter) departmental areas (sort of the leading edge of the coming distributed computing tsunami).
https://en.m.wikipedia.org/wiki/IBM_4300
https://bitsavers.org/pdf/ibm/4341/

Note POK had enhanced the 370 architecture for 370/XA with lots of features for improving MVS operation ... Endicott had done "E-architecture" 370 enhancements tailored for improving VS/1 & DOS/VS operation. However, we had VS/1 running in VM/370 virtual machines better than VSE running in E-machine mode ... with the addition of having CMS interactive (so there turned out to be much less use made of the 4300 E-architecture).

The 158 had integrated channels that were really slow, and the 158 engine with the integrated channel microcode (and w/o the 370 microcode) was used for the 3031, 3032, & 3033 channel director (the 3031 was two 158 engines, one with just the 370 microcode and one with just the integrated channel microcode; the 3032 was a 168-3 using the 303x channel director; the 3033 started out as 168 logic remapped to 20% faster chips)

The 4341 had much faster integrated channel microcode ... bldg15 used a 4341, with some tweaking, for 3mbyte/sec 3380/3880 testing

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
posts getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

a few posts mentioning 3033&4341 in bldg15, disk product test
https://www.garlic.com/~lynn/2024b.html#115 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#45 IBM DASD
https://www.garlic.com/~lynn/2023b.html#79 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022g.html#8 IBM 4341
https://www.garlic.com/~lynn/2022f.html#91 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2021k.html#107 IBM Future System
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2018f.html#93 ACS360 and FS
https://www.garlic.com/~lynn/2016f.html#85 3033
https://www.garlic.com/~lynn/2016b.html#33 IBM's 3033; "The Big One": IBM's 3033
https://www.garlic.com/~lynn/2016b.html#27 Qbasic
https://www.garlic.com/~lynn/2015d.html#39 Remember 3277?
https://www.garlic.com/~lynn/2010n.html#12 Mainframe Slang terms
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006p.html#40 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please

--
virtualization experience starting Jan1968, online at home since Mar1970

architectural goals, Byte Addressability And Beyond

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: architectural goals, Byte Addressability And Beyond
Newsgroups: comp.arch
Date: Mon, 03 Jun 2024 16:42:17 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Remember that "mainframe workloads" are primarily I/O bound, not CPU- bound. The whole concept of a "mainframe" arose in the era when CPU time was scarce and expensive, so you had all these intelligent I/O peripherals that could be given sequences of operations to perform, with minimal CPU intervention. It was all about maximizing throughput (batch operation), not minimizing latency (interactive operation).

1980, I was con'ed into helping the IBM STL lab, which was overcrowded and moving 300 people to an offsite bldg with dataprocessing service back to the STL datacenter. I was asked to do channel-extender support to place "local" channel-attached controllers at the remote bldg (cutting various protocol round-trip latencies). Part of the issue was that 60s-era mainframes also had limited memory, so there are enormous numbers of protocol round-trips referencing data back in mainframe memory.

1988, the local IBM branch asks if I could help LLNL (national lab) standardize some serial stuff they had been playing with ... which quickly becomes the fibre-channel standard (FCS). Some time later, some IBM engineers become involved with FCS and define a heavy-weight protocol that radically cuts the native throughput ... which was eventually released as FICON (used for mainframe I/O, with extensive protocol round-trip latencies, a significant impact even at short distances at gbit rates).

The most recent public benchmark that I've found is the IBM z196 "Peak I/O" benchmark, which had 104 FICON (running over 104 FCS) getting 2M IOPS. About the same time, a native FCS was announced for E5-2600 blades claiming over a million IOPS; two such FCS have higher throughput than 104 FICON (running over 104 FCS). Note IBM docs recommend that SAPs (CPUs dedicated to running I/O) be kept to no more than 70% CPU ... which would be more like 1.5M (rather than 2M) IOPS.
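
Back-of-the-envelope arithmetic for the figures above (the 2M-IOPS and "over a million" numbers are just the quoted claims; everything else is simple division):

z196_peak_iops  = 2_000_000   # "Peak I/O": 104 FICON running over 104 FCS
ficon_channels  = 104
native_fcs_iops = 1_000_000   # "over a million" claimed for one native FCS

per_ficon = z196_peak_iops / ficon_channels
print(round(per_ficon))                    # ~19,231 IOPS per FICON channel
print(round(native_fcs_iops / per_ficon))  # ~52 FICON-equivalents per native FCS
# so two native FCS (each "over a million") exceed the whole 104-FICON total;
# keeping SAPs at no more than 70% busy caps the practical rate around
print(int(0.70 * z196_peak_iops))          # 1,400,000 -- the "more like 1.5M" ballpark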

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON & FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

some recent posts mentioning z196 SAPs, z196 "Peak I/O" benchmark, IOPS, FICON
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#60 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024c.html#34 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2024b.html#115 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#42 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#21 HA/CMP
https://www.garlic.com/~lynn/2024b.html#9 The Attack of the Killer Micros
https://www.garlic.com/~lynn/2024.html#89 IBM 360
https://www.garlic.com/~lynn/2024.html#80 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#54 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#7 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#97 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#85 Vintage DASD
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#16 370/125 VM/370
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#67 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#62 Why Do Mainframes Still Exist
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2021h.html#44 OoO S/360 descendants
https://www.garlic.com/~lynn/2021c.html#71 What could cause a comeback for big-endianism very slowly?

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Memory Paging

From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Memory Paging
Date: 04 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#97 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#98 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#99 Virtual Memory Paging

other trivia: after joining IBM, I was looking at scenarios where LRU becomes pathologically FIFO: the program is cycling through memory, and the next page that LRU chooses to replace is (approx) the page that the program next tries to use. I came up with a slight tweak to the code so that (effectively) page replacement behaved as LRU under normal conditions, but when the (FIFO) pathological condition was starting to appear, page replacement transitioned to (approximately) random (rather than FIFO), "breaking" the pathological scenario.
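
A toy demonstration of the pathology (not the CP67 code; frame/page counts are made up): with a program cycling through one more page than fits in memory, strict LRU faults on every reference, while falling back to random replacement breaks the lockstep.

import random
from collections import OrderedDict

def run(policy, frames=4, pages=5, passes=200, seed=1):
    random.seed(seed)
    resident = OrderedDict()            # page -> None, kept in LRU order
    faults = 0
    for _ in range(passes):
        for p in range(pages):          # cyclic reference pattern
            if p in resident:
                resident.move_to_end(p) # touch: now most recently used
            else:
                faults += 1
                if len(resident) >= frames:
                    if policy == "lru":
                        resident.popitem(last=False)          # evict LRU page
                    else:                                     # "random"
                        del resident[random.choice(list(resident))]
                resident[p] = None
    return faults

print("LRU faults:   ", run("lru"))      # 1000 -- faults on every reference
print("random faults:", run("random"))   # substantially fewer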

paging and page replacement algorithms posts
https://www.garlic.com/~lynn/subtopic.html#wsclock

some past archive posts mentioning Global LRU and FIFO
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2022d.html#46 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2018d.html#28 MMIX meltdown
https://www.garlic.com/~lynn/2017j.html#84 VS/Repack
https://www.garlic.com/~lynn/2017j.html#78 thrashing, was Re: A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017b.html#27 Virtualization's Past Helps Explain Its Current Importance
https://www.garlic.com/~lynn/2017b.html#26 Virtualization's Past Helps Explain Its Current Importance
https://www.garlic.com/~lynn/2015c.html#48 The Stack Depth
https://www.garlic.com/~lynn/2015c.html#39 Virtual Memory Management
https://www.garlic.com/~lynn/2014m.html#138 How hyper threading works? (Intel)
https://www.garlic.com/~lynn/2013f.html#42 True LRU With 8-Way Associativity Is Implementable
https://www.garlic.com/~lynn/2012m.html#18 interactive, dispatching, etc
https://www.garlic.com/~lynn/2011e.html#27 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#3 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#2 Multiple Virtual Memory
https://www.garlic.com/~lynn/2010f.html#91 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010f.html#90 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010f.html#89 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2008f.html#19 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008e.html#16 Kernels
https://www.garlic.com/~lynn/2008c.html#65 No Glory for the PDP-15
https://www.garlic.com/~lynn/2006q.html#19 virtual memory
https://www.garlic.com/~lynn/2006j.html#39 virtual memory
https://www.garlic.com/~lynn/2006j.html#25 virtual memory
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2000f.html#34 Optimal replacement Algorithm
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes

--
virtualization experience starting Jan1968, online at home since Mar1970

CP67 & VM370 Source Maintenance

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP67 & VM370 Source Maintenance
Date: 04 Jun, 2024
Blog: Facebook
CP67/CMS had full (assembler) source maintenance, which was carried over to VM370/CMS through the 80s until the "OCO-wars", when IBM decided to stop shipping source.

CP67/CMS had an update program that would apply an update file to the base source to generate a work file that would be assembled. As part of a sequence of updates to CP67/CMS to support 370 architecture ... an exec was created that would apply a specified sequence of update files iteratively ... support that was eventually added to the editors and the update program.
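
A simplified sketch of the multi-level update idea (this is not the actual CMS UPDATE control-card syntax; the operations and sample assembler lines are made up): each level is applied to the output of the previous one to produce the work file that gets assembled.

def apply_update(source, update):
    """source: list of lines; update: list of (op, lineno, newlines)."""
    out = list(source)
    # apply highest line numbers first so earlier numbering stays valid
    for op, lineno, newlines in sorted(update, key=lambda u: u[1], reverse=True):
        if op == "replace":
            out[lineno - 1:lineno] = newlines
        elif op == "delete":
            del out[lineno - 1]
        elif op == "insert":            # insert after lineno
            out[lineno:lineno] = newlines
    return out

base   = ["LA   R1,0", "SR   R2,R2", "BR   R14"]
level1 = [("insert", 2, ["LA   R3,BUF"])]    # e.g. a base update level
level2 = [("replace", 1, ["LA   R1,4"])]     # e.g. a local fix on top

work = base
for upd in [level1, level2]:                 # iterative, multi-level application
    work = apply_update(work, upd)
print("\n".join(work))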

I had maintained replicated tape archives of 60s files (from the univ) and 70s files from the science center ... and in the mid-80s, Melinda asked if I had copies of the original exec multi-level update processes. Fortunately I was able to retrieve them from the tapes (the replicated tapes were all in the Almaden Research tape library), because a few weeks later Almaden had some operational problems with random tapes being mounted as scratch, and I lost all tape copies of the 60s&70s archives. Melinda's history web site
http://www.leeandmelindavarian.com/Melinda#VMHist

archived post with pieces of email exchange with melinda
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b

Note in aug1976, TYMSHARE started offering their CMS-based online computer conferencing "free" to the (IBM mainframe user group) SHARE as VMSHARE, archived here
http://vm.marist.edu/~vmshare
one of the OCO discussions
http://vm.marist.edu/~vmshare/browse.cgi?fn=OCOROAD&ft=MEMO&args=object-code#hit

Note: the SHARE "Univ. Waterloo Tape" had customer (assembler) program contributions; one analysis was that there were as many (or more) lines of code as in the base operating system from IBM

commercial, virtual machine based online systems
https://www.garlic.com/~lynn/submain.html#online
cambridge science posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning Almaden tape library and archive lost during period when random tapes were being mounted as scratch
https://www.garlic.com/~lynn/2024b.html#7 IBM Tapes
https://www.garlic.com/~lynn/2024.html#39 Card Sequence Numbers
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022c.html#83 VMworkshop.og 2022
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2018.html#18 IBM Profs
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014.html#19 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#60 Bridgestone Sues IBM For $600 Million Over Allegedly 'Defective' System That Plunged The Company Into 'Chaos'
https://www.garlic.com/~lynn/2013b.html#61 Google Patents Staple of '70s Mainframe Computing
https://www.garlic.com/~lynn/2011m.html#12 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011g.html#29 Congratulations, where was my invite?
https://www.garlic.com/~lynn/2011c.html#4 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#39 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
https://www.garlic.com/~lynn/2010l.html#0 Old EMAIL Index
https://www.garlic.com/~lynn/2010d.html#65 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2009n.html#66 Evolution of Floating Point
https://www.garlic.com/~lynn/2007l.html#51 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2006w.html#42 vmshare

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Memory Paging

From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Memory Paging
Date: 04 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#97 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#98 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#99 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#102 Virtual Memory Paging

note: somewhat contributing to doing online computer conferencing in the 70s&80s was that TYMSHARE had provided their CMS-based online computer conferencing free to the (IBM mainframe user group) SHARE organization starting in AUG1976 ... archived here:
http://vm.marist.edu/~vmshare

I had cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE (& later PCSHARE) files for putting up on the internal network and internal systems (including the online world-wide sales&marketing support HONE systems ... HONE was also one of my long-time customers for enhanced production systems for internal datacenters). One of the biggest problems was IBM lawyers concerned that internal IBM employees would be contaminated by unfiltered customer information (that differed from corporate strategic messages).

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
commercial virtual machine based online services
https://www.garlic.com/~lynn/submain.html#online
paging and page replacement algorithms posts
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
virtualization experience starting Jan1968, online at home since Mar1970

Financial/ATM Processing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Financial/ATM Processing
Date: 05 Jun, 2024
Blog: Facebook
After transferring to SJR in the late 70s, I worked with Jim Gray and Vera Watson on the original SQL/relational (System/R), and when Jim left IBM for Tandem in fall of 1980, he palms off stuff on me. I would periodically visit him at Tandem; after one visit in spring of 1981, I distributed a trip report which kicked off a large online discussion (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me). Jim Gray, while at Tandem, did a study of system outages finding that (even commodity) hardware had gotten so reliable that system outages were shifting to people mistakes, software, and environmental factors (earthquakes, floods, hurricanes, power, cooling, etc) ... requiring dispersed, replicated systems.
https://www.garlic.com/~lynn/grayft84.pdf

Late 80s, we got the HA/6000 project; Nick Donofrio stopped by Austin and all of the executives were out of town. My wife did five hand-drawn charts for Nick and said that she would do it, that it can't be done in Austin, and that it will be $5M (she had previously sized/estimated similar projects). Nick agreed. It was originally for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP when we start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had VAXCluster support in the same source base with Unix (I do a distributed lock manager with VAXCluster API semantics to simplify the move). We do a lot of studies about failure modes ... including SIAC (NYSE TANDEM transaction dataprocessing). They had a TANDEM datacenter in NYC carefully selected to have water-main service from four different water mains on four sides of the bldg, and similar telco and power service. The datacenter went down when transformers in the basement blew up. Out marketing, I coined the terms disaster survivability and "geographic survivability", and the IBM S/88 Product Administrator started taking us around to their customers and got me to write a section for the corporate continuous availability strategy document (it got pulled when both Rochester/AS400 and POK/mainframe complained they couldn't meet the requirements).

Note my wife had been in the GBurg JES group when she was con'ed into going to POK to be responsible for mainframe "loosely-coupled" architecture. She didn't remain long because of 1) periodic battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake (except for IMS hot-standby) until much later with sysplex and parallel sysplex. She has a story about asking Vern Watts who he would ask for permission to do IMS hot-standby; he replied nobody, he would just do it and tell them when it was all done.

Early Jan1992, we have a meeting with the Oracle CEO where AWD/Hester tells Oracle that we would have 16-processor clusters by mid-92 and 128-processor clusters by ye-92. Then by the end of Jan1992, cluster scale-up was transferred for announce as IBM Supercomputer (technical/scientific *ONLY*) and we were told that we couldn't work on anything with more than four processors (we leave IBM a few months later); likely contributing were complaints from the mainframe DB2 group that what we were doing was far ahead of them. After leaving IBM, the guy running FEDWIRE liked us to come by and talk technology; he had three-system IMS hot-standby at two geographically dispersed locations (he credited it with 100% availability for more than a decade).

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
peer-coupled shared data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Financial/ATM Processing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Financial/ATM Processing
Date: 05 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#105 Financial/ATM Processing

Jim Gray
https://web.archive.org/web/20091003041821/https://www.tpc.org/information/who/gray.asp
https://www.tpc.org/information/who/gray5.asp
top TPC-C 814,854,791 tpmC
https://www.tpc.org/tpcc/results/tpcc_perf_results5.asp?resulttype=all

More trivia: in 1988 the IBM branch had asked me if I could help LLNL (national lab) get some serial stuff they were playing with "standardized", which quickly becomes the fibre channel standard ("FCS", including some stuff I had done in 1980; initially 1gbit, full-duplex, 200mbyte/sec aggregate). Then in the early 90s, POK ships ESCON with ES/9000 when it is already obsolete (17mbytes/sec). Then some POK engineers become involved in FCS and define a heavy-weight protocol that significantly cuts the native throughput, eventually released as FICON.

The most recent public benchmark I've found is the z196 "Peak I/O", which gets 2M IOPS using 104 FICON running over 104 FCS. About the same time, a native FCS is announced for E5-2600 blades claiming over a million IOPS; two such FCS have higher throughput than 104 FICON (running over FCS). Also note IBM pubs say SAPs (system assist processors that actually do the I/O) should be kept to no more than 70% CPU, so more like 1.5M IOPS. The best numbers I can find comparing a max-configured z196 and z16 are that z16 is around 4.5-5.0 times z196 (most of the throughput numbers now come from the sequence of system announcements comparing to the immediately previous IBM system) ... a combination of more and faster processors.

FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

architectural goals, Byte Addressability And Beyond

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: architectural goals, Byte Addressability And Beyond
Newsgroups: comp.arch
Date: Thu, 06 Jun 2024 09:11:06 -1000
George Neuner <gneuner2@comcast.net> writes:
Can't find it now and don't remember many details, but ...

A long time ago, there was a story going around about Microsoft vs IBM regarding the day-to-day operation of their company web sites. It claimed that Microsoft was running a ~1000 machine server farm with a crew of ~100, whereas IBM was running 3 mainframes with a crew of ~10.


Microsoft had hundreds of millions of customers that were more internet oriented, while IBM had thousands of customers that were much less internet oriented (and the rate of changing information was much lower) ... and the IBM number may have been only for the web operation, as opposed to total support people.

Jan1979, I was con'ed into doing a benchmark for a national lab that was looking at getting 70 4341s for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). 4341s were also selling into the same mid-range market as VAX and in about the same numbers for small unit orders. The big difference was large companies ordering hundreds of vm/4341s at a time for deployment out into departmental areas (sort of the leading edge of the coming distributed computing tsunami).

The IBM batch system (MVS) was looking at the exploding distributed computing market. The first problem was that the only disk product for the non-datacenter environment was FBA (fixed-block architecture), and MVS only supported CKD. Eventually CKD simulation was made available on FBA disks (no CKD disks have been manufactured for decades now, all being simulated on industry-standard fixed-block disks). It didn't do MVS much good, because distributed operation was looking at dozens of systems per support person while MVS still required dozens of support people per system.

An admittedly 14-year-old comparison: a max-configured z196 mainframe benchmarked at 50BIPS ... still dozens of support people. The equivalent cloud megadatacenter was half a million or more E5-2600 blades, each benchmarked at 500BIPS, with enormous automation requiring 70-80 support people (per megadatacenter: at least 6000-7000 systems per person, and each system ten times a max-configured mainframe) ... also the megadatacenter comparison was linux (not windows).
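
The arithmetic behind that comparison (using the figures quoted above, taking "half a million or more" blades at the low end and 70-80 staff at the high end):

z196_bips  = 50           # max-configured z196 benchmark
blade_bips = 500          # single E5-2600 blade benchmark
blades     = 500_000      # per cloud megadatacenter (low end of the range)
staff      = 80           # support people per megadatacenter (high end)

print(blade_bips / z196_bips)   # 10.0  -- each blade ~10x the max-configured mainframe
print(blades / staff)           # 6250  -- systems per support person (in the 6000-7000 range)
print(blades * blade_bips)      # 250,000,000 BIPS aggregate per megadatacenter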

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

D-Day

From: Lynn Wheeler <lynn@garlic.com>
Subject: D-Day
Date: 06 Jun, 2024
Blog: Facebook
Part of the horrific fighting on Omaha beach ... from the US Army War College, free PDF ... 1/3rd of total US WW2 spending was for high-altitude, four-engine (B17) strategic bombers ... claiming strategic bombing would get Germany to surrender w/o even having to invade Europe.
https://ssi.armywarcollege.edu/2011/pubs/the-european-campaign-its-origins-and-conduct/
loc2582-85:
The bomber preparation of Omaha Beach was a total failure, and German defenses on Omaha Beach were intact as American troops came ashore. At Utah Beach, the bombers were a little more effective because the IXth Bomber Command was using B-26 medium bombers. Wisely, in preparation for supporting the invasion, maintenance crews removed Norden bombsights from the bombers and installed the more effective low-level altitude sights.

... snip ...

This is a recent British book that looks exhaustively at events leading up to D-Day and after, including the logistics leading up to it (the majority of the German war effort was deployed against Russia). It also notes that the limited German divisions diverted to deal with D-Day and Europe were inexperienced and/or wounded/disabled (and had to rely on horses for transportation) ... mentioning that the US appears to have significantly exaggerated the German forces it faced.
https://www.amazon.com/Sand-Steel-Invasion-Liberation-France-ebook/dp/B07PPVG8HG/
pg19/loc992-98:
However, OB West's remaining twenty-three Bodenständige (static position) divisions were either immobile or reserve infantry formations, with low Kampfwert (combat effectiveness) ratings. They were assessed as incapable of taking on offensive missions, and suitable only for limited defence. For the latter's transportation needs, in Rundstedt's domain there were 115,000 military horses on strength, a stark reminder of how reliant on these creatures the German armed forces were in 1944 - by contrast, the Allies would bring with them not a single equine. A year earlier, roughly twenty-five per cent of officers stationed in France had fought in Russia; by 1944, this figure had almost doubled to sixty per cent. This did not necessarily reflect a reinforcement of the west, but a higher proportion of wounded and convalescing leaders.

pg38/loc1415-18:
It still comes as a surprise to many that the German Army in Normandy was predominantly horse-drawn. When Second Lieutenant Bob Sheehan of the US 60th Chemical Company (an outfit responsible for smoke weapons) breasted a rise over the dunes of Omaha on 7 June, he saw ‘a mind-shattering sight that convinced me the war was as good as won. It was a dead horse. The poor animal was still attached to the wagon it had been pulling.

pg47/loc1600-1604:
The stature of the Nazi war machine, forged in North Africa, Italy and on the Eastern Front, was still feared in 1944, though demonstrably hollowed out. It also helped Berlin that the Western Allies, particularly the 21st Army Group, were also excessively cautious, which played to the German inclination - despite their convoluted command - of tactical speed of reaction. Finally, it also suited many Allied commanders after the war to talk up the prowess of their opponents, making the achievement of subduing them all the greater.

... snip ...

Capitalism and social democracy ... have pros & cons and can be used for checks & balances ... example, On War
https://www.amazon.com/War-beautifully-reproduced-illustrated-introduction-ebook/dp/B00G3DFLY8/
loc394-95:
As long as the Socialists only threatened capital they were not seriously interfered with, for the Government knew quite well that the undisputed sway of the employer was not for the ultimate good of the State.

... snip ...

i.e. the government needed the general population's standard of living to be sufficient that soldiers were willing to fight to preserve their way of life. The capitalists' tendency was to reduce the workers' standard of living to the lowest possible ... below what the government needed for soldier motivation ... and it therefore needed the socialists as a counterbalance to the capitalists in raising the general population's standard of living. This was fought out in the 30s, with American Fascists opposing all of FDR's "new deals". The Coming of American Fascism, 1920-1940
https://historynewsnetwork.org/article/172004
The truth, then, is that Long and Coughlin, together with the influential Communist Party and other leftist organizations, helped save the New Deal from becoming genuinely fascist, from devolving into the dictatorial rule of big business. The pressures towards fascism remained, as reactionary sectors of business began to have significant victories against the Second New Deal starting in the late 1930s. But the genuine power that organized labor had achieved by then kept the U.S. from sliding into all-out fascism (in the Marxist sense) in the following decades.

... snip ...

aka "The Coming of American Fascism" shows the socialists countered the "New Deal" becoming fascist ... which had been the objective of the capitalists ... and possibly contributed to forcing them further into the Nazi/fascist camp. When The Bankers Plotted To Overthrow FDR
https://www.npr.org/2012/02/12/145472726/when-the-bankers-plotted-to-overthrow-fdr
The Plots Against the President: FDR, A Nation in Crisis, and the Rise of the American Right
https://www.amazon.com/Plots-Against-President-Nation-American-ebook/dp/B07N4BLR77/

note that John Foster Dulles played major role rebuilding Germany economy, industry, military from the 20s up through the early 40s
https://www.amazon.com/Brothers-Foster-Dulles-Allen-Secret-ebook/dp/B00BY5QX1K/
loc865-68:
In mid-1931 a consortium of American banks, eager to safeguard their investments in Germany, persuaded the German government to accept a loan of nearly $500 million to prevent default. Foster was their agent. His ties to the German government tightened after Hitler took power at the beginning of 1933 and appointed Foster's old friend Hjalmar Schacht as minister of economics.

loc905-7:
Foster was stunned by his brother's suggestion that Sullivan & Cromwell quit Germany. Many of his clients with interests there, including not just banks but corporations like Standard Oil and General Electric, wished Sullivan & Cromwell to remain active regardless of political conditions.

loc938-40:
At least one other senior partner at Sullivan & Cromwell, Eustace Seligman, was equally disturbed. In October 1939, six weeks after the Nazi invasion of Poland, he took the extraordinary step of sending Foster a formal memorandum disavowing what his old friend was saying about Nazism

... snip ...

June1940, Germany had a victory celebration at the NYC Waldorf-Astoria with major industrialists. Lots of them were there to hear how to do business with the Nazis
https://www.amazon.com/Man-Called-Intrepid-Incredible-Narrative-ebook/dp/B00V9QVE5O/
loc1925-29:
One prominent figure at the German victory celebration was Torkild Rieber, of Texaco, whose tankers eluded the British blockade. The company had already been warned, at Roosevelt's instigation, about violations of the Neutrality Law. But Rieber had set up an elaborate scheme for shipping oil and petroleum products through neutral ports in South America.

... snip ...

Later, in a somewhat replay of the 1940 celebration, there was a conference of 5000 industrialists and corporations from across the US at the Waldorf-Astoria; in part because they had gotten such a bad reputation for the depression and for supporting Nazis/fascism, and attempting to refurbish their horribly corrupt and venal image, they approved a major propaganda campaign to equate Capitalism with Christianity.
https://www.amazon.com/One-Nation-Under-God-Corporate-ebook/dp/B00PWX7R56/
part of the result by the early 50s was adding "under god" to the pledge of allegiance. slightly cleaned up version
https://en.wikipedia.org/wiki/Pledge_of_Allegiance

Corporatism is an American, Bipartisan Scourge. Matt Stoller's Goliath recalls when workers' rights became 'consumer advocacy,' and we all lost the language of anti-monopoly.
https://www.theamericanconservative.com/articles/corporatism-is-an-american-bipartisan-scourge/
Stoller also delves into the secret production compacts between American and Nazi producers delivering a timeless lesson that corporate giants will nearly always pursue profit above morality in their dealings with authoritarian regimes.

Goliath: The 100-Year War Between Monopoly Power and Democracy
https://www.amazon.com/Goliath-Monopolies-Secretly-Took-World-ebook/dp/B07GNSSTGJ/

Gangsters of Capitalism
https://www.amazon.com/Gangsters-Capitalism-Smedley-Breaking-Americas-ebook/dp/B092T8KT1N/
Smedley Butler was the most celebrated warfighter of his time. Bestselling books were written about him. Hollywood adored him. Wherever the flag went, "The Fighting Quaker" went--serving in nearly every major overseas conflict from the Spanish War of 1898 until the eve of World War II. From his first days as a 16-year-old recruit at the newly seized Guantanamo Bay, he blazed a path for empire: helping annex the Philippines and the land for the Panama Canal, leading troops in China (twice), and helping invade and occupy Nicaragua, Puerto Rico, Haiti, Mexico, and more. Yet in retirement, Butler turned into a warrior against war, imperialism, and big business, declaring: "I was a racketeer for capitalism."

... snip ...

Smedley Butler
https://en.wikipedia.org/wiki/Smedley_Butler
Business Plot
https://en.wikipedia.org/wiki/Business_Plot
War Is a Racket
https://en.wikipedia.org/wiki/War_Is_a_Racket
Confessions of an Economic Hit Man
https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit_Man
War profiteering
https://en.wikipedia.org/wiki/War_profiteering

A thing to remember about the CIA SR71/A14 is that the USAF was lobbying Eisenhower for a 30% increase in the Pentagon budget for a new generation of heavy bombers based on claims about a "bomber gap" with the Soviets; however, CIA photorecon showed that it didn't exist, purely a fabrication by the USAF ... part of the motivation for Eisenhower's warning about the military-industrial(-congressional) complex in his goodbye speech (at the end of his presidency).

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

--
virtualization experience starting Jan1968, online at home since Mar1970

Old adage "Nobody ever got fired for buying IBM"

From: Lynn Wheeler <lynn@garlic.com>
Subject: Old adage "Nobody ever got fired for buying IBM"
Date: 06 Jun, 2024
Blog: Facebook
... shortly after press articles that I86 server chip vendors were shipping at least half their products directly to cloud megadatacenters, IBM sells off its I86 server business ... since around the turn of the century, cloud operators have been saying they assemble their own servers for 1/3rd the cost of brand-name servers.

trivia: early 80s, I had written an article that since 360 was announced, relative system disk performance had declined by an order of magnitude (disks had gotten 3-5 times faster, systems had gotten 40-50 times faster). A Disk/GPD division executive took exception and assigned the division performance group to refute the claims. After a few weeks, they came back and said I had slightly understated the problem. They then respun the analysis into a SHARE presentation on how to configure disks for system throughput (16Aug1984, SHARE 63, B874).
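
The arithmetic behind the "order of magnitude" claim (using midpoints of the ranges quoted above):

system_speedup = 45   # midpoint of "40-50 times faster" systems since 360 announce
disk_speedup   = 4    # midpoint of "3-5 times faster" disks over the same period

print(system_speedup / disk_speedup)   # ~11 -- disks fell behind by roughly an order of magnitude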

Also see my FCS/FICON z196 "Peak I/O" comment/reply in this thread
https://www.garlic.com/~lynn/2024c.html#33 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#34 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#36 Old adage "Nobody ever got fired for buying IBM"

trivia: there are articles that cache-miss/memory latency when measured in count of processor cycles is similar to 60s disk latency when measured in count of 60s processor cycles ... aka memory is the new disk.

getting to play disk engineering posts
https://www.garlic.com/~lynn/subtopic.html#disk
cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

some posts mentioning SHARE B874
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#92 IBM DASD 3380
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#6 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#36 360/85
https://www.garlic.com/~lynn/2022g.html#84 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#0 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
https://www.garlic.com/~lynn/2021j.html#78 IBM 370 and Future System
https://www.garlic.com/~lynn/2021i.html#23 fast sort/merge, OoO S/360 descendants
https://www.garlic.com/~lynn/2021g.html#44 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021e.html#33 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021.html#79 IBM Disk Division
https://www.garlic.com/~lynn/2021.html#59 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2021.html#17 Performance History, 5-10Oct1986, SEAS
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s

--
virtualization experience starting Jan1968, online at home since Mar1970

Anyone here (on news.eternal-september.org)?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Anyone here (on news.eternal-september.org)?
Newsgroups: alt.folklore.computers
Date: Sat, 08 Jun 2024 07:36:37 -1000
re: online discussions ... in Aug1976, TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
provided their (VM370/CMS based) online computer conferencing to the (IBM mainframe user group) SHARE
https://www.share.org/
for free ... archives here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE (and later PCSHARE) files for putting up on the internal network and internal systems (the biggest problem was lawyers concerned that internal employees would be contaminated by exposure to unfiltered customer information). I was then blamed for online computer conferencing on the internal network in the late 70s and early 80s ... which really took off spring of 1981 when I distributed a trip report of a visit to see Jim Gray at Tandem; only about 300 directly participated but claims are that upwards of 25,000 were reading. We then printed six copies of about 300 pages, along with an executive summary and a summary of the summary, placed them in TANDEM 3-ring binders and sent them to the corporate executive committee (folklore is that 5of6 wanted to fire me). A related post on linkedin
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
from when IBM Jargon was young
https://comlay.net/ibmjarg.pdf
and "Tandem Memos" was new:

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...

after leaving IBM in early 90s, became active on usenet and started archiving posts ... i.e.
https://www.garlic.com/~lynn/93.html

more recent (alt.folklore.computers) post about early programming in the 60s
https://www.garlic.com/~lynn/2019b.html#51

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
virtual machine based commercial online company posts
https://www.garlic.com/~lynn/submain.html#online
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Anyone here (on news.eternal-september.org)?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Anyone here (on news.eternal-september.org)?
Newsgroups: alt.folklore.computers
Date: Sat, 08 Jun 2024 11:35:08 -1000
Ahem A Rivet's Shot <steveo@eircom.net> writes:
Well yes, but I do recall the fate of anyone foolish enough to trouble us with a question about messy-dos, they tended to learn quite a bit about DOS/VSE.

re:
https://www.garlic.com/~lynn/2024c.html#110 Anyone here (on news.eternal-september.org)?

... and before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP/67-CMS at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

CP-67
https://en.wikipedia.org/wiki/CP-67
CP/CMS
https://en.wikipedia.org/wiki/CP/CMS
History of CP/CMS
https://en.wikipedia.org/wiki/History_of_CP/CMS
Cambridge Scientific Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
Melinda's history
http://www.leeandmelindavarian.com/Melinda#VMHist

Note GML
https://en.wikipedia.org/wiki/IBM_Generalized_Markup_Language

was invented at the science center in 1969; a decade later it morphs into ISO Standard SGML, and after another decade it morphs into HTML at CERN. The first webserver in the US was on Stanford SLAC's VM370 (follow-on to CP67):
https://ahro.slac.stanford.edu/wwwslac-exhibit

Account by one of the GML inventors about his "real" job (before inventing GML)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

CP67-based wide-area network, was done by another CSC member,
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

his CP67 wide-area network morphs into corporate internal network ... and the technology also used for the corporate sponsored univ. BITNET:
https://en.wikipedia.org/wiki/BITNET

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Multithreading

From: Lynn Wheeler <lynn@garlic.com>
Subject: Multithreading
Date: 08 Jun, 2024
Blog: Facebook
took two credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in assembler for 360/30. The univ was getting a 360/67 for tss/360 replacing 709/1401 and temporarily got a 360/30 replacing the 1401 (as part of getting 360 experience). Univ. shutdown the datacenter over the weekend and I had it dedicated, although 48hrs w/o sleep made monday classes hard; I got a lot of hardware & software manuals and got to design my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... and within a few weeks had a 2000 card assembler program. Then the 360/67 came in within a year of taking the intro class, tss/360 never came to fruition, and I was hired fulltime responsible for os/360.

CSC came out in Jan1968 to install CP67/CMS (3rd install after CSC itself and MIT Lincoln Labs) ... and I mostly played with it during my weekend dedicated time ... rewriting pathlengths for running OS/360 in virtual machines. Benchmark originally was 322secs bare machine, initially in virtual machine 858secs, CP67 CPU 534secs ... after 6months got CP67 CPU down to 113secs for the benchmark.
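
(The arithmetic on those numbers, all figures from above: the CP67 CPU overhead is roughly the virtual-machine elapsed time minus the bare-machine time, and the pathlength work cut that overhead by nearly a factor of five.)

bare, in_vm, cp67_cpu = 322.0, 858.0, 534.0   # seconds, as given above
print(in_vm - bare)          # ~536: virtual-machine overhead, close to the 534 measured
print(cp67_cpu / 113.0)      # ~4.7x reduction in CP67 CPU after six months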

Then I start on optimizing I/O ... ordered seek for movable arm DASD and chained page requests maximizing transfers/rotation when no arm motion was required (got the fixed head 2301 drum from about 70/sec to a peak of 270/sec). I then rewrite paging, page replacement, dynamic adaptive resource management, scheduling, dispatching, etc. Later I found that the original CP67 scheduling/dispatching code looked similar to MIT/CTSS and early Unix.
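
(A toy sketch of the ordered-seek idea only -- not CP67 code; the function names and cylinder numbers below are made up. The point is servicing queued DASD requests in a single sweep of the access arm instead of first-come-first-served, which here cuts total arm motion by more than half.)

# Toy sketch of ordered (elevator) seek queueing -- illustration only.
def order_requests(pending, current_cyl):
    """Return pending cylinder requests in single-sweep order from current_cyl."""
    ahead = sorted(c for c in pending if c >= current_cyl)                   # sweep outward
    behind = sorted((c for c in pending if c < current_cyl), reverse=True)   # then back
    return ahead + behind

def total_arm_motion(order, current_cyl):
    moves, pos = 0, current_cyl
    for cyl in order:
        moves += abs(cyl - pos)
        pos = cyl
    return moves

pending = [183, 37, 122, 14, 124, 65, 67]      # queued cylinder numbers (made up)
print(total_arm_motion(pending, 53))                       # FIFO order: 640 cylinders
print(total_arm_motion(order_requests(pending, 53), 53))   # ordered seek: 299 cylinders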

Before I graduate, I'm hired fulltime into small group in the Boeing CFO office to help with the formation of Boeing Computing Services (consolidate all dataprocessing into independent business unit). Later when I graduate, I join the science center.

Some 20yrs later (after, as an undergraduate, rewriting lots of CP67 and then in 74/75 adding a lot of it back into VM370), I got email from the OS2 group ... they originally asked the Endicott VM group, which sent it to the Kingston VM group, which sent it to me ... asking why VM does it so much better.

Date: 11/24/87 17:35:50
To: wheeler
FROM: ????
Dept ???, Bldg ??? Phone: ????, TieLine ????
SUBJECT: VM priority boost

got your name thru ??? ??? who works with me on OS/2. I'm looking for information on the (highly recommended) VM technique of goosting priority based on the amount of interaction a given user is bringing to the system. I'm being told that our OS/2 algorithm is inferior to VM's. Can you help me find out what it is, or refer me to someone else who may know?? Thanks for your help.

Regards, ???? (????? at BCRVMPC1)


... snip ... top of post, old email index

Date: Fri, 4 Dec 87 15:58:10
From: wheeler
Subject: os2 dispatching

fyi ... somebody in boca sent a message to endicott asking about how to do dispatch/scheduling (i.e. how does vm handle it) because os2 has several deficiencies that need fixing. VM Endicott forwarded it to VM Kingston and VM Kingston forwarded it to me. I still haven't seen a description of OS2 yet so don't yet know about how to go about solving any problems.


... snip ... top of post, old email index

Date: Fri, 4 Dec 87 15:53:29
From: wheeler
To: somebody at bcrvmpc1 (i.e. internal vm network node in boca)
Subject: os2 dispatching

I've sent you a couple things that I wrote recently that relate to the subject of scheduling, dispatching, system management, etc. If you are interested in more detailed description of the VM stuff, I can send you some descriptions of things that I've done to enhance/fix what went into the base VM system ... i.e. what is there now, what its limitations are, and what further additions should be added.


... snip ... top of post, old email index
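
(The "priority boost" being asked about is essentially favoring users that have just been waiting on terminal I/O over compute-bound users when deciding what to dispatch next. A toy sketch of that idea only -- not the VM370 scheduler; the class, names and weights below are invented for illustration.)

# Toy sketch of interaction-based priority boost -- illustration only.
import heapq

class User:
    def __init__(self, name, recent_cpu=0.0, interactive=False):
        self.name = name
        self.recent_cpu = recent_cpu     # CPU seconds consumed recently
        self.interactive = interactive   # just completed a terminal read?

def dispatch_deadline(user, now, interactive_boost=0.01, cpu_weight=1.0):
    if user.interactive:
        return now + interactive_boost                    # run almost immediately
    return now + cpu_weight * (1.0 + user.recent_cpu)     # push CPU hogs further out

def run_queue(users, now):
    q = [(dispatch_deadline(u, now), u.name) for u in users]
    heapq.heapify(q)          # earliest deadline gets dispatched first
    return q

q = run_queue([User("editor", interactive=True), User("batch", recent_cpu=5.0)], now=100.0)
print(heapq.heappop(q))       # the interactive "editor" entry comes out first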

CP-67
https://en.wikipedia.org/wiki/CP-67
CP/CMS
https://en.wikipedia.org/wiki/CP/CMS
History of CP/CMS
https://en.wikipedia.org/wiki/History_of_CP/CMS
Cambridge Scientific Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
Melinda's history
http://www.leeandmelindavarian.com/Melinda#VMHist

One of my hobbies after joining IBM was enhanced production systems for internal datacenters ... including the online sales&marketing support HONE systems (the science center had also ported APL\360 to CP67/CMS for CMS\APL and nearly all HONE applications were originally done in CMS\APL). HONE was originally US-only, but then clones propagated all over the world, and mid-70s all US HONE systems were consolidated in Palo Alto (just off page mill; trivia: when FACEBOOK 1st moves into silicon valley, it is into a new bldg built next door to the former US HONE consolidated datacenter). US HONE consolidated all the high-end mainframes into a loosely-coupled, large shared DASD farm, single-system image complex with fall-over and load balancing across the complex. Then I port CP67 tightly-coupled multiprocessor support to a VM370R3-based system, originally for US HONE so they could add a 2nd processor to each system (16 processors total) ... US HONE was the largest single-system image complex, and world-wide HONE the largest-ever APL use.

Late 80s got the HA/6000 project, originally for NYTimes to port their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with Unix (I did a distributed lock manager with VAXCluster API semantics to simplify the ports). Early Jan1992, there is a cluster scale-up meeting where AWD/Hester tells Oracle CEO that there would be 16-processor clusters by mid-92 and 128-processor clusters by ye-92. Then late Jan92, cluster scale-up is transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (Mainframe DB2 group had also been complaining that if we were allowed to proceed it would be years ahead of them) ... we leave IBM shortly later.
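
(For flavor, since "VAXCluster API semantics" is doing a lot of work in that sentence: the heart of VMS-style locking is six lock modes -- NL, CR, CW, PR, PW, EX -- and a compatibility matrix between them. The sketch below is a toy illustration of that compatibility check only, not the HA/CMP distributed lock manager; the function and table names are made up.)

# Toy compatibility check over the six VMS/VAXCluster-style lock modes.
COMPATIBLE = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},   # null lock: compatible with all
    "CR": {"NL", "CR", "CW", "PR", "PW"},          # concurrent read
    "CW": {"NL", "CR", "CW"},                      # concurrent write
    "PR": {"NL", "CR", "PR"},                      # protected read
    "PW": {"NL", "CR"},                            # protected write
    "EX": {"NL"},                                  # exclusive
}

def can_grant(requested, held_modes):
    """Grant the requested mode only if it is compatible with every lock already held."""
    return all(requested in COMPATIBLE[h] for h in held_modes)

print(can_grant("PR", ["CR", "PR"]))   # True: shared readers coexist
print(can_grant("EX", ["CR"]))         # False: exclusive has to wait for readers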

Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
cluster supercomputer for technical/scientific *ONLY*
https://www.garlic.com/~lynn/2001n.html#6000clusters1
more news 11may1992, IBM "caught" by surprise
https://www.garlic.com/~lynn/2001n.html#6000clusters2
and 15jun1992, Foray into Mainstream for Parallel Computing
https://www.garlic.com/~lynn/2001n.html#6000clusters3

CSC posts:
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts:
https://www.garlic.com/~lynn/submisc.html#cscvm
dynamic adaptive resource management, dispatching, scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

CAI, IBM 1500

From: Lynn Wheeler <lynn@garlic.com>
Subject: CAI, IBM 1500
Date: 08 Jun, 2024
Blog: Facebook
topic drift; my wife worked for the naval academy (annapolis) 67-68, programming IBM's 1500/coursewriter
https://en.wikipedia.org/wiki/IBM_1500
Seeded by a research grant in 1964 from the U.S. Department of Education to the Institute for Mathematical Studies in the Social Sciences at Stanford University, the IBM 1500 CAI system was initially prototyped at the Brentwood Elementary School (Ravenswood City School District) in East Palo Alto, California by Dr. Patrick Suppes of Stanford University. The students first used the system in 1966.[1][2]

The first production IBM 1500 system was shipped to Stanford University in August 1967.


... snip ...

a couple past ibm 1500 refs:
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2017j.html#94 IBM does what IBM does best: Raises the chopper again

--
virtualization experience starting Jan1968, online at home since Mar1970

Disconnect Between Coursework And Real-World Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disconnect Between Coursework And Real-World Computers
Newsgroups: alt.folklore.computers
Date: Sun, 09 Jun 2024 15:29:07 -1000
jgd@cix.co.uk (John Dallman) writes:
I didn't get to do that. In first year, we learned Pascal and an artificial assembly language created for teaching. In second year, Algol-68R, FORTRAN and COBOL. So I did one term of COBOL, learned the basics, and wanted nothing further to do with it.

re:
https://www.garlic.com/~lynn/2024c.html#110 Anyone here (on news.eternal-september.org)?
https://www.garlic.com/~lynn/2024c.html#111 Anyone here (on news.eternal-september.org)?

Los Gatos lab used Metaware's TWS for various VLSI tools, including a (mainframe) pascal (later released as vs/pascal). Besides some number of VLSI tools, it was also used for implementing IBM's mainframe TCP/IP. I used it for a prototype rewrite of VM/370's spool file system to run in a virtual address space. I had the HSDT project, T1 and faster computer links, which I could drive with TCP/IP ... but VM/370's VNET/RSCS used a (diagnose) synchronous interface which was somewhat limited to around 32kbyte-40kbyte/sec aggregate (degrading if there was contention from others using spool concurrently) and I needed more like 300kbytes/sec per T1 link ... so I needed an asynchronous interface, contiguous allocation, multiple block transfers, read-ahead, and write-behind ... with other enhancements including fast filesystem recovery in case of restart after an ungraceful crash.
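
(Back-of-envelope on where the ~300kbytes/sec per T1 figure sits, assuming only the standard 1.544mbit/sec T1 line rate and full-duplex operation; everything else as stated above.)

# Back-of-envelope: a full-duplex T1 vs the synchronous spool interface.
T1_BITS_PER_SEC = 1_544_000
one_way = T1_BITS_PER_SEC / 8          # ~193 kbytes/sec each direction
full_duplex = 2 * one_way              # ~386 kbytes/sec both directions

sync_interface = 40_000                # ~40 kbytes/sec aggregate, at best
print(one_way, full_duplex)            # 193000.0 386000.0
print(full_duplex / sync_interface)    # ~10x: the old interface can't keep even one T1 busy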

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Also the communication group was fighting hard to block release of the TCP/IP product, but when that got reversed, they changed their strategy ... aka since they had corporate strategic ownership of everything that crossed datacenter walls, it had to be released through them; what shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did the changes for RFC1044 support and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

Late 80s, got the HA/6000 project, originally for porting the NYTimes newspaper system (ATEX) from VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had VAXCluster support in the same source base with Unix. Then late Jan92, cluster scale-up was transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors ... and we leave IBM a few months later.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

IBM then has the worst loss in the history of US companies and was being reorganized into the 13 baby blues in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left the company, but got a call from the bowels of (corporate hdqtrs) Armonk asking us to help with the corporate breakup. However, before we got started, the board brings in the former president of AMEX as CEO who (somewhat) reverses the breakup ... although there is still quite a bit of divestiture and offloading ... which included offloading a lot of VLSI tools to an industry VLSI tools company. Now the VLSI industry standard platform was SUN, so all the tools had to 1st be ported to SUN. I get a contract to port a (Los Gatos) 50,000 Pascal statement VLSI application to SUN. Eventually I got the impression that SUN Pascal hadn't been used for much other than educational purposes and it would have been easier to port from IBM Pascal to SUN C. Also, while SUN hdqtrs was just up the road, they had outsourced Pascal to an operation on the opposite side of the world (Space City) ... reporting problems and getting help with workarounds involved lots of delays.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

posts mentioning Metaware TWS
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2024.html#8 Niklaus Wirth 15feb1934 - 1jan2024
https://www.garlic.com/~lynn/2024.html#3 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#68 Assembler & non-Assembler For System Programming
https://www.garlic.com/~lynn/2023c.html#98 Fortran
https://www.garlic.com/~lynn/2022h.html#40 Mainframe Development Language
https://www.garlic.com/~lynn/2022g.html#6 "In Defense of ALGOL"
https://www.garlic.com/~lynn/2022f.html#13 COBOL and tricks
https://www.garlic.com/~lynn/2022d.html#82 ROMP
https://www.garlic.com/~lynn/2021j.html#23 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#45 not a 360 either, was Design a better 16 or 32 bit processor
https://www.garlic.com/~lynn/2021d.html#5 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#95 What's Fortran?!?!
https://www.garlic.com/~lynn/2021.html#37 IBM HA/CMP Product
https://www.garlic.com/~lynn/2018e.html#63 EBCDIC Bad History
https://www.garlic.com/~lynn/2017k.html#41 CMS style XMITMSG for Unix and other platforms
https://www.garlic.com/~lynn/2017j.html#18 The Windows 95 chime was created on a Mac
https://www.garlic.com/~lynn/2017f.html#94 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#24 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016c.html#62 Which Books Can You Recommend For Learning Computer Programming?
https://www.garlic.com/~lynn/2015g.html#52 [Poll] Computing favorities
https://www.garlic.com/~lynn/2013m.html#36 Quote on Slashdot.org
https://www.garlic.com/~lynn/2013l.html#59 Teletypewriter Model 33
https://www.garlic.com/~lynn/2012m.html#21 The simplest High Level Language
https://www.garlic.com/~lynn/2011m.html#32 computer bootlaces
https://www.garlic.com/~lynn/2010n.html#54 PL/I vs. Pascal
https://www.garlic.com/~lynn/2009o.html#11 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009l.html#36 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2008j.html#77 CLIs and GUIs
https://www.garlic.com/~lynn/2007j.html#14 Newbie question on table design
https://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004f.html#42 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004d.html#71 What terminology reflects the "first" computer language ?
https://www.garlic.com/~lynn/2002q.html#19 Beyond 8+3

--
virtualization experience starting Jan1968, online at home since Mar1970

Puritans

From: Lynn Wheeler <lynn@garlic.com>
Subject: Puritans
Date: 09 Jun, 2024
Blog: Facebook

"The Puritans nobly fled from a land of despotism to a land of freedom, where they could not only enjoy their own religion, but could prevent everybody else from enjoying theirs"
- Artemus Ward (1834-1867).

"The Puritan's idea of Hell is a place where everybody has to mind his own business"
- Wendell Phillips (1811-1884).

Puritan migration to New England (1620-1640)
https://en.wikipedia.org/wiki/Puritan_migration_to_New_England_(1620%E2%80%931640)

... one of my wife's uncles had been contacted to contribute for a Salem witch trial monument ... he wrote back that the family had already contributed an ancestor for the entertainment at the witch trials. Salem Witch trials 1692-1693
https://www.smithsonianmag.com/history/a-brief-history-of-the-salem-witch-trials-175162489/

Note the English had started out using the Spanish model for new world immigration: send over soldiers with the expedition to enslave the local inhabitants to support the colony. Jamestown found the North American inhabitants weren't as pliable and almost starved.
https://historicjamestowne.org/history/history-of-jamestown/

The English then started sending over some from other Great Britain ethnic groups with the expeditions. From "Why Nations Fail": the English Crown charters listed them as "leet-men"
https://www.amazon.com/Why-Nations-Fail-Origins-Prosperity-ebook/dp/B0058Z4NR8/
pg27:
The clauses of the Fundamental Constitutions laid out a rigid social structure. At the bottom were the "leet-men," with clause 23 noting, "All the children of leet-men shall be leet-men, and so to all generations."

... snip ...

My wife's father was presented with a set of 1880 history books for some distinction at West Point by the Daughters Of the 17th Century
http://www.colonialdaughters17th.org/

which refer to how, if it hadn't been for the influence of the Scottish settlers from the mid-Atlantic states, the northern/english states would have prevailed and the US would look much more like England with a monarch and strict class hierarchy. His Scottish ancestors came over after their clan was "broken". A Blackadder WW1 episode had "what do the English do when they see a man in a skirt? they run him through and nick his land". Other history was that the Scots were so displaced that about the only thing left for the men was the military.

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

posts mentioning Salem Witch Trials
https://www.garlic.com/~lynn/2021d.html#31 Does HBO's QAnon Documentary Reveal Who Q Is?
https://www.garlic.com/~lynn/2011d.html#48 The first personal computer (PC)
https://www.garlic.com/~lynn/2009q.html#60 Did anybody ever build a Simon?
https://www.garlic.com/~lynn/2004g.html#56 War

some posts mentioning "Why Nations Fail" & 1880 history books
https://www.garlic.com/~lynn/2024b.html#6 "Under God" (1954), "In God We Trust" (1956)
https://www.garlic.com/~lynn/2023g.html#93 Why Nations Fail
https://www.garlic.com/~lynn/2019e.html#10 The 1619 Project
https://www.garlic.com/~lynn/2019.html#40 Indian Wars
https://www.garlic.com/~lynn/2018d.html#95 More Immigration
https://www.garlic.com/~lynn/2018b.html#45 More Guns Do Not Stop More Crimes, Evidence Shows
https://www.garlic.com/~lynn/2017i.html#40 Equality: The Impossible Quest

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe System Meter

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe System Meter
Date: 09 Jun, 2024
Blog: Facebook
... back in the 60s w/360, ibm rented/leased, with charges based on the system meter which ran whenever any processor and/or channel was operating ... everything had to be totally idle for at least 400ms before the meter would stop. Somewhat as part of making CP67 available 7x24, even during possibly low-usage offshift operation, there was both automated operator (for dark room operation) and special terminal channel programs that would allow the system meter to stop ... but become immediately active when characters were arriving (carried over to vm/370), both minimizing the costs associated with 7x24 operation during low-usage periods. Note: long after IBM had converted from rent/lease system meter charges to sales, MVS still had a timer event that woke up every 400ms, guaranteeing the system meter would never stop.
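
(A toy illustration of the 400ms rule; the 1ms "wakeup runtime" below is a made-up placeholder, everything else as described above.)

# Toy illustration: the meter stops only after >= 400ms of total idle,
# so a timer that pops every 400ms (or faster) keeps it running forever.
IDLE_THRESHOLD_MS = 400

def meter_can_stop(wakeup_interval_ms, wakeup_runtime_ms=1):
    idle_gap = wakeup_interval_ms - wakeup_runtime_ms   # idle time between wakeups
    return idle_gap >= IDLE_THRESHOLD_MS

print(meter_can_stop(400))     # False: ~399ms idle gaps, the meter never stops
print(meter_can_stop(5_000))   # True: long offshift gaps let the meter stop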

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning system meter and work for 7x24 operation
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2023d.html#78 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2022g.html#93 No, I will not pay the bill
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#115 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#98 Mainframe Cloud
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#2 IBM Games
https://www.garlic.com/~lynn/2022d.html#108 System Dumps & 7x24 operation
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#26 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022c.html#25 IBM Mainframe time-sharing
https://www.garlic.com/~lynn/2022.html#27 Mainframe System Meter
https://www.garlic.com/~lynn/2021k.html#53 IBM Mainframe
https://www.garlic.com/~lynn/2021k.html#42 Clouds are service
https://www.garlic.com/~lynn/2021i.html#94 bootstrap, was What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021b.html#3 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2019d.html#19 Moonshot - IBM 360 ?
https://www.garlic.com/~lynn/2019b.html#66 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#111 Online Timsharing
https://www.garlic.com/~lynn/2018f.html#16 IBM Z and cloud
https://www.garlic.com/~lynn/2018.html#4 upgrade
https://www.garlic.com/~lynn/2017i.html#65 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017g.html#46 Windows 10 Pro automatic update
https://www.garlic.com/~lynn/2017.html#21 History of Mainframe Cloud
https://www.garlic.com/~lynn/2016h.html#47 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016b.html#86 Cloud Computing
https://www.garlic.com/~lynn/2016b.html#17 IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?
https://www.garlic.com/~lynn/2015c.html#103 auto-reboot
https://www.garlic.com/~lynn/2015b.html#18 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2014l.html#56 This Chart From IBM Explains Why Cloud Computing Is Such A Game-Changer
https://www.garlic.com/~lynn/2014k.html#16 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014h.html#19 weird trivia
https://www.garlic.com/~lynn/2014g.html#85 Costs of core
https://www.garlic.com/~lynn/2014e.html#8 The IBM Strategy
https://www.garlic.com/~lynn/2014e.html#4 Can the mainframe remain relevant in the cloud and mobile era?
https://www.garlic.com/~lynn/2012l.html#47 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2010l.html#30 How much is the Data Center charging for each mainframe user?

--
virtualization experience starting Jan1968, online at home since Mar1970

Disconnect Between Coursework And Real-World Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disconnect Between Coursework And Real-World Computers
Newsgroups: alt.folklore.computers
Date: Mon, 10 Jun 2024 16:50:27 -1000
Lars Poulsen <lars@beagle-ears.com> writes:
I remember WATFOR ... and its successor WATFIV (which AFAICR was a more complete implementation of Fortran IV).

The Waterloo Fortran was a godsend for introductory programming classes on OS/360. The "normal" FORTCLG procedure invoked the job step setup overhead 3 times. The Waterloo package stayed in memory all day processing one student trial compilation after another. The tradeoff was that you did not get billing records written to SMF for each job, so the installation I knew, limited each mini-job to 5 seconds, and did not bill for them.


re:
https://www.garlic.com/~lynn/2024c.html#114 Disconnect Between Coursework And Real-World Computers

I took two credit hr intro to computers/fortran and at the end of the semester was hired to rewrite 1401 MPIO (unit record front end for 709) in assembler for 360/30. Univ was getting a 360/67 for TSS/360 replacing 709/1401 and pending availability of the 360/67, got a 360/30 temporarily replacing the 1401 (for getting 360 experience). The univ. shutdown the datacenter over the weekend and I would have it dedicated, although 48hrs w/o sleep made monday classes hard. They gave me a bunch of hardware & software manuals and I got to design my own monitor, device drivers, interrupt handler, error recovery, storage management, etc ... within a few weeks, I had a 2000 card program.

Within a year of the intro class, the 360/67 arrives and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition, so the 360/67 ran as a 360/65). The 709 ran student fortran tape->tape in less than a second; initially on os/360 they ran over a minute. I install HASP which cuts the time in half. I then redo STAGE2 SYSGEN, carefully placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs (nearly all job setup, 3step fortgclg, 4.3secs/step).

Student fortran never got better than the 709 until I install Univ. of Waterloo WATFOR. WATFOR on 360/65 ran about 20,000 cards/min (333/sec) .... the univ. ran about a tray of cards/batch ... 2000-2500 cards ... 6sec-7.5secs plus 4.3secs for the job step, or 10.3-11.8secs per tray; typically 40-60 cards/job ... .2-.3sec/student-job.
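
(Worked version of those numbers, all figures from the text:)

# WATFOR throughput arithmetic -- all rates as given above.
cards_per_sec = 20_000 / 60                  # ~333 cards/sec on the 360/65
tray = (2000, 2500)                          # cards per tray/batch
job_step = 4.3                               # single OS/360 job-step setup (secs)

tray_secs = [c / cards_per_sec + job_step for c in tray]   # ~10.3 to ~11.8 secs
per_card = [t / c for t, c in zip(tray_secs, tray)]        # ~5ms/card incl. setup
per_job = (min(per_card) * 40, max(per_card) * 60)         # 40-60 card jobs
print(tray_secs, per_job)                    # roughly .2-.3 secs per student job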

posts mentioning HASP, ASP, JES2, JES3, and/or NGE/NGI
https://www.garlic.com/~lynn/submain.html#hasp

some posts mentioning 709, fortgclg, and watfor
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#60 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#54 REX, REXX, and DUMPRX
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#1 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#90 Vintage IBM HASP
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#29 Univ. Maryland 7094
https://www.garlic.com/~lynn/2021f.html#43 IBM Mainframe
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2015h.html#21 the legacy of Seymour Cray
https://www.garlic.com/~lynn/2015b.html#15 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2015.html#51 IBM Data Processing Center and Pi
https://www.garlic.com/~lynn/2013l.html#18 A Brief History of Cloud Computing
https://www.garlic.com/~lynn/2013h.html#4 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013g.html#39 Old data storage or data base
https://www.garlic.com/~lynn/2013.html#24 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2012d.html#7 PCP - memory lane
https://www.garlic.com/~lynn/2012.html#36 Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#5 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2003c.html#51 HASP assembly: What the heck is an MVT ABEND 422?

--
virtualization experience starting Jan1968, online at home since Mar1970

How the IRS went soft on billionaires and corporate tax cheats

From: Lynn Wheeler <lynn@garlic.com>
Subject: How the IRS went soft on billionaires and corporate tax cheats
Date: 11 Jun, 2024
Blog: Facebook
How the IRS went soft on billionaires and corporate tax cheats
https://www.icij.org/inside-icij/2024/06/how-the-irs-went-soft-on-billionaires-and-corporate-tax-cheats/
Newly obtained data shows the IRS division that audits corporations and the ultrarich flagged no more than 22 possible tax crimes over the past five years -- roughly 40 times fewer criminal referrals than from the unit covering small businesses.

... snip ...

.... note 2009, IRS had press that they were going after 52,000 wealthy Americans that owed $400B in taxes on trillions illegally stashed offshore. 2011, the new speaker of the house stated that he was cutting the budget for the unit responsible for going after the 52,000 and the $400B. Since then there has been a little press on a couple billion in fines for the financial institutions that aided in the tax evasion ... but nothing about the $400B (plus fines).

2002, congress lets the fiscal responsibility act lapse (spending couldn't exceed revenue, on its way to eliminating all federal debt). 2005, the US Comptroller General was including in speeches that there was nobody in congress capable of middle school arithmetic (for how badly they were savaging the budget). 2010, CBO had a report that 2003-2009, spending increased by $6T and tax revenue was reduced by $6T, for a $12T gap compared to a fiscally responsible budget (1st time taxes were cut to not pay for two wars). Sort of a confluence of special interests wanting a huge tax cut, the military-industrial complex wanting a huge spending increase, and Too-Big-To-Fail (& FEDRES) wanting a huge debt increase.

tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
comptroller general posts
https://www.garlic.com/~lynn/submisc.html#comptroller.general
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
too-big-to-fail, too-big-to-prosecute, too-big-to-jail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail

--
virtualization experience starting Jan1968, online at home since Mar1970

Financial/ATM Processing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Financial/ATM Processing
Date: 11 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#105 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#106 Financial/ATM Processing

A little over a decade ago, I was asked to track down the decision to add virtual memory to all 370s; basically MVT storage management was so bad that regions were being specified four times larger than used, with the result that a typical 1mbyte 370/165 could only run four concurrent regions, insufficient to keep the system busy and justified. Going to 16mbyte virtual memory (VS2/SVS) allowed the number of concurrently running regions to be increased by a factor of four (capped at 15 because of storage protect keys) with little or no paging (similar to running MVT in a CP67 16mbyte virtual machine). VS2/SVS had a little bit of code for managing the 16mbyte virtual address space, but the major piece was EXCP/SVC0 code to make copies of passed channel programs, substituting real addresses for virtual (CCWTRANS borrowed from CP67).
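
(Conceptually, what that EXCP/SVC0 copy step does: walk the caller's channel program, translate each virtual data address to a real one, pin the page for the duration of the I/O, and hand the channel the shadow copy. The sketch below is a toy illustration of that idea only -- invented structures, not the CCWTRANS code, and it ignores data chaining and CCWs whose data crosses a page boundary.)

# Toy sketch of building "shadow" CCWs with real addresses -- illustration only.
PAGE = 4096

def translate(vaddr, page_table, pinned):
    page, offset = divmod(vaddr, PAGE)
    frame = page_table[page]     # a real system would fault the page in first
    pinned.add(page)             # keep the page fixed while the I/O is in flight
    return frame * PAGE + offset

def build_shadow_ccws(ccws, page_table):
    """ccws: list of (opcode, virtual_data_addr, byte_count) tuples."""
    pinned = set()
    shadow = [(op, translate(va, page_table, pinned), count)
              for (op, va, count) in ccws]
    return shadow, pinned

page_table = {0: 7, 1: 3}        # virtual page -> real frame (made-up numbers)
program = [("READ", 0x0100, 80), ("READ", 0x1100, 80)]
print(build_shadow_ccws(program, page_table))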

The decision to add virtual memory to all 370s also resulted in a complete rewrite for (virtual machine) VM370, simplifying and/or dropping lots of CP67 features. Then in 1974 I start migrating lots of features from my internal CP67 (aka one of my hobbies after joining IBM was enhanced production operating systems; the online sales&marketing HONE system was a long time customer) to a VM370 Release2-based "CSC/VM". I had done an automated benchmarking system, which was the initial feature migrated to VM370 (running the same simulated multi-user CMS benchmarks) ... being able to specify number of users and kinds of benchmark: CPU intensive, I/O intensive, paging intensive, etc ... and/or combinations. However, VM370 would constantly crash w/o being able to complete the standard benchmark series. The next feature migrated had to be the CP67 internal kernel serialization/synchronization mechanism (in order to complete a set of benchmarks). Then I start on migrating lots of the performance work I did for CP67 as an undergraduate in the 60s ... as well as the kernel organization for tightly-coupled multiprocessor support (but not initially the actual SMP support).
http://www.jfsowa.com/computer/memo125.htm

IBM's 23jun1969 unbundling announcement started charging for SE services, maintenance, and (application) software (but managed to make the case that kernel software should still be free). Later, in the 1st half of the 70s, IBM starts the Future System effort (to completely replace 370, and totally different). During FS, internal politics were killing off 370 efforts (and the lack of new 370 during the period is credited with giving clone 370 makers their market foothold). When FS finally implodes, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off quick&dirty 3033&3081. Also with the rise of clone 370 makers, there was a decision to transition to charging for kernel software ... and some of my CSC/VM was selected for the initial guinea pig ... and I had to spend a lot of time with lawyers and business people on kernel software charging policies.

I also migrate the CP67 multiprocessor support to VM370 Release3-based CSC/VM, initially for the US HONE complex (to add a 2nd processor to each system). With HONE systems cropping up all over the world, the US HONE datacenters were also consolidated in Palo Alto (trivia: later when FACEBOOK 1st moves into Silicon Valley, it was into a new bldg, built next door to the former US HONE consolidated datacenter). Consolidation of US HONE included combining all the systems into the largest "loosely-coupled", shared DASD, "single-system image" complex, with load balancing and fall-over across the complex.

Then there was a decision to release multiprocessor support with VM/370 Release 4 ... however it required the multiprocessor kernel re-org that I had included in my charged-for "Resource Manager" ... and kernel charging policies required that hardware support would (initially) still be free (and couldn't require charged-for software as a pre-req). Eventually the resolution was moving a lot of code from my (charged for) "Resource Manager" into the "free" R4 software base (w/o changing the price of the Release 4 "Resource Manager").

I had also been talked into helping with a 370 16-processor tightly-coupled SMP and we con'ed the 3033 processor engineers into working on it in their spare time. Everybody thought it was great until somebody tells the head of POK it could be decades before the POK favorite son operating system (MVS) had (effective) 16-processor support. At the time, MVS documentation was that 2-processor MVS had 1.2-1.5 times the throughput of a single processor, and their SMP overhead increased non-linearly as the number of processors increased (POK doesn't ship a 16-processor system until after the turn of the century). Some of us were then invited to never visit POK again (and the 3033 processor engineers directed, heads down on 3033) ... and I transfer out to SJR on the west coast. During this period, the head of POK also convinces corporate to kill the VM370 product, shutdown the development group, and transfer all the people to POK for MVS/XA. Endicott does manage to eventually save the VM370 product mission for the mid-range, but had to reconstitute a development group from scratch (however, POK executives were also brow-beating internal datacenters that they had to migrate to MVS; in one case the HONE complex complained about it, and the POK executive had to return to say that HONE misunderstood what he said).

At SJR, I get to wander around datacenters in silicon valley, including bldgs14&15 (disk engineering and product test) across the street. They were running 7x24, pre-scheduled, stand-alone mainframe testing ... and mentioned that they had recently tried MVS, but it had a 15min "mean-time-between-failure" in that environment. I offer to rewrite the I/O supervisor so that it was bullet proof and never fail, allowing any amount of on-demand, concurrent testing (greatly improving productivity). I do an (internal only) Research Report on the "VM/370 I/O Reliability Enhancements" work and happen to mention the MVS 15min MTBF .... bringing down the wrath of the MVS organization on my head (especially after highlighting that it would be decades before MVS had effective 16-way support, and the head of POK having recently killed VM370, supposedly at least for the high-end POK machines).

Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
Automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
dynamic adaptive resource management & scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
23june1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
getting to play disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Disconnect Between Coursework And Real-World Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disconnect Between Coursework And Real-World Computers
Newsgroups: alt.folklore.computers
Date: Wed, 12 Jun 2024 07:05:51 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Interesting. When John McCarthy (the father of Lisp) became director of the Stanford AI Lab (I think it was), IBM donated a machine for him to work with. But they already had a Burroughs machine. McCarthy, an IBM man, went out of his way to make use of the Burroughs as unattractive as possible, so users would switch to the IBM instead.

re:
https://www.garlic.com/~lynn/2024c.html#114 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#117 Disconnect Between Coursework And Real-World Computers

I only remember SAIL having PDP machines.

IBM had "Future System" effort early 70s that was going to completely replace 370 and completely different (internal politics during FS was killing 370 efforts and the lack of new 370s is credited with giving clone 370 makers their market foothold). I continued to work on 370 stuff all during FS and would periodically ridicule what they were doing (which wasn't exactly career enhancing activity).
https://en.wikipedia.org/wiki/IBM_Future_Systems_project

When FS finally implodes there was mad rush to get stuff back into the 370 product pipelines, including kicking off quick&dirty 3033&3081
http://www.jfsowa.com/computer/memo125.htm

I got talked into helping with a 16-processor 370 multiprocessor and we con'ed the 3033 processor engineers into helping (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system (MVS) had (effective) 16-processor support (MVS documentation at the time said that 2-processor MVS had 1.2-1.5 times the throughput of a single processor, and MVS multiprocessor overhead increased non-linearly with the number of processors; POK doesn't ship a 16-processor machine until after the turn of the century). The head of POK then invites some of us to never visit POK again and directs the 3033 processor engineers, heads down on 3033 and don't be distracted.
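
(Purely illustrative arithmetic on why 16-way looked so far off: only the 1.2-1.5 times 2-processor figure comes from the documentation quoted above; the overhead-growth rate below is an assumption picked just to show the shape of the curve, not a measured MVS number.)

# Illustrative-only scaling model: start from 2-way at ~1.3x a uniprocessor
# and let per-processor efficiency keep dropping as processors are added.
def effective_throughput(n_procs, two_way=1.3, overhead_growth=0.05):
    eff_at_2 = two_way / 2.0                              # e.g. 0.65 per processor
    eff_n = max(eff_at_2 - overhead_growth * (n_procs - 2), 0.1)
    return n_procs * eff_n

for n in (2, 4, 8, 16):
    print(n, round(effective_throughput(n), 2))           # 2:1.3  4:2.2  8:2.8  16:1.6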

I then transfer out to SJR on the west coast and worked with Jim Gray
https://en.wikipedia.org/wiki/Jim_Gray_(computer_scientist)
and Vera Watson
https://en.wikipedia.org/wiki/Vera_Watson
on original SQL/relational, System/R
https://en.wikipedia.org/wiki/IBM_System_R
Vera was married to John McCarthy
https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)

Fall of 1980, Jim leaves IBM for Tandem and foists some stuff on me. Late 70s and early 80s, I was also getting blamed for online computer conferencing on the IBM internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s). It somewhat started when TYMSHARE:
https://en.wikipedia.org/wiki/Tymshare
starts offering their VM370/CMS-based online computer conferencing system in Aug1976, free to (IBM mainframe user group) SHARE
https://www.share.org/
as VMSHARE, archives here:
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE for a monthly tape dump of all VMSHARE files for putting up on the IBM internal network and systems. On one of my visits to TYMSHARE they demo'ed ADVENTURE ported to CMS (they had found it on the SAIL PDP10 machine), and I got the full distribution for making it available internally in IBM.

The internal computer conferencing really takes off spring of 1981 after I distributed a trip report of a visit to Jim at Tandem. Only about 300 directly participated but claims were that 25,000 were reading. We printed six copies of about 300 pages, with an executive summary and a summary of the summary, packaged them in Tandem 3-ring binders and sent them to the corporate executive committee (folklore is 5of6 wanted to fire me). From when ibm jargon was young and "Tandem Memos" was new ...
https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... and
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

One of the outcomes was that a researcher was paid to sit in the back of my office for nine months studying how I communicated, taking notes on face-to-face and telephone conversations, getting copies of all my incoming and outgoing email and logs of all instant messages. The material was used for papers, conference talks, books and a Stanford Phd (joint with language and computer AI, Winograd was advisor on the AI side).
https://en.wikipedia.org/wiki/Terry_Winograd

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

