List of Archived Posts

2025 Newsgroup Postings (01/01 - 03/01)

IBM APPN
IBM APPN
IBM APPN
IBM Tape Archive
Dataprocessing Innovation
Dataprocessing Innovation
IBM 37x5
Dataprocessing Innovation
IBM OS/360 MFT HASP
John Boyd and Deming
IBM 37x5
what's a segment, 80286 protected mode
IBM APPN
IBM APPN
Dataprocessing Innovation
Dataprocessing Innovation
On-demand Supercomputer
On-demand Supercomputer
Thin-film Disk Heads
Virtual Machine History
Virtual Machine History
Virtual Machine History
IBM Future System
IBM NY Buildings
IBM Mainframe Comparison
Virtual Machine History
Virtual Machine History
360/65 and 360/67
IBM 3090
IBM 3090
3270 Terminals
On-demand Supercomputer
IBM 3090
IBM ATM Protocol?
The Greatest Capitalist Who Ever Lived: Tom Watson Jr. and the Epic Story of How IBM Created the Digital Age
IBM ATM Protocol?
IBM ATM Protocol?
IBM Mainframe
Multics vs Unix
Multics vs Unix
Multics vs Unix
Multics vs Unix
Multics vs Unix
Multics vs Unix
vfork history, Multics vs Unix
Multics vs Unix
Multics vs Unix
Multics vs Unix
Multics vs Unix
Multics vs Unix
The Paging Game
The Paging Game
Canned Software and OCO-Wars
Canned Software and OCO-Wars
Multics vs Unix
IBM Management Briefings and Dictionary of Computing
IBM CMSBACK, WDSF, ADSM, TSM
Multics vs Unix
Multics vs Unix
Multics vs Unix
old pharts, Multics vs Unix
old pharts, Multics vs Unix
Grace Hopper, Jean Sammat, Cobol
old pharts, Multics vs Unix
old pharts, Multics vs Unix
old pharts, Multics vs Unix
Multics vs Unix
old pharts, Multics vs Unix
old pharts, Multics vs Unix
old pharts, Multics vs Unix
VM370/CMS, VMFPLC
VM370/CMS, VMFPLC
old pharts, Multics vs Unix
old pharts, Multics vs Unix vs mainframes
old pharts, Multics vs Unix vs mainframes
IBM Mainframe Terminals
old pharts, Multics vs Unix vs mainframes
IBM Mainframe Terminals
old pharts, Multics vs Unix vs mainframes
360/370 IPL
IBM Bus&TAG Cables
IBM Bus&TAG Cables
Online Social Media
Online Social Media
IBM Special Company 1989
Stress-testing of Mainframes (the HASP story)
Big Iron Throughput
Big Iron Throughput
Wang Terminals (Re: old pharts, Multics vs Unix)
Wang Terminals (Re: old pharts, Multics vs Unix)
Online Social Media
IBM Computers
IBM Suggestion Plan
Public Facebook Mainframe Group
old pharts, Multics vs Unix
IBM Token-Ring
IBM Token-Ring
IBM Token-Ring
IBM Tom Watson Jr Talks to Employees on 1960's decade of success and the 1970s
FAA And Simulated IBM Mainframe
Clone 370 Mainframes
Clone 370 Mainframes
Large IBM Customers
Mainframe dumps and debugging
Mainframe dumps and debugging
Giant Steps for IBM?
Giant Steps for IBM?
IBM 370 Virtual Storage
IBM 370 Virtual Storage
IBM Process Control Minicomputers
DOGE Claimed It Saved $8 Billion in One Contract. It Was Actually $8 Million
Computers, Online, And Internet Long Time Ago
2301 Fixed-Head Drum
2301 Fixed-Head Drum
IBM 370 Virtual Memory
2301 Fixed-Head Drum
CMS 3270 Multi-user SPACEWAR Game
Consumer and Commercial Computers
Consumer and Commercial Computers
Consumer and Commercial Computers
Microcode and Virtual Machine
Clone 370 System Makers
Clone 370 System Makers
PowerPoint snakes
The joy of FORTRAN
The joy of FORTRAN
The Paging Game
3270 Controllers and Terminals
The Paging Game
The Paging Game
Online Social Media
The joy of FORTRAN
The joy of FORTRAN
The joy of FORTRAN
IBM Token-Ring
3270 Controllers and Terminals

IBM APPN

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM APPN
Date: 01 Jan, 2025
Blog: Facebook
For a while I reported to the same executive as the person responsible for AWP164 (which turns into APPN for AS/400) ... I would chide him to come over and work on real networking (TCP/IP) because the SNA organization would never appreciate him. Then SNA vetoes the original 1986 AS/400 APPN announcement ... in the escalation process, the announcement letter was carefully rewritten to not imply any relationship between APPN and SNA.

trivia: back in the 70s period when SNA 1st appeared, my (future) wife was co-author of AWP39, networking architecture ... they had to qualify it "Peer-to-Peer Networking Architecture" because SNA had misused the term "Network" (when it wasn't one). She was then con'ed into going to POK to be responsible for loosely-coupled system architecture, where she did "Peer-Coupled Shared Data Architecture".

She didn't remain long, in part because of little uptake (until much later for SYSPLEX, except for IMS hot-standby) and in part because of periodic battles with SNA group trying to force her into using VTAM for loosely-coupled operation.

Peer-Coupled Shared Data Architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM APPN

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM APPN
Date: 01 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#0 IBM APPN

My wife tells of asking Vern Watts who he was going to ask for permission to do IMS hot-standby; he says "nobody" ... he would just tell them when it was all done.
https://www.vcwatts.org/ibm_story.html
The Tale of Vern Watts. The long, inspired career of an IBM Distinguished Engineer and IMS inventor

SNA organization was fighting off release of mainframe TCP/IP support; when they lost ... they changed their tactic and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. It was then released for MVS by simulating the VM370 diagnose API .... which further aggravated CPU use (in MVS).

I then add RFC1044 support and in some tuning tests at Cray Research between a Cray and a VM/4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).

A univ. did a study comparing MVS VTAM LU6.2 pathlength (160K instructions and 15 buffer copies) with UNIX TCP pathlength (5K instructions and five buffer copies). The SNA organization fought me being on the XTP technical advisory board ... working on the HSP (high-speed protocol) standard that included direct TCP transfers to/from application space (with no buffer copies), with scatter/gather (unix "chained data").
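
As an aside on "scatter/gather (unix chained data)": on unix-style systems this is the readv/writev form of I/O, where one call is handed a list of buffer descriptors rather than a single contiguous buffer, so the pieces don't have to be copied into a staging buffer first. A minimal sketch (the function name and buffers are hypothetical; readv/writev themselves are standard):

  #include <sys/uio.h>     /* struct iovec, writev */
  #include <unistd.h>

  /* send a protocol header and a payload in one call, with no copy into
     a combined staging buffer -- "scatter/gather" a la unix chained data */
  ssize_t send_hdr_and_payload(int sock, const char *hdr, size_t hdrlen,
                               const char *payload, size_t paylen)
  {
      struct iovec iov[2];
      iov[0].iov_base = (void *)hdr;      /* first piece: header */
      iov[0].iov_len  = hdrlen;
      iov[1].iov_base = (void *)payload;  /* second piece: payload */
      iov[1].iov_len  = paylen;
      return writev(sock, iov, 2);        /* kernel gathers both pieces */
  }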

Later in the 90s, SNA organization hires a silicon valley contractor (former Amdahl employee that I had known from SHARE meetings since early 70s, who recently passed) to implement TCP/IP directly in VTAM. What he demo'ed had TCP/IP significantly higher throughput than LU6.2. He was then told that everybody knows that a "proper" TCP/IP implementation is much slower than LU6.2, and they would only be paying for a "proper" implementation

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM APPN

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM APPN
Date: 01 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#0 IBM APPN
https://www.garlic.com/~lynn/2025.html#1 IBM APPN

Co-worker at the science center was responsible for the CP67-based wide-area network from the 60s, that morphs into the corporate internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s, about the time the SNA-org forced the internal network to be converted to SNA/VTAM). Account by one of the inventors of GML at the science center in 1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

Edson (passed Aug2020)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

We then transfer out to San Jose Research in 1977 and in the early 80s, I get HSDT, T1 and faster computer links, some amount of conflict with SNA-org (note in the 60s, IBM had the 2701 telecommunication controller that supported T1 links, however IBM's move to SNA in the mid-70s and associated issues seem to have capped links at 56kbits/sec). Was working with the NSF director and was supposed to get $20M to interconnect the NSF Supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.

SJMerc article about Edson, "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine),
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

trivia: HSDT first long-haul T1 link was between IBM Los Gatos lab (on west coast) and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston, NY (on the east coast) ... where he had a whole boat load of Floating Point Systems boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Tape Archive

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Tape Archive
Date: 01 Jan, 2025
Blog: Facebook
I had archive tapes of files and email from time at univ in the 60s through 1977 at the IBM science center, starting with 800bpi, copied to 1600bpi, then to 6250 and finally to 3480 cartridge ... triple replicated in the IBM Almaden Research tape library. Mid-80s, Melinda asked if I had the original implementation of multi-level CMS source update (done in exec, iterating using temp files and a sequence of CMS update commands). I managed to pull it off tape and email it to her.
https://www.leeandmelindavarian.com/Melinda#VMHist
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2006w.html#email850908

That was fortunate; shortly after, Almaden started experiencing operational problems mounting random tapes as scratch ... and I eventually found I had lost nearly a dozen tapes (including all the triple-replicated 60s&70s archive).

other Melinda email from the era (and I had learned to not keep replicated copies in Almaden)
https://www.garlic.com/~lynn/2007b.html#email860111
https://www.garlic.com/~lynn/2011b.html#email860217
https://www.garlic.com/~lynn/2011b.html#email860217b

csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning almaden tape library
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024d.html#51 Email Archive
https://www.garlic.com/~lynn/2024c.html#103 CP67 & VM370 Source Maintenance
https://www.garlic.com/~lynn/2024b.html#7 IBM Tapes
https://www.garlic.com/~lynn/2024.html#39 Card Sequence Numbers
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022c.html#83 VMworkshop.og 2022
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2021k.html#51 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2018e.html#65 System recovered from Princeton/Melinda backup/archive tapes
https://www.garlic.com/~lynn/2017i.html#76 git, z/OS and COBOL
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014e.html#28 System/360 celebration set for ten cities; 1964 pricing for oneweek
https://www.garlic.com/~lynn/2014.html#19 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013f.html#73 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013e.html#61 32760?
https://www.garlic.com/~lynn/2013b.html#61 Google Patents Staple of '70s Mainframe Computing
https://www.garlic.com/~lynn/2012i.html#22 The Invention of Email
https://www.garlic.com/~lynn/2011g.html#29 Congratulations, where was my invite?
https://www.garlic.com/~lynn/2011f.html#80 TSO Profile NUM and PACK
https://www.garlic.com/~lynn/2011c.html#4 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#39 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
https://www.garlic.com/~lynn/2010l.html#0 Old EMAIL Index
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2006w.html#42 vmshare
https://www.garlic.com/~lynn/2006t.html#20 Why these original FORTRAN quirks?; Now : Programming practices

--
virtualization experience starting Jan1968, online at home since Mar1970

Dataprocessing Innovation

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Dataprocessing Innovation
Date: 01 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#111 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#112 Dataprocessing Innovation

some CICS history ... website gone 404, but lives on at the wayback machine
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

post about taking a 2hr intro to fortran/computers; univ was getting a 360/67 (for tss/360) replacing 709/1401 and within a yr of taking the intro class, it comes in and I'm hired fulltime responsible for os/360 (tss/360 doesn't come to fruition)
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
more reference in thread
https://www.garlic.com/~lynn/2024g.html#107 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage

Univ. library gets ONR grant to do an online catalog, some of the money goes for a 2321/datacell ... effort was also selected as betatest for the CICS product and CICS was added to my tasks. One of the 1st problems was CICS wouldn't come up ... eventually tracked it to CICS having some hard-coded BDAM options/features ... that weren't documented/specified, and the library had created BDAM files with a different set of options.

posts mentioning CICS and/or BDAM
https://www.garlic.com/~lynn/submain.html#cics

--
virtualization experience starting Jan1968, online at home since Mar1970

Dataprocessing Innovation

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Dataprocessing Innovation
Date: 02 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#111 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#112 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#4 Dataprocessing Innovation

some related CSC and virtual machine work ...

Some of the MIT CTSS/7094 people go to the 5th flr and Multics and others go to the 4th flr and the IBM Cambridge Scientific Center. CSC was expecting that MULTICS would be awarded to IBM (CSC), but instead it goes to GE.

Melinda Varian's history
https://www.leeandmelindavarian.com/Melinda#VMHist
https://www.leeandmelindavarian.com/Melinda/neuvm.pdf
from above, Les Comeau has written (about TSS/360):
Since the early time-sharing experiments used base and limit registers for relocation, they had to roll in and roll out entire programs when switching users....Virtual memory, with its paging technique, was expected to reduce significantly the time spent waiting for an exchange of user programs.

What was most significant was that the commitment to virtual memory was backed with no successful experience. A system of that period that had implemented virtual memory was the Ferranti Atlas computer, and that was known not to be working well. What was frightening is that nobody who was setting this virtual memory direction at IBM knew why Atlas didn't work.

... snip ...

A motivation for CSC to do virtual memory hardware mods to a 360/40 and (virtual machine/memory) CP40/CMS was to study virtual memory operation.

Atlas reference (gone 403?, but lives free at wayback):
https://web.archive.org/web/20121118232455/http://www.ics.uci.edu/~bic/courses/JaverOS/ch8.pdf
from above:
Paging can be credited to the designers of the ATLAS computer, who employed an associative memory for the address mapping [Kilburn, et al., 1962]. For the ATLAS computer, |w| = 9 (resulting in 512 words per page), |p| = 11 (resulting in 2048 pages), and f = 5 (resulting in 32 page frames). Thus a 2^20-word virtual memory was provided for a 2^14-word machine. But the original ATLAS operating system employed paging solely as a means of implementing a large virtual memory; multiprogramming of user processes was not attempted initially, and thus no process id's had to be recorded in the associative memory. The search for a match was performed only on the page number p.
... snip ...

... referencing ATLAS used paging for large virtual memory ... but not multiprogramming (multiple concurrent address spaces). Cambridge had modified 360/40 with virtual memory and associative lookup that included both process-id and page number (aka both virtual memory and multiple concurrent processes).
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

IBM does the 360/67 standard with virtual memory for TSS/360. When the 360/67 becomes available, CSC morphs CP40/CMS into CP67/CMS. At the time TSS/360 was decommitted, there were 1200 people involved with TSS/360 and 12 people in the CP67/CMS group.

As an undergraduate in the 60s, I had been hired fulltime for OS/360 running on a 360/67 (as a 360/65; it originally was supposed to be for TSS/360). The univ shut down the datacenter on weekends and I would have it dedicated (although 48hrs w/o sleep made Monday classes difficult). CSC then came out to install CP/67 (3rd after CSC itself and MIT Lincoln Labs) and I mostly played with it during my dedicated time ... spent the 1st six months or so redoing pathlengths for running OS/360 in a virtual machine. OS/360 benchmark was 322secs on the bare machine, initially 856secs in a virtual machine (CP67 CPU 534secs); got CP67 CPU down to 113secs (from 534secs).

I redid (dynamic adaptive resource management) scheduling&paging algorithms, added ordered seek for disk i/o and chained page requests to maximize transfers/revolution (2301 fixed-head drum from peak 70/sec to peak 270/sec), and changed CP67 page replacement to global LRU (at a time when academic literature was all about "local LRU"), which I also deployed at Cambridge after graduating and joining IBM. IBM Grenoble Scientific Center modified CP67 to implement a "local" LRU algorithm for their 1mbyte 360/67 (155 page'able pages after fixed memory requirements). Grenoble had a very similar workload to Cambridge, but their throughput for 35 users (local LRU) was about the same as Cambridge's 768kbyte 360/67 (104 page'able pages) with 80 users (and global LRU) ... aka global LRU outperformed "local LRU" with more than twice the number of users and only 2/3rds the available real memory.
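
As an aside on the global-vs-local distinction: global LRU is typically approximated with a clock-style scan over a single pool of all real page frames regardless of owner, while local LRU confines replacement to the owning user's own frames. A minimal sketch of that kind of global second-chance scan (all structure and function names are hypothetical, not the actual CP67 code):

  #include <stddef.h>

  struct frame {
      int referenced;   /* hardware reference bit, reset as the hand passes */
      int owner;        /* owning user/process -- ignored by the global scan */
      void *page;
  };

  /* one clock hand over ALL real frames (global), not a per-user pool (local) */
  static size_t hand;

  size_t select_victim(struct frame *frames, size_t nframes)
  {
      for (;;) {
          struct frame *f = &frames[hand];
          size_t victim = hand;
          hand = (hand + 1) % nframes;      /* advance the clock hand */
          if (f->referenced)
              f->referenced = 0;            /* give a second chance, keep scanning */
          else
              return victim;                /* not referenced since last pass: replace */
      }
  }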

other trivia: there were a couple of CP67 online commercial spinoffs in the 60s (that specialized in services for the financial industry) and one of the Multics people from the 5th flr joins one as part of the "First Financial Language" offering on CP67/CMS (and later VM370/CMS). A decade later he joins with another person to found Visicalc:
https://en.wikipedia.org/wiki/VisiCalc

GML was invented at CSC in 1969; a decade later it morphs into SGML, and after another decade it morphs into HTML at CERN. The first webserver in the US is on the Stanford SLAC VM370 system:
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

Account by one of the GML inventors about the CP67-based wide-area network, which later morphs into the corporate internal network (larger than arpanet/internet from just about the start until sometime mid/late 80s, about the time the internal network was forced to convert to SNA/VTAM).
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

Technology also used for the corporate sponsored univ. BITNET:
https://en.wikipedia.org/wiki/BITNET

Person responsible (passed Aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

newspaper article about Edson's battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Previously mentioned in this post/thread, nearly the whole first decade of SQL/relational System/R work was all done on VM370.

The first commercial relational (but not SQL) RDBMS was by the MULTICS group on the 5th flr (followed by Oracle, Ingres, etc); meanwhile the IBM System/R group faced enormous hurdles inside IBM. Was eventually able to do tech transfer ("under the radar" while the company was focused on the IMS-followon "EAGLE") to Endicott for SQL/DS. When "EAGLE" implodes, there is a request for how fast System/R could be ported to MVS ... eventually released as DB2 (initially for decision-support *ONLY*).

and finally ... before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP/67 at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, XML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
Internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
online commercial (virtual machine) offerings
https://www.garlic.com/~lynn/submain.html#online
first sql/relational RDBMS, System/R
https://www.garlic.com/~lynn/submain.html#systemr
virtual memory and paging posts
https://www.garlic.com/~lynn/subtopic.html#clock
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 37x5

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 37x5
Date: 02 Jan, 2025
Blog: Facebook
When I joined the IBM science center ... the person responsible for the science centers' CP67-based wide-area network had been trying to convince CPD that they should use the much more capable (Series/1) Peachtree processor (rather than the really anemic UC) for the 37xx boxes. Reference by one of the inventors of GML at the science center in 1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

The CP67-based wide-area network then morphs into the corporate internal network, larger than the arpanet/internet from just about the start until sometime mid/late 80s (about the time it was forced to convert to SNA/VTAM) ... technology had also been used for the corporate sponsored univ BITNET
https://en.wikipedia.org/wiki/BITNET

Edson (passed aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet
https://www.garlic.com/~lynn/subnetwork.html#bitnet

We transfer from CSC out to San Jose Research on the west coast in 1977 ... and in the early 80s, I get the HSDT project, T1 and faster computer links (note in the 60s, IBM had 2701 telecommunication controllers that supported T1, but in the transition to SNA/VTAM and 37xx boxes in the mid-70s, issues seemed to cap links at 56kbits/sec). Part of HSDT funding was based on being able to show some IBM content and I eventually found the FSD Series/1 T1 Zirpel card that had been done for government customers that still had 2701 controllers (which were all in the process of falling apart). I then went to order a half dozen S/1s and was told that IBM had recently bought ROLM (which was a Data General shop) and ROLM had made a large S/1 order that created a year's backlog (I eventually cut a deal with ROLM for some of their order positions; I would help them with some of their testing operations).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

trivia: As an undergraduate I had been hired fulltime responsible for OS/360 (on a 360/67 run as a 360/65). Had 2741 & tty/ascii terminal support with a hack for dynamically determining terminal type (using SAD CCW to change the line port scanner type). I then wanted to have a single dial-in number ("hunt group"), but IBM had taken a short-cut and hardwired line speed. Then start univ project to build our own clone controller: build a channel interface board for an Interdata/3 programmed to emulate an IBM controller, with the addition that it could do automatic line baud rate, then upgraded to an Interdata/4 for the channel interface and clusters of Interdata/3s for port interfaces (Interdata and later Perkin-Elmer sell it as a clone controller, and four of us get written up as responsible for some part of the clone controller business).
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

360 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

Then before I graduate, I'm hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit. I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed ... some joke that Boeing was installing 360/65s like other companies installed keypunches.

When I graduate, I join IBM CSC (instead of staying with the Boeing CFO). One of my hobbies was enhanced production operating systems, and the online sales&marketing support HONE systems were the first (and long-time) customer. With the announce of virtual memory for all 370s, the decision was made to do the CP67->VM370 morph ... which simplified and/or dropped a lot of CP67 features. In 1974, I start moving a lot of CP67 features to a VM370R2-base for my CSC/VM .... about the same time all the US HONE datacenters were consolidated in 1501 California (reconfigured into the largest IBM single-system-image, loosely-coupled, shared-DASD operation with load-balancing and fall-over across the complex). I then put SMP, tightly-coupled, multiprocessor support into a VM370R3-based CSC/VM, originally for HONE so they could add a 2nd processor to each system. After transfer to SJR, I could easily commute up to HONE a few times a month. As an aside, when FACEBOOK 1st moved to silicon valley, it was into a new bldg built next door to the former US HONE complex.

CP67L, CSC/VM, SJR/VM, posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp

Mid-80s, I was also con'ed into a project to take a VTAM/NCP emulator that one of the baby bells had done on Series/1 and turn it into a type-1 product. Part of the process was using the HONE 3725 "configurator" to size a 3725 operation compared to their live S/1 operation, which I presented at a Raleigh SNA ARB meeting ... parts of that presentation:
https://www.garlic.com/~lynn/99.html#67
and part of baby bell presentation at IBM COMMON user group conference:
https://www.garlic.com/~lynn/99.html#70

Raleigh would constantly claim that the comparison was invalid but was never able to explain why. A significant amount of effort went into walling the project off from Raleigh influence in corporate politics, but what was done next to kill the project can only be described as truth is stranger than fiction.

--
virtualization experience starting Jan1968, online at home since Mar1970

Dataprocessing Innovation

From: Lynn Wheeler <lynn@garlic.com>
Subject: Dataprocessing Innovation
Date: 03 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#111 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#112 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#4 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#5 Dataprocessing Innovation

Jim was leaving for Tandem in the fall of 1980 and wanted to palm off some of the System/R stuff on me. One was the Bank of America System/R study ... which was getting 60 VM/4341s for putting out in branches running System/R.

This was part of the leading edge of the coming distributed computing tsunami ... large corporations were ordering hundreds of vm/4341s at a time for putting out in departmental areas (inside IBM, conference rooms were starting to be in short supply, having been converted to vm/4341 rooms).

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

some recent posts mentioning coming distributed computing tsunami
https://www.garlic.com/~lynn/2024g.html#81 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#60 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#55 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024f.html#95 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#70 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#64 Distributed Computing VM4341/FBA3370
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024e.html#46 Netscape
https://www.garlic.com/~lynn/2024e.html#16 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#85 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2024d.html#30 Future System and S/38
https://www.garlic.com/~lynn/2024d.html#15 Mid-Range Market
https://www.garlic.com/~lynn/2024c.html#107 architectural goals, Byte Addressability And Beyond
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024c.html#87 Gordon Bell
https://www.garlic.com/~lynn/2024c.html#29 Wondering Why DEC Is The Most Popular
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#23 HA/CMP
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#65 IBM Mainframes and Education Infrastructure
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#51 VAX MIPS whatever they were, indirection in old architectures
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#61 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#55 Vintage IBM 5100
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#21 IBM Change
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2023.html#1 IMS & DB2

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM OS/360 MFT HASP

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM OS/360 MFT HASP
Date: 04 Jan, 2025
Blog: Facebook
SHARE history of HASP & JES2 (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20041026022852/http://www.redbug.org/dba/sharerpt/share79/o441.html
Meanwhile, the operating system weenies kept spinning their own wheels, and so eventually MFT-II and MVT were released. These were able to do multi-jobbing. Because such a large percentage of the OS/MFT community was already dependent on HASP, the staff in Washington adapted HASP to the new OS releases and introduced HASP-II version 1.
... snip ...

Other trivia: HASP NJE (later morphing into JES2 NJE) originally had "TUCC" in cols. 68-71 of the assembler source code ... part of the issue was it used free entries in the 255-entry pseudo device table for network node definitions ... typically 160-180 entries ... when the internal (RSCS/VNET) network had long before passed 255 nodes. An NJE emulation driver was done for RSCS/VNET allowing MVS nodes to be connected, but they tended to be restricted to boundary nodes (behind RSCS/VNET) since NJE would discard traffic when either the origin or destination nodes weren't in the local table. While RSCS/VNET had a well structured implementation, NJE had somewhat intermixed networking and job control fields, and NJE traffic from MVS/JES2 had a habit of crashing a destination MVS/JES2 at a different release level. As a result a large library of code appeared for RSCS/VNET NJE emulation that could transpose fields in traffic for a directly connected MVS/JES2. There was an infamous case of San Jose origin MVS/JES2 traffic crashing a Hursley destination MVS/JES2 system ... and it was blamed on the Hursley RSCS/VNET (because its NJE emulator hadn't been updated for the latest San Jose JES2 changes).

I was an undergraduate in the 60s and had taken a two-credit intro to fortran/computers. The univ. was getting a 360/67 for tss/360, replacing a 709/1401. The 360/67 arrived within a year of my taking the intro class and I was hired fulltime responsible for os/360 (tss/360 never came to fruition so it ran as a 360/65). Student Fortran had run under a second on the 709 (tape->tape, 1401 unit record front-end), but over a minute on 360/65 OS/360 MFTR9.5 (before first MVTR12). I install HASP and it cuts the time in half. Then I start redoing STAGE2 SYSGEN, to be able to run it in production HASP (instead of the starter system) and carefully place datasets and PDS members to optimize seeks and multi-track searches, cutting another 2/3rds to 12.9secs. Student fortran was never better than the 709 until I install Univ. of Waterloo WATFOR.

While the MVT option was available, the next release sysgen I did was MFTR14. I didn't do an MVT SYSGEN until the combined MVTR15/16 ... which also included being able to specify the cylinder location of the VTOC (to reduce avg. arm seek).

CSC came out Jan1968 to install CP67 (3rd after CSC itself and MIT Lincoln Labs) and I mostly got to play with it during my dedicated weekend time (univ shut down the datacenter over the weekend and I had the place dedicated, although 48hrs w/o sleep made Monday classes hard). I initially rewrote a lot of pathlengths for running OS/360 in a virtual machine. My OS/360 test stream ran 322secs on the real machine but initially 856secs virtually (CP67 CPU 534secs); within a couple months I had CP67 CPU down to 113secs. I was invited to the Mar1968 SHARE meeting for the IBM CP67 announce (and also participated in the 1st SHARE HASP project).

Related history: early last decade I had been asked to track down the decision to add virtual memory to all 370s and found the staff member to the executive making the decision; pieces of the email exchange (including some HASP/SPOOL history) in this archived post
https://www.garlic.com/~lynn/2011d.html#73

For MVTR18, I remove 2780 RJE support from HASP (to reduce real storage) and put in 2741 and TTY/ASCII terminal support and an editor supporting CP67/CMS EDIT syntax (totally different code since the program environments were so different) for CRJE.

HASP/ASP, JES2/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

other trivia: posts mentioning Boeing Huntsville modify MVTR13 with virtual memory support
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#90 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2002j.html#22 Computer Terminal Design Over the Years

--
virtualization experience starting Jan1968, online at home since Mar1970

John Boyd and Deming

From: Lynn Wheeler <lynn@garlic.com>
Subject: John Boyd and Deming
Date: 05 Jan, 2025
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2024g.html#64 John Boyd and Deming
https://www.garlic.com/~lynn/2024g.html#74 John Boyd and Deming
https://www.garlic.com/~lynn/2024g.html#103 John Boyd and Deming

80s, when foreign auto manufacturers were setting up factories in the US heartland, they found that they had to require a JR college degree in order to get workers with a high school level education. The finding was possibly some of the motivation for states to start requiring proficiency tests for a high school diploma ... although there was press coverage of state legislature battles over whether requiring 7th grade level math&reading proficiency was too high a qualification for a high school diploma.

c4 task force posts
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce

past posts mention literacy/competency
https://www.garlic.com/~lynn/2017h.html#17 OFF TOPIC: University of California, Irvine, revokes 500 admissions
https://www.garlic.com/~lynn/2017d.html#27 US Education
https://www.garlic.com/~lynn/2012j.html#39 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2009f.html#47 TARP Disbursements Through April 10th
https://www.garlic.com/~lynn/2008k.html#5 Republican accomplishments and Hoover
https://www.garlic.com/~lynn/2007u.html#80 Education ranking
https://www.garlic.com/~lynn/2007k.html#30 IBM Unionization
https://www.garlic.com/~lynn/2003j.html#28 Offshore IT

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 37x5

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 37x5
Date: 06 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5

not assigned to Endicott ... Cambridge had a joint distributed project w/Endicott to add 370 virtual memory support to CP67 (for emulated 370 virtual machines) ... and then modifications that ran CP67 on 370 hardware (in regular use a year before the 1st engineering 370/145 with virtual memory was operational ... and it was used as a test case for the machine).

also after FS implodes, I was asked to help with the ECPS microcode assist for 138/148 (also used later for 4300): select the 6kbytes of highest-executed VM370 kernel pathlengths for rewriting in microcode (for a 10:1 speedup). Archived post with the initial analysis (6kbytes of kernel instructions accounting for 79.55% of kernel execution time):
https://www.garlic.com/~lynn/94.html#21
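
The kind of selection described above is essentially a cumulative cut of a kernel execution-time profile against a fixed microcode byte budget; a minimal sketch under those assumptions (the profile entries, names, and numbers are hypothetical, not the actual analysis data):

  #include <stdio.h>
  #include <stdlib.h>

  struct path { const char *name; unsigned bytes; double pct_kernel_time; };

  static int by_time_desc(const void *a, const void *b)
  {
      double d = ((const struct path *)b)->pct_kernel_time -
                 ((const struct path *)a)->pct_kernel_time;
      return (d > 0) - (d < 0);
  }

  int main(void)
  {
      /* hypothetical profile entries: (kernel path, size, % of kernel time) */
      struct path prof[] = {
          { "DISPATCH", 1200, 22.0 }, { "FREE/FRET", 900, 18.5 },
          { "PAGE-FAULT", 1500, 17.0 }, { "PRIV-SIM", 1400, 14.0 },
          { "CCW-TRANS", 1000, 8.0 },  { "OTHER-A", 800, 3.0 },
      };
      size_t n = sizeof prof / sizeof prof[0];
      unsigned budget = 6 * 1024, used = 0;   /* 6kbytes of microcode space */
      double covered = 0.0;

      /* sort by time descending, then take paths until the budget is filled */
      qsort(prof, n, sizeof prof[0], by_time_desc);
      for (size_t i = 0; i < n && used + prof[i].bytes <= budget; i++) {
          used += prof[i].bytes;
          covered += prof[i].pct_kernel_time;
          printf("pick %-12s %5u bytes, cum %5u bytes, cum %.2f%% kernel time\n",
                 prof[i].name, prof[i].bytes, used, covered);
      }
      return 0;
  }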

Then got con'ed into running around helping present the 138/148 business case to planners in US regions and World Trade countries.

Then tried to convince corporate to allow VM370 preinstall on every 138/148 (vetoed, in part because the head of POK had recently convinced corporate to kill the vm370 product, shutdown the development group and transfer all the people to POK for MVS/XA ... Endicott did manage to save the VM370 product mission for the midrange ... but had to recreate a VM370 development group from scratch).

one of the things learned going around presenting the 138/148 business case ... was that in WT, country forecasts turned into plant orders ... and deliveries were made to countries to sell (and they were held accountable) ... while US regional forecasts tend to conform to corporate strategic positions ... and any problems fell back on the plants (as a result, plants tend to redo US regional forecasts since they could have little to do with actual expected sales).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning CP67L (runs on real 360/67), CP67H (runs in a 360/67 virtual machine, adds emulated 370 virtual machines), CP67I (runs on 370 machines), CP67SJ (CP67I with 3330 & 2305 device drivers)
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#88 Virtual Machines
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2013.html#71 New HD
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2010e.html#23 Item on TPF
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2007i.html#16 when was MMU virtualization first considered practical?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future

--
virtualization experience starting Jan1968, online at home since Mar1970

what's a segment, 80286 protected mode

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: what's a segment, 80286 protected mode
Newsgroups: comp.arch
Date: Mon, 06 Jan 2025 17:28:11 -1000
John Levine <johnl@taugh.com> writes:
What you're describing is multi-level page tables. Every virtual memory system has them. Sometimes the operating systems make the higher level tables visible to applications, sometimes they don't. For example, in IBM mainframes the second level page table entries, which they call segments, can be shared between applications.

The initial addition of virtual memory to all IBM 370s was similar to the 24-bit 360/67 but had options for 16 1mbyte segments or 256 64kbyte segments and either 4kbyte or 2kbyte pages. The initial mapping of 360 MVT to VS2/SVS was a single 16mbyte address space ... very similar to running MVT in a CP/67 16mbyte virtual machine.

The upgrade to VS2/MVS gave each region its own 16mbyte virtual address space. However, the OS/360 MVT API heritage was a pointer-passing API ... so they mapped a common 8mbyte image of the "MVS" kernel into every 16mbyte virtual address space (leaving 8mbytes for application code); kernel API-call code could then still directly access user-code API parameters (basically the same code from MVT days).

However, MVT subsystems were also moved into their own separate 16mbyte virtual address spaces ... making it harder to access application API calling parameters. So they defined the common segment area (CSA), a 1mbyte segment mapped into every 16mbyte virtual address space; application code would get space in the CSA for API parameter information when calling a subsystem.

Problem was the requirement for subsystem API parameter (CSA) space was proportional to the number of concurrent applications plus the number of subsystems and quickly exceeded 1mbyte ... and it morphs into the multi-megabyte common system area. By the end of the 70s, CSAs were running 5-6mbytes (leaving 2-3mbytes for programs) and threatening to become 8mbytes (leaving zero mbytes for programs) ... part of the mad rush to XA/370 and 31-bit virtual addressing (as well as access registers and multiple concurrent virtual address spaces ... the "Program Call" instruction had a table of MVS/XA address space pointers for subsystems; the PC instruction would move the caller's address space pointer to secondary and load the subsystem address space pointer into primary ... the program return instruction reversed the process and moved the secondary pointer back to primary).
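
To make the original 370 segment/page split concrete, here is a minimal sketch of how a 24-bit virtual address decomposes under the two segment-size options with 4kbyte pages (the function names and example address are hypothetical):

  #include <stdio.h>

  /* 24-bit 370 virtual address, 16 x 1mbyte segments, 4kbyte pages:
     4-bit segment index, 8-bit page index, 12-bit byte offset */
  static void split_1m_4k(unsigned va)
  {
      unsigned seg  = (va >> 20) & 0xF;    /* which 1mbyte segment (0-15) */
      unsigned page = (va >> 12) & 0xFF;   /* which 4kbyte page in segment (0-255) */
      unsigned off  = va & 0xFFF;          /* byte within page */
      printf("1M/4K : seg %2u page %3u offset %4u\n", seg, page, off);
  }

  /* 24-bit 370 virtual address, 256 x 64kbyte segments, 4kbyte pages:
     8-bit segment index, 4-bit page index, 12-bit byte offset */
  static void split_64k_4k(unsigned va)
  {
      unsigned seg  = (va >> 16) & 0xFF;   /* which 64kbyte segment (0-255) */
      unsigned page = (va >> 12) & 0xF;    /* which 4kbyte page in segment (0-15) */
      unsigned off  = va & 0xFFF;
      printf("64K/4K: seg %3u page %2u offset %4u\n", seg, page, off);
  }

  int main(void)
  {
      unsigned va = 0xABCDE;               /* arbitrary 24-bit example address */
      split_1m_4k(va);
      split_64k_4k(va);
      return 0;
  }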

some recent posts mentioning explision from "common segment" to "common system" CSA, xa/370, access registers:
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#83 Continuations
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021i.html#17 Versatile Cache from IBM
https://www.garlic.com/~lynn/2020.html#36 IBM S/360 - 370
https://www.garlic.com/~lynn/2019d.html#115 Assembler :- PC Instruction
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2014k.html#36 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2013m.html#71 'Free Unix!': The world-changing proclamation made 30 years agotoday

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM APPN

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM APPN
Date: 08 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#0 IBM APPN
https://www.garlic.com/~lynn/2025.html#1 IBM APPN
https://www.garlic.com/~lynn/2025.html#2 IBM APPN

Early 80s, got HSDT project, T1 and faster computer links (both terrestrial and satellite) ... resulting in some battles with the communication group (note: in the 60s, IBM had the 2701 telecommunication controller supporting T1, then with IBM's transition to SNA, issues appeared to cap controller links at 56kbits). HSDT's first long-haul T1 was between the Los Gatos lab (on the west coast) and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston (on the east coast) ... he had a whole bunch of Floating Point Systems boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems
including 40mbyte/sec disk arrays (to keep the boxes fed).

Had also been working with the NSF director and was supposed to get $20M to interconnect the NSF Supercomputer Centers; then congress cuts the budget, some other things happen and finally an RFP was released (in part based on what we already had running). IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.

Somebody had collected a lot of executive email with NSFNET misinformation and forwarded it to us ... old archived post, heavily clipped and redacted to protect the guilty
https://www.garlic.com/~lynn/2006w.html#21

Communication group was fiercely fighting off client/server and distributed computing, trying to keep mainframe TCP/IP from being released. When that was turned around, they changed their strategy: since they had corporate strategic ownership of everything that crossed datacenter walls, it had to be shipped through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor (also ported to MVS by simulating VM370 "diagnose" instructions). I then did RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).

My wife had been asked to be co-author for response to a 3-letter gov. agency request for super-secure, large campus-like operation ... where she included 3-tier networking ... and then we were out making customer executive presentations on 3-tier networking, high-performance tcpip/routers/ethernet, etc. and internally taking all sorts of misinformation arrows in the back.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
3-tier, middle layer, saa posts
https://www.garlic.com/~lynn/subnetwork.html#3tier

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM APPN

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM APPN
Date: 08 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#0 IBM APPN
https://www.garlic.com/~lynn/2025.html#1 IBM APPN
https://www.garlic.com/~lynn/2025.html#2 IBM APPN
https://www.garlic.com/~lynn/2025.html#12 IBM APPN

IBM was claiming that it would support (ISO) OSI ... but there was a joke that while the (TCP/IP) IETF standards body had a requirement that there be at least two interoperable implementations to progress in the standards process .... ISO didn't have a requirement that a standard be implementable.

OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."
... snip ...

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Dataprocessing Innovation

From: Lynn Wheeler <lynn@garlic.com>
Subject: Dataprocessing Innovation
Date: 09 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#111 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#112 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#4 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#5 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#7 Dataprocessing Innovation

old email about Jim leaving for tandem (and palming stuff on to me)
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016
in this a.f.c. newsgroup archived post
https://www.garlic.com/~lynn/2007.html#1

also mentions his parting "MIPENVY" tome
http://jimgray.azurewebsites.net/papers/mipenvy.pdf

and from IBMJargon
MIP envy - n. The term, coined by Jim Gray in 1980, that began the Tandem Memos (q.v.). MIP envy is the coveting of other's facilities - not just the CPU power available to them, but also the languages, editors, debuggers, mail systems and networks. MIP envy is a term every programmer will understand, being another expression of the proverb The grass is always greener on the other side of the fence.
... snip ...

note: I was blamed for online computer conferencing on the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s, about the time the internal network was forced to convert to SNA/VTAM) in the late 70s and early 80s. "Tandem Memos" actually took off spring of 1981 after I distributed a trip report of a visit to Jim at Tandem (only about 300 actively participated, but claims were that 25,000 were reading; folklore is that when the corporate executive committee was told, 5 of 6 wanted to fire me).

some more about "Tandem Memos" in long-winded (2022 linkedin) post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing, tandem memo posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Dataprocessing Innovation

From: Lynn Wheeler <lynn@garlic.com>
Subject: Dataprocessing Innovation
Date: 09 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#111 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#112 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#4 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#5 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#7 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#14 Dataprocessing Innovation

Early last decade a customer asked if I could track down the decision to add virtual memory to all 370s and I found a staff member to the executive making the decision; basically MVT storage management was so bad that regions had to be specified four times larger than used ... as a result there were insufficient concurrently running regions to keep a typical 1mbyte 370/165 busy and justified. Going to running MVT in a 16mbyte virtual address space allowed the number of regions to be increased by a factor of four (capped at 15 by the 4bit storage protect keys) with little or no paging, aka VS2/SVS (similar to running MVT in a CP67 16mbyte virtual machine). Ludlow was doing the initial implementation on 360/67 (pending availability of engineering 370s with virtual memory support) ... a little bit of code for setting up the 16mbyte virtual memory tables and simple paging. The biggest task was EXCP/SVC0: as with channel programs passed to CP67, the channel needed real addresses while all the passed channel programs had virtual addresses, so a copy of each channel program had to be made, replacing virtual addresses with real ... and Ludlow borrows CP67 CCWTRANS to craft into EXCP/SVC0. Old archived post with pieces of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73
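a minimal sketch of the CCWTRANS idea borrowed into EXCP/SVC0 (names, page size and layouts here are illustrative assumptions, not the real CP67/MVS code): make a shadow copy of the caller's channel program with each virtual data address translated to a real address.

# Hypothetical illustration of virtual->real channel program (CCW) translation,
# in the spirit of what EXCP/SVC0 borrowed from CP67 CCWTRANS. Names and the
# 4KB page size are assumptions for the sketch, not the real CP67/MVS code.
PAGE = 4096

def translate(vaddr, page_table):
    """Map a virtual address to a real address (page must be resident/pinned)."""
    page, offset = divmod(vaddr, PAGE)
    real_frame = page_table[page]        # a real system would fault/pin the page here
    return real_frame * PAGE + offset

def copy_channel_program(ccws, page_table):
    """Build a shadow copy of the channel program with real data addresses."""
    shadow = []
    for (opcode, vaddr, flags, count) in ccws:
        # each CCW's data address is rewritten; counts crossing a page boundary
        # would need to be split into data-chained CCWs (omitted here)
        shadow.append((opcode, translate(vaddr, page_table), flags, count))
    return shadow

# toy example: two-entry page table, one read CCW
ccws = [(0x02, 0x1010, 0x00, 80)]                 # READ 80 bytes at virtual 0x1010
print(copy_channel_program(ccws, {0: 7, 1: 3}))   # -> data address now in real frame 3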

The problem was that as systems got bigger, an ever increasing number of concurrently running regions was needed ... more than 15 ... so the move to VS2/MVS, giving each region its own separate 16mbyte virtual address space (execution addressing separation protection, in place of storage protect keys). However the strong OS/360 API convention of pointer passing (rather than argument passing) resulted in placing an 8mbyte image of the MVS kernel into every 16mbyte virtual address space (leaving eight for program execution) ... kernel calls could directly access the caller's virtual addresses. However, subsystems were also given their own separate 16mbyte virtual address spaces ... and the subsystem APIs also needed argument addressing ... and thus was invented a 1mbyte common segment ("CSA" or common segment area) in every address space for the allocation of space for API arguments (both user programs and subsystems). However, common segment space requirements were somewhat proportional to the number of subsystems and the number of concurrently executing programs ... and CSA quickly morphs into the multi-megabyte Common System Area, was frequently 5-6mbytes (leaving 2-3mbytes for applications) and was threatening to become 8mbytes (leaving zero for user programs). This was part of the mad rush to 370/XA: 31bit addressing, access registers and program call/return. MVS/XA builds a system table of subsystems that contains each subsystem's address space pointer. An application does a program call, the hardware moves the application address space to the secondary address space pointer and places the subsystem address space pointer in the primary (the subsystem now has addressing to both the subsystem primary address space and the application secondary address space).
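the address space squeeze can be put in simple numbers (a sketch using only the figures in this post):

# MVS 24-bit address space squeeze, using the figures cited above
addr_space = 16    # mbytes per virtual address space
kernel     = 8     # mbytes of MVS kernel image mapped into every address space
for csa in (1, 5, 6, 8):          # common segment/system area growth over time
    print(f"CSA {csa}MB -> {addr_space - kernel - csa}MB left for the application")
# CSA at 8MB is the point where nothing is left -- part of the push to 370/XA 31-bit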

Note: I was also connected to this in another way ... in the 70s I was pontificating that systems were getting faster than disks were getting faster. Writing a tome in the early 80s, I claimed that disk relative system throughput had declined by an order of magnitude since 360 announce; aka system throughput increased 40-50 times while disk throughput only increased 3-5 times. A GPD disk division executive took exception and assigned the division performance group to refute the claim. After a few weeks the performance group came back and effectively said I had slightly understated the problem. The performance group then respins the analysis and turns it into a SHARE presentation on configuring disks (& filesystems) for improved system throughput (higher filesystem throughput with more concurrently executing disk I/O ... rather than strictly faster disk I/O).
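a quick arithmetic check of the "order of magnitude" claim, using just the ranges cited above:

# relative disk/system throughput since 360 announce, ranges from the text above
sys_gain  = (40, 50)    # times system throughput increased
disk_gain = (3, 5)      # times disk throughput increased
print(f"disks fell behind by {sys_gain[0]/disk_gain[1]:.0f}x to {sys_gain[1]/disk_gain[0]:.0f}x")
# i.e. roughly 8x-17x -- an order of magnitude decline in disk relative throughput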

trivia: within a year of taking a 2 credit hr intro to fortran/computers, the 709/1401 configuration was replaced with a 360/67 (originally for tss/360 but got used as a 360/65) and I was hired fulltime responsible for OS/360. Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit (I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room). Also the Boeing Huntsville 360/67 duplex (two processor) was brought up to Seattle. Huntsville had got it for TSS/360 with lots of 2250 graphic displays (for CAD/CAM work) ... but partitioned it into two 360/65 systems running OS/360 MVTR13. They had run into the MVT storage management problem early and had added virtual memory tables to MVTR13 (but w/o paging), manipulating the virtual address space as a partial countermeasure to MVT storage management problems.

posts getting to play disk engineer in disk bldgs 14/engineering and 15/product-test
https://www.garlic.com/~lynn/subtopic.html#disk

misc. posts mentioning decision to add virtual memory to all 370s, as well as boeing computer services and the boeing huntsville work adding virtual memory to MVTR13
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2021g.html#39 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)

--
virtualization experience starting Jan1968, online at home since Mar1970

On-demand Supercomputer

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: On-demand Supercomputer
Date: 10 Jan, 2025
Blog: Facebook
Early on, cloud megadatacenters with half a million or more blade servers allowed on-demand automated spin-up of a supercomputer (ranking in the top 40 in the world) for a few hrs with just a credit-card ... cost was less if done during off-peak hrs.

from 2011: Amazon Builds World's Fastest Nonexistent Supercomputer
https://www.wired.com/2011/12/nonexistent-supercomputer/
Yes, beneath Amazon's virtual supercomputer, there's real hardware. When all is said and done, it's a cluster of machines, like any other supercomputer. But that virtual layer means something. This isn't a supercomputer that Amazon uses for its own purposes. It's a supercomputer that can be used by anyone.
... snip ...

Amazon takes supercomputing to the cloud. If you have $1,279, you can buy an hour of supercomputing time on Amazon's cloud.
https://www.cnet.com/tech/services-and-software/amazon-takes-supercomputing-to-the-cloud/

Back when mainframes published industry standard benchmarks (number of program iterations compared to a benchmark reference platform) ... max-configured z196 at 50BIPS and standard cloud blade E5-2600 server at 500BIPS.

This was not long before IBM sold off its blade server business (shortly after an industry article that server chip makers were shipping half their product directly to cloud megadatacenters, where systems were assembled at 1/3 the cost of brand name servers). At the time a max configured z196 went for $30M while the IBM base price for an E5-2600 blade was $1815.
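price/performance from the figures above (list price and published benchmark numbers only; nothing here accounts for I/O, RAS, software, etc.):

# z196 vs E5-2600 blade price per BIPS, figures from the post above
z196_bips,  z196_price  = 50,  30_000_000     # max-configured z196
blade_bips, blade_price = 500, 1815           # IBM base list price for the blade
print(f"z196:  ${z196_price/z196_bips:,.0f}/BIPS")       # $600,000 per BIPS
print(f"blade: ${blade_price/blade_bips:.2f}/BIPS")      # about $3.63 per BIPS
print(f"ratio: {(z196_price/z196_bips)/(blade_price/blade_bips):,.0f}x")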

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

a few past posts mentioning on-demand supercomputer spin-up
https://www.garlic.com/~lynn/2024c.html#36 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024b.html#24 HA/CMP
https://www.garlic.com/~lynn/2023.html#100 IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia
https://www.garlic.com/~lynn/2021f.html#18 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021b.html#5 Availability
https://www.garlic.com/~lynn/2018.html#26 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2017c.html#6 How do BIG WEBSITES work?
https://www.garlic.com/~lynn/2015g.html#19 Linux Foundation Launches Open Mainframe Project
https://www.garlic.com/~lynn/2013f.html#74 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013f.html#35 Reports: IBM may sell x86 server business to Lenovo

--
virtualization experience starting Jan1968, online at home since Mar1970

On-demand Supercomputer

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: On-demand Supercomputer
Date: 10 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#16 On-demand Supercomputer

Trivia: the (non-IBM) solid-state 2305 emulation internally carried the "1655" model number. Some could be configured for pure 2305 CKD emulation running at 1.5mbytes/sec ... but also FBA emulation running at 3mbytes/sec (easy for VM370, but MVS never bothered doing FBA support, even when all disk technology was moving to FBA .... even "CKD" 3380; this can be seen in the records/track formulas where record size needed to be rounded up to a multiple of the fixed "cell size").
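the "cell size" point in a small sketch: on a fixed-cell track format each record's space is rounded up to a whole number of cells, so records/track is a step function of record length. The cell size, per-record overhead and track capacity below are stand-in values for illustration, not the published 3380 formula:

import math

CELL     = 32       # bytes per fixed cell (stand-in value)
OVERHEAD = 480      # per-record count/gap overhead in bytes (stand-in value)
TRACK    = 47476    # usable bytes per track (stand-in value)

def records_per_track(data_len, key_len=0):
    # record length is rounded UP to a whole number of cells
    cells = math.ceil((data_len + key_len + OVERHEAD) / CELL)
    return TRACK // (cells * CELL)

for dl in (80, 100, 4096):
    print(dl, records_per_track(dl))   # adding 20 bytes to an 80-byte record costs a whole extra cell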

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

I got into small dustup with the POK VULCAN group in the 70s, where I wanted to do some enhancements to 3350 fixed-head option for paging and VULCAN got it vetoed, planning on shipping an electronic paging device. But they got canceled when told that IBM was shipping every memory chip it could make for processor memory (at higher markup).

posts mentioning getting to play disk engineer in disk bldg14/engineering and bldg15/product-test
https://www.garlic.com/~lynn/subtopic.html#disk

more trivia: 1988, IBM branch asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes fibre channel standard ("FCS", initially 1gbit, full-duplex, aggregate 200mbytes/sec, including some stuff I had done in 1980). Then in 1990s, some serial stuff that POK had been working with for at least the previous decade is released as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define heavy weight protocol that significantly reduces ("native") throughput, which ships as "FICON".

The latest public benchmark I've seen was z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time an FCS is announced for E5-2600 blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs recommended that SAPs (system assist processors that do the actual I/O) be held to 70% CPU (or around 1.5M IOPS).

channel-extender support posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Thin-film Disk Heads

From: Lynn Wheeler <lynn@garlic.com>
Subject: Thin-film Disk Heads
Date: 11 Jan, 2025
Blog: Facebook
The first thin-film head was used on the 3370
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

then used for the 3380; the original 3380 had 20 track spacings between each data track, then the spacing was cut in half for double the capacity, then cut again for triple the capacity (3380K). The "father of RISC" then talks me into helping with a "wide" disk head design: read/write 16 closely spaced data tracks in parallel (plus two servo tracks, one on each side of the 16 data track grouping). Problem was the data rate would have been 50mbytes/sec at a time when mainframe channels were still 3mbytes/sec. However 40mbyte/sec disk arrays were becoming common and the Cray channel had been standardized as HIPPI.
https://en.wikipedia.org/wiki/HIPPI
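the 50mbytes/sec figure above is just the parallelism arithmetic (per-track rate assumed to be roughly the ~3mbytes/sec channel rate mentioned above):

tracks_in_parallel = 16     # data tracks read/written at once by the "wide" head
per_track_rate     = 3.0    # mbytes/sec, assumed roughly the single-track/channel rate of the era
print(f"~{tracks_in_parallel * per_track_rate:.0f} mbytes/sec aggregate")   # ~48, i.e. ~50mbytes/sec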

1988, IBM branch asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes the fibre channel standard ("FCS", initially 1gbit, full-duplex; got RS/6000 cards capable of 200mbytes/sec aggregate for use with a 64-port FCS switch). In the 90s, some serial stuff that POK had been working with for at least the previous decade is released as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy-weight protocol that significantly reduces ("native") throughput, which ships as "FICON". The latest public benchmark I've seen was z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time an FCS is announced for E5-2600 blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs recommended that SAPs (system assist processors that do the actual I/O) be held to 70% CPU (or around 1.5M IOPS).
https://en.wikipedia.org/wiki/Fibre_Channel

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

posts mentioning disk thin-film heads
https://www.garlic.com/~lynn/2024g.html#60 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#58 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2024g.html#54 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#3 IBM CKD DASD
https://www.garlic.com/~lynn/2024c.html#59 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#25 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2022g.html#9 3880 DASD Controller
https://www.garlic.com/~lynn/2022c.html#74 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#73 IBM Disks
https://www.garlic.com/~lynn/2022.html#64 370/195
https://www.garlic.com/~lynn/2021c.html#53 IBM CEO
https://www.garlic.com/~lynn/2021.html#62 Mainframe IPL
https://www.garlic.com/~lynn/2021.html#56 IBM Quota
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2018.html#41 VSAM usage for ancient disk models
https://www.garlic.com/~lynn/2016f.html#39 what is 3380 E?
https://www.garlic.com/~lynn/2014l.html#78 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2012o.html#60 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2011f.html#47 First 5.25in 1GB drive?
https://www.garlic.com/~lynn/2009k.html#75 Disksize history question
https://www.garlic.com/~lynn/2008r.html#32 What if the computers went back to the '70s too?
https://www.garlic.com/~lynn/2007j.html#64 Disc Drives
https://www.garlic.com/~lynn/2007j.html#13 Interrupts
https://www.garlic.com/~lynn/2007i.html#83 Disc Drives
https://www.garlic.com/~lynn/2007e.html#43 FBA rant
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2004b.html#15 harddisk in space
https://www.garlic.com/~lynn/2002h.html#28 backup hard drive
https://www.garlic.com/~lynn/aepay6.htm#asrn5 assurance, X9.59, etc

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Machine History

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Machine History
Date: 12 Jan, 2025
Blog: Facebook
The original 360/370 virtual memory was science center hardware modifications to a 360/40 used for implementing (virtual machine) CP/40 ... later CP/40 morphs into CP/67 when the 360/67, standard with virtual memory, becomes available. Early last decade, I was asked to track down the decision to add virtual memory to all 370s ... giving rise to CP67->VM370, MVT->VS2 (SVS&MVS), MFT->VS1, DOS->DOS/VS. Found the staff member to the executive making the decision, pieces of the email exchange here:
https://www.garlic.com/~lynn/2011d.html#73

recent reference
https://www.garlic.com/~lynn/2025.html#5 Dataprocessing Innovation
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Melinda Varian's history
https://www.leeandmelindavarian.com/Melinda#VMHist

Endicott cons me into helping with ECPS for 138/148 (also used for 4331/4341) ... they wanted the highest executed 6kbytes of kernel paths ... for a move to microcode at 10:1 performance ... initial analysis: 6kbytes accounted for 79.55% of kernel execution ... old archived post:
https://www.garlic.com/~lynn/94.html#21
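the selection method amounted to ranking kernel paths by measured CPU use and taking them in order until the 6kbyte microcode budget was used up; a minimal sketch (the profile numbers below are invented for illustration, only the method follows the old post above):

# sketch of the ECPS selection method: rank kernel paths by measured CPU use and
# take them in order until the 6kbyte microcode budget is used up. Profile data
# here is invented for illustration -- the real measurements (where 6kbytes
# covered 79.55% of kernel execution) are in the 94.html#21 post above.
profile = [                      # (kernel path, bytes of 370 code, % of kernel CPU) -- made up
    ("DISPATCH", 1200, 22.0), ("UNTRANS",   900, 15.5), ("FREE/FRET", 400, 12.0),
    ("PRIVOP",   1500, 11.0), ("PAGEFAULT", 1100,  9.0), ("SIO-SIM", 1300,  7.5),
    ("OTHER-A",   800,  4.0), ("OTHER-B",    600,  2.5),
]
budget, used, covered = 6 * 1024, 0, 0.0
for name, size, pct in sorted(profile, key=lambda p: p[2], reverse=True):
    if used + size <= budget:    # take the path if it still fits in the budget
        used += size
        covered += pct
print(f"{used} bytes selected, covering {covered}% of kernel CPU")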

... early 80s, I got permission to give talks on how ECPS was done at user group meetings ... including monthly BAYBUNCH meetings hosted at Stanford SLAC ... after the meetings we would normally adjourn to a local waterhole ... and the Amdahl people grilled me for more information.

They said that they had done MACROCODE (sort of 370 instruction subset running in microcode mode drastically cutting time & effort to do new microcode) and were in process of implementing hypervisor ("multiple domain"), VM370 subset all done in microcode (IBM doesn't respond with LPAR & PR/SM for 3090 until nearly decade later).

trivia: ECPS was being done in the wake of the FS implosion ... about the same time I was asked to help with a 16-CPU 370 shared memory multiprocessor and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips) .... for various reasons it never was brought to fruition (IBM doesn't ship a 16-CPU system until after the turn of the century). Once the 3033 was out the door, the 3033 processor engineers start on trout/3090 ... and I would continue doing stuff with them off and on. In the past I've posted old email to usenet comp.arch, a.f.c., ibm-main groups about how SIE for 3090 was much different than SIE for 3081 (archived here)
https://www.garlic.com/~lynn/2006j.html#email810630
https://www.garlic.com/~lynn/2003j.html#email831118

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Machine History

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Machine History
Date: 12 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#19 Virtual Machine History

In the wake of the FS implosion (during which 370 projects were being killed/suspended) there is a mad rush to get stuff back into the 370 product pipelines ... including kicking off the quick&dirty 3033 & 3081 efforts in parallel ... lots more information
http://www.jfsowa.com/computer/memo125.htm

About the same time, the head of POK managed to convince corporate to kill the vm370 project, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott managed to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch). Some of the people that went to POK developed the primitive virtual machine VMTOOL (in 370/xa architecture, which required the SIE instruction to move in/out of virtual machine mode) in support of MVS/XA development.

Then customers weren't moving to MVS/XA as fast as predicted; however, Amdahl was having better success being able to run both MVS and MVS/XA concurrently on the same machine with their (microcode hypervisor) "Multiple Domain". As a result, VMTOOL was packaged 1st as VM/MA (migration aid) and then VM/SF (system facility), able to run MVS and MVS/XA concurrently on 3081. However, because VMTOOL and SIE were never originally intended for production operation, and microcode memory was limited, the SIE microcode had to be "paged" (part of the 3090 claim that its SIE was designed for performance from the start).

Then there is a proposal for a couple-hundred-person organization to upgrade VMTOOL to VM/370-level feature, function, and performance ... for VM/XA. The Endicott counter was that an internal IBM (Rochester) sysprog had added full 370/XA support to VM/370 ... however POK prevails.

With regard to the 16-CPU effort ... in the morph of CP67->VM370 a number of features were simplified or dropped (including multiprocessor support). I then integrate multiprocessor support into VM370R3, originally for the online branch office sales & marketing support HONE system (allowing them to add a 2nd CPU to each system) and was able to get twice the throughput of a single CPU system.

At the time, MVS documentation said that MVS 2-CPU support only had 1.2-1.5 times the throughput of a single CPU system. Everybody had thought the 16-CPU project was really great until somebody told the head of POK that it could be decades before the POK favorite son operating system ("MVS") had (effective) 16-CPU support (i.e. POK doesn't ship a 16-CPU system until after the turn of the century, more than two decades later).

trivia: some of the people involved in Amdahl "Multiple Domain" (hypervisor) were involved in implementing 370/370xa virtual machine support on WINTEL platforms ... akin to:
https://en.wikipedia.org/wiki/Hercules_(emulator)
http://www.hercules-390.org/
also
https://en.wikipedia.org/wiki/PC-based_IBM-compatible_mainframes#z/Architecture_and_today
also (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20241009084843/http://www.funsoft.com/
https://web.archive.org/web/20240911032748/http://www.funsoft.com/index-technical.html

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Machine History

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Machine History
Date: 12 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#19 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#20 Virtual Machine History
also
https://www.garlic.com/~lynn/2025.html#16 On-demand Supercomputer
https://www.garlic.com/~lynn/2025.html#17 On-demand Supercomputer

industry benchmark ... number of program iterations compared to a reference platform (that had been calibrated as to number of instructions). Early mainframe numbers are actual benchmarks ... later mainframe numbers are derived from pubs citing percent change from the previous generation:

Early 90s:
eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
RS6000/990 : 126MIPS; 16-way cluster: 2016MIPS, 128-way cluster: 16,128MIPS (16.128BIPS)


Then Somerset/AIM (Apple, IBM, Motorola): Power/PC single chip as well as Motorola 88k bus supporting cache consistency for multiprocessor. The i86/Pentium new generation has i86 instructions hardware translated to RISC micro-ops for actual execution (negating the RISC system advantage compared to i86).
1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000 z900 processor)
1999 single Pentium3 hits 2,054MIPS (twice the PowerPC and 13 times each Dec2000 z900 processor).


Mainframe this century:
z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS (1000MIPS/proc), Sep2019
z16, 200 processors, 222BIPS (1111MIPS/proc), Sep2022


Note a max-configured z196 (& 50BIPS) went for $30M; at the same time an E5-2600 server blade (& 500BIPS) had an IBM base list price of $1815 (before industry press announced that server chip vendors were shipping half their product directly to cloud megadatacenters for assembly at 1/3rd the price of brand name systems, and IBM sells off its server blade business)

Large cloud operations can have a score of megadatacenters around the world, each with half a million or more server blades (2010 era megadatacenter: processing equivalent of around 5M max-configured z196 mainframes).
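the "5M max-configured z196" equivalence falls straight out of the benchmark figures already cited (a rough check):

blades_per_megadatacenter = 500_000    # "half million or more" server blades
blade_bips, z196_bips     = 500, 50    # figures cited above
print(f"{blades_per_megadatacenter * blade_bips / z196_bips:,.0f} z196-equivalents per megadatacenter")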

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

a few past posts mentioning IBM sells off its i86 server business
https://www.garlic.com/~lynn/2024e.html#90 Mainframe Processor and I/O
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#34 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024b.html#42 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#107 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2022f.html#12 What is IBM SNA?
https://www.garlic.com/~lynn/2022b.html#63 Mainframes
https://www.garlic.com/~lynn/2022.html#96 370/195
https://www.garlic.com/~lynn/2021f.html#18 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2019e.html#102 MIPS chart for all IBM hardware model

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Future System
Date: 14 Jan, 2025
Blog: Facebook
some future sys refs on the web
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

Future System ref, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive
... snip ...

... one of the final nails in the FS coffin was analysis by the IBM Houston Scientific Center that if 370/195 applications were redone for an FS machine made out of the fastest available technology, they would have the throughput of a 370/145 (about a factor of 30 times slowdown, aka 1/30th throughput).

well, virtual memory for 370 pub leaked to industry press. Then there was hunt for the source of the leak ... afterwards all IBM copiers were retrofitted with identification under the glass that would appear on every copy made.

going into FS, they tried to move all documentation to softcopy ... specially modified VM370 systems that required special password for accessing the documentation which could only be displayed on specifically designated 3270 terminals.

I continued to work on 360/370 stuff all during FS, and when given FS presentations would periodically ridicule them ... drawing an analogy to a long playing cult film down at central square. At the time my wife reported to the head of one of the FS sections and has commented that most of the people were so wrapped up in the blue sky descriptions that they had no idea how it might be implemented.

trivia: After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters (the online branch office sales&marketing support HONE was an early and long time customer). In the morph of CP67->VM370 they simplified and/or dropped a lot of stuff. In 1974, I started moving lots of (missing) CP67 stuff to a VM370R2 base and had some weekend test time on a virtual memory 370/145 at VM370 development. I stopped by Friday afternoon to make sure everything was in place. They started in on how the special security modifications for FS documents meant that even if I was left alone all weekend in the machine room, I couldn't access them. After awhile I got tired of hearing it and asked them to disable all access to the system, then after 30secs fiddling at the machine console, showed I had access to everything.

1995 l'Ecole de Paris The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

mentions that FS started out as a countermeasure to clone controllers ... controllers & interface so complex that the clone makers couldn't keep up (but as seen, it even got too complex for IBM). During FS, internal politics was killing off 370 efforts and the lack of new 370 during the period gave the clone 370 makers their market foothold ... aka a countermeasure for clone controllers enabling clone 370 system makers.

somewhat aided and abetted by killing Amdahl's ACS/360 ... aka Amdahl had won the battle to make ACS 360-compatible. Then when ACS/360 was killed, he leaves IBM (before the advent of FS) to start his own computer company.
https://people.computing.clemson.edu/~mark/acs_end.html

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

One FS feature was single-level-store ... sort of from Multics and IBM TSS/360 ... but lacking a lot of concurrent performance optimization ... I had joked I learned what not to do for a page-mapped filesystem from TSS/360.

I did a page-mapped filesystem for CP67/CMS and then moved it to VM370/CMS ... for moderate I/O apps it would run at least three times faster than the standard CMS filesystem. In addition to memory mapping ... it also provided an API with "hints" that could be used to simulate asynchronous I/O, multiple buffering, read-ahead, write-behind, etc. instead of synchronous faults.

For CMS virtual machines, instead of channel program simulation ... it could consolidate/optimize all requests ... so throughput scaled up as the number of users increased.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM, posts
https://www.garlic.com/~lynn/submisc.html#cscvm
page-mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning 370 virtual memory leaking and FS document security
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2022h.html#69 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2021i.html#81 IBM Downturn
https://www.garlic.com/~lynn/2021e.html#57 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2019c.html#1 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2015e.html#87 These hackers warned the Internet would become a security disaster. Nobody listened
https://www.garlic.com/~lynn/2010q.html#4 Plug Your Data Leaks from the inside
https://www.garlic.com/~lynn/2010j.html#32 Personal use z/OS machines was Re: Multiprise 3k for personal Use?
https://www.garlic.com/~lynn/2009k.html#5 Moving to the Net: Encrypted Execution for User Code on a Hosting Site

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM NY Buildings

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM NY Buildings
Date: 14 Jan, 2025
Blog: Facebook
Mid-70s an Endicott manager cons me into helping with ECPS for 138/148 and then running around the world helping present the business case to planners/forecasters. Late 80s, the last product we did at IBM was HA/CMP (before leaving IBM in early 90s). My wife had been con'ed into being co-author of the IBM response to a gov. request for super-secure, large campus-like networking ... where she included 3-tier architecture. We were then doing customer executive presentations on internet, tcp/ip, high-speed routers, 3-tier, ethernet (and HA/CMP) and taking misinformation attacks from the SAA, SNA, and Token-ring forces. The former Endicott manager was then head of SAA and had a large top-floor corner office in Somers, and we would drop in periodically to complain about his people.

We had some meetings in Purchase before leaving IBM; rumor was IBM paid 10 cents on the dollar for the bldg. After leaving IBM, we were doing work in the financial industry and at one point were having meetings with MasterCard executives in Manhattan. Then the MasterCard meetings shifted to the Purchase bldg. They said they acquired it from IBM for less than they paid to have the hardware on all internal doors changed.

posts mentioning HA/CMP
https://www.garlic.com/~lynn/subtopic.html#hacmp
3-tier, middle layer, saa posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

a couple posts mentioning Somers and Purchase
https://www.garlic.com/~lynn/2016g.html#80 IBM Sells Somers Site for $31.75 million
https://www.garlic.com/~lynn/2016g.html#81 IBM Sells Somers Site for $31.75 million

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe Comparison

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe Comparison
Date: 14 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#21 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#20 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#19 Virtual Machine History
and
https://www.garlic.com/~lynn/2025.html#17 On-demand Supercomputer
https://www.garlic.com/~lynn/2025.html#16 On-demand Supercomputer

1988, IBM branch office asks if I could help LLNL (national lab) with getting some serial stuff they are working with standardized, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980). We get very early FCS cards (before the standard was finalized) with 1gbit transfer, full-duplex, aggregate 200mbytes/sec and a 64-port FCS switch for RS/6000. Then POK announces their serial stuff in the 90s as ESCON (17mbytes/sec, when it is already obsolete). Then some POK engineers become involved with FCS and define a heavy-weight protocol that significantly reduces throughput, eventually announced/ships as FICON. The latest public benchmark I've found is z196 "Peak I/O" getting 2M IOPS with 104 FICON. About the same time an FCS is announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). IBM pubs also recommended that SAPs (system assist processors that do the actual I/O) be kept to 70% CPU ... which would be more like 1.5M IOPS. Also, no CKD DASD have been made for decades (all being simulated on industry standard fixed-block disks; as far back as the CKD 3380 the technology was in the process of moving to fixed-block, as can be seen in the records/track formula where record length is rounded up to a multiple of the fixed "cell" size).

some FCS history
https://www.networxsecurity.org/members-area/glossary/f/fibre-channel.html

The last product we did at IBM was HA/CMP. It started out as HA/6000 for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs (including LLNL, FCS, and their supercomputer "LINCS" filesystem) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with Unix. I also coin the terms disaster survivability and geographic survivability when out marketing HA/CMP. The IBM S/88 product administrator then starts taking us around to their customers and gets me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain that they couldn't meet the objectives).

Early Jan1992, in a meeting with the Oracle CEO, IBM/AWD executive Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later). Possibly contributing was mainframe DB2 complaining that if we were allowed to continue, it would be years ahead of them.

FCS&FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
DASD, CKD, FBA, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
getting to play disk engineer in gpd/disk bldg14/engineering and bldg15/product test
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Machine History

From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Machine History
Date: 14 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#21 Virtual Machine History

... there is a story that towards the end of the first decade in the new century, the head of Microsoft tells Intel that they have to go back to doubling single processor performance every 18-24months (instead of a combination of multiple cores and faster processors) because multi-thread/parallel programming is too hard. Intel replies that it isn't possible.

1999 single processor Pentium3 at 2BIPS to 2010 E5-2600 server blade (two 8core XEON chips) at 500BIPS is approx. doubling throughput every 18months.
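a rough check of the doubling rate implied by those two data points:

import math

bips_1999, bips_2010 = 2, 500                   # Pentium3 (1999) vs E5-2600 blade (2010)
months    = (2010 - 1999) * 12                  # 132 months
doublings = math.log2(bips_2010 / bips_1999)    # ~8 doublings
print(f"{months/doublings:.1f} months per doubling")   # ~16.6, i.e. roughly every 18 months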

SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some past posts referencing the story
https://www.garlic.com/~lynn/2023e.html#5 Boyd and OODA-loop
https://www.garlic.com/~lynn/2023d.html#49 Computer Speed Gains Erased By Modern Software
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022h.html#51 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2018e.html#50 OT: Trump
https://www.garlic.com/~lynn/2018d.html#57 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017f.html#14 Fast OODA-Loops increase Maneuverability
https://www.garlic.com/~lynn/2017e.html#52 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#7 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016f.html#14 New words, language, metaphor
https://www.garlic.com/~lynn/2016f.html#10 Boyd OODA-loop Innovation
https://www.garlic.com/~lynn/2016d.html#2 Does OODA-loop observation carry a lot of baggage
https://www.garlic.com/~lynn/2016c.html#60 Which Books Can You Recommend For Learning Computer Programming?
https://www.garlic.com/~lynn/2016c.html#56 Which Books Can You Recommend For Learning Computer Programming?
https://www.garlic.com/~lynn/2014m.html#118 By the time we get to 'O' in OODA
https://www.garlic.com/~lynn/2014d.html#85 Parallel programming may not be so daunting
https://www.garlic.com/~lynn/2012m.html#28 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012l.html#91 Difference between fingerspitzengefuhl and Coup d'oeil?
https://www.garlic.com/~lynn/2012j.html#44 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012e.html#15 Why do people say "the soda loop is often depicted as a simple loop"?
https://www.garlic.com/~lynn/2010k.html#67 Idiotic programming style edicts
https://www.garlic.com/~lynn/2009s.html#62 Problem with XP scheduler?
https://www.garlic.com/~lynn/2008f.html#74 Multicore boom needs new developer skills
https://www.garlic.com/~lynn/2008f.html#42 Panic in Multicore Land
https://www.garlic.com/~lynn/2008d.html#90 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008d.html#81 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2007m.html#2 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007i.html#78 John W. Backus, 82, Fortran developer, dies

--
virtualization experience starting Jan1968, online at home since Mar1970

Virtual Machine History

From: Lynn Wheeler <lynn@garlic.com>
Subject: Virtual Machine History
Date: 16 Jan, 2025
Blog: Facebook

https://www.garlic.com/~lynn/2025.html#19 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#20 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#21 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#25 Virtual Machine History

370/158 VMASSIST was microcode tweaks for some privileged instructions: load the VMBLOK pointer into CR6 when running a virtual machine ... and instead of a privileged instruction interrupt when running in problem mode ... it would emulate the instruction if the virtual machine was in (virtual) supervisor mode ... avoiding the task switch into the VM370 kernel (interrupt, save registers, emulate instruction) and then the switch back to the virtual machine (restore registers, etc).

I was con'ed into helping Endicott with ECPS for 138/148. In addition to adding a couple more instructions to what was VMASSIST, it also moved 6kbytes of the highest executed vm370 pathlengths into microcode ... initial analysis found 6kbytes that accounted for 79.55% of kernel execution (low-end and mid-range machines emulated 370 with vertical microcode, avg. ten microcode instructions per 370 instruction ... managed to come close to a 10:1 speedup). This was things like I/O simulation (SIO/SIOF), which required making a copy of the channel program, replacing virtual addresses with real.

A look at doing something similar to ECPS for 3033 didn't show a similar speedup. 85/168-1/168-3/3033 ... had horizontal microcode with throughput measured in processor cycles per 370 instruction. The 168-1 microcode took avg 2.1 cycles per 370 instruction; the microcode was improved to 1.6 cycles per 370 instruction for 168-3. The 3033 started out remapping 168-3 logic to 20% faster chips ... and then optimizing microcode to get to an avg of one machine cycle per 370 instruction (improving 3033 throughput to about 50% better than 168-3). A straight remapping of VM370 kernel instructions to microcode on 3033 didn't run any faster ... it was only the elimination of things like the task switch into the vm370 kernel and back into the virtual machine that saw benefit (and channel program copying, substituting real addresses for virtual, was basically a much more complex operation).

Endicott then moved ECPS to the 4300s (which were in the same era as 3033). Jan1979, I was asked to do a VM/4341 benchmark for a national lab that was looking to get 70 of them for a compute farm (sort of the leading edge of the coming "cluster" supercomputing tsunami). Also, a small cluster of 4341s had higher aggregate throughput than a 3033, was much less expensive, and had smaller cooling, power, and floor footprint. Then large corporations were ordering hundreds of VM/4341s at a time for placing out in (non-datacenter) departmental areas (sort of the leading edge of the coming "distributed" computing tsunami) ... inside IBM, conference rooms were becoming scarce, being converted into distributed vm/4341 rooms.

Original ECPS analysis
https://www.garlic.com/~lynn/94.html#21

The original virtual machine work was the science center doing hardware mods to a 360/40 to add virtual memory and then implementing CP/40 (& CMS). When the 360/67 (standard with virtual memory) becomes available, CP/40 morphs into CP/67. The 360/67 originally was for tss/360 ... but most ran as a 360/65 with os/360 ... and then lots moved to CP/67.

Melinda's VM370 histories
https://www.leeandmelindavarian.com/Melinda#VMHist

As an undergraduate, the univ had hired me fulltime responsible for os/360 running on the 360/67 (replacing 709/1401). Student fortran ran under a second on the 709 (tape->tape), but initially over a minute with OS/360. I install HASP, cutting the time in half. I then do a lot of OS/360 optimization, including redoing STAGE2 SYSGEN to run in the production jobstream (instead of the starter system) and placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs (never got better than the 709 until I install Univ. of Waterloo Watfor).

Then CSC comes out to install CP67 at the univ (3rd install after CSC itself and MIT Lincoln Labs) and I mostly play with it during my weekend dedicated time (univ datacenter shutdown on weekends and I had it dedicated for 48hrs straight, although monday classes were difficult). First couple months, I rewrite lots of CP67 pathlengths to optimize running OS/360 in a virtual machine. The OS/360 test jobstream ran 322secs stand-alone and initially 856secs in a virtual machine (CP67 CPU 534secs); I got CP67 CPU down to 113secs (from 534secs) and then was asked to participate in the SHARE meeting for the official CP67 announce.

As other CP67 pathlengths were being optimized, kernel dynamic storage management was increasing as a percent of total kernel CPU (to over 20%). Eventually CSC redoes CP67 kernel storage management with subpools (14 instruction pathlength, including register save/restore, for calling FREE/FRET) ... cutting kernel storage management back to 1-3% of total CP67 kernel CPU.
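the subpool idea in a minimal sketch (illustration only, not the CP67 assembler, whose fast path was the ~14 instructions mentioned above): keep a per-size list of previously released blocks so the common FREE/FRET case is just a push or pop, falling back to the general storage manager only on a miss.

# Minimal subpool allocator sketch -- illustration only, not the CP67 code.
class Subpools:
    """Per-size free lists so the common case is a push/pop."""
    def __init__(self, general_alloc):
        self.pools = {}                  # rounded size -> stack of released blocks
        self.general_alloc = general_alloc

    def allocate(self, size):            # CP67 "FREE" (obtain storage)
        pool = self.pools.get(_round(size))
        if pool:                         # fast path: pop a previously released block
            return pool.pop()
        return self.general_alloc(_round(size))   # miss: fall back to the slow path

    def release(self, block, size):      # CP67 "FRET" (return storage)
        self.pools.setdefault(_round(size), []).append(block)

def _round(size, unit=8):                # doubleword rounding (an assumption for the sketch)
    return (size + unit - 1) // unit * unit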

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning CSC CP/67 subpool
https://www.garlic.com/~lynn/2024f.html#22 stacks are not hard, The joy of FORTRAN-like languages
https://www.garlic.com/~lynn/2019e.html#9 To Anne & Lynn Wheeler, if still observing
https://www.garlic.com/~lynn/2010h.html#21 QUIKCELL Doc
https://www.garlic.com/~lynn/2008h.html#53 Why 'pop' and not 'pull' the complementary action to 'push' for a stack
https://www.garlic.com/~lynn/2007q.html#15 The SLT Search LisT instruction - Maybe another one for the Wheelers
https://www.garlic.com/~lynn/2006r.html#8 should program call stack grow upward or downwards?
https://www.garlic.com/~lynn/2006j.html#21 virtual memory
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2000d.html#47 Charging for time-share CPU time
https://www.garlic.com/~lynn/98.html#19 S/360 operating systems geneaology
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?

Some posts mentioning OS/360 and CP/67 work as undergraduate
https://www.garlic.com/~lynn/2025.html#8 IBM OS/360 MFT HASP
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#15 CSC Virtual Machine Work
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#60 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#1 Vintage TSS/360
https://www.garlic.com/~lynn/2023e.html#29 Copyright Software
https://www.garlic.com/~lynn/2023e.html#10 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#62 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2022h.html#110 CICS sysprogs
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022c.html#70 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
https://www.garlic.com/~lynn/2021e.html#19 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2019c.html#28 CICS Turns 50 Monday, July 8
https://www.garlic.com/~lynn/2014f.html#76 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2004b.html#53 origin of the UNIX dd command
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)

--
virtualization experience starting Jan1968, online at home since Mar1970

360/65 and 360/67

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/65 and 360/67
Date: 17 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67

... aka 360/65, not just IBM ... from ACS/360 "END"
https://people.computing.clemson.edu/~mark/acs_end.html
Of the 26,000 IBM computer systems in use, 16,000 were S/360 models (that is, over 60%). [Fig. 1.311.2]

Of the general-purpose systems having the largest fraction of total installed value, the IBM S/360 Model 30 was ranked first with 12% (rising to 17% in 1969). The S/360 Model 40 was ranked second with 11% (rising to almost 15% in 1970). [Figs. 2.10.4 and 2.10.5]

Of the number of operations per second in use, the IBM S/360 Model 65 ranked first with 23%. The Univac 1108 ranked second with slightly over 14%, and the CDC 6600 ranked third with 10%. [Figs. 2.10.6 and 2.10.7]

... snip ...

also lists some of the ACS/360 features that show up more than 20yrs later with ES/9000.

Early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. One of his stories was that he was very vocal that the electronics across the trail wouldn't work ... and possibly as punishment, he is put in command of "spook base" (about the same time I was at Boeing). One of his biographies says "spook base" was a $2.5B "windfall" for IBM (ten times Renton). Reference gone 404, but lives on at the wayback machine
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
also
https://en.wikipedia.org/wiki/Operation_Igloo_White

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html

other recent Boeing CFO/BCS/Renton posts
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#70 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#40 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#58 IBM SAA and Somers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3090

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3090
Date: 17 Jan, 2025
Blog: Facebook
Sometime in 1986, the 3090 product administrator tracks me down. The problem was that there was an industry service that collected EREP data from IBM and clone systems and published summaries. The 3090 channels were designed to have an aggregate of 3-5 channel errors over a year's service across all machines ... and it was instead being reported as 20.

Turns out in 1980, the STL bldg was bursting at the seams and they were moving 300 people (and their 3270s) from the IMS group to an off-site bldg with dataprocessing back to the STL datacenter. They had tried "remote 3270" support and found the human factors totally unacceptable. I then get con'ed into doing channel-extender support so they can place channel-attached 3270 controllers at the off-site bldg (with no perceived difference in human factors between offsite and in STL). For certain types of transmission errors, I would simulate unit check "CC" (channel check), driving channel program retry. Then the hardware vendor tries to get IBM to release my support, but there is a group in POK playing with some serial stuff (afraid that if my stuff was in the market, it would be harder to release their stuff) and they get it vetoed. The hardware vendor then releases a simulation of my support, and that was what was responsible for the 15 additional channel checks. I double check the operating system error recovery code, find that "IFCC" (interface control check) errors result in effectively the same process as "CC" ... and get the vendor to make the change.

Channel-extender had the side-effect of improving STL mainframe system throughput by 10-15%. STL had spread the (really slow) 3270 channel-attached controllers across all the channels shared with 3330 DASD ... and the 3270 channel busy was interfering with DASD throughput. The channel-extender hardware (now interfacing directly to the mainframe channels) had significantly lower channel busy (for the same amount of 3270 activity). There was consideration of moving all STL 3270 channel-attached controllers to channel extenders ... to improve the throughput of all their systems.

trivia: In 1988, the IBM branch office asks if I could help LLNL (national lab) standardize some serial stuff that they were working with, which quickly becomes fibre-channel support (FCS, including some stuff I had done in 1980), and I get some 1gbit, full-duplex (200mbytes/sec aggregate) cards for RS/6000. In the 90s, POK finally gets their stuff shipped with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy-weight protocol that significantly reduces throughput, which is released as FICON. The latest public benchmark I've found is z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS with higher throughput than 104 FICON).

Note also, IBM pubs recommend that SAPs (system assist processors that do the actual I/O) be kept to 70% CPU ... which would make z196 more like 1.5M IOPS.

Aside: no CKD DASD has been made for decades, all being simulated on industry standard fixed-block disks. Note: even by the 3380, disk technology was moving to "fixed-block", as can be seen in the 3380 formulas for records/track, requiring record length to be rounded up to a multiple of a fixed "cell" size.
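
To illustrate the cell-rounding effect (a minimal sketch with illustrative numbers, not the actual 3380 record-overhead formula): each record's space is rounded up to a whole number of fixed-size cells, and records/track falls out of how many rounded-up records fit in the track capacity.

# Illustrative only -- not the actual 3380 geometry/overhead formula.
def records_per_track(track_bytes, record_len, overhead, cell):
    """Round each record (data + per-record overhead) up to whole cells,
    then see how many fit on one track."""
    cells_per_record = -(-(record_len + overhead) // cell)   # ceiling division
    return track_bytes // (cells_per_record * cell)

# hypothetical track capacity, overhead and cell size
for dl in (2048, 4096, 6144):
    print(dl, records_per_track(track_bytes=47476, record_len=dl,
                                overhead=512, cell=32))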

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

some recent posts mentioning 3090, channel busy, and 3880/3380
https://www.garlic.com/~lynn/2024f.html#46 IBM TCM
https://www.garlic.com/~lynn/2024f.html#5 IBM (Empty) Suits
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#35 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024d.html#95 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#115 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#48 Vintage 3033
https://www.garlic.com/~lynn/2024.html#80 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#26 Vintage 370/158
https://www.garlic.com/~lynn/2023f.html#62 Why Do Mainframes Still Exist
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#103 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#61 mainframe bus wars, How much space did the 68000 registers take up?
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#45 IBM DASD
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3090

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3090
Date: 17 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#28 IBM 3090

when I 1st transferred to San Jose Research in the 70s, I got to wander around IBM and non-IBM datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, pre-scheduled, stand-alone testing ... and mentioned that they had recently tried MVS, but it had a 15min MTBF (in that environment, requiring manual re-ipl). I offer to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of on-demand concurrent testing, greatly improving productivity. Bldg15 then gets the 1st engineering 3033 (outside 3033 processor engineering in POK) and because testing was only taking a percent or two of CPU, we scrounge up a 3830 controller and 3330 string, setting up our own private online service (including running 3270 coax under the street to my office in bldg28). Air-bearing simulation (for 3370 thin-film head design) was getting only a couple turn-arounds a month on the SJR 370/195. We set it up on the bldg15 3033 and it was able to get several turn-arounds a day.

I then get an irate call one Monday morning asking what I had done to the bldg15 3033 system; interactive response had gotten significantly worse and throughput had degraded. I said I hadn't done anything; it turns out the 3830 controller had been replaced with a new, development 3880 controller. While a special hardware path was added to the 3880 to handle 3380 3mbyte/sec transfer, the 3830's fast horizontal microprocessor had been replaced in the 3880 with a really slow vertical microprocessor ... significantly driving up elapsed time and channel busy for everything else. The 3880 was eventually able to start presenting the operation-end interrupt early ... sort of masking the elapsed time, but couldn't do much about channel busy. The 3090 had configured its number of channels (to meet target system throughput) assuming that the 3880 was the same as a 3830 but with 3mbyte/sec transfer. When they found out how bad 3880 channel busy was, they realized they had to significantly increase the number of channels (requiring an additional TCM; semi-facetiously they claimed they would bill the 3880 group for the additional TCM 3090 manufacturing cost) to make target throughput. Marketing eventually respins the big increase in the number of 3090 channels as a wonderful I/O machine.

trivia: I had done some work with the 3033 processor engineers (before transferring out to SJR) and once 3033 was out the door, they start on 3090.

posts getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk

some posts mentioning having to add channels to 3090 to compensate for 3880 performance issues and marketing respinning it:
https://www.garlic.com/~lynn/2024f.html#46 IBM TCM
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#35 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#66 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021j.html#92 IBM 3278
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019.html#51 3090/3880 trivia
https://www.garlic.com/~lynn/2013h.html#86 By Any Other Name

--
virtualization experience starting Jan1968, online at home since Mar1970

3270 Terminals

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3270 Terminals
Date: 18 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024f.html#12 3270 Terminals

trivia: HONE started as a CP67 virtual machine service after the 1969 unbundling announcement, to give branch office SEs a way of practicing with guest operating systems (via 2741 terminals). The science center had also ported APL\360 to CMS as CMS\APL ... and HONE started using it for CMS\APL-based sales&marketing support AIDs ... which came to dominate all HONE use (and SE use for guest operating system practice evaporated).

After joining IBM, one of my hobbies at the science center was enhanced production operating systems for internal datacenters and HONE was my 1st customer (and long time customer, converting from CP67/2741s to VM370 & eventually 3270s) ... including HONE clones starting to crop up all over the world. Later in the early 80s, "VMIC" (vm/4341s) started appearing at region and larger branch offices for non-AIDs CMS application use (like email/PROFS).

hone posts
https://www.garlic.com/~lynn/subtopic.html#hone
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle

--
virtualization experience starting Jan1968, online at home since Mar1970

On-demand Supercomputer

From: Lynn Wheeler <lynn@garlic.com>
Subject: On-demand Supercomputer
Date: 18 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#16 On-demand Supercomputer
https://www.garlic.com/~lynn/2025.html#17 On-demand Supercomputer

Early 80s, got the HSDT project, T1 and faster computer links (terrestrial and satellite) and various arguments with the communication group (note: in the 60s, IBM had 2701 telecommunication controllers that supported T1 ... but the transition to SNA/VTAM/37xx in the 70s saw controllers capped at 56kbit/sec). Although I was in research, I had part of a wing of offices/labs in Los Gatos ... and the first long-haul T1 was between the Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston, which had lots of Floating Point Systems boxes that included 40mbyte/sec disk arrays.
https://en.wikipedia.org/wiki/Floating_Point_Systems
Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.
... snip ...

I was also working w/the NSF director and HSDT was supposed to get $20M to interconnect the NSF Supercomputing datacenters. Then congress cuts the budget, some other things happen and finally an RFP is released (in part based on what we already had running). Internal politics prevented us from bidding and the NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

During this time, I also had a proposal to see how many 370s (Boeblingen with "ROMAN", a 3-chip, 3MIPS "168-3" 370) & 801/RISCs (Los Gatos was working on the single-chip "Blue Iliad", the 1st 32bit 801/RISC, which would run 20MIPS) could be crammed into racks. I was supposed to give an update to the NSF director on supercomputer interconnect, but then YKT wanted me to hold meetings on rack computing ... and I had to find somebody else to give the update to the NSF director; archived posts with old email
https://www.garlic.com/~lynn/2011b.html#50
https://www.garlic.com/~lynn/2011b.html#email850312
https://www.garlic.com/~lynn/2011b.html#email850313
https://www.garlic.com/~lynn/2011b.html#email850314

https://www.garlic.com/~lynn/2015c.html#22
https://www.garlic.com/~lynn/2015c.html#email850321

https://www.garlic.com/~lynn/2011b.html#55
https://www.garlic.com/~lynn/2011b.html#email850325
https://www.garlic.com/~lynn/2011b.html#email850325b
https://www.garlic.com/~lynn/2011b.html#email850326
https://www.garlic.com/~lynn/2011b.html#email850402

https://www.garlic.com/~lynn/2011c.html#6
https://www.garlic.com/~lynn/2011c.html#email850425
https://www.garlic.com/~lynn/2011c.html#email850425b
https://www.garlic.com/~lynn/2011c.html#email850426

https://www.garlic.com/~lynn/2021.html#65
https://www.garlic.com/~lynn/2021.html#email860527
https://www.garlic.com/~lynn/2021.html#email860527b

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3090

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3090
Date: 19 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#28 IBM 3090
https://www.garlic.com/~lynn/2025.html#29 IBM 3090

Shortly after graduating and joining the IBM science center, I was asked to help with multithreading the 370/195 ... this webpage on the death of ACS/360 (Amdahl had won the battle to make ACS 360-compatible; folklore is that ACS/360 was then killed out of fear it would advance the state of the art too fast and IBM would lose control of the market; Amdahl then leaves to form his own computer company) has information ("Sidebar: Multithreading") about IBM multithreading patents
https://people.computing.clemson.edu/~mark/acs_end.html

The issue was that the 370/195 had a 64-instruction pipeline that could do out-of-order execution, but no branch prediction (or speculative execution), so conditional branches drained the pipeline ... and lots of codes ran at only half the 195's rated MIPS. Going to a pair of I-streams (simulating a two-processor computer) could keep all the execution units fully busy, modulo MVT (through MVS) multiprocessor support/overhead documented as two-processor only able to achieve 1.2-1.5 times the throughput of a single processor. In any case, when it was decided to add virtual memory to all 370s, it was deemed not practical to add virtual memory to the 195 ... and all further work on the 195 was halted.

trivia: with the death of FS
https://people.computing.clemson.edu/~mark/fs.html
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
http://www.jfsowa.com/computer/memo125.htm

there was a mad rush to get stuff back into the 370 product pipelines (during FS, internal politics was killing off 370 efforts), including kicking off the Q&D 3033&3081 efforts in parallel. I got con'ed into helping with a 370 16-CPU multiprocessor and we con the 3033 processor engineers into working on it in their spare time. Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system ("MVS") had (effective) 16-CPU support. Then some of us were told to never visit POK again and the 3033 processor engineers were told to keep heads down on 3033 (and no distractions). Note: POK doesn't ship a 16-CPU multiprocessor until after the turn of the century.

other trivia: once 3033 was out the door, the 3033 processor engineers start on trout/3090.

SMP, tightly-couplded, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

some IBM and non-IBM performance numbers
https://www.garlic.com/~lynn/2025.html#21 Virtual Machine History
https://www.garlic.com/~lynn/2024f.html#96 Cloud Exit: 42% of Companies Move Data Back On-Premises
https://www.garlic.com/~lynn/2024f.html#3 Emulating vintage computers
https://www.garlic.com/~lynn/2024f.html#0 IBM Numeric Intensive
https://www.garlic.com/~lynn/2024e.html#130 Scalable Computing
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#34 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#93 PC370
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#57 Vintage RISC
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#21 HA/CMP

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM ATM Protocol?

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ATM Protocol?
Date: 20 Jan, 2025
Blog: Facebook
Ed & I had transferred from CSC to SJR in 1977. Ed had been responsible for the science center CP67-based wide-area network, which morphs into the corporate internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s ... about the time the internal network was forced to convert to SNA/VTAM) and was also used for the corporate sponsored univ BITNET.
https://en.wikipedia.org/wiki/BITNET
Edson (passed aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

Early 80s I got the HSDT project, T1 and faster computer links (both terrestrial and satellite) and some amount of battles with the communication group. Note: in the 60s, IBM had the 2701 telecommunication controller that supported T1, but the move to SNA/VTAM/37x5 in the 70s seemed to cap their links at 56kbits/sec. Was working with the NSF director and HSDT was supposed to get $20M to interconnect the NSF Supercomputing datacenters. Then congress cuts the budget, some other things happen and finally an RFP is released (in part based on what we already had running). Internal politics prevented us from bidding and the NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

Although I was in research, I had part of a wing of offices/labs in Los Gatos ... and the first long-haul T1 was between the Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston, which had lots of Floating Point Systems boxes that included 40mbyte/sec disk arrays.
https://en.wikipedia.org/wiki/Floating_Point_Systems
Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.
... snip ...

newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Mid-80s, the IBM communication group was fiercely fighting off client/server and distributed computing and attempting to block the release of mainframe TCP/IP support ... when that failed, they said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then did the changes for RFC1044 and, in some tuning tests at Cray Research between a Cray and an (IBM 370) 4341, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

IBM was claiming that it would support (ISO) OSI ... but there was a joke that while the (TCP/IP) IETF standards body had a requirement that there be at least two interoperable implementations to progress in the standards process .... ISO didn't have a requirement that a standard be implementable.

OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."
... snip ...

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

The Greatest Capitalist Who Ever Lived: Tom Watson Jr. and the Epic Story of How IBM Created the Digital Age

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Greatest Capitalist Who Ever Lived: Tom Watson Jr. and the Epic Story of How IBM Created the Digital Age
Date: 20 Jan, 2025
Blog: Facebook
The Greatest Capitalist Who Ever Lived: Tom Watson Jr. and the Epic Story of How IBM Created the Digital Age
https://www.amazon.com/Greatest-Capitalist-Who-Ever-Lived-ebook/dp/B0BTZ257NJ/

recent refs:
https://www.garlic.com/~lynn/2024f.html#81 The Rise and Fall of the 'IBM Way'
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023g.html#75 The Rise and Fall of the 'IBM Way'. What the tech pioneer can, and can't, teach us

revenue from Endicott machines
https://people.computing.clemson.edu/~mark/acs_end.html

1972, Learson tries (and fails) to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy (lots of refs):
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

two decades later (1992), IBM has one of the largest losses in the history of US corporations and was being re-orged into the 13 "baby blues" (take-off on the "baby bell" breakup a decade earlier) in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM ATM Protocol?

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ATM Protocol?
Date: 20 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#33 IBM ATM Protocol?

In the late 80s, my wife did a short stint as chief architect for Amadeus (the EU airline res system built off the old Eastern "System One") and sided with the EU about X.25 (instead of SNA), and the communication group got her replaced. It didn't do them much good because the EU went with X.25 anyway, and their replacement got replaced.

In the mid-80s, the communication group prepared an analysis for the corporate executive committee on why T1 wouldn't be needed by customers until well into the 90s. They did a survey of customer "fat pipes" (37x5 parallel 56kbit/sec links treated as a single logical link), showing the number of installations with 2, 3, ..., 6, 7 parallel 56kbit link "fat pipes" ... and the number of installs dropped to zero by seven. What they didn't know (or didn't want to show the corporate executive committee) was that a typical T1 telco tariff was about the same as five or six 56kbit links. HSDT did a simple customer survey finding 200 T1 links ... where customers had moved to non-IBM controllers and software.
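
A back-of-envelope sketch of the tariff arithmetic implied above (illustrative cost units, not actual telco pricing): a T1 carries roughly 27 times the bandwidth of one 56kbit link, so once a customer is paying for five or six parallel 56kbit links, a single T1 is the obvious move.

# Illustrative only: relative bandwidth and the "five or six 56kbit links
# ~= one T1 tariff" claim above, in normalized cost units.
T1_BPS = 1_544_000
LINK_BPS = 56_000
print(f"one T1 ~ {T1_BPS / LINK_BPS:.0f} x the bandwidth of one 56kbit link")

cost_56k = 1.0                       # normalize one 56kbit link to 1 cost unit
cost_t1 = 5.5 * cost_56k             # hypothetical: "about five or six 56kbit links"
for n in range(2, 8):
    print(f"{n} x 56kbit: cost {n * cost_56k:.1f}, bandwidth {n * LINK_BPS / 1e3:.0f} kbit/s; "
          f"one T1: cost {cost_t1:.1f}, bandwidth {T1_BPS / 1e3:.0f} kbit/s")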

Then the communication group was forced into providing the 3737 ... a box with a boatload of memory and Motorola 68k processors that simulated a mainframe VTAM over CTCA. It would immediately ACK transmission to the host VTAM before transmitting (non-SNA) over the T1 link to the remote 3737 (which would then forward it to the remote mainframe VTAM). Over short-haul terrestrial T1 it maxed out around 2mbits/sec aggregate (aka, in part, the 3737 was a countermeasure to the VTAM SNA window pacing algorithm, which on a T1 would exhaust its window before it started getting return ACKs; the problem worsened as round-trip latency increased, even with the 3737 attempting to mask round-trip latency).
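
A minimal sketch (hypothetical window and packet sizes, not the actual VTAM parameters) of why a fixed pacing window maxes out as round-trip latency grows, while rate-based pacing is insensitive to latency: window-paced throughput is capped at the window's worth of bytes per round trip.

# Illustrative only: throughput under a fixed pacing window is capped at
# window_bytes / round_trip_time, so a window that is fine short-haul
# starves a long-haul (e.g. satellite) T1; rate-based pacing just meters
# packets at the target rate regardless of latency.
def window_limited_bps(window_packets, packet_bytes, rtt_seconds, link_bps):
    offered = window_packets * packet_bytes * 8 / rtt_seconds
    return min(offered, link_bps)

T1 = 1_544_000
PKT = 1024            # hypothetical packet size (bytes)
WINDOW = 7            # hypothetical fixed pacing window (packets)

for rtt_ms in (5, 30, 250, 600):      # short haul ... satellite round trips
    bps = window_limited_bps(WINDOW, PKT, rtt_ms / 1000, T1)
    print(f"RTT {rtt_ms:4d} ms: window-paced throughput ~ {bps / 1e6:.2f} Mbit/s of {T1 / 1e6:.2f}")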

HSDT very early on went to dynamic adaptive rate-based pacing.

some recent posts mentioning Amadeus res system
https://www.garlic.com/~lynn/2024e.html#92 IBM TPF
https://www.garlic.com/~lynn/2023g.html#90 Has anybody worked on SABRE for American Airlines
https://www.garlic.com/~lynn/2023g.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#18 Vintage X.25
https://www.garlic.com/~lynn/2023f.html#9 Internet
https://www.garlic.com/~lynn/2023d.html#80 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#35 Eastern Airlines 370/195 System/One
https://www.garlic.com/~lynn/2023c.html#48 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#47 IBM ACIS
https://www.garlic.com/~lynn/2023c.html#8 IBM Downfall
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022h.html#97 IBM 360
https://www.garlic.com/~lynn/2022h.html#10 Google Cloud Launches Service to Simplify Mainframe Modernization
https://www.garlic.com/~lynn/2022c.html#76 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#75 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

archived post mentioning 3737
https://www.garlic.com/~lynn/2011g.html#77
with 3737 description
https://www.garlic.com/~lynn/2011g.html#email880130
archived post mentioning 3737
https://www.garlic.com/~lynn/2011g.html#75
with more 3737 description
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2011g.html#email881005

recent posts mentioning 3737
https://www.garlic.com/~lynn/2024f.html#116 NASA Shuttle & SBS
https://www.garlic.com/~lynn/2024e.html#95 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024e.html#91 When Did "Internet" Come Into Common Use
https://www.garlic.com/~lynn/2024e.html#28 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#71 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2024b.html#56 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023d.html#120 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023c.html#57 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023b.html#77 IBM HSDT Technology
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#53 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#103 IBM ROLM
https://www.garlic.com/~lynn/2023.html#95 IBM San Jose
https://www.garlic.com/~lynn/2022e.html#33 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022c.html#80 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2021j.html#32 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#31 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#16 IBM SNA ARB
https://www.garlic.com/~lynn/2021h.html#49 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021d.html#14 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#97 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#83 IBM SNA/VTAM (& HSDT)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM ATM Protocol?

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ATM Protocol?
Date: 21 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#33 IBM ATM Protocol?
https://www.garlic.com/~lynn/2025.html#35 IBM ATM Protocol?

Late 80s, a univ. did an analysis comparing mainframe VTAM/LU6.2 with the standard BSD (Tahoe/Reno) TCP/IP implementation ... VTAM/LU6.2 had a 160k-instruction pathlength and 15 buffer copies ... while BSD TCP had a 5k-instruction pathlength and five buffer copies.

The communication group fought to block me from being an XTP technical advisory board member ... where we were working on a TCP "high speed" protocol that could move data directly to/from application space using scatter/gather I/O (akin to mainframe chained-data CCWs) and a trailer (rather than header) protocol (i.e., the CRC, rather than being computed/checked in the processor for a header, was done outboard in the TCP/IP protocol adapter as the data flowed through and added/checked as a trailer) ... eliminating buffer copies and further reducing pathlength (even PCs could be full-fledged, peer-to-peer TCP/IP network nodes rather than hobbled SNA/VTAM stations). Also doing dynamic adaptive rate-based pacing (which HSDT had implemented early on) as opposed to window-based pacing (for collision/overrun avoidance).
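
A minimal sketch of the scatter/gather idea using POSIX writev (via Python's os.writev, Unix only); this is only an analogy to the chained-data CCW / direct-from-application-space approach described above, not the XTP implementation, and the trailer CRC here is just a placeholder.

# Minimal scatter/gather sketch: one call gathers several application
# buffers with no intermediate copy into a contiguous staging buffer.
import os

header  = b"HDR:"                          # protocol header built separately
payload = bytearray(b"application data already sitting in app space")
trailer = b":CRC-PLACEHOLDER"              # trailer-protocol idea: checksum follows data

r, w = os.pipe()
os.writev(w, [header, payload, trailer])   # single gather write of all three buffers
os.close(w)
print(os.read(r, 4096))
os.close(r)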

After leaving IBM in the early 90s, I was brought in as a consultant to a small client/server startup that wanted to do payment transactions on what they called a "commerce server"; the startup had also invented this technology they called "SSL" that they wanted to use. The result is now frequently called "electronic commerce".

I had complete authority for everything between the "web servers" and the gateways to the financial industry payment networks. Payment network trouble desks had 5min initial problem diagnoses ... all circuit based. I had to do a lot of procedures, documentation and software to bring the packet-based internet up to that level. I then did a talk, "Why Internet Wasn't Business Critical Dataprocessing" (based on the work) ... which Postel (Internet standards editor) sponsored at ISI/USC.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

some recent posts mentioning "Why Internet Wasn't Business Critical Dataprocessing"
https://www.garlic.com/~lynn/2024g.html#80 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#71 Netscape Ecommerce
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024f.html#47 Postel, RFC Editor, Internet
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2023g.html#37 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022e.html#105 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021e.html#7 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe
Date: 21 Jan, 2025
Blog: Facebook
upthread processing power
https://www.garlic.com/~lynn/2025.html#21 Virtual Machine History
also references I/O throughput (CKD DASD haven't been built for decades, just emulated on industry standard fixed-block disks, the same as all the other platforms, with a CKD emulation layer added)
https://www.garlic.com/~lynn/2025.html#24 IBM Mainframe Comparison

IOPS & some disk subsystem ratings
https://en.wikipedia.org/wiki/IOPS
How should I interpret disk IOPS listed by cloud hosting providers vs. those listed by drive manufacturers?
https://serverfault.com/questions/1023707/how-should-i-interpret-disk-iops-listed-by-cloud-hosting-providers-vs-those-lis

the last product we did at IBM was HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

then the executive we reported to moves over to head up Somerset for AIM
https://en.wikipedia.org/wiki/AIM_alliance
& single-chip power/pc (with some Motorola 88k design)
https://en.wikipedia.org/wiki/PowerPC
AS/400
https://en.wikipedia.org/wiki/IBM_AS/400
1980-era plan to move controllers, entry/mid-range 370s, and S/36+S/38 to 801/RISC; for various reasons the efforts flounder and fall back to CISC
https://en.wikipedia.org/wiki/IBM_AS/400#Fort_Knox
AS/400 (later also) moves to power/pc
https://en.wikipedia.org/wiki/IBM_AS/400#The_move_to_PowerPC

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Tue, 21 Jan 2025 17:27:13 -1000
antispam@fricas.org (Waldek Hebisch) writes:
Lynn gave figures that suggest otherwise: that on similar hardware VM could handle more users than competition. Looking for reasons I can see the following: - VM quite early got good paging algorithm - VM had relatively small amout of non-pagable memory - IIUC VM supported memory mapping of disk files/areas. Using this in principle one could get similar amount of memory sharing as good non-VM system

Some of the MIT CTSS/7094 people go to the 5th flr for Multics. Others go to the ibm cambridge science center on the 4th flr and do virtual machines, internal network, lots of performance and interactive apps, invent GML in 1969, etc. There was some amount of friendly rivalry between 4th and 5th flrs.

Then some CSC people come out to the univ to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs).

As an undergraduate I did global LRU and some other paging related stuff (thrashing control, etc), dynamic adaptive resource management and scheduling, optimizing pathlengths, etc ... circa 1968 ... when the academic literature was about local LRU and other kinds of implementations. The CMS filesystem used 360 channel program I/O ... and I modify CMS & CP67 to provide a simplified virtual machine I/O interface that significantly cuts the emulation pathlength.
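
A minimal sketch of a global "clock"-style approximation to LRU (one reference bit per real page frame, swept across all users rather than per-process local lists); illustrative only, not the actual CP67 replacement code.

# Global clock sketch: sweep reference bits over every real page frame,
# regardless of which user owns it, and steal the first frame whose
# reference bit is already off (second-chance behavior).
class Frame:
    def __init__(self, owner, vpage):
        self.owner, self.vpage, self.referenced = owner, vpage, True

class GlobalClock:
    def __init__(self, frames):
        self.frames, self.hand = frames, 0

    def select_victim(self):
        while True:
            f = self.frames[self.hand]
            self.hand = (self.hand + 1) % len(self.frames)
            if f.referenced:
                f.referenced = False      # clear and give a second chance
            else:
                return f                  # globally least-recently-referenced-ish

frames = [Frame(owner=o, vpage=v) for o, v in [("A", 1), ("A", 2), ("B", 7), ("C", 3)]]
clock = GlobalClock(frames)
frames[2].referenced = False              # pretend B/7 hasn't been touched lately
victim = clock.select_victim()
print("steal frame", victim.owner, victim.vpage)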

When I graduate and join IBM CSC, I get much of the undergraduate CP67 stuff up and running on the CSC 768kbyte 360/67 (104 pageable pages after fixed storage). Then the IBM Grenoble Science Center, with a 1mbyte 360/67 (155 pageable pages after fixed storage), modifies CP67 according to the 60s ACM academic literature. I have 75-80 users on the CSC system with better response and throughput than Grenoble has with 35 users (all users running similar workloads).

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters. Somewhat as part of the 4th/5th flr rivalry, I then redo the CMS filesystem to be page/memory mapped with lots of new features (and a further vastly simplified API and pathlength implementation; I joke that I learned what not to do from observing the IBM TSS/360 single-level-store implementation). Then in the early 70s, IBM had the Future System project that was totally different from 360/370 and was going to completely replace it ... and had adopted a TSS/360-like single-level-store. When FS implodes, a side-effect was that it gave page-oriented filesystems a bad reputation. As a result, while I deployed my implementation internally on lots of systems, it never got approval to ship to customers.

Then at Dec81 ACM, Jim Gray asks me to help a Tandem co-worker of his get his Stanford PhD ... which heavily involved global LRU, and the local-LRU forces from the 60s were lobbying Stanford hard not to award a PhD involving global LRU. Jim knew I had a large amount of performance data from both the Cambridge and Grenoble systems, giving a close A:B comparison on otherwise very similar hardware and software.

Back in the early 70s, after the IBM decision to add virtual memory to all 370s, the initial morph of CP67->VM370 dropped/simplified features. Starting w/VM370R2, I start moving lots of CP67 to VM370R2 for my internal CSC/VM system.

Later in the 70s, a decision was made to release some number of my CSC/VM features to customers (not including the CMS page-mapped filesystem).

In the mid-70s, Endicott (mid-range 370s) cons me into helping with ECPS, moving lots of VM370 into 138/148 microcode. Mid-range machines averaged 10 native instructions for every 370 instruction, and most of the VM370 kernel 370 instructions moved to native roughly 1-for-1 ... giving a 10:1 performance improvement. This was carried over to the 4300s. In Jan1979, a branch office cons me into doing a vm/4341 benchmark for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). Then in the 80s, large corporations were ordering hundreds of vm/4341s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami).

This is an archived (a.f.c.) post with a decade of VAX sales, sliced&diced by year, model, and US/non-US
https://www.garlic.com/~lynn/2002f.html#0
VAX and 4300s sold in the same mid-range market and 4300s sold in similar numbers as VAX in small unit orders ... the big difference being the large orders of scores and hundreds of 4300s.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
paging, replacement algorithms
https://www.garlic.com/~lynn/subtopic.html#wsclock
dynamic adaptive resource management and scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Wed, 22 Jan 2025 08:07:39 -1000
antispam@fricas.org (Waldek Hebisch) writes:
Do you remember what kind of backing store was in use (drum, disk ?) on those machines?

re:
https://www.garlic.com/~lynn/2025.html#38

The 360/67 had 2301 fixed-head drums for primary paging (similar to the 2303 drum but transferring on four heads in parallel: four times the data rate, 1/4th the number of "tracks", each "track" four times larger). Overflow paging, the spool file system, and the CMS filesystem (as well as os/360 dasd) were all on 2314.

When CP67 was initially installed at the univ ... all queued I/O was FIFO and paging used single-transfer channel programs. I redid the 2314 queuing to be ordered seek, and page I/O to use a single channel program for all queued requests (for the 2301, and for the same 2314 arm position/cylinder), optimized for maximum transfers per revolution. The 2301 originally had a throughput of around 70 page requests/sec ... my rewrite improved it to a 270/sec peak.
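
A minimal sketch (not the CP67 code) of the two queuing changes just described: group all queued page requests by cylinder, order the seeks relative to the current arm position, and issue each cylinder's requests as one chained channel program.

# Illustrative only: ordered-seek queuing plus batching queued requests for
# the same cylinder into a single chained channel program.
from collections import defaultdict

def build_channel_programs(queued, current_cyl):
    """queued: list of (cylinder, page_slot) requests."""
    by_cyl = defaultdict(list)
    for cyl, slot in queued:
        by_cyl[cyl].append(slot)
    # simple ordered seek: service cylinders by distance from the arm
    order = sorted(by_cyl, key=lambda c: abs(c - current_cyl))
    # each entry = one channel program: a seek plus chained page transfers
    return [(cyl, by_cyl[cyl]) for cyl in order]

queue = [(40, 3), (10, 7), (40, 9), (90, 1), (10, 2)]
for cyl, slots in build_channel_programs(queue, current_cyl=35):
    print(f"seek cyl {cyl}: transfer page slots {slots} in one chained program")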

After joining CSC: CSC was in the process of adding 2314 strings, eventually with five 8+1 drive strings and one 5-drive string (45 2314s total).

After the decision to add virtual memory to all 370s, did an enhancement to CP67 with an option to emulate 370 (virtual) machines. It had to demonstrate extremely strong security ... since 370 virtual memory hadn't been announced and was still classified ... and CSC also had profs, staff, and students from Boston/Cambridge institutions using the system.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Wed, 22 Jan 2025 08:11:42 -1000
Peter Flass <peter_flass@yahoo.com> writes:
VM/CMS was fast because it was so simple. VM did the minimum necessary to ensure separation of users, allowing almost no sharing. CMS was about as sophisticated as PCDOS. Basically a loader for a single program.

re:
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39

got more sophisticated over time ... sort of started off as an upgrade of CTSS.

trivia ... before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP/67 at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Wed, 22 Jan 2025 08:16:19 -1000
re:
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40

other trivia: TSS/360 was supposed to be the official system for the 360/67; at the time TSS/360 was "decommitted", there were 1200-some people involved with TSS/360 and 12 people at CSC in the combined CP67/CMS group.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Wed, 22 Jan 2025 12:52:16 -1000
antispam@fricas.org (Waldek Hebisch) writes:
AFAIK Unix before Berkeley added it had no "copy on the write". IIUC early IBM mainframes also had no "copy on the write" and it was awkward to implement (rather late, when they wanted Posix compatibility they added appropriate hardware extention).

re:
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41

A decade+ ago I was asked to track down the executive decision to add virtual memory to all 370s ... found a staff member to the executive making the decision. Basically, (OS/360) MVT storage management was so bad that region sizes had to be specified four times larger than used ... as a result, a common 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the 165 busy and justified. Mapping MVT to a 16mbyte virtual memory allowed the number of concurrently executing regions to be increased by a factor of four (modulo capped at 15 for the 4bit storage protect keys) ... similar to running MVT in a CP67 16mbyte virtual machine. Archive of a.f.c. 2011 post with pieces of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73
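
The arithmetic behind that decision, as described above (illustrative only): four concurrent regions in 1mbyte real, times four with a 16mbyte virtual address space, capped at 15 by the 4-bit storage protect keys.

# Illustrative arithmetic only.
real_regions = 4                               # typical 1mbyte 370/165 under MVT
virtual_regions = min(real_regions * 4, 15)    # 4x more regions, 4-bit keys cap at 15
print(virtual_regions)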

VM370 was planning on taking advantage of the 370 virtual memory R/O segment protect for sharing common CMS code. Then the 165 engineers complained that if they had to retrofit the full 370 virtual memory architecture to the 165, it would slip 370 virtual memory by six months. It was then decided to regress 370 virtual memory to the 165 subset (and R/O segment protect was one of the casualties).

All the other 370s that had already implemented the full 370 virtual memory architecture had to be retrofitted to the 165 subset, and any software written to use the full architecture had to be redone for the 165 subset. VM370, to simulate R/O segment protect for CMS, then had to deploy a hack using (360) storage protect keys ... CMS with shared segments ran with a PSW that never had key=0 (which can store anywhere), all CMS non-shared pages had a non-zero protect key, and CMS shared pages had a zero protect key (i.e., the CMS PSW key and non-shared page protect keys matched for stores, but shared page keys never matched).
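
A minimal model (illustrative only) of the 360 storage-key store rule the hack relies on: a store is allowed when the PSW key is zero or matches the page's storage key, so CMS running with a non-zero PSW key can store into its own (same-key) pages but takes a protection exception on key-0 shared pages.

# Toy model of the 360 key-match store rule used by the hack above.
def store_allowed(psw_key, page_key):
    return psw_key == 0 or psw_key == page_key

CMS_PSW_KEY = 14                     # hypothetical non-zero key
pages = {"cms-private": 14, "shared-segment": 0}

for name, key in pages.items():
    ok = store_allowed(CMS_PSW_KEY, key)
    print(f"store into {name:14s}: {'allowed' if ok else 'protection exception'}")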

Then comes VM370R2, where the 158 & 168 got VMASSIST ... load the VMBLOK pointer into CR6 and, if the virtual PSW was in virtual supervisor mode, the machine would directly execute some number of privileged instructions (w/o requiring simulation by VM370). The problem was VMASSIST didn't support the LPSW, ISK, & SSK hack for CMS shared R/O protect. For VM370R3, they come up with a hack that allowed CMS w/shared R/O protect to be run in VMASSIST mode (by eliminating the protect key hack). Whenever there was a task switch (from CMS with R/O protect), all the associated shared pages in memory would be scanned for changed (aka stored-into) pages. Any changed shared page would be marked as no longer shared ... and the associated shared pages would be flagged as not in memory ... the next reference results in a page fault and a (non-changed) copy being refreshed from backing store (selective copy on write).
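
A sketch of the VM370R3 "selective copy on write" scan as I read the description above (illustrative only, not the actual code): at task switch, any shared page the user stored into is detached from the shared set and invalidated, so the next reference page-faults and brings in a clean copy.

# Toy model of the task-switch scan described above.
class SharedPage:
    def __init__(self, n):
        self.n, self.changed, self.in_memory, self.shared = n, False, True, True

def task_switch_scan(shared_pages):
    for p in shared_pages:
        if p.changed:                  # user stored into a "shared" page
            p.shared = False           # no longer the shared copy
            p.in_memory = False        # force a page fault on next touch
            p.changed = False
            print(f"page {p.n}: unshared, refreshed from backing store on next reference")

pages = [SharedPage(n) for n in range(4)]
pages[2].changed = True
task_switch_scan(pages)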

Then comes VM370R4 with the release of multiprocessor support (in the original morph of CP67->VM370, lots of stuff was simplified and/or dropped ... including multiprocessor support). For my VM370R2-based internal release I had done the kernel reorg needed by multiprocessor support (but not the actual SMP support) ... and then the VM370R3-based internal release included multiprocessor support ... initially for the internal branch office sales&marketing online HONE system (one of my original internal customers for enhanced operating systems) ... and the US HONE datacenters had recently been consolidated in Palo Alto (trivia: when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next door to the former US consolidated HONE datacenter). Initially HONE had eight single-processor systems configured in one of the largest single-system-image, shared-DASD operations, with load-balancing and fall-over across the complex. Then I add multiprocessor support back in so they can add a 2nd processor to each system.

At least for my internal VM370R3-based release, I revert to the storage protect key hack, because scanning for changed shared pages only showed a VMASSIST benefit for the original base VM370 with 16 shared pages ... and I had increased the number of shared pages by a factor of at least 2-3 times ... and, depending on what CMS was doing, it could be a great deal more (greatly increasing the shared-page scan overhead, swamping the VMASSIST benefit).

The VMASSIST hack also got worse w/multiprocessor, which required a unique set of shared pages for each processor; each time a CMS user with shared pages was dispatched, their page table pointers to shared pages had to match the processor being dispatched on.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Wed, 22 Jan 2025 14:06:27 -1000
Peter Flass <peter_flass@yahoo.com> writes:
That's what happens with VM the tail to the MVS dog.

re:
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41
https://www.garlic.com/~lynn/2025.html#42

After the decision to add virtual memory to all 370s came the Future System effort, completely different from 370 and going to replace 370 (during the FS period, internal politics was shutting down 370 projects; the claim was that the lack of new 370s during the FS period gave the clone 370 makers their market foothold). Then when FS implodes there is a mad rush to get stuff back into the product pipelines, including kicking off the quick&dirty 3033&3081 in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
https://en.wikipedia.org/wiki/IBM_Future_Systems_project

Also the head of POK (high-end 370) manages to convince corporate to kill the VM370 product, shut down the development group and transfer all the people to POK to work on MVS/XA (370xa, 31bit and various MVS specific features). Endicott (midrange 370) manages to save the VM370 product mission but has to recreate a development group from scratch.

Besides Endicott con'ing me into (VM370) ECPS (referenced upthread, initially 138&148 and then 4331 & 4341), I also get con'ed into working on a 16-cpu 370 multiprocessor (having done multiprocessor for HONE on a VM370R3 base, referenced upthread ... HONE 2-cpu 370 systems getting twice the throughput of a single-CPU system) and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips).

Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system ("MVS") had (effective) 16-cpu support (IBM documents at the time said MVS 2-cpu support had 1.2-1.5 times the throughput of a single-CPU system); some of us were invited to never visit POK again and the 3033 processor engineers were directed to keep heads down on 3033 and no distractions (note POK doesn't ship a 16-cpu system until after the turn of the century).

trivia: along with ECPS, Endicott tried to get corporate to approve preinstalling VM370 on every machine shipped (but POK influence, trying to totally kill VM370, wouldn't allow it) ... somewhat analogous to Amdahl's microcode hypervisor (Multiple Domain Feature, no VM370 software)
https://ieeexplore.ieee.org/document/7054
https://www.computinghistory.org.uk/det/15016/amdahl-580-Multiple-Domain-Feature-Overview/
https://archive.org/details/bitsavers_amdahl580AainFeatureOverviewJul86_3237564
and nearly decade later IBM 3090 microcode LPAR&PR/SM
https://en.wikipedia.org/wiki/Logical_partition

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
csc/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
360/370 microcode (& ECPS) posts
https://www.garlic.com/~lynn/submain.html#360mcode

--
virtualization experience starting Jan1968, online at home since Mar1970

vfork history, Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: vfork history, Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Wed, 22 Jan 2025 18:15:23 -1000
John Levine <johnl@taugh.com> writes:
In practice, most of the time if a program touches a stack page at all it's going to write to it, so copy-on-touch would have worked about as well without needing read-only stack pages. IBM mainframes didn't do page fault on write until quite late (S/390?) so some of their systems did CoT.

re:
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41
https://www.garlic.com/~lynn/2025.html#42
https://www.garlic.com/~lynn/2025.html#43

... but 360 key protect had both (real memory) store (& optional 360 fetch) protect ... which would be applied to all processes (the original 370 virtual memory architecture, before regression to the 165 subset, had segment R/O protect ... so could have a mixture of virtual address spaces, some with no protect and some with R/O protect). VM370 was originally going to use the original segment protect for shared segments, but with the retrenching to the 165 subset, initially had to fall back to the storage key protect hack.

The original 360 key protect was on 2kbyte blocks but later changed to 4k; a later feature was special storage R/O page protect ... to prevent even the authorized kernel (key=0) from storing.
https://www.ibm.com/docs/en/zos-basic-skills?topic=integrity-what-is-storage-protection

360 program interrupt .... "04" storage key protection (instead of page fault interrupt)
https://en.wikipedia.org/wiki/IBM_System/360_architecture#Program_interruption

It is possible to trap on the program interrupt (and simulate copy-on-write).
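
A toy model (illustrative only, not any IBM implementation) of simulating copy-on-write by trapping the protection exception: the "interrupt handler" gives the storing user a private copy of the page and retries the store.

# Toy copy-on-write-by-trap model: a store into a protected shared page
# raises a "protection exception"; the handler copies the page privately
# for that user and retries the store.
class ProtectionException(Exception):
    pass

shared = {"page7": bytearray(b"shared contents")}
private = {}                                   # per-user private copies

def store(user, page, offset, value):
    if page in private.get(user, {}):
        private[user][page][offset] = value    # already has a private copy
        return
    raise ProtectionException((user, page, offset, value))

def run_with_cow(user, page, offset, value):
    try:
        store(user, page, offset, value)
    except ProtectionException:
        # "interrupt handler": copy the shared page, then retry the store
        private.setdefault(user, {})[page] = bytearray(shared[page])
        store(user, page, offset, value)

run_with_cow("userA", "page7", 0, ord(b"S"))
print(bytes(shared["page7"]), bytes(private["userA"]["page7"]))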

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Wed, 22 Jan 2025 18:30:55 -1000
antispam@fricas.org (Waldek Hebisch) writes:
Well, but is it essentially different than CMS, DOS or loading shared libraries on Unix? In each case code and data is loaded, run und then unloaded. Loading of new thing means that old thing does not run. Multics documentation is not entirely clear on this, but it seems that at given protection level there is single stack, which would make things like coroutines hairy.

re:
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41
https://www.garlic.com/~lynn/2025.html#42
https://www.garlic.com/~lynn/2025.html#43
https://www.garlic.com/~lynn/2025.html#44

When I originally did the page-mapped CMS filesystem ... it included support for location-independent segments ... but it was quite crippled because CMS heavily borrowed the OS/360 assemblers and compilers. While OS/360 made reference to program relocation ... address references had to be changed to fixed/absolute before execution. As a result, default program segments required fixed-address loading.

TSS/360 had kept "addresses" as offsets (from the segment base address). To get address-independent segments (including shared segments where the same shared segment could appear concurrently at different virtual addresses in different virtual address spaces), I had to hack the (application) code to something that resembled the TSS/360 convention.

A small subset of this was released in VM370R3 as DCSS (but without the CMS page-mapped filesystem and w/o the independent address location support) ... where the shared segment images were saved in specially defined VM370 disk areas

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
page-mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap
segment adcon issue & location independent code
https://www.garlic.com/~lynn/submain.html#adcon

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Thu, 23 Jan 2025 08:34:23 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
How much of this was enforced by the OS? I have a suspicion that there is a dependence on well-behaved applications here, to maintain the integrity of the whole system.

re:
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41
https://www.garlic.com/~lynn/2025.html#42
https://www.garlic.com/~lynn/2025.html#43
https://www.garlic.com/~lynn/2025.html#44
https://www.garlic.com/~lynn/2025.html#45

well, supervisor/problem separation to protect the OS.

Note the original 801/RISC had the cp.r operating system and pl.8 programming language, and claimed it needed no hardware supervisor/problem mode separation ... the pl.8 language would only generate valid programs and cp.r would only load valid pl.8 programs for execution (as a result, inline library code could execute all operations, w/o requiring supervisor calls).

The ROMP chip (w/cp.r & pl.8) was originally going to be used for the follow-on to the displaywriter. When that got canceled they decided to pivot to the unix workstation market and got the company that had done PC/IX for the IBM/PC to do a port for ROMP ... becoming PC/RT and AIX. However the ROMP chip had to have hardware supervisor/problem mode for the UNIX paradigm

CMS XMAS tree exec ... displayed xmas greeting and (on 3270 terminal) blinking light xmas tree ... something like this simulation
https://www.garlic.com/~lynn/2007v.html#54

but the exec also resent a copy to everybody in the person's name/contact file
https://en.wikipedia.org/wiki/Christmas_Tree_EXEC
... swamping bitnet
https://en.wikipedia.org/wiki/BITNET

... this was dec87, a year before the 88 morris worm
https://en.wikipedia.org/wiki/Morris_worm

At the 1996 m'soft MDC at Moscone ... all the banners said "Internet" ... but the constant refrain in all the sessions was "preserve your investment" ... which was visual basic automagic execution in data files ... including email messages ... giving rise to an enormous explosion in virus attacks.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
Risk, Fraud, Exploits, Threats, Vulnerabilities
https://www.garlic.com/~lynn/subintegrity.html#fraud
--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Thu, 23 Jan 2025 09:00:18 -1000
cross@spitfire.i.gajendra.net (Dan Cross) writes:
CMS and Unix are rather different, as Lynn has described. But for the others, I'd say, yes, quite, because all of those things run at the same protection level in a single address space. That is the critical part missing in other systems.

re:
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41
https://www.garlic.com/~lynn/2025.html#42
https://www.garlic.com/~lynn/2025.html#43
https://www.garlic.com/~lynn/2025.html#44
https://www.garlic.com/~lynn/2025.html#45
https://www.garlic.com/~lynn/2025.html#46

note: OS/360 used 360 privileged instructions and supervisor/problem mode, and for concurrent application program separation, (4bit) storage protection keys (for up to 15 concurrent applications plus kernel/supervisor).

The initial move of OS/360 MVT to VS2 was into a single 16mbyte virtual address space (something like running MVT in a CP67 16mbyte virtual machine). However, as systems got larger and more powerful, they needed to go past the 15-region cap. The move from VS2/SVS to VS2/MVS gave each concurrently executing program its own 16mbyte virtual address space.

However, the OS/360 heritage was a heavily pointer-passing API. In order for kernel calls to access program parameter lists, an 8mbyte image of the MVS kernel was mapped into every 16mbyte virtual address space (so kernel call processing was done in the same virtual address space as the caller).

VS2/MVS also mapped kernel subsystems into their own separate 16mbyte virtual address spaces ... so for application calls to subsystems, parameters were now in a different address space. For this they created the Common Segment Area ("CSA") that was mapped into every 16mbyte virtual address space for subsystem API call parameters. The issue was that subsystem parameter list space requirements were somewhat proportional to the number of subsystems and the number of concurrently executing programs ... and by the time of the 370 3033 processor, CSA had morphed into a 5-6mbyte "Common System Area" (CSA), leaving only 2-3mbytes for programs, and was threatening to become 8mbytes (leaving zero for applications).

This was part of POK (high-end 370s) VS2/MVS mad rush to 370/XA architecture, 31-bit addressing, access register, Program Call and Program Return instructions.

Program Call was akin to a kernel call ... there was a table of all the MVS subsystems and their associated address space pointers. A Program Call would select the specific subsystem entry, move the caller's address space to secondary and load the subsystem's address space as primary. A semi-privileged subsystem would access the parameter list in the caller's "secondary address space" (no longer needing "CSA" space). Program Return would move the caller's secondary address space pointer back to primary and return (task switches would have to save/restore both the primary and any secondary address space pointers).
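
a rough sketch in C of the Program Call table idea (illustrative only, with Program Return folded in for brevity; all names here are hypothetical, not the 370/XA definition):

  /* one entry per MVS subsystem: which address space it lives in
     and where to enter it */
  struct addr_space { unsigned long segment_table_origin; };

  struct pc_entry {
      void (*entry_point)(void);
      struct addr_space *subsystem_space;
  };

  struct cpu_state {
      struct addr_space *primary;     /* address space instructions run in */
      struct addr_space *secondary;   /* caller's space, still addressable */
      struct addr_space *saved_primary, *saved_secondary;  /* for the return */
  };

  void program_call(struct cpu_state *cpu, struct pc_entry *table, int index)
  {
      struct pc_entry *e = &table[index];

      cpu->saved_primary   = cpu->primary;
      cpu->saved_secondary = cpu->secondary;

      cpu->secondary = cpu->primary;       /* caller's parameter list stays
                                              reachable as "secondary"       */
      cpu->primary   = e->subsystem_space; /* run in the subsystem's space,
                                              no CSA copy of the parms needed */
      e->entry_point();

      /* Program Return */
      cpu->primary   = cpu->saved_primary;
      cpu->secondary = cpu->saved_secondary;
  }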

assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance
Risk, Fraud, Exploits, Threats, Vulnerabilities
https://www.garlic.com/~lynn/subintegrity.html#fraud

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Thu, 23 Jan 2025 09:22:52 -1000
Lynn Wheeler <lynn@garlic.com> writes:
This was part of POK (high-end 370s) VS2/MVS mad rush to 370/XA architecture, 31-bit addressing, access register, Program Call and Program Return instructions.

re:
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41
https://www.garlic.com/~lynn/2025.html#42
https://www.garlic.com/~lynn/2025.html#43
https://www.garlic.com/~lynn/2025.html#44
https://www.garlic.com/~lynn/2025.html#45
https://www.garlic.com/~lynn/2025.html#46
https://www.garlic.com/~lynn/2025.html#47

... VS2/MVS was also getting quite bloated and getting to 370/XA was taking too long ... so there were pieces retrofitted to 3033.

VS2/MVS needed more real storage and cramped in 16mbyte real. The 16bit page table entry had 12bit real page numbers (for 16mbyte) ... and flag bits ... but there were two unused bits. They did a 3033 hack where the two unused bits could be prepended to the 12bit real page number for 14bits (64mbyte) mapping 16mbyte virtual address spaces into 64mbyte real (all instructions were still 24bit/16mbyte).
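
a small sketch in C of that arithmetic (the field positions below are assumptions for illustration, not the documented PTE layout):

  #include <stdint.h>

  /* 12-bit frame number + the 2 borrowed "unused" bits = a 14-bit frame
     number, i.e. 2^14 4k frames = 64mbyte of real storage, while virtual
     addresses (and instructions) stay 24-bit/16mbyte */
  static uint32_t pte_to_real(uint16_t pte, uint32_t byte_offset)
  {
      uint32_t frame_lo = (pte >> 4) & 0x0FFFu;   /* assumed 12-bit frame field */
      uint32_t frame_hi =  pte        & 0x3u;     /* assumed: the 2 spare bits  */
      uint32_t frame    = (frame_hi << 12) | frame_lo;    /* 14 bits            */

      return (frame << 12) | (byte_offset & 0xFFFu);      /* 0 .. 64mbyte-1     */
  }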

370 IDALs (full-word) provided an I/O channel program extension for storage addresses ... so it was possible to extend 3033 I/O addressing to 31 bits.

there was still the periodic problem where kernel code had to access virtual pages by real address ... so there was a "bring down" process where a virtual page was moved from a real address above the 16mbyte line to a real address "below the line".

some posts mentioning 3033 64mbyte real and dual-address space mode
https://www.garlic.com/~lynn/2024c.html#67 IBM Mainframe Addressing
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#40 Mainframe Family tree and chronology 2
https://www.garlic.com/~lynn/2016b.html#57 Introducing the New z13s: Tim's Hardware Highlights
https://www.garlic.com/~lynn/2016b.html#35 Qbasic
https://www.garlic.com/~lynn/2015b.html#46 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2014f.html#22 Complete 360 and 370 systems found
https://www.garlic.com/~lynn/2014d.html#62 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2012h.html#57 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2010c.html#41 Happy DEC-10 Day
https://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
https://www.garlic.com/~lynn/2007o.html#10 IBM 8000 series
https://www.garlic.com/~lynn/2007g.html#59 IBM to the PCM market(the sky is falling!!!the sky is falling!!)
https://www.garlic.com/~lynn/2005p.html#19 address space
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Thu, 23 Jan 2025 16:19:10 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
So each (physical?) page belonged to exactly one user process (or the kernel)?

How did this handle shared sections accessible by more than one process?


re:
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41
https://www.garlic.com/~lynn/2025.html#42
https://www.garlic.com/~lynn/2025.html#43
https://www.garlic.com/~lynn/2025.html#44
https://www.garlic.com/~lynn/2025.html#45
https://www.garlic.com/~lynn/2025.html#46
https://www.garlic.com/~lynn/2025.html#47
https://www.garlic.com/~lynn/2025.html#48

PSW key=0 (nominally kernel/supervisor) allowed access to all pages; a nonzero PSW key could only store to (and/or fetch from, if the fetch-protect feature was installed and the fetch bit set) pages with a matching storage protect key.

shared code tended to be only store-protected (not fetch-protected) ... and ran with the PSW key of the invoking process (so it could store into the invoking process's pages).

originally on 360 these were 2kbyte blocks ... (in a 4k paging environment, pairs of 2k storage keys had to be managed) ... with 370, moved to 4k.

os/360 running concurrent "regions" could do up to 15 ... protected from each other. VS2 started with a single 16mbyte address space (VS2/SVS) ... but had to move to a unique address space for each "region" (VS2/MVS) to get past 15 and still provide separation.

Storage Protect
https://en.wikipedia.org/wiki/IBM_System/360_architecture#Storage_protection
If the storage protection feature is installed, then there is a 4-bit storage key associated with every 2,048-byte block of storage and that key is checked when storing into any address in that block by either a CPU or an I/O channel. A CPU or channel key of 0 disables the check; a nonzero CPU or channel key allows data to be stored only in a block with the matching key.

Storage Protection was used to prevent a defective application from writing over storage belonging to the operating system or another application. This permitted testing to be performed along with production. Because the key was only four bits in length, the maximum number of different applications that could be run simultaneously was 15.

An additional option available on some models was fetch protection. It allowed the operating system to specify that blocks were protected from fetching as well as from storing.

... snip ...
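
a minimal sketch in C of the key-matching rule described above (hypothetical structures, just the logic):

  struct block_key {
      unsigned key    : 4;   /* storage protect key of the 2k (later 4k) block */
      unsigned fetchp : 1;   /* fetch-protection bit (optional feature)        */
  };

  /* key 0 (supervisor/channel) stores anywhere; otherwise keys must match */
  static int store_allowed(unsigned psw_key, struct block_key b)
  {
      return psw_key == 0 || psw_key == b.key;
  }

  /* fetch is only checked when the block is marked fetch-protected */
  static int fetch_allowed(unsigned psw_key, struct block_key b)
  {
      if (!b.fetchp)
          return 1;
      return psw_key == 0 || psw_key == b.key;
  }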

--
virtualization experience starting Jan1968, online at home since Mar1970

The Paging Game

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The Paging Game
Date: 23 Jan, 2025
Blog: Facebook
my softcopy starts
.hy off;.ju off :h3.The Paging Game :p.Jeff Berryman, University of British Columbia :h4.Rules
... snip ...

prior post mentioning "The Paging Game"
https://www.garlic.com/~lynn/2004b.html#60 Paging

I was an undergraduate at the univ and fulltime responsible for OS/360 on the 360/67 (running as a 360/65). Three people from CSC (two of them later went to one of the CSC CP67 spinoffs) came out to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly got to play with it on weekends. I rewrote lots of the pathlengths to cut the overhead of running os/360 in a virtual machine. Then redid page replacement, dispatching, scheduling, all I/O (including paging). Six months later (after the CSC install at the univ), CSC was having a CP67/CMS class in LA; I arrive Sunday night and am asked to teach the CP67 class ... the people that were going to teach it had given their notice the Friday before, leaving for NCSS.

Dec81 at an ACM conference, Jim Gray asks me to help one of his co-workers at Tandem get their Stanford PhD ... which involved Global LRU page replacement; the forces from the 60s behind Local LRU page replacement (and a number of academic papers) were lobbying Stanford not to grant a PhD for anything involving Global LRU.

Jim knew that after graduating and joining CSC, I had most of my stuff (including Global LRU) on the CSC 768kbyte (104 pageable pages after fixed storage) 360/67 CP67 and had lots of performance data from both CSC and the IBM Grenoble science center. Grenoble had modified CP67 to conform to the academic local LRU from the 60s for their 1mbyte 360/67 (155 pageable pages). The user CMS workloads were similar, except CSC had better interactive response and throughput for 75-80 users (104 pages) than Grenoble had for 35 users (155 pages)
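
a minimal sketch in C of the global "clock" idea (illustrative only, nothing like the actual CP67 code): one hand sweeps every pageable frame in the system, so the replacement choice is made across all users rather than within the faulting user's own frames (local LRU):

  struct frame {
      int referenced;   /* hardware reference bit, tested and reset in the sweep */
      int owner;        /* which virtual machine currently owns the frame        */
  };

  #define NFRAMES 104            /* e.g. 104 pageable pages on the 768k 360/67 */
  static struct frame frames[NFRAMES];
  static int hand;               /* clock hand position */

  /* pick a frame to steal, for any faulting virtual machine */
  int select_replacement(void)
  {
      for (;;) {
          struct frame *f = &frames[hand];
          hand = (hand + 1) % NFRAMES;
          if (f->referenced)
              f->referenced = 0;          /* recently used: one more trip around */
          else
              return (int)(f - frames);   /* steal it, whoever owns it           */
      }
  }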

archived post
https://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?
with copy of response following year to Jim's request
https://www.garlic.com/~lynn/2006w.html#email821019

csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech
paging posts
https://www.garlic.com/~lynn/subtopic.html#clock

other posts mention the global/local LRU, CSC/Grenoble CP67, and Stanford phd:
https://www.garlic.com/~lynn/2025.html#38 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#107 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#34 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024b.html#95 Ferranti Atlas and Virtual Memory
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2022f.html#119 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#45 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2021j.html#18 Windows 11 is now available
https://www.garlic.com/~lynn/2018f.html#63 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#62 LRU ... "global" vs "local"
https://www.garlic.com/~lynn/2018d.html#28 MMIX meltdown
https://www.garlic.com/~lynn/2016c.html#0 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2015c.html#66 Messing Up the System/360
https://www.garlic.com/~lynn/2014m.html#138 How hyper threading works? (Intel)
https://www.garlic.com/~lynn/2014l.html#22 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2013k.html#70 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013i.html#30 By Any Other Name
https://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012g.html#21 Closure in Disappearance of Computer Scientist
https://www.garlic.com/~lynn/2011l.html#6 segments and sharing, was 68000 assembly language programming
https://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)
https://www.garlic.com/~lynn/2008h.html#79 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008c.html#65 No Glory for the PDP-15
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question

--
virtualization experience starting Jan1968, online at home since Mar1970

The Paging Game

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The Paging Game
Date: 24 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#50 The Paging Game

historical reference
https://archive.org/details/ThePagingGame

trivia: In the late 70s & early 80s, I was being blamed for online computer conferencing on the IBM internal network (which started out as the CP67-based scientific center wide-area network centered in Cambridge; larger than the arpanet/internet from just about the beginning until sometime mid/late 80s ... about the same time the internal network was forced to convert to SNA/VTAM). It really took off spring 1981 when I distributed a trip report of a visit to Jim Gray at Tandem (Jim had left IBM fall of 1980 and tried to palm off some amount of stuff on me); only about 300 directly participated but claims were that upwards of 25,000 were reading ... folklore was when the corporate executive committee was told, 5of6 wanted to fire me.

When I went to send a reply and was told that I was prohibited ... I hoped it was executives figuring to punish me and not that they were taking sides in an academic dispute ... it wasn't until nearly a year later that I was allowed to send the response.

from IBM Jargon (copy here)
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf
Tandem Memos -n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... snip ...

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

some recent posts mentioning CP67-based scientific center wide-area network centered in Cambridge
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2025.html#5 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#2 IBM APPN
https://www.garlic.com/~lynn/2024g.html#77 Early Email
https://www.garlic.com/~lynn/2024g.html#40 We all made IBM 'Great'
https://www.garlic.com/~lynn/2024g.html#13 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024f.html#75 Prodigy
https://www.garlic.com/~lynn/2024f.html#60 IBM 3705
https://www.garlic.com/~lynn/2024f.html#48 IBM Telecommunication Controllers
https://www.garlic.com/~lynn/2024d.html#82 APL and REXX Programming Languages
https://www.garlic.com/~lynn/2024c.html#111 Anyone here (on news.eternal-september.org)?

--
virtualization experience starting Jan1968, online at home since Mar1970

Canned Software and OCO-Wars

From: Lynn Wheeler <lynn@garlic.com>
Subject: Canned Software and OCO-Wars
Date: 24 Jan, 2025
Blog: Facebook
note *NOT* JCL, Clists, or Cobol .... when I 1st joined the IBM science center after graduation, one of my hobbies was enhanced production operating systems for internal datacenters, 1st CP67 (the online branch office sales&marketing support HONE systems were one of my first ... and long time ... customers), then moving to VM370. In the decision to add virtual memory to all 370s, the morph of CP67 to VM370 simplified and/or dropped a lot of stuff. I started moving stuff into a VM370R2-base for my internal CSC/VM (including the kernel reorg needed for multiprocessor, but not the actual support). Then I transitioned to a VM370R3-base and started adding multiprocessor support back in ... initially for HONE. US HONE had consolidated all their datacenters in Palo Alto with the largest single-system-image, shared DASD operation with load-balancing and fall-over across the complex. With multiprocessor support back in, they could add a 2nd processor to each system (each 2-cpu operation getting approx twice the throughput of a single cpu)

For some reason, somehow a VM370R3-based CSC/VM (before multiprocessor support) was provided to AT&T Longlines ... which they continued to use, moving to the latest IBM mainframe generation, as well as propagating it around AT&T. In the early 80s, the IBM national marketing rep for AT&T tracks me down looking for help with that ("ancient") CSC/VM. IBM had started shipping the 3081D; the 3081 originally was going to be multiprocessor-only and IBM was concerned that all those AT&T CSC/VM systems (before I did multiprocessor support; similar concern about the ACP/TPF market, since ACP/TPF also didn't have multiprocessor support) would all move to the latest Amdahl single processor. Note the latest Amdahl single processor had a higher MIP rate than the aggregate of the two-processor 3081D. IBM then doubles the 3081 processor cache size for the 3081K (about the same aggregate MIPS as the Amdahl single processor, although for MVS ... IBM documentation was that MVS 2-cpu system throughput was only 1.2-1.5 times a 1-cpu system). Trivia: one of the things I had done as an undergraduate in the 60s was dynamic adaptive resource management and scheduling ... which adapted across a wide range of system processing power (as it went from at least 120KIPS to 50MIPS or more and multiple CPUs).

Part of the issue was AT&T had added significant source modifications to my (pre-multiprocessor) VM370R3-based CSC/VM ... and so it wasn't a straightforward move to either a current VM370 or my current internal VM370. One of their enhancements was "virtual tape" (sort of like IBM's PVM 3270 virtual device support) which allowed a user in one datacenter to run tape applications where the tapes were mounted on a system at a remote datacenter (to do things like offsite backups; they possibly/likely had access to multi-megabit bandwidths)

some claims that customer source mods were at least part of the motivation behind the OCO-wars (object code only), complicating customers' ability to move to latest releases that would have support for the latest hardware.

cambridge scientific center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some OCO-war posts:
https://www.garlic.com/~lynn/2024g.html#37 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#114 REXX
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024d.html#81 APL and REXX Programming Languages
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#111 Copyright Software
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#6 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#113 VM370
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2022e.html#7 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022b.html#118 IBM Disks
https://www.garlic.com/~lynn/2022b.html#30 Online at home
https://www.garlic.com/~lynn/2021k.html#50 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2021.html#14 Unbundling and Kernel Software

--
virtualization experience starting Jan1968, online at home since Mar1970

Canned Software and OCO-Wars

From: Lynn Wheeler <lynn@garlic.com>
Subject: Canned Software and OCO-Wars
Date: 24 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#52 Canned Software and OCO-Wars

As part of trying to hang onto the ACP/TPF market during the 3081 days, VM370 multiprocessor support was modified to help parallelize (single virtual CPU) ACP/TPF processing, improving its throughput ... however the changes degraded the throughput for nearly every other VM370 customer running multiprocessor. I was brought into a large, long-time online CMS gov. agency customer (back to early CP67/CMS days) to try and offset the latest VM370 multiprocessor hacks. While many SHARE
https://www.share.org/
members used a three-letter designation identifying their company, this gov. agency chose "CAD" (folklore says it stood for "Cloak And Dagger") ... which can be seen in VMSHARE posts ... aka TYMSHARE started offering their CMS-based online computer conferencing free to SHARE in Aug1976, as "VMSHARE" ... archive here
http://vm.marist.edu/~vmshare

Eventually IBM started offering the 3083, a 3081 with one of the processors removed. A series of 3083s came out with microcode tweaks tailored for TPF: 3083jx, 3083kx and finally the 9083, hand-picked and running at a faster clock cycle (they also tried running with DAT disabled ... which only ran TPF).

Other trivia: in the 1st part of the 70s, IBM had the Future System project, completely different than 370 and going to completely replace it (during FS, internal politics was killing off 370 efforts; the lack of new 370s during the FS period is claimed to have given the clone 370 makers, including Amdahl, their market foothold). When FS implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
and some details about 3081 from FS technology
http://www.jfsowa.com/computer/memo125.htm

During FS, lots of employees were told that the only way of getting promotions and raises was transfer to FS ... I continued to work on 360&370 stuff all during FS, including periodically ridiculing what they were doing ... which wasn't exactly career enhancing.

At Clemson, there is also info about the death of ACS/360 ... Amdahl had won the battle to make ACS 360-compatible ... then it was killed and Amdahl leaves IBM ... it also gives some of the ACS/360 features that show up more than 20yrs later with ES/9000.
https://people.computing.clemson.edu/~mark/acs_end.html

One of the final nails in the FS coffin was analysis done by the IBM Houston Scientific Center: if 370/195 applications were redone for an FS machine made out of the fastest available technology, they would have the throughput of a 370/145 (about a 30 times slowdown).

SMP, tightly-coupled, shared memory, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Fri, 24 Jan 2025 23:43:57 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Large corporates did, and do, tend to become unwieldy in their development processes. IBM had a long history of doing innovative research (and patenting the results); but if you looked at their actual product line, very little of this innovation actually seemed to make it into that.

One example I recall is SNA, their "Systems Network Architecture". For a long time this was not what we would understand as a "network" at all: it was primarily a way for large, expensive central machines to control remote, cheaper machines at branch offices.

IBM didn't discover peer-to-peer networking until the 1980s, years after other companies were already doing it routinely.


re:
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41
https://www.garlic.com/~lynn/2025.html#42
https://www.garlic.com/~lynn/2025.html#43
https://www.garlic.com/~lynn/2025.html#44
https://www.garlic.com/~lynn/2025.html#45
https://www.garlic.com/~lynn/2025.html#46
https://www.garlic.com/~lynn/2025.html#47
https://www.garlic.com/~lynn/2025.html#48
https://www.garlic.com/~lynn/2025.html#49

In the 70s, about the same time SNA appeared, my wife was co-author of AWP39, "Peer-to-Peer Networking" architecture ... the peer-to-peer qualification was necessary because the communication group had co-opted "network" ... joke was that SNA was "not a System", "not a Network", and "not an Architecture".

My wife was then con'ed into going to POK (IBM high-end mainframe) to be responsible for "loosely-coupled" (mainframe for "cluster") shared DASD (Peer-Coupled Shared Data) architecture. She didn't remain long because 1) periodic battles with the SNA/VTAM forces trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake, until much later with SYSPLEX and Parallel SYSPLEX
https://en.wikipedia.org/wiki/IBM_Parallel_Sysplex
except for IMS hot-standby.

A co-worker at the science center was responsible for the CP67-based (precursor to VM370) wide-area network, that morphs into the corporate internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s, about the time the SNA-org forced the internal network to be converted to SNA/VTAM). The technology was also used for the corporate sponsored univ BITNET (also for a time larger than arpanet/internet):
https://en.wikipedia.org/wiki/BITNET

Account by one of the inventors of GML (in 1969) at the science center:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

... trivia: a decade later GML morphs into ISO standard SGML and after another decade morphs into HTML at CERN; the 1st webserver in the US is at the CERN sister location, Stanford SLAC (on their VM370 system)
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

Edson (passed Aug2020)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

We then transfer out to San Jose Research and in early 80s, I get HSDT, T1 and faster computer links, some amount of conflict with SNA forces (note in 60s, IBM had 2701 telecommunication controller that supported T1 links, however IBM's move to SNA in the mid-70s and associated issues seem to have capped links at 56kbits/sec).

trivia: at the time of the arpanet/internet cut-over to internetworking protocol on 1Jan1983, there were 100 IMPs and approx 255 hosts ... at a time when the internal network was rapidly approaching 1000 hosts all over the world (approval of IMPs somewhat held back arpanet growth; for internal network growth it was a corporate requirement that all links be encrypted ... which could be a real problem with various country gov. agencies, especially when links crossed national boundaries). Archived post with list of world-wide corporate locations that got one or more new host networking connections during 1983:
https://www.garlic.com/~lynn/2006k.html#8

For HSDT, I was involved in working with the NSF director and was supposed to get $20M to interconnect the NSF Supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

One of HSDT's first long-haul T1 links was between the IBM Los Gatos lab (on the west coast) and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston, NY, that had lots of floating point systems boxes that included 40mbyte/sec disk arrays.
https://en.wikipedia.org/wiki/Floating_Point_Systems
Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.
... snip ...

SJMerc article about Edson, "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine),
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

For awhile I reported to the same executive as the person behind AWP164 (which becomes APPN), needling him that he should come work on real networking ... because the SNA forces would never appreciate what he was doing ... the SNA forces veto the announcement of APPN ... and the APPN announcement letter was carefully rewritten to not imply any relationship between APPN and SNA.

SNA forces had been fiercely fighting off client/server and distributed computing and trying to block the announcement of mainframe TCP/IP. When that failed, they change their tactic and claim that since they have corporate strategic responsibility for everything that crosses datacenter walls, it has to be released through them. What ships gets 44kbytes/sec aggregate using nearly a whole 3090 processor. I then do the support for RFC1044 and in some tuning tests at Cray Research between a Cray and a 4341, get sustained 4341 channel I/O throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

In 1988 we get last product we did at IBM, HA/CMP.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
It starts out HA/6000 for the NYTimes to port their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors that have VAXCluster support in the same source base with UNIX (Oracle, Sybase, Ingres, Informix).

Early Jan1992, have meeting with Oracle CEO where IBM/AWD Hester tells Ellison that by mid92 there would be 16-system clusters and by ye92, 128-system clusters. Then late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four systems (we leave IBM a few months later). Note: IBM mainframe DB2 had been complaining if we were allowed to go ahead, it would be years ahead of them.

trivia: when I 1st transferred to IBM SJR, I did some work with Jim Gray and Vera Watson on the original SQL/relational, System/R ... and while the corporation was preoccupied with the next great DBMS ("EAGLE"), we were able to do the tech transfer to Endicott for release as SQL/DS. Then when "EAGLE" implodes there was a request for how fast System/R could be ported to MVS ... which is eventually released as "DB2" (originally for decision support only).

Peer-Coupled Shared Data: architecture posts:
https://www.garlic.com/~lynn/submain.html#shareddata
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
original sql/relational, system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

some recent posts mentioning AWP39
https://www.garlic.com/~lynn/2025.html#0 IBM APPN
https://www.garlic.com/~lynn/2024d.html#69 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#56 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#30 ACP/TPF
https://www.garlic.com/~lynn/2024.html#84 SNA/VTAM
https://www.garlic.com/~lynn/2023f.html#40 Rise and Fall of IBM
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#43 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#25 IBM "nine-net"
https://www.garlic.com/~lynn/2021h.html#90 IBM Internal network

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Management Briefings and Dictionary of Computing

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Management Briefings and Dictionary of Computing
Date: 25 Jan, 2025
Blog: Facebook
Post mentioning in 1972, Learson tries (and fails) to block the bureaucrats, careerists and MBAs from destroying Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
reference to his briefings ... IBM Management Briefings is online at bitsavers

then there is IBM Jargon ... it has been uploaded to the files section for other IBM theme groups ... but also on the web here
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf

one of the entries:
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... snip ...

... late 70s and early 80s, I had been blamed for online computer conferencing ... it really took off spring of 1981 when I distributed trip report to visit Jim Gray at Tandem, only about 300 directly participated but claims of upwards of 25,000 reading (folklore when corporate executive committee was told, 5of6 wanted to fire me).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CMSBACK, WDSF, ADSM, TSM

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CMSBACK, WDSF, ADSM, TSM
Date: 25 Jan, 2025
Blog: Facebook
I had done CMSBACK at SJR in 1979 (before research moved up the hill to Almaden) for internal datacenters and went thru a few internal releases. Then distributed PC and workstation clients were added and released to customers as WDSF
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager

which morphs into ADSM, renamed TSM and then Storage Protect.

backup/archive posts
https://www.garlic.com/~lynn/submain.html#backup

a couple CMSBACK archived emails
https://www.garlic.com/~lynn/2006t.html#email791025
https://www.garlic.com/~lynn/2006w.html#email801211
https://www.garlic.com/~lynn/2010l.html#email830112

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Sat, 25 Jan 2025 12:27:29 -1000
Lars Poulsen <lars@cleo.beagle-ears.com> writes:
Of course, a tight cluster with survivability cannot have a strict hierarchy, since the top position would have to be renegotiated if the "master" node failed.

re:
https://www.garlic.com/~lynn/2025.html#54 Multics vs Unix

... when I was out marketing HA/CMP, I coined the terms disaster survivability and geographic survivability. The IBM S/88 product administrator also started taking us around to their customers ... and got me to write a section for the corporate continuous availability strategy document (it got pulled when both Rochester/AS400 and POK/mainframe complained they couldn't meet the objectives).
https://www.pcmag.com/encyclopedia/term/system88

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
availability posts
https://www.garlic.com/~lynn/submain.html#available

other posts in this thread
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41
https://www.garlic.com/~lynn/2025.html#42
https://www.garlic.com/~lynn/2025.html#43
https://www.garlic.com/~lynn/2025.html#44
https://www.garlic.com/~lynn/2025.html#45
https://www.garlic.com/~lynn/2025.html#46
https://www.garlic.com/~lynn/2025.html#47
https://www.garlic.com/~lynn/2025.html#48
https://www.garlic.com/~lynn/2025.html#49

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Sat, 25 Jan 2025 17:11:24 -1000
Peter Flass <peter_flass@yahoo.com> writes:
It was fun mapping the DCSS to specific memory addresses, and making sure that a program didn't need two mapped to the same address. This, I think, was S/370, so 31-bit (16MB) address space. Didn't DCSS sizes have to be a multiple of 64K? It's been a while, so I don't recall the details.

re:
https://www.garlic.com/~lynn/2025.html#38 Multics vs Unix

370 virtual memory architecture allowed for two page sizes, 2k and 4k ... and two segment sizes, 64k and 1mbyte.

for vm370 CMS, used 4k page size and 64k (16 4k pages) segment size.

as I've periodically mentioned, DCSS ("discontiguous shared segments") was a small piece of my CMS page-mapped filesystem ... which included being able to specify loading a file with pieces as shared segment(s).

370 had a two-level virtual memory table ... a segment table where each entry pointed to that segment's page table (for vm370, 16 4k page entries). In the case of a shared segment ... each individual virtual memory's segment table entry points to a common, shared page table.

in the case of my CMS page-mapped filesystem, filesystem control information specified mapping to individual pages ... or a mapping that specified 16 4k pages to be treated as a "shared segment".

Since the CMS page-mapped filesystem wasn't picked up for VM370R3, all the control information had to be placed somewhere. The VM370 module name was (kernel assembled) DMKSNT, which had SNT entries specifying the number of pages, virtual addresses of the pages, which portion of the virtual pages were to be treated as shared segments, physical disk locations of the saved pages, etc.

A sysadm or sysprog ... would typically load an application into their virtual memory and issue the kernel (privileged) "SAVESYS" command specifying an SNT entry; the pages from their virtual memory would then be written to the specified disk locations. Then users could issue the "LOADSYS" command specifying an SNT entry, which would reverse the process, mapping their virtual memory to the SNT entry's specified pages (and in the case of a shared segment, their corresponding segment table entry/entries mapped to the shared page table(s)).

If wanting to change, add or delete a DMKSNT SNT entry: edit the assembler source, assemble the changed DMKSNT file, rebuild the kernel (w/changed DMKSNT) and re-ipl (note SNT entries were global, system wide).
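
a small sketch in C of the shared-segment mapping (hypothetical structures; vm370's real tables are hardware-defined): different address spaces' segment table entries simply point at one common page table:

  #define PAGES_PER_SEG  16                 /* 16 x 4k pages = 64k segment */
  #define SEGS_PER_SPACE 256                /* 16mbyte / 64k               */

  struct pte        { unsigned frame : 12; unsigned invalid : 1; };
  struct page_table { struct pte pte[PAGES_PER_SEG]; };
  struct seg_entry  { struct page_table *pt; };
  struct addr_space { struct seg_entry seg[SEGS_PER_SPACE]; };

  /* LOADSYS-like step: map segment 'segno' of a user's address space onto
     an already-saved shared page table -- every space doing this resolves
     those 16 pages through the same page table (and so the same frames) */
  void map_shared_segment(struct addr_space *as, int segno,
                          struct page_table *shared_pt)
  {
      as->seg[segno].pt = shared_pt;
  }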

CMS memory mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

other posts in this thread
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41
https://www.garlic.com/~lynn/2025.html#42
https://www.garlic.com/~lynn/2025.html#43
https://www.garlic.com/~lynn/2025.html#44
https://www.garlic.com/~lynn/2025.html#45
https://www.garlic.com/~lynn/2025.html#46
https://www.garlic.com/~lynn/2025.html#47
https://www.garlic.com/~lynn/2025.html#48
https://www.garlic.com/~lynn/2025.html#49
https://www.garlic.com/~lynn/2025.html#54
https://www.garlic.com/~lynn/2025.html#57

other posts mentioning dmksnt
https://www.garlic.com/~lynn/2023e.html#48 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023b.html#49 CP67 "IPL-by-name"
https://www.garlic.com/~lynn/2021e.html#25 rather far from Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2012o.html#43 Regarding Time Sharing
https://www.garlic.com/~lynn/2012f.html#50 SIE - CompArch
https://www.garlic.com/~lynn/2011b.html#20 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#19 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011.html#74 shared code, was Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2011.html#21 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2009j.html#77 More named/shared systems
https://www.garlic.com/~lynn/2009j.html#76 CMS IPL (& other misc)
https://www.garlic.com/~lynn/2006.html#13 VM maclib reference
https://www.garlic.com/~lynn/2004o.html#11 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2003o.html#42 misc. dmksnt
https://www.garlic.com/~lynn/2003g.html#27 SYSPROF and the 190 disk
https://www.garlic.com/~lynn/2003f.html#32 Alpha performance, why?
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Sun, 26 Jan 2025 08:52:26 -1000
jgd@cix.co.uk (John Dallman) writes:
On the IBM 360 family, working with multiple memory partitions required 256+KB RAM, which was a lot in the 1960s. Nowadays, that's L2 cache on a mobile device processor, but it was a fairly big mainframe 55 years ago, and the programs it ran were small enough - mostly written in assembler - to fit several into memory.

Starting in the middle 70s, I would periodically claim that the original 1965 360 CKD DASD offloaded function by searching for information in the I/O subsystem rather than keeping it cached in limited/small real storage ... but by the mid-70s that trade-off was starting to invert. CKD (multi-track search) searching of directories and data could involve multiple full-cylinder searches, each one taking 1/3sec, which would not only busy the disk but also the controller (blocking access to all other disks on that controller) and the channel (blocking access to all other disks on that channel); limited memory meant that the matching compare had to refetch the information from system memory for each compare ... limiting the amount of concurrent I/O possible.
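
back-of-the-envelope numbers (a sketch in C; 3330-class geometry assumed, since the post doesn't name a drive) for why a full-cylinder multi-track search ties up disk, controller and channel:

  #include <stdio.h>

  int main(void)
  {
      double tracks_per_cyl = 19.0;    /* assumed 3330-class cylinder     */
      double ms_per_rev     = 16.7;    /* 3600 rpm -> ~16.7ms/revolution  */
      double search_ms      = tracks_per_cyl * ms_per_rev;   /* ~317ms    */

      printf("one full-cylinder search: ~%.0f ms (~1/3 sec)\n", search_ms);
      printf("searches/sec one channel can sustain: ~%.1f\n",
             1000.0 / search_ms);                           /* ~3 per sec */
      return 0;
  }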

after transferring to SJR in 77, I got to wander around lots of (IBM & non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. At the time they had 7x24, prescheduled, mainframe testing ... and had recently tried MVS ... but it had a 15min MTBF (requiring manual re-ipl) in that environment. I offered to rewrite the I/O subsystem making it bullet proof and never crash, allowing any amount of concurrent disk testing, greatly improving productivity. I also wrote an internal research report on all the things that needed to be done (to the I/O subsystem) and happened to mention the MVS MTBF (bringing down the wrath of the MVS organization on my head).

I also worked with Jim Gray and Vera Watson on the original SQL/relational implementation, System/R. The company's major DBMS product, IMS, was stressing that System/R was significantly slower (than IMS). IMS kept physical disk addresses internally in data records for related data, and IMS would say that System/R required twice the disk space (for data indexes) and 4-5 times the I/O (traversing the indexes to get to the physical disk records). The System/R counter was that IMS had much greater sysadmin human time ... managing exposed data record addresses in data records.

By the early 80s, that had started to change: disk price/bit was dropping significantly (offsetting the doubled space requirement for indexes) and system memory sizes were growing, allowing indexes to be cached and significantly reducing the I/O ... while overall dropping dataprocessing costs were greatly expanding computer installs and the use of DBMS ... putting stress on available human skills and resources.

Early 80s, I wrote a tome that the relative system disk throughput had dropped by an order of magnitude since 1964 (disks had gotten 3-5 times faster while systems got 40-50 times faster), requiring an increasing amount of concurrent I/O (and multi-tasking) ... but the extensive use of CKD multi-track search limited its effectiveness. A disk division executive took exception and assigned the division performance group to refute the claims ... after a few weeks, they came back and explained that I had slightly understated the problem.

There was a recent observation that the (current) latency of cache misses and main memory operations, when measured in count of current processor cycles, is comparable to 60s disk latency when measured in count of 60s processor cycles (memory is the new disk).

This is also the explanation given for the decision to add virtual memory to all 370s: MVT needed an increasing number of concurrent tasks to maintain system utilization and justification, but MVT storage management was so bad that region storage sizes had to be specified four times larger than actually used, limiting a typical 1mbyte 370/165 to four tasks. Going to a 16mbyte virtual address space (VS2/SVS) would allow the number of concurrently executing tasks to be increased by nearly a factor of four (with little or no paging) ... although capped at 15 because of the 4bit storage protect key keeping regions separated/isolated from each other. As systems increased in processing power, the 15 cap had to be bypassed by going to a separate virtual address space for each region (VS2/MVS).
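
the rough arithmetic behind that justification, as a sketch in C (numbers taken from the paragraph above):

  #include <stdio.h>

  int main(void)
  {
      int mvt_regions = 4;                 /* ~4 regions fit a 1mbyte 370/165,
                                              since regions were specified ~4x
                                              larger than actually used        */
      int svs_regions = mvt_regions * 4;   /* 16mbyte virtual: ~4x more tasks
                                              with little or no paging         */
      if (svs_regions > 15)
          svs_regions = 15;                /* 4-bit storage key caps it at 15  */

      printf("MVT (1mbyte real): ~%d regions; VS2/SVS: ~%d regions\n",
             mvt_regions, svs_regions);
      return 0;
  }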

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
original sql/relational System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

a few posts mentioning both disk division executive wanting to refute my claims and decision to add virtual memory to all 370s:
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2017j.html#96 thrashing, was Re: A Computer That Never Was: the IBM 7095

other posts in this thread
https://www.garlic.com/~lynn/2025.html#38
https://www.garlic.com/~lynn/2025.html#39
https://www.garlic.com/~lynn/2025.html#40
https://www.garlic.com/~lynn/2025.html#41
https://www.garlic.com/~lynn/2025.html#42
https://www.garlic.com/~lynn/2025.html#43
https://www.garlic.com/~lynn/2025.html#44
https://www.garlic.com/~lynn/2025.html#45
https://www.garlic.com/~lynn/2025.html#46
https://www.garlic.com/~lynn/2025.html#47
https://www.garlic.com/~lynn/2025.html#48
https://www.garlic.com/~lynn/2025.html#49
https://www.garlic.com/~lynn/2025.html#54
https://www.garlic.com/~lynn/2025.html#57
https://www.garlic.com/~lynn/2025.html#58

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Sun, 26 Jan 2025 12:24:09 -1000
Bob Eager <news0009@eager.cx> writes:
Later on, Amdahl had it too. I was heavily involved in porting a non-IBM operating system to the XA architecture, but from Amdahl. Some differences of course, but the main one was the different I/O architecture. We had been limited by the 24 bit address space, but 31 bits was fine.

As I recall, it was a 4381.


Tom Simpson (of Simpson & Crabtree, responsible for HASP spooling), in the early 70s had done RASP, an MFT-11 adaptation for virtual memory, but with a page-mapped filesystem (rather than retaining 360 channel programs, as VS1 did) ... which IBM wasn't interested in. He leaves IBM for Amdahl and recreates RASP in a "clean room" from scratch (IBM sued, but court-appointed experts were only able to find one or two similar instruction sequences).

Amdahl people liked to keep me up on the latest Amdahl gossip after the (SLAC-hosted) monthly BAYBUNCH meetings.

archived post with some email exchange about decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73

long list in this thread
https://www.garlic.com/~lynn/2024g.html#90 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#91 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#92 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#94 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#100 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#102 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#104 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#38 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#39 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#40 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#41 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#42 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#44 vfork history, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#45 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#46 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#47 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#48 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#49 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#54 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#57 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#58 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#59 Multics vs Unix

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Sun, 26 Jan 2025 12:28:35 -1000
John Levine <johnl@taugh.com> writes:
It was 370/XA, according to the manual I have here.

370 was the follow-on to 360 ... then increasing the multiprogramming level motivated quickly adding virtual memory to all 370s (and CP67 was able to morph into VM370 for virtual machines and online interactive use).

but also relatively quickly, the "Future System" effort appeared and 370 efforts were being killed off (lack of new IBM 370 during FS is credited with giving the 370 system clone makers their market foothold).
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

When FS imploded, there was a mad rush to get stuff back into the product pipelines, including kicking off the quick & dirty 3033 (remapping 168 logic to 20% faster chips) and 3081 (leveraging some warmed-over FS hardware, designed for 370/XA; a lot of the architecture was designed to address MVS shortcomings, but pending availability of MVS/XA it ran in 370 mode). trivia: 370/XA was referred to as "811" for the Nov1978 publication date of the internal design & architecture documents.

Part of 811 (370/XA) was that, as DASD I/O increasingly became the major system throughput bottleneck, the MVS I/O redrive pathlength (from the ending interrupt of the previous I/O to the start of the next queued operation) was several thousand instructions (leaving the device idle). I recently mentioned getting to play disk engineer and rewriting the I/O supervisor for integrity and availability ... I also cut the redrive pathlength to one-to-two hundred instructions ... being able to demonstrate 370 redrive very close to the dedicated 811 hardware architecture (which again brought the wrath of the MVS organization down on my head).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
posts mentioning getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

longer list in this thread
https://www.garlic.com/~lynn/2024g.html#90 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#91 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#92 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#94 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#100 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#102 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#104 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#38 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#39 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#40 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#41 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#42 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#44 vfork history, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#45 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#46 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#47 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#48 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#49 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#54 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#57 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#58 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#59 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#60 old pharts, Multics vs Unix

--
virtualization experience starting Jan1968, online at home since Mar1970

Grace Hopper, Jean Sammat, Cobol

From: Lynn Wheeler <lynn@garlic.com>
Subject: Grace Hopper, Jean Sammat, Cobol
Date: 26 Jan, 2025
Blog: Facebook
Jean Sammet was in the IBM Boston Programming Office on the 3rd flr of 545 tech sq. Some of the MIT CTSS/7094 people went to the 5th flr for Multics and others went to the 4th flr for the IBM Cambridge Scientific Center. CSC was expecting MULTICS to be awarded to IBM (CSC), but instead it went to GE. CSC does (virtual machine) CP40/CMS on a 360/40 modified with virtual memory, which morphs into CP67/CMS when the 360/67 (standard with virtual memory) becomes available. Later, when IBM decided to add virtual memory to all 370s, some of the CSC members split off for (virtual machine) VM370, taking over BPS on the 3rd flr.

Sometimes I would bring in my kids on weekends to play spacewar (somebody had ported PDP1 spacewar to CSC's IBM 1130m4/2250 that was in the machine room on 2nd flr)
https://en.wikipedia.org/wiki/Spacewar!

... and Jean would come looking for me to complain my kids were running up and down halls making noise.

both Jean Sammet
https://en.wikipedia.org/wiki/Jean_E._Sammet
and Nat Rochester
https://en.wikipedia.org/wiki/Nathaniel_Rochester_(computer_scientist)
were at BPS on the 3rd flr

some other trivia:
https://en.wikipedia.org/wiki/Bob_Bemer
He served on the committee which amalgamated the design for his COMTRAN language with Grace Hopper's FLOW-MATIC and thus produced the specifications for COBOL.

https://en.wikipedia.org/wiki/FLOW-MATIC
Bob Bemer history page (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20180402200149/http://www.bobbemer.com/HISTORY.HTM
360s were originally to be ASCII machines ... but the ASCII unit record gear wasn't ready ... so had to use old tab BCD gear (and "temporarily" EBCDIC) ... biggest computer goof ever:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some other posts mentioning Hopper, Sammet, Cobol
https://www.garlic.com/~lynn/2024e.html#97 COBOL history, Article on new mainframe use
https://www.garlic.com/~lynn/2016d.html#35 PL/I advertising
https://www.garlic.com/~lynn/2010e.html#79 history of RPG and other languages, was search engine history
https://www.garlic.com/~lynn/2009k.html#57 COBOL: 50 not out

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Mon, 27 Jan 2025 07:09:48 -1000
Bob Eager <news0009@eager.cx> writes:
THat's interesting. In my case, the operating system I was porting was a university written one, from the ground up.

I remember having a problem testing it on VM/XA. The initial IPL brought in a large program (a mini version of the full system), and it overwrote the code that VM loaded to emulate the IPL hardware. I had to load in portions around that.


re:
https://www.garlic.com/~lynn/2025.html#60 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#61 old pharts, Multics vs Unix

About the same time there was "gold" from a recent univ. graduate (I had tried to get IBM to hire him, but no takers) who had ported Unix to 370 (the "gold" code name taken from "Au" - "Amdahl Unix").

some posts mentioning RASP and Amdahl "Gold"
https://www.garlic.com/~lynn/2024.html#27 HASP, ASP, JES2, JES3
https://www.garlic.com/~lynn/2020.html#33 IBM TSS
https://www.garlic.com/~lynn/2018d.html#93 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017g.html#102 SEX
https://www.garlic.com/~lynn/2017d.html#76 Mainframe operating systems?
https://www.garlic.com/~lynn/2013n.html#92 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013n.html#24 Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2012f.html#78 What are you experiences with Amdahl Computers and Plug-Compatibles?
https://www.garlic.com/~lynn/2012.html#67 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2010o.html#0 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2010i.html#44 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2009o.html#47 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted
https://www.garlic.com/~lynn/2006w.html#24 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Mon, 27 Jan 2025 07:30:47 -1000
cross@spitfire.i.gajendra.net (Dan Cross) writes:
What OS, if I may ask? MTS, maybe?

re:
https://www.garlic.com/~lynn/2025.html#60 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#61 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#63 old pharts, Multics vs Unix

during IBM's Future System period (and internal politics killing off 370 efforts, giving clone 370 system makers their market foothold) I had recently joined IBM and got to continue going to SHARE and visiting customers.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
https://en.wikipedia.org/wiki/IBM_Future_Systems_project

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

The director of one of the largest (east coast) financial datacenters liked me to stop by and talk technology. At some point the IBM branch manager horribly offended the customer and, in retaliation, the customer ordered an Amdahl system (it would be a lone Amdahl in a vast sea of blue systems). This was in the period when Amdahl was still primarily selling into the univ/technical/scientific market and had yet to make a sale to a commercial, "true blue" customer. I was asked to go onsite at the customer for 6-12 months (apparently to help obfuscate why the customer was ordering an Amdahl system). I talk it over with the customer and decide not to accept IBM's offer. I was then told the branch manager was a good sailing buddy of IBM's CEO and if I didn't do this, I could forget career, promotions, raises.

MTS trivia:
https://en.wikipedia.org/wiki/Michigan_Terminal_System
https://web.archive.org/web/20221216212415/http://archive.michigan-terminal-system.org/
https://web.archive.org/web/20050212073808/www.itd.umich.edu/~doc/Digest/0596/feat01.html
https://web.archive.org/web/20050212073808/www.itd.umich.edu/~doc/Digest/0596/feat02.html
https://web.archive.org/web/20050212183905/www.itd.umich.edu/~doc/Digest/0596/feat03.html
http://www.eecis.udel.edu/~mills/gallery/gallery7.html
http://www.eecis.udel.edu/~mills/gallery/gallery8.html

MTS folklore is that it started out as scaffolding built off MIT Lincoln Labs' LLMPS (Lincoln Labs being the 2nd CP67 installation, after CSC itself)
https://web.archive.org/web/20200926144628/michigan-terminal-system.org/discussions/anecdotes-comments-observations/8-1someinformationaboutllmps

posts mentioning Amdahl and MTS:
https://www.garlic.com/~lynn/2024c.html#50 third system syndrome, interactive use, The Design of Design
https://www.garlic.com/~lynn/2024b.html#65 MVT/SVS/MVS/MVS.XA
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017d.html#75 Mainframe operating systems?
https://www.garlic.com/~lynn/2016d.html#13 What Would Be Your Ultimate Computer?
https://www.garlic.com/~lynn/2013l.html#34 World's worst programming environment?
https://www.garlic.com/~lynn/2013l.html#22 Teletypewriter Model 33
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2010p.html#42 Which non-IBM software products (from ISVs) have been most significant to the mainframe's success?
https://www.garlic.com/~lynn/2010i.html#44 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2007q.html#15 The SLT Search LisT instruction - Maybe another one for the Wheelers
https://www.garlic.com/~lynn/2006o.html#36 Metroliner telephone article
https://www.garlic.com/~lynn/2006f.html#19 Over my head in a JES exit
https://www.garlic.com/~lynn/2006e.html#31 MCTS
https://www.garlic.com/~lynn/2006c.html#18 Change in computers as a hobbiest
https://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold
https://www.garlic.com/~lynn/2005k.html#20 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2004n.html#34 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
https://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
https://www.garlic.com/~lynn/2003l.html#41 Secure OS Thoughts
https://www.garlic.com/~lynn/2003j.html#54 June 23, 1969: IBM "unbundles" software
https://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Mon, 27 Jan 2025 14:27:42 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Was it running on bare metal or under the VM hypervisor?

Because I believe Linux on IBM mainframes to this day runs as a VM, not on bare metal. Not sure why.


re:
https://www.garlic.com/~lynn/2025.html#63 old pharts, Multics vs Unix

The 80s claim was that both IBM AIX/370 (from UCLA Locus) and Amdahl's GOLD ran under VM370 because the field hardware support CEs demanded mainframe EREP ... adding mainframe EREP to them would have been a several-times-larger effort than just getting them to run on the mainframe in the first place (so they ran under VM370, relying on VM370 to provide the mainframe EREP).

other
https://www.garlic.com/~lynn/2025.html#60 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#61 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#64 old pharts, Multics vs Unix

--
virtualization experience starting Jan1968, online at home since Mar1970

Multics vs Unix

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Mon, 27 Jan 2025 14:55:58 -1000
Peter Flass <peter_flass@yahoo.com> writes:
At the point I got into it, I don't recall any messing with the nucleus being required. Of course the disadvantage was that, like windows shared libraries, DCSS could only need loaded at the address they occupied when saved.

re:
https://www.garlic.com/~lynn/2025.html#58 Multics vs Unix

at some point between vm370 and now, they converted DCSS kernel DMKSNT savesys and loadsys to use the spool file system.

The OS/360 convention/heritage was that executable images contained address constants, with RLD (relocation) information appended, so that all address constants could be turned into fixed values at the point the linkedit/loader brought the executable image into memory.

The original/early CMS implementation would "save" the memory image of a loaded executable image as a "MODULE" filetype. Since a "MODULE" filetype no longer required any storage modifications ... it could be R/O shared across multiple virtual address spaces ... but at the same fixed address location across all virtual address spaces.

I would do some hacks of OS/360-convention software to change all the "relative" adcons to "fixed" displacements ... which would be added at execution time to the location specific to that virtual address space (sort of simulating the TSS/360 convention of location-independent code) ... enabling the specification of a location-independent option ... where the image could be used at a location other than the default.
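
A toy sketch of the two conventions (Python standing in for the loader and the generated code; the image contents, offsets and addresses are made up for illustration):

# OS/360-style: the image holds address constants; RLD entries tell the loader
# which words to fix up by adding the load address when bringing the image in.
image = [0, 0, 40, 44]        # words; the last two are adcons pointing at offsets 40 and 44
rld   = [2, 3]                # word indexes needing relocation

def os360_load(image, rld, load_addr):
    img = list(image)
    for i in rld:
        img[i] += load_addr   # adcons become fixed values for THIS load address
    return img                # storage-modified copy -> can't be shared R/O across
                              # address spaces that load it at different addresses

# TSS/360-style location-independent convention (what the hack simulated):
# keep only displacements in the image and add the load address at execution time.
def pic_fetch(image, i, load_addr):
    return load_addr + image[i]   # no stores into the image -> the same R/O copy
                                  # works at whatever address each space uses

print(os360_load(image, rld, 0x20000))                              # patched copy for one address
print(pic_fetch(image, 2, 0x20000), pic_fetch(image, 2, 0x80000))   # same image, two addresses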

For non-shared (not R/O) use, an image could execute store operations into its own image (true of both the CMS "MODULE" filetype and DCSS savesys/loadsys). An early variation was booting MVT in a virtual machine, stopping it near the end of the MVT IPL and doing a savesys (using savesys/loadsys sort of like an image checkpoint) for fast reboot (akin to some of the current PC fast container bringups).

posts mentioning CMS page mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap
posts mentioning page-mapped filesysten and location independent code
https://www.garlic.com/~lynn/submain.html#adcon

other posts mentioning DCSS & DMKSNT
https://www.garlic.com/~lynn/2023e.html#48 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2021e.html#25 rather far from Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2012f.html#50 SIE - CompArch
https://www.garlic.com/~lynn/2011b.html#20 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#19 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011.html#74 shared code, was Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2009j.html#77 More named/shared systems
https://www.garlic.com/~lynn/2009j.html#76 CMS IPL (& other misc)
https://www.garlic.com/~lynn/2006.html#13 VM maclib reference
https://www.garlic.com/~lynn/2003g.html#27 SYSPROF and the 190 disk

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Mon, 27 Jan 2025 15:21:39 -1000
Lynn Wheeler <lynn@garlic.com> writes:
The 80s claim was that both IBM AIX/370 (from UCLA Locus) and Amdahl's GOLD ran under VM370 because the field hardware support CEs demanded mainframe EREP ... adding mainframe EREP to them would have been a several-times-larger effort than just getting them to run on the mainframe in the first place (so they ran under VM370, relying on VM370 to provide the mainframe EREP).

re:
https://www.garlic.com/~lynn/2025.html#65 old pharts, Multics vs Unix

In gossiping with Amdahl people I mentioned that IBM had a special project for AT&T where IBM had done a stripped-down TSS/370 kernel called SSUP (which included mainframe EREP), on top of which the higher-level part of a UNIX kernel was being layered ... besides effectively adding mainframe EREP (more device support, etc) to the UNIX kernel (for running on bare hardware), it also got mainframe multiprocessor support.

I ask if Amdahl might consider doing something with GOLD and possibly Simpson's Amdahl RASP

posts mentioning tss370/ssup work for bell/at&t:
https://www.garlic.com/~lynn/2024.html#15 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2022c.html#42 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2019d.html#121 IBM Acronyms
https://www.garlic.com/~lynn/2017d.html#82 Mainframe operating systems?
https://www.garlic.com/~lynn/2014f.html#74 Is end of mainframe near ?
https://www.garlic.com/~lynn/2010o.html#0 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2010c.html#43 PC history, was search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2008l.html#82 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2007b.html#3 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2006t.html#17 old Gold/UTS reference
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Wed, 29 Jan 2025 07:34:05 -1000
Dan Espen <dan1espen@gmail.com> writes:
When working on MVS and needing Unix, it makes the most sense to use z/OS Unix. Unix runs in a TSO type environment providing a very functional Unix environment.

re:
https://www.garlic.com/~lynn/2025.html#60 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#61 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#63 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#64 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#65 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#67 old pharts, Multics vs Unix

Late 80s, an IBM senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing data fleeing datacenters to more distributed-computing-friendly platforms, with a drop in disk sales. They came up with a number of solutions that were all being vetoed by the communication group (with their corporate strategic ownership of everything that crossed the datacenter walls, fiercely fighting off client/server and distributed computing).

The disk division executive responsible for software was coming up with some work-arounds to the communication group, including investing in distributed computing startups that would use IBM disks (he would periodically ask us to visit his investments to see if we could offer some help). He also paid for development of a (UNIX) POSIX implementation for MVS (being software, it wasn't an IBM physical hardware transmission product the communication group could veto).

a couple years later, IBM has one of the largest losses in the history of US corporations (wasn't just disks, impacting whole mainframe datacenter market) and was being re-orged into the 13 "baby blues" (take-off on AT&T "baby bells" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk (hdqtrs) asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup (but it isn't long before the disk division is gone).

posts mentioning communication group fighting off client/server & distributed computing
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Thu, 30 Jan 2025 06:37:30 -1000
Dan Espen <dan1espen@gmail.com> writes:
One a mainframe, there are a few issues to deal with to run Unix. The common use terminal, a 3270 is not character at a time, data is transferred in blocks with a pretty complex protocol. z/OS unix couldn't do things like run Emacs on a 3270 but it did a reasonably good job of providing a working stdin/stdout.

re:
https://www.garlic.com/~lynn/2025.html#60 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#61 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#63 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#64 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#65 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#67 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#68 old pharts, Multics vs Unix

The 70s 3272/3277 had .086sec hardware response ... then came the 3274/3278, where lots of electronics were moved out of the terminal back into the 3274 controller (reducing 3278 manufacturing cost) ... significantly driving up the coax protocol chatter and latency ... and hardware response went to .3sec-.5sec (depending on amount of data transferred). This was in the period of studies showing improved human productivity with .25sec "response" (as seen by the human) ... with 3272/3277 that required .164sec system response (plus the terminal hardware's .086sec) to give .25sec human response.

After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters. Circa 1980, a rare MVS/TSO system was lucky to see even 1sec interactive system response (and nearly all were much worse) ... some number of internal VM370 systems were claiming .25sec interactive system response ... but I had lots of internal systems with .11sec interactive system response (giving .196sec response seen by the human w/3277).

Letters to the 3278 product administrator complaining about the 3278 for interactive computing got a reply that the 3278 wasn't intended for interactive computing, but for "data entry" (aka electronic keypunch).

Later IBM/PC hardware 3277 emulation cards had 4-5 times the upload/download throughput of 3278 emulation cards.

However, all 3270s were half-duplex and if you were unfortunate enough to hit a key at the same time the system went to write to the screen, it would lock the keyboard and you would have to stop and hit the reset key. YKT developed a FIFO box for the 3277: unplug the keyboard from the 3277 head, plug the FIFO box into the head and plug the 3277 keyboard into the FIFO box ... eliminating the unfortunate keyboard lock.

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370/CMS, VMFPLC

From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370/CMS, VMFPLC
Date: 30 Jan, 2025
Blog: Facebook
trivia: VMFPLC source wasn't part of the VM370/CMS distribution ... then mid-70s, the head of POK managed to convince corporate to kill VM370, shutdown the development group and transfer all the people to POK for MVS/XA. With the shutdown of the development group lots of stuff was trashed ... including VMFPLC (they also had a huge amount of CMS enhancements that disappeared with the development group shutdown; VAX/VMS was in its infancy at the time and, for those that managed to escape the transfer to POK, the joke was that the head of POK was a major contributor to VAX/VMS). Endicott finally managed to save the VM370 product mission (for the mid-range) ... but for things like VMFPLC they had to reverse engineer the VMFPLC tape format for VMFPLC2.

I did have a source copy of the original VMFPLC ... that I had modified for VMXPLC. I increased the maximum physical tape block size (to cut down on wasted interrecord gaps); also, the CMS FST had been a separate record in front of each new CMS file ... I appended it instead to each file's first data record (eliminating that wasted interrecord gap). Also, since I had done a page-mapped CMS filesystem that had significantly higher throughput (a small subset of the CP support was released in VM/370 R3 as DCSS), I forced all buffers to page-aligned boundaries.
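
A rough sketch of where the savings come from (the density, gap size and block sizes here are assumed round numbers for illustration, not taken from the post; the two changes modeled are the larger physical blocks and folding the FST into the first data record):

import math

# assumed round numbers purely for illustration
BPI = 6250        # tape recording density, bytes/inch
GAP = 0.3         # interrecord gap, inches per physical block

def tape_inches(file_bytes, block_size, separate_fst=True):
    blocks = math.ceil(file_bytes / block_size)
    if separate_fst:
        blocks += 1                          # FST written as its own tiny record
    return file_bytes / BPI + blocks * GAP   # data plus one gap per physical record

f = 200 * 1024                                                  # one 200kbyte CMS file (hypothetical)
print(round(tape_inches(f, 4 * 1024), 1))                       # small blocks + separate FST record
print(round(tape_inches(f, 60 * 1024, separate_fst=False), 1))  # bigger blocks, FST folded into first data record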

I also used VMXPLC for CMSBACK, which I did in 1979 ... and which later morphs into WDSF, ADSM, TSM, Storage Protect, etc.

backup, archive, cmsback, and/or storage management posts
https://www.garlic.com/~lynn/submain.html#backup

some specific posts mentioning VMFPLC, VMXPLC, and CMSBACK
https://www.garlic.com/~lynn/2023c.html#104 IBM Term "DASD"
https://www.garlic.com/~lynn/2021j.html#22 IBM 3420 Tape
https://www.garlic.com/~lynn/2014b.html#92 write rings
https://www.garlic.com/~lynn/2012k.html#76 END OF FILE
https://www.garlic.com/~lynn/2009.html#17 Magnetic tape storage
https://www.garlic.com/~lynn/2006t.html#24 CMSBACK
https://www.garlic.com/~lynn/2001n.html#92 "blocking factors" (Was: Tapes)

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370/CMS, VMFPLC

From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370/CMS, VMFPLC
Date: 30 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#70 VM370/CMS, VMFPLC

Internal IBM science center and IBM research reports: when I transferred out to SJR in the 2nd half of the 70s, I got to wander around (IBM and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/disk product test across the street. They were doing prescheduled, 7x24, stand-alone testing and mentioned they had recently tried MVS, but it had 15min MTBF (in that environment), requiring manual re-IPL. I offered to rewrite the I/O supervisor to be bullet proof and never fail, allowing any amount of on-demand, concurrent testing ... greatly improving productivity. I then wrote an (internal) IBM research report describing all the work necessary and happened to mention the MVS 15min MTBF ... which brings the wrath of the MVS organization down on my head.

I co-authored a number of "public" research reports that had to be reviewed by the IBM senior tech editor in San Jose ... when he retired, he sent me a copy of all my papers in his files and a note saying he never saw such a constant/nonstop flow of executive excuses why they weren't ready for publication.

Mar/Apr '05 eserver magazine article referencing my archived postings website (although it garbled some of the details)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
here
https://www.garlic.com/~lynn/

a more recent tome
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
getting to play disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Fri, 31 Jan 2025 08:30:10 -1000
Dan Espen <dan1espen@gmail.com> writes:
You'd buy a mainframe if you accumulated billions of dollars worth of software developed to only to run on mainframes.

IBM is still selling mainframe hardware.


re:
https://www.garlic.com/~lynn/2025.html#60 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#61 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#63 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#64 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#65 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#67 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#68 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix

turn of century, mainframe hardware was a few percent of IBM revenue and dropping. a decade ago, mainframe hardware was a couple percent of revenue and still dropping ... however the mainframe group accounted for 25% of revenue and 40% of profit ... nearly all software and services.

past posts mentioning mainframe hardware and mainframe group revenue
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2022f.html#68 Security Chips and Chip Fabs
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#12 What is IBM SNA?
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#71 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#45 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022e.html#35 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022d.html#70 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2022c.html#111 Financial longevity that redhat gives IBM
https://www.garlic.com/~lynn/2022c.html#67 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021g.html#24 Big Blue's big email blues signal terminal decline - unless it learns to migrate itself
https://www.garlic.com/~lynn/2021g.html#18 IBM email migration disaster
https://www.garlic.com/~lynn/2021e.html#68 Amdahl
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2017h.html#95 PDP-11 question
https://www.garlic.com/~lynn/2017h.html#61 computer component reliability, 1951
https://www.garlic.com/~lynn/2017g.html#103 SEX
https://www.garlic.com/~lynn/2017d.html#17 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2014m.html#170 IBM Continues To Crumble
https://www.garlic.com/~lynn/2013h.html#40 The Mainframe is "Alive and Kicking"

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix vs mainframes

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix vs mainframes
Newsgroups: alt.folklore.computers
Date: Fri, 31 Jan 2025 08:53:26 -1000
John Levine <johnl@taugh.com> writes:
I agree that mainframes is a legacy business but it's one that still has a long life aheade it it.

re:
https://www.garlic.com/~lynn/2025.html#72 old pharts, Multics vs Unix

IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims. Lawsuit accuses Big Blue of cheating investors by shifting systems revenue to trendy cloud, mobile tech
https://www.theregister.com/2022/04/07/ibm_securities_lawsuit/
IBM has been sued by investors who claim the company under former CEO Ginni Rometty propped up its stock price and deceived shareholders by misclassifying revenues from its non-strategic mainframe business - and moving said sales to its strategic business segments - in violation of securities regulations.
... snip ...

past posts mentioning the above
https://www.garlic.com/~lynn/2024g.html#34 IBM and Amdahl history (Re: What is an N-bit machine?)
https://www.garlic.com/~lynn/2024f.html#121 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#108 Father, Son & CO. My Life At IBM And Beyond
https://www.garlic.com/~lynn/2024e.html#124 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#77 The Death of the Engineer CEO
https://www.garlic.com/~lynn/2024e.html#51 Former AMEX President and New IBM CEO
https://www.garlic.com/~lynn/2024.html#120 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023c.html#72 Father, Son & CO
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
https://www.garlic.com/~lynn/2022c.html#98 IBM Systems Revenue Put Into a Historical Context
https://www.garlic.com/~lynn/2022c.html#45 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix vs mainframes

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix vs mainframes
Newsgroups: alt.folklore.computers
Date: Fri, 31 Jan 2025 12:59:08 -1000
re:
https://www.garlic.com/~lynn/2025.html#73 old pharts, Multics vs Unix vs mainframes

periodically reposted, including in afc a couple years ago

Industry benchmark ... number of program iterations compared to a reference platform. Early mainframe numbers are actual benchmarks ... later mainframe numbers are derived from IBM pubs giving percent difference from the previous generation (similar to the most recent mainframe revenue, derived from percent change).

Early 90s:
•eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
•RS6000/990 : 126MIPS; 16-way cluster: 2016MIPS, 128-way cluster: 16,128MIPS (16.128BIPS)


Then came Somerset/AIM (Apple, IBM, Motorola): single-chip Power/PC, as well as the Motorola 88k bus supporting cache consistency for multiprocessor operation. Also the new i86/Pentium generation where i86 instructions are hardware-translated to RISC micro-ops for actual execution (negating the RISC system advantage compared to i86).
•1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000 z900 processor)
•1999 single Pentium3 hits 2,054MIPS (twice PowerPC and 13times each Dec2000 z900 processor).


Mainframe this century:
z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS (1000MIPS/proc), Sep2019
z16, 200 processors, 222BIPS (1111MIPS/proc), Sep2022


Note max configured z196 (& 50BIPS) went for $30M, at same time E5-2600 server blade (two 8core XEON & 500BIPS) had IBM base list price of $1815 (before industry press announced server chip vendors were shipping half the product directly to cloud megadatacenters for assembly at 1/3rd price of brand name systems, and IBM sells off server blade business)

Large cloud operations can have a score of megadatacenters around the world, each with half a million or more server blades (2010-era megadatacenter: processing equivalent of around 5M max-configured z196 mainframes).
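
The 5M figure follows directly from the numbers above:

# numbers from the post above
blades_per_megadatacenter = 500_000      # half a million server blades
bips_per_blade            = 500          # 2010 E5-2600 server blade
bips_per_max_z196         = 50           # max-configured z196

print(blades_per_megadatacenter * bips_per_blade // bips_per_max_z196)   # -> 5,000,000 z196 equivalents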

trivia: IBM early 80s, disk division hardware revenue slightly passed high-end mainframe division hardware revenue. Late 80s, a senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance, but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing data fleeing mainframe datacenters to more distributed-computing-friendly platforms, with a drop in disk sales. They had come up with a number of solutions but were constantly vetoed by the communication group (which had corporate strategic responsibility for everything that crossed datacenter walls and was fiercely fighting off client/server and distributed computing).

The disk division software executive's partial work-around was investing in distributed computing startups (who would use IBM disks), as well as sponsoring a POSIX implementation for MVS. He would sometimes ask us to visit his investments to see if we could provide any help.

It turns out the communication group was affecting the whole mainframe revenue, and a couple of years later IBM has one of the largest losses in the history of US corporations ... and was being reorged into the 13 "baby blues" (take-off on the "baby bells" in the AT&T breakup a decade earlier) in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk (IBM hdqtrs) asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).

Then IBM becomes a financial engineering company

Stockman; The Great Deformation: The Corruption of Capitalism in America
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall Street momo traders. It was actually a stock buyback contraption on steroids. During the five years ending in fiscal 2011, the company spent a staggering $67 billion repurchasing its own shares, a figure that was equal to 100 percent of its net income.

pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82 billion, or 122 percent, of net income over this five-year period. Likewise, during the last five years IBM spent less on capital investment than its depreciation and amortization charges, and also shrank its constant dollar spending for research and development by nearly 2 percent annually.

(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases have come to a total of over $159 billion since 2000.

(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a little?) shift focus needing it for redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe Terminals

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe Terminals
Date: 31 Jan, 2025
Blog: Facebook
The 3272/3277 had .086sec hardware response ... the 3278 moved a lot of the electronics back up into the 3274 controller (significantly reducing 3278 manufacturing cost), which greatly increased the coax protocol chatter, driving the hardware response up to .3sec-.5sec (depending on amount of data transferred). It also shows up in later IBM/PC 3270 emulation cards: the 3277 emulation card had 4-5 times the upload/download throughput (compared to 3278 emulation). This was in the period of studies showing increased productivity with .25sec interactive response; with the 3277's .086sec hardware, that needed .164sec mainframe system response for .25sec seen by the human (with the 3278's .3sec-.5sec hardware, you needed a time machine in order to get .25sec interactive response). A letter written to the 3278 product administrator got a reply that the 3278 wasn't intended for interactive computing, but for "data entry" (i.e. electronic keypunch).

All (real) 3270s were half-duplex, and hitting a key at the same time a write was going to the screen locked the keyboard (which then had to be manually reset). The 3277 had enough electronics in the head that YKT could develop a FIFO box: unplug the keyboard from the 3277 head, plug in the FIFO box and plug the keyboard into the FIFO box (eliminating the periodic keyboard lock annoyance; of course that went away with IBM/PC simulated 3270s).

At the time, my internal systems were seeing .11sec trivial interactive system response ... the next tier of internal systems were claiming .25sec trivial interactive system response ... but for the human-seen interactive response to be .25sec, you needed no more than .164sec system response when using 3277 terminals (and a negative -.05sec to -.25sec system response with the 3278) ... the issue didn't come up with MVS/TSO users, since at the time it was rare for TSO to even have 1sec system response.

The difference showed up at SJR, which replaced a 370/195 MVT with a 370/168 MVS and a 370/158 VM/370 ... all 3830 DASD controllers were dual-channel to both systems ... but with strict rules that no MVS packs could be placed on 3330 strings on the VM370-designated 3830 controllers. One morning it happened anyway, and operations immediately got calls from all over the bldg about enormously degraded CMS response. MVS (from OS/360 heritage) heavily relied on multi-track searches (which lock out the controller and all associated disks), and these interfered with CMS file accesses. Demands to operations that the pack be immediately moved got a response that they would do the move 2nd shift. A highly optimized (VM370) VS1 system pack was then put up on an MVS "string", doing its own multi-track searches (the optimized VS1 on the heavily loaded VM370 158 easily out-performed MVS on the real 168 ... which slowed down all MVS throughput, significantly alleviating the degradation it was causing to CMS response). Then operations agreed to immediately move the MVS 3330.
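
A rough sketch of why a single multi-track search is so destructive to interactive response (3330 geometry assumed here: 3600rpm and 19 tracks/cylinder; the search holds the controller, and everything attached to it, for the duration):

# assumed 3330 numbers: 3600rpm rotation, 19 tracks per cylinder
rev_sec    = 60 / 3600          # ~0.0167 sec per revolution
tracks_cyl = 19

full_cylinder_search = tracks_cyl * rev_sec   # key compare proceeds at rotation speed, track by track
print(round(full_cylinder_search, 3))         # ~0.317 sec with the controller (and every disk on it) locked out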

trivia: in 1980, STL (since renamed SVL) was bursting at the seams and 300 people (& 3270s) from the IMS (DBMS) group were being moved to an offsite bldg with dataprocessing service back to the STL datacenter. They had tried "remote 3270" but found the human factors totally unacceptable (compared to inside STL). I get con'ed into doing channel-extender support so the 3270 channel-attached controllers could be placed at the offsite bldg, with no perceptible difference in human factors compared to in STL. They then discover that the 168 system dedicated to the offsite IMS group was getting 10-15% higher system throughput. It turns out that in the STL datacenter, the mainframe 3270 controllers had been spread across mainframe system channels shared with DASD. The channel-extender hardware had significantly less mainframe channel busy (for the same amount of 3270 activity), improving DASD throughput. There was some consideration of using channel-extenders for all 3270 controllers (whether they were inside STL or offsite), improving system throughput of all systems.

trivia2: VM370 implemented a passthrough virtual machine (PVM) that simulated 3270s over the internal network ... which also included a gateway to the internal CCDN terminal network. Then the travel/home terminal program got special encrypting 2400 baud modems (that could simulate an industry standard modem). On VM370 there was a home terminal service that used PVM for host 3270 emulation and an IBM/PC app (PCTERM) that simulated a 3270 terminal. The host and PC maintained synchronized caches of recently transmitted data and could specify changes to the current screen display as either actual data or pieces of data from the cache, reducing the amount of data transmitted. The actual data that was transmitted was further reduced using compression.
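
A conceptual sketch of that style of protocol (not the actual PCTERM implementation; zlib stands in for whatever compression was used and the fragment/cache scheme is purely illustrative):

import zlib

class ScreenSide:
    """Both ends keep the same cache of recently sent screen fragments."""
    def __init__(self):
        self.cache = {}                      # fragment-hash -> fragment bytes

    def encode(self, fragments):
        out = []
        for frag in fragments:
            key = zlib.crc32(frag)
            if key in self.cache:            # already at the other end: send a short reference
                out.append(("ref", key))
            else:                            # new data: send it compressed, remember it
                self.cache[key] = frag
                out.append(("data", key, zlib.compress(frag)))
        return out

    def decode(self, msgs):
        screen = []
        for m in msgs:
            if m[0] == "ref":
                screen.append(self.cache[m[1]])
            else:
                frag = zlib.decompress(m[2])
                self.cache[m[1]] = frag
                screen.append(frag)
        return b"".join(screen)

host, pc = ScreenSide(), ScreenSide()
msgs = host.encode([b"READY;", b"R; T=0.01/0.01 07:15:02"])
print(pc.decode(msgs))                       # a later resend of "READY;" would go out as a short ref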

misc. other posts mentioning SJR 370/168 MVS mis-mounted 3330 incident degrading CMS throughput
https://www.garlic.com/~lynn/2022c.html#101 IBM 4300, VS1, VM370
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2020.html#31 Main memory as an I/O device
https://www.garlic.com/~lynn/2018.html#93 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2018.html#46 VSE timeline [was: RE: VSAM usage for ancient disk models]
https://www.garlic.com/~lynn/2017f.html#30 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2016b.html#29 Qbasic
https://www.garlic.com/~lynn/2015g.html#58 [Poll] Computing favorities
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2011j.html#56 Graph of total world disk space over time?
https://www.garlic.com/~lynn/2011.html#36 CKD DASD

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix vs mainframes

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix vs mainframes
Newsgroups: alt.folklore.computers
Date: Fri, 31 Jan 2025 17:38:16 -1000
John Levine <johnl@taugh.com> writes:
I gather that a major reason one still uses a mainframe is databases and in particular database locking. Whi;e the aggregate throughput of a lot of blades may be more than a single mainframe, when you need to do database updates it's a lot faster if the updates are in one place rather than trying to synchronize a lot of loosely coupled systems. On that 200 processor z16, you can do a CAS on one processor and the other 199 will see it consistently.

I realize there are ways around this, sharding and such, but there's good reasons that the reading parts of database applications are widely distributed while the writing parts are not.


re:
https://www.garlic.com/~lynn/2025.html#73 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#74 old pharts, Multics vs Unix vs mainframes

Some of this is left over from the mid-90s, when billions were spent on rewriting mainframe financial systems that queued real-time transactions for processing during the overnight batch settlement window (many dating from the 60s & 70s) ... moving to straight-through processing using large numbers of parallelized killer micros. Some of us pointed out that they were using industry-standard parallelization libraries that had a hundred times the overhead of mainframe cobol batch ... ignored until their pilots went down in flames (retrenching to the existing status quo).

A major change since was the combination of 1) major RDBMS vendors (including IBM) doing significant throughput performance work on cluster RDBMS, 2) implementations being done with fine-grain SQL statements that could be highly parallelized (rather than RYO implementation parallelization), and 3) non-mainframe systems having significantly higher throughput.

The 2022 z16: 200-processor shared-memory multiprocessor with aggregate 222BIPS (1.11BIPS/processor), compared to a (single blade) 2010 E5-2600: 16-processor shared-memory multiprocessor with aggregate 500BIPS (31BIPS/processor) ... the 2010 E5-2600 system more than twice the 2022 z16 (and a 2010 E5-2600 processor nearly 30 times a 2022 z16 processor). Each 2010 E5-2600 chip had eight processors (with aggregate throughput more than the 2022 200-processor z16) sharing on-chip cache (w/o going all the way to memory).

When I was doing HA/CMP ... it originally started out as HA/6000, for the NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I renamed it HA/CMP when I started doing cluster scaleup with national labs and with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with UNIX.

I did a distributed lock manager that emulated VAXCluster DLM semantics ... but with a lot of enhancements. The RDBMSs had been doing clusters with logging and lazy writes to the home location ... however, when a write lock had to move to a different system, they had to force any pending associated write to the RDBMS "home" location on disk ... the system receiving the lock would then read the latest value back from disk. Since I would be using gbit+ links, I did a DLM that could transmit both the lock ownership and the latest record(s) (avoiding the immediate disk write followed by an immediate disk read on the other system). Failures would recover from the log records updating the RDBMS "home" records.
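
A minimal sketch of the difference (message and record shapes are assumed for illustration, not the actual HA/CMP DLM interfaces):

# classic cluster RDBMS lock migration: force the pending write to the RDBMS "home"
# location on disk, then the system receiving the lock re-reads it from disk
def move_lock_via_disk(disk, lock, owner_cache, receiver_cache):
    disk[lock] = owner_cache.pop(lock)       # forced write of the dirty record
    receiver_cache[lock] = disk[lock]        # immediate re-read on the other system

# gbit+ link variant: ship the current record along with the lock grant and let
# the log plus lazy writes take care of the "home" copy (recovery replays the log)
def move_lock_via_link(log, lock, owner_cache, receiver_cache):
    record = owner_cache.pop(lock)
    log.append((lock, record))               # still logged for failure recovery
    receiver_cache[lock] = record            # record travels with the lock ownership

disk, log = {}, []
a_cache, b_cache = {"acct1": b"balance=100"}, {}
move_lock_via_link(log, "acct1", a_cache, b_cache)
print(b_cache, log)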

However, I still prefer to have transaction routing to the processor (processor and/or cache affinity) holding the current lock ... this is the observation that a cache miss (all the way to main storage), when measured in count of processor cycles, is similar to 60s disk latency when measured in 60s processor cycles (if purely for throughput).

I had also coined the terms "disaster survivability" and "geographic survivability" when out marketing HA/CMP ... i.e. more processing to coordinate data at replicated, distributed locations. Jim Gray's study of availability found outages increasingly shifting to human mistakes and environmental causes (as hardware was becoming increasingly reliable; I had worked with Jim Gray on System/R before he left IBM SJR for Tandem fall81).
https://www.garlic.com/~lynn/grayft84.pdf
'85 paper
https://pages.cs.wisc.edu/~remzi/Classes/739/Fall2018/Papers/gray85-easy.pdf
https://web.archive.org/web/20080724051051/http://www.cs.berkeley.edu/~yelick/294-f00/papers/Gray85.txt

In 1988, the IBM branch asks if I could help LLNL (national lab) with standardization of some serial stuff they were working with, which quickly becomes the fibre-channel standard (FCS, initially 1gbit/sec transfer, full-duplex, aggregate 200Mbytes/sec, including some stuff I had done in 1980). POK ships their fiber stuff in 1990 with ES/9000 as ESCON (when it was already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy-weight protocol that significantly reduces throughput, which ships as "FICON".

The latest public benchmark I could find was the (2010) z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS would have higher throughput than the 104 FICON ... which run over FCS). Also, IBM pubs recommend that SAPs (system assist processors that actually do the I/O) be kept to 70% CPU ... which would be more like 1.5M IOPS. Also, no CKD DASD have been made for decades, all being simulated on industry-standard fixed-block disks.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
available posts
https://www.garlic.com/~lynn/submain.html#available
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

some recent posts mention straight-through processing
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2024.html#113 Cobol
https://www.garlic.com/~lynn/2023g.html#12 Vintage Future System
https://www.garlic.com/~lynn/2022g.html#69 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#3 Final Rules of Thumb on How Computing Affects Organizations and People
https://www.garlic.com/~lynn/2021k.html#123 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021g.html#18 IBM email migration disaster
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#11 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2018f.html#85 Douglas Engelbart, the forgotten hero of modern computing
https://www.garlic.com/~lynn/2017j.html#37 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017j.html#3 Somewhat Interesting Mainframe Article
https://www.garlic.com/~lynn/2017h.html#32 OFF TOPIC: University of California, Irvine, revokes 500 admissions
https://www.garlic.com/~lynn/2017f.html#11 The Mainframe vs. the Server Farm: A Comparison
https://www.garlic.com/~lynn/2017d.html#39 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017c.html#63 The ICL 2900
https://www.garlic.com/~lynn/2017.html#82 The ICL 2900

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe Terminals

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe Terminals
Date: 01 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#75 IBM Mainframe Terminals

Was it an IBM product? The one I did the support for (STL moving 300 people from the IMS group to an offsite bldg) was originally done by Thornton (who with Cray was responsible for the CDC6600). Ran over channel on Collins digital radio T3 microwave that IBM had installed in south San Jose.

Also went into IBM Boulder moving a large group to a bldg across the highway ... used infrared modems between the roofs of the two bldgs. There was concern that Boulder severe weather would block transmission ... had bit-error testers on a subchannel and during a severe snow storm when people couldn't get into work ... saw only a few bit-errors. Did have a problem with the sun warming the bldgs during the day; uneven heating shifted the roofs and moved the infrared modems out of alignment. Had to carefully select positions on the respective roofs that minimized shifting the alignment of the modems.

Trivia: when 1st moved to SJR, got to wander around (IBM and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, prescheduled, stand-alone mainframe testing. They mentioned that they had recently tried MVS, but it had 15min MTBF (in that environment) requiring manual re-IPL. I offered to rewrite the I/O supervisor to be bullet-proof and never fail, enabling any amount of on-demand concurrent testing, greatly improving productivity. I then wrote an (internal) research report about all the work and happened to mention the MVS 15min MTBF and also offered to help the MVS group fix all their problems ... turns out that brought down the wrath of the MVS group on my head (informally told that the MVS group tried to have me separated from the IBM company, and when that didn't work, they tried to make things unpleasant in other ways).

other trivia; back when undergraduate, univ was replacing 709/1401 with 360/67 originally for tss/360 ... they got a 360/30 temporarily replacing the 1401 pending 360/67 availability. After taking a 2hr intro to fortran/computers, at end of semester was hired to rewrite 1401 MPIO in 360 assembler for the 360/30. The univ. shutdown the datacenter on weekends and I would have the place dedicated, although 48hrs w/o sleep made monday classes hard. Then when the 360/67 arrived, I was hired fulltime responsible for os/360 (tss/360 never came to production) and kept my 48hr dedicated weekend time.

then science center came out to install cp67 (3rd installation after CSC itself and MIT Lincoln Labs). It came with 1052 and 2741 terminal support (implemented with automagic terminal type identification, changing the telecommunication controller's terminal type scanner with the SAD channel command). The univ had some Teletype ASCII terminals so I added TTY/ASCII support (integrated with automagic terminal type).

I then wanted to have a single dialin number for all terminals, which didn't quite work ... while changing the port scanner type worked, IBM had taken a short cut and hardwired port line speed. Univ. then kicked off a project to do a clone controller: build a 360 channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition that it did auto baud rate. It was then upgraded to an Interdata/4 for the channel interface with a cluster of Interdata/3s for port interfaces. Interdata (and then Perkin-Elmer) sold it as a clone controller and four of us get written up as responsible for (some part of) the clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

mid-80s, the communication group was fiercely fighting off client/server and distributed computing and trying to block release of mainframe TCP/IP. When that failed, they changed tactics and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. Thornton & company had come out with a high performance TCP/IP router (with channel interfaces, 16 high-performance Ethernet interfaces, T1&T3 telco support, FDDI lans, etc) and I did the RFC1044 support; in some tuning tests at Cray Research between a Cray and a 4341, got sustained channel throughput using only a modest amount of 4341 CPU ... something like 500 times increase in bytes moved per instruction executed.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
DASD, CKD, FBA, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix vs mainframes

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix vs mainframes
Newsgroups: alt.folklore.computers
Date: Sun, 02 Feb 2025 12:55:46 -1000
Lynn Wheeler <lynn@garlic.com> writes:
Some of this left over from mid-90s where billions were spent on rewriting mainframe financial systems that queued real time transactions for processing during overnight batch settlement windows (many from the 60s&70s) ... to straight-through processing using large numbers of parallelized killer micros. Some of us pointed out that they were using industry standard parallelization libraries that had hundred times the overhead of mainframe cobol batch ... ignored until their pilots went down in flames (retrenching to the existing status quo).

A major change was the combination of 1) major RDBMS vendors (including IBM) did significant throughput performance work on cluster RDBMS, 2) implementations done with fine-grain SQL statements that were highly parallelized (rather than RYO implementation parallelization), and 3) non-mainframe systems having significantly higher throughput.


re:
https://www.garlic.com/~lynn/2025.html#73 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#74 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#76 old pharts, Multics vs Unix vs mainframes

I got a performance gig at the turn of the century with a financial outsourcer that handled half of all credit card accounts in the US ... the datacenter had >40 max configured mainframe systems (none older than 18months) all running the same 450K statement Cobol app (the number needed to finish settlement in the overnight batch window; i.e. real-time transactions were still being queued for settlement in the overnight batch window).

after turn of century got involved with an operation that had experience doing large scale financial production apps for a couple decades and had developed a financial application language that translated financial transaction specifications into fine-grain SQL statements. It significantly reduced development effort and showed that it was significantly more agile responding to things like regulatory changes.

they prepared a number of applications that simulated the 5 or 6 largest financial operations that then existed, for demos ... showing a six-system (four processors each) intel/microsoft cluster having many times the throughput capacity of the existing production operations.

then demo'ed at multiple financial industry organizations that initially saw high acceptance ... then hit brick wall ... eventually told that most of their executives still bore the scars from the 90s "modernization" disasters.

Note the 90s disaster with large numbers of parallelized killer micros involved processors with throughput that was a small fraction of mainframe and (effectively) toy parallelization. Move to the middle of the 1st decade after the turn of the century ... and those systems now had

1) higher throughput (1999 pentium3 single processor chip rated at 2BIPS while the max. configured Dec2000 IBM mainframe z900 with 16 processors was only 2.5BIPS aggregate) ... and Intel was still doubling chip throughput every 18-24months (aided in part with the transition to multi-core). July2005 IBM z9 with max-configured 54 processors clocked at 18BIPS aggregate (333MIPS/processor) ... while a single Intel multi-core chip was at least twice that.

2) significantly higher non-cluster and cluster RDBMS throughput

3) basically both mainframe & non-mainframe were using the same industry standard fixed-block disks ... giving non-mainframe an IOPS advantage, with mainframe requiring an extra layer providing CKD emulation.

trivia: late 70s an IBM SE on a financial industry account with a large number of ATM machines had transactions running on a 370/168 TPF machine. He then re-implemented the ATM transactions on a 370/158 VM370 machine ... and was getting higher throughput (than TPF). He did some hacks to get the ATM transaction pathlength close to TPF (ATM transaction processing is relatively trivial, typically swamped by disk I/O) ... and then all sorts of very sophisticated transaction scheduling strategies to optimize transactions per disk arm sweep, including looking at account records on the same arm, evaluating historical transaction patterns (both by ATM machine and by account) that might include slight delays in some transactions. This was a cluster of virtual machines ... virtual machines for transaction routing and scheduling along with dedicated virtual machines for each disk arm.
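
Purely as an illustrative sketch (not the SE's actual implementation; the queue layout, arm/cylinder numbers and function names are all my assumptions), the basic idea of batching queued transactions per disk arm and servicing each arm in elevator-style cylinder order looks something like:

from collections import defaultdict

def plan_sweeps(pending):
    """pending: list of (arm, cylinder, txn_id) tuples for queued ATM transactions.
    Returns a per-arm service order, sorted by cylinder for a one-way sweep,
    so many transactions are handled per arm pass instead of one at a time."""
    by_arm = defaultdict(list)
    for arm, cyl, txn in pending:
        by_arm[arm].append((cyl, txn))
    # a real scheduler might alternate sweep direction, slightly delay some
    # transactions, or fold in per-account/per-ATM history, per the post above
    return {arm: sorted(reqs) for arm, reqs in by_arm.items()}

if __name__ == "__main__":
    queue = [(0, 250, "t1"), (0, 10, "t2"), (1, 400, "t3"), (0, 180, "t4")]
    for arm, order in plan_sweeps(queue).items():
        print("arm", arm, "->", [txn for _cyl, txn in order])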

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

some posts mentioning financial processing specification translated to fine-grain SQL
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros

other 450K statement cobol app posts
https://www.garlic.com/~lynn/2024g.html#29 Computer System Performance Work
https://www.garlic.com/~lynn/2024c.html#4 Cobol
https://www.garlic.com/~lynn/2024.html#113 Cobol
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023g.html#87 Mainframe Performance Analysis
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023c.html#99 Account Transaction Update
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#54 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#68 How Gerstner Rebuilt IBM
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021c.html#49 IBM CEO
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs

--
virtualization experience starting Jan1968, online at home since Mar1970

360/370 IPL

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/370 IPL
Date: 02 Feb, 2025
Blog: Facebook
Note: 360 IPL loads 24 bytes from the device (a PSW and two CCWs), the channel then does a "TIC" to the two CCWs, and when that I/O completes, it loads the PSW. Some IPL sequences will have the first CCW loading more CCWs and the 2nd CCW a TIC to the additional CCWs. There were various hacks; a "stand-alone" assembler module could include 2 or 3 "PUNCH" statements that PUNCH a simple loader in front of the assembler TXT deck output, making a self-loading program.
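
A minimal sketch (Python, purely illustrative; the command codes are the generic read/TIC codes and the addresses/counts are made up, not any particular system's IPL text) of what the 24-byte IPL record looks like:

import struct

def ccw(cmd, addr, flags, count):
    # format-0 CCW: command(1), data address(3), flags(1), zero(1), count(2)
    return struct.pack(">B3sBBH", cmd, addr.to_bytes(3, "big"), flags, 0, count)

READ, TIC, CC = 0x02, 0x08, 0x40            # read, transfer-in-channel, command chain
psw  = bytes(8)                              # placeholder 8-byte initial PSW
ccw1 = ccw(READ, 0x000400, CC, 80)           # read the next card to 0x400, chain
ccw2 = ccw(TIC,  0x000400, 0x00, 0)          # TIC to the CCWs just read in
ipl_record = psw + ccw1 + ccw2               # the 24 bytes read by the IPL
assert len(ipl_record) == 24
print(ipl_record.hex())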

back when undergraduate, univ was replacing 709/1401 with 360/67 originally for tss/360 ... they got a 360/30 temporarily replacing the 1401 pending 360/67 availability. After taking a 2hr intro to fortran/computers, at end of semester was hired to rewrite 1401 MPIO in 360 assembler for the 360/30. The univ. shutdown the datacenter on weekends and I would have the place dedicated, although 48hrs w/o sleep made monday classes hard. Then when the 360/67 arrived, I was hired fulltime responsible for os/360 (tss/360 never came to production) and kept my 48hr dedicated weekend time.

then science center came out to install cp67 (3rd installation after CSC itself and MIT Lincoln Labs). At the time all CP67 source was on OS/360 ... assemble individual CP67 modules, get the assembler punch output, mark the top of each deck with a diagonal stripe and module name, place in organized sequence in a card tray with the BPS loader at the front. IPL the card deck, which passes control to CPINIT ... which writes the memory image to disk ... set up so that IPLing the disk invokes CPINIT to reverse the process.

A couple months later, get a distribution that has moved all CP67 source files to CMS. Now can have an exec that effectively punches a virtual card tray of BPS loader followed by all the assembled module TXT decks ... transferred to the virtual reader ... which can be IPLed ... the whole process of system build now done virtually. As backup, the equivalent of the card tray is saved to tape ... which can be IPLed (used if the new system fails and need to drop back to a prior version).

I then want to make significant, low usage portions of CP67 "pageable" (not in fixed memory) ... I split modules into 4K chunks, placed on 4K page boundaries following CPINIT. I find that when the BPS loader branches to CPINIT, it also passes the address and count of ESD loader table entries in registers. I modify CPINIT to append the loader table to the end of the pageable area. Problem was that fragmenting some modules into 4K chunks increased the number of ESD entries to more than the BPS loader 255 max ... and had to do some serious hacks to keep the number of ESD entries to 255. Later when I graduate and join the science center ... I find a copy of the BPS loader source in a cabinet in the 545 tech attic storage room ... and modify it to handle more than 255 entries. The CP67 pageable kernel wasn't shipped to customers ... but I included it in my internal distribution.
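
Purely illustrative sketch of the ESD-count problem (the module names, sizes, and one-entry-per-4K-chunk accounting are all made-up simplifications, not the actual kernel layout):

from math import ceil

resident = {"MOD_A": 6000, "MOD_B": 4500, "MOD_C": 3000}    # stay in fixed memory
pageable = {"MOD_X": 9000, "MOD_Y": 7000, "MOD_Z": 5200}    # split into 4K chunks

entries  = len(resident)                                    # one loader entry per module
entries += sum(ceil(size / 4096) for size in pageable.values())
print(entries, "loader entries; with the full kernel this pushed past the 255 max")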

trivia: Some of the MIT CTSS/7094 people had gone to the 5th flr to do MULTICS, others had gone to the science center on the 4th flr and did (virtual machine) CP40 (and the virtual memory hardware mods for a 360/40); when 360/67 standard with virtual memory becomes available, CP40 morphs into CP67. When virtual memory is announced for all 370s and the decision made for a VM370 product, some of the CSC people split off from the science center (on the 4th flr) and take over the IBM Boston Programming Center on the 3rd flr ... and the pageable kernel is one of the things VM370 borrowed from CP67.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

recent related posts
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024e.html#16 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024g.html#77 Early Email
https://www.garlic.com/~lynn/2024g.html#79 Early Email
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#75 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#77 IBM Mainframe Terminals

some previous posts mentioning BPS loader
https://www.garlic.com/~lynn/2024g.html#78 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2023g.html#83 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2022.html#26 Is this group only about older computers?
https://www.garlic.com/~lynn/2022.html#25 CP67 and BPS Loader

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Bus&TAG Cables

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Bus&TAG Cables
Date: 02 Feb, 2025
Blog: Facebook
original bus&tag limited to 200ft half-duplex did end to end handshake for every byte transferred ... and could got 1.5mbyte/sec (for shorter distances) ... then 3mbytes & 400ft limit bus&tag still half duplex wtih "data streaming" ... multiple bytes transfer per end-to-end handshake.

I had done full-duplex concurrent streaming, serial in 1980 with channel programs downloaded to remote end. Then in 1988, IBM branch office asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes fibre-channel standard (FCS, initially 1gbit transfer, full-duplex, 200mbyte/sec aggregate). About the same time POK releases ESCON (17mbyte/sec, when it is already obsolete).

Austin RS6000 engineers had played with the early ESCON spec and tweaked it to be about 10% faster but also full-duplex ("SLA", serial link adapter) ... however it couldn't talk to anything but other RS6000s. The engineers then wanted to do an 800mbit version of "SLA", but I badger them into joining the FCS committee instead.

Then some POK engineers become involved with FCS and define a heavy-weight FCS protocol that significantly reduces throughput, released as FICON. Most recent benchmark I can find is z196 "Peak I/O" that gets 2M IOPS using 104 "FICON". About the same time a FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS getting higher throughput than 104 FICON). Note also IBM pubs recommend that SAPs (system assist processors that do actual I/O) be kept to 70% CPU ... making it closer to 1.5M IOPS instead.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

some posts mentioning serial link adapter (SLA)
https://www.garlic.com/~lynn/2024b.html#52 IBM Token-Ring
https://www.garlic.com/~lynn/2022b.html#66 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2017d.html#31 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2012k.html#69 ESCON
https://www.garlic.com/~lynn/2010h.html#63 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2009s.html#32 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009q.html#32 Mainframe running 1,500 Linux servers?
https://www.garlic.com/~lynn/2009p.html#85 Anyone going to Supercomputers '09 in Portland?
https://www.garlic.com/~lynn/2009j.html#59 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2005v.html#0 DMV systems?
https://www.garlic.com/~lynn/2000f.html#31 OT?

trivia: 3380s were 3mbyte/sec transfer (needing 3mbyte/sec bus&tag data streaming). The original 3380 had 20 track spacings between each data track. They then cut the spacing in half for double the tracks (& cylinders), then cut the spacing again for triple the tracks (& cylinders).

then the father of 801/RISC asks me to help him with an idea for a "wide" disk head ... handling 16 closely spaced tracks transferring data in parallel for 50mbytes/sec (following servo tracks on each side of the 16 data track grouping; the format had a servo track between each data track group) ... problem was IBM channels were still stuck at 3mbyte/sec transfer.

a few posts mentioning 3380 and wide disk head:
https://www.garlic.com/~lynn/2025.html#18 Thin-film Disk Heads
https://www.garlic.com/~lynn/2024g.html#57 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2024g.html#3 IBM CKD DASD
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024d.html#72 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024c.html#20 IBM Millicode
https://www.garlic.com/~lynn/2023g.html#97 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#84 Vintage DASD
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#67 Vintage IBM 3380s

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Bus&TAG Cables

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Bus&TAG Cables
Date: 02 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#80 IBM Bus&TAG Cables

note the "peak I/O" benchmark of 2M IOPS with 104 FICON is about 20K IOPS per FICON. the references last decade I saw (zhpf & tcw) for FICON was about 3 times improvement ... which would be about 60K IOPS per FICON ... still a long way from the 2010 FCS announced for E5-2600 server blade claiming over million IOPS.

posts mentioning FICON (&/or FCS)
https://www.garlic.com/~lynn/submisc.html#ficon

posts mentioning FICON, zhpf & tcw:
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2017j.html#88 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017i.html#59 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#94 Migration off Mainframe to other platform
https://www.garlic.com/~lynn/2017d.html#88 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#1 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#16 System z: I/O Interoperability Evolution - From Bus & Tag to FICON
https://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2016c.html#61 Can commodity hardware actually emulate the power of a mainframe?
https://www.garlic.com/~lynn/2016c.html#28 CeBIT and mainframes
https://www.garlic.com/~lynn/2016c.html#24 CeBIT and mainframes
https://www.garlic.com/~lynn/2015.html#40 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2015.html#39 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2014h.html#72 ancient terminals, was The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014g.html#12 Is end of mainframe near ?
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012o.html#25 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012o.html#6 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012n.html#72 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012n.html#70 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#51 history of Programming language and CPU in relation to each
https://www.garlic.com/~lynn/2012n.html#44 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#19 How to get a tape's DSCB
https://www.garlic.com/~lynn/2012m.html#43 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#28 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012m.html#13 Intel Confirms Decline of Server Giants HP, Dell, and IBM
https://www.garlic.com/~lynn/2012m.html#11 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#5 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#4 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Social Media

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Social Media
Date: 03 Feb, 2025
Blog: Facebook
Tymshare
https://en.wikipedia.org/wiki/Tymshare
started offering their CMS-based online computer conferencing service to the (mainframe) SHARE organization
https://en.wikipedia.org/wiki/SHARE_(computing)
for free starting aug1976, archives here:
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE for a monthly tape of all VMSHARE files (and later, after the IBM/PC, PCSHARE files) for putting up on internal network & systems; biggest problem was concern that internal employees would be contaminated by exposure to unfiltered customer information. I was then blamed for online computer conferencing in the late 70s and early 80s on the internal network; it really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem, only about 300 directly participated but claims that upwards of 25,000 were reading ... "Tandem Memos" from IBMJargon
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... snip ...

folklore is when the corporate executive committee was told, 5of6 wanted to fire me. One of the outcomes was officially sanctioned conferencing software and moderated forums (also supported the various "tools" disks).

Note the internal network spawned from the CP-67-based scientific center wide-area network centered in Cambridge and was non-SNA/VTAM (larger than arpanet/internet from just about the beginning until sometime mid/late 80s ... about the time the internal network was forced to convert to SNA/VTAM).

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
commercial (virtual machine) online services
https://www.garlic.com/~lynn/submain.html#online

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Social Media

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Social Media
Date: 03 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#82 Online Social Media

One of the GML inventors at the science center (GML invented in 1969) commenting about the CP67-based "wide-area" network
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

technology was also used for the corporate sponsored univ. BITNET
https://en.wikipedia.org/wiki/BITNET

trivia: CTSS RUNOFF
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
had been ported to CMS as SCRIPT. After GML was invented, GML tag processing was added to SCRIPT; after a decade it morphs into ISO standard SGML, and after another decade morphs into HTML at CERN. First webserver in the US was (CERN's sister institution) Stanford SLAC's VM370 server
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

Edson (passed Aug2020), responsible for CP67 wide-area network (that morphs into company internal network)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Old email from the person in Paris tapped to form BITNET "in Europe" ("EARN"; he had been on sabbatical at CSC in the late 60s and early 70s) ... looking for online apps:
Date: 03/20/84 15:15:41
To: wheeler

Hello Lynn,

I have left LaGaude last September for a 3 years assignement to IBM Europe, where I am starting a network that IBM helps the universities to start.

This network, called EARN (European Academic and Research Network), is, roughly speaking, a network of VM/CMS machines, and it looks like our own VNET. It includes some non IBM machines (many VAX, some CDC, UNIVAC and some IBM compatible mainframes). EARN is a 'brother' of the US network BITNET to which it is connected.

EARN is starting now, and 9 countries will be connected by June. It includes some national networks, such as JANET in U.K., SUNET in Sweden.

I am now trying to find applications which could be of great interest for the EARN users, and I am open to all ideas you may have. Particularly, I am interested in computer conferencing.

... snip ... top of post, old email index, HSDT email

Shortly later, the ("revised") LISTSERV mailing list software appears in Paris.
https://www.lsoft.com/products/listserv-history.asp
https://en.wikipedia.org/wiki/LISTSERV

which had a subset of the internal "sanctioned" forum and tools software.

Edson and I transfer from Cambridge in 1977 to IBM SJR on the west coast. In the early 80s, I get HSDT, T1 and faster computer links (both satellite and terrestrial) and some number of battles with the communication group (note in the 60s, IBM had the 2701 telecommunication controller that supported T1, however issues with IBM's move to SNA/VTAM in the 70s appeared to cap links at 56kbit/sec). Was also working with the NSF Director and was supposed to get $20M to interconnect the NSF Supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

other trivia: the communication group was fiercely fighting off client/server and distributed computing and was also trying to block release of mainframe TCP/IP support. When that failed, they changed their tactic and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got 44kbytes/sec aggregate, using nearly a whole 3090 processor. I then do the changes to support RFC1044 and in some tuning tests at Cray Research between a Cray and a 4341, get sustained 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).

csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Special Company 1989

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Special Company 1989
Date: 05 Feb, 2025
Blog: Facebook
IBM Special Company 1989
https://www.amazon.com/Special-Company-Anniversary-Publication-Magazine/dp/B00071T5VW

3yrs later, IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bells" breakup a decade earlier). We had already left the company, but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former president of AMEX as CEO to try and save the company.

1992 "How IBM Was Left Behind", "baby blues", one of largest losses
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

20yrs earlier, Learson trying (and failing) to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

30yrs of management briefings 1958-1988, pg160-163 Learson Briefing and THINK magazine article
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
Management Briefing

Number 1-72: January 18,1972

ZZ04-1312

TO ALL IBM MANAGERS:

Once again, I'm writing you a Management Briefing on the subject of bureaucracy. Evidently the earlier ones haven't worked. So this time I'm taking a further step: I'm going directly to the individual employees in the company. You will be reading this poster and my comment on it in the forthcoming issue of THINK magazine. But I wanted each one of you to have an advance copy because rooting out bureaucracy rests principally with the way each of us runs his own shop.

We've got to make a dent in this problem. By the time the THINK piece comes out, I want the correction process already to have begun. And that job starts with you and with me.

Vin Learson

... snip ...

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

Stress-testing of Mainframes (the HASP story)

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Stress-testing of Mainframes (the HASP story)
Newsgroups: alt.folklore.computers
Date: Thu, 06 Feb 2025 09:06:08 -1000
Lars Poulsen <lars@cleo.beagle-ears.com> writes:
The more I have learned later, the more I understand just how bad it was. It was an impressive denial-of-service attack for its time. As the phone calls flew from the machine room operator to the operations manager, to the head systems programmer, to the IBM field support, and on and on, red faces of embarrassment must have triggered explosive anger.

And like any other system vulnerability then or later, it was a simple case of insufficient input validation. In retrospect, it was bound to happen sooner or later. ;-)


As an undergraduate in the 60s, I had rewritten lots of CP67 ... including doing dynamic adaptive resource management and scheduling. After graduation I joined the science center and one of my hobbies was enhanced operating systems for internal datacenters.

In the decision to add virtual memory to all 370s, there was also the creation of the VM370 group and some of the people in the science center split off from CSC, taking over the IBM Boston Programming Center on the 3rd flr (Multics was on the 5th flr, CSC was on the 4th flr and the CSC machine room was on the 2nd flr). In the morph of CP67->VM370, lots of stuff was simplified and/or dropped: no more multiprocessor support, in-queue scheduling time was only based on virtual problem CPU, kernel integrity was really simplified, and other stuff.

Now a virtual machine could get into the top, interactive Q1 and execute some code that was almost all supervisor CPU (and very little virtual problem CPU) ... resulting in run-away CPU use ... locking out much of the rest of the users. The simplification in kernel integrity resulted in "zombie" users. In 1974, I started migrating lots of CP67 to a VM370R2 base for my internal CSC/VM ... which included curing the run-away CPU use and zombie users.

Another problem was CP67 would determine long wait state, drop from queue, and interactive Q1 based on real terminal type ... VM370 changed it to virtual terminal type. That worked OK as long as the virtual terminal type was similar to the real terminal type ... which broke with CMS virtual terminal type 3215 but real terminal 3270. CMS would put up a READ for the 3215 and go into virtual wait (waiting for the enter interrupt indicating end of typed input) and would be dropped from queue. With a 3270, typing is saved in the local buffer, the user hits enter which presents an ATTN to the system, CMS does a read and goes into wait state and is dropped from queue, but the read completes almost immediately (rather than waiting for somebody to type).

CP67 kept a count of a virtual machine's active "high-speed" real device channel programs and, at entry to virtual wait state, checked that "high-speed" channel program count ... if it was zero, the virtual machine was dropped from queue. VM370, at virtual machine entry to wait, would scan the complete virtual device configuration looking for a "high-speed" device with an active channel program, and a virtual 3215 didn't qualify.
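
A minimal sketch of the difference (Python, purely illustrative; the data structures and device classes are my assumptions, not actual CP67/VM370 code):

HIGH_SPEED = {"3330", "2314", "2305", "tape"}      # example "high-speed" classes

def cp67_drop_from_queue(vm):
    # CP67: a running counter maintained as high-speed channel programs
    # start/end; the wait-entry test is a single compare
    return vm["active_highspeed"] == 0

def vm370_drop_from_queue(vm):
    # VM370: scan the whole virtual device configuration at every wait entry
    return not any(d["class"] in HIGH_SPEED and d["active"] for d in vm["devices"])

vm = {"active_highspeed": 0,
      "devices": [{"class": "3215", "active": True},    # console read pending
                  {"class": "3330", "active": False}]}
print(cp67_drop_from_queue(vm), vm370_drop_from_queue(vm))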

After transferring out to SJR on the west coast, ran into a different problem: SJR replaced its 370/195 MVT system with a 370/168 MVS and a 370/158 VM370. It included an MVS 3830 disk controller and MVS 3330 string along with a VM370 3830 disk controller and VM370 3330 string. Both the MVS & VM370 3830 disk controllers had dual channel connections to both systems, however there were strict rules that an MVS 3330 would never be mounted on a VM370 string ... but one morning an operator mounted an MVS 3330 on a VM370 drive ... and almost immediately operations started getting irate phone calls from all over the bldg.

Issue was OS/360 and descendents make extensive use of multi-track search CCWs ... which can take 1/3rd sec elapsed time ... which locks up the controller ... and locks out all devices on that controller ... interfering with trivial interactive response that involves any disk I/O (MVS/TSO users are used to it, but the CMS users were used to better than .25sec interactive response).
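
Rough arithmetic behind the 1/3rd sec figure (assuming a 3330 at 3600 RPM with 19 data tracks/cylinder and a full-cylinder search; illustrative only):

rpm, tracks_per_cyl = 3600, 19
rev_sec = 60.0 / rpm                          # ~16.7 ms per revolution
full_search = tracks_per_cyl * rev_sec        # controller busy the whole time
print("%.0f ms" % (full_search * 1000))       # ~317 ms, i.e. about 1/3 sec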

Demands that operations move the offending MVS disk to the MVS string were met with it would be done off-shift. We get a VM370-tuned VS1 system, mount its system pack on an MVS 3330 drive and start a program. The highly tuned VS1, even running on the loaded VM370 158, could bring the MVS 168 nearly to a halt ... minimizing its interference with CMS workloads. Operations immediately agree to move the offending MVS 3330 if we move the VS1 3330.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
prior HASP/ASP, JES2/JES3, and/or NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp

posts mentioning the SJR 168 MVS, 158 VM370, VS1 incident
https://www.garlic.com/~lynn/2025.html#75 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2024.html#75 Slow MVS/TSO
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2019b.html#15 Tandem Memo
https://www.garlic.com/~lynn/2018.html#93 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2018.html#46 VSE timeline [was: RE: VSAM usage for ancient disk models]
https://www.garlic.com/~lynn/2016.html#15 Dilbert ... oh, you must work for IBM
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2011.html#36 CKD DASD
https://www.garlic.com/~lynn/2007f.html#20 Historical curiosity question
https://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW

--
virtualization experience starting Jan1968, online at home since Mar1970

Big Iron Throughput

From: Lynn Wheeler <lynn@garlic.com>
Subject: Big Iron Throughput
Date: 09 Feb, 2025
Blog: Facebook
after transferring to SJR on the west coast in the 2nd half of the 70s, I got to wander around (both IBM and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. At the time they were running pre-scheduled, 7x24, stand-alone mainframe testing and mentioned that they had recently tried MVS but it had 15min MTBF (in that environment) requiring manual re-IPL. I offer to rewrite the I/O supervisor to make it bullet-proof and never fail so they could do any amount of on-demand, concurrent testing, greatly improving productivity. I then do an "internal" research report on all the work and happen to mention the MVS 15min MTBF, bringing down the wrath of the MVS organization on my head.

1980, STL (since renamed SVL) was bursting at the seams and moving 300 people (& 3270s) from the IMS DBMS group to an offsite bldg (with dataprocessing back to the STL datacenter). They had tried "remote 3270" but found the human factors unacceptable. I get con'ed into doing channel-extender support, placing 3270 channel-attached controllers at the offsite bldg, with no perceived difference in human factors. Side-effect was STL had spread 3270 controllers across the mainframe channels with 3830 disk controllers. Placing channel-extender boxes directly on the real IBM channels reduced channel-busy (for the same amount of 3270 terminal traffic), improving system throughput by 10-15%. There was consideration to move all 3270 controllers to channel-extenders to improve throughput of all their 370 systems.

1988, the IBM branch asked if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly became fibre-channel standard ("FCS", including some stuff I had done in 1980; initially 1gbit transfer, full-duplex, aggregate 200mbyte/sec). Then POK releases some serial stuff they had been playing with for a decade, with ES/9000 as ESCON (17mbyte/sec, when it was already obsolete). Some POK engineers then become involved w/FCS and define a heavy-weight protocol that significantly reduces throughput, eventually released as FICON. The most recent public benchmark I can find is z196 "Peak I/O" getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the same time a FCS was announced for E5-2600 (Intel XEON) server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). There may be some confusion mistaking commodity consumer desktop throughput for server business throughput just because they have the same architecture (a greater difference than between an old 80KIP 370/115 and a 10MIP 370/195). Other trivia: IBM recommends that SAPs (system assist processors that actually do I/O) be kept to 70% CPU ... or more like 1.5M IOPS. Also CKD DASD hasn't been made for decades, all being simulated on industry standard fixed-block disks.

Trivia: max configured z196 was $30M and benchmarked at 50BIPS ... while IBM had a base list price for an E5-2600 server blade of $1815, benchmarked at 500BIPS (IBM later sells off its open-system server business, around the time of industry press articles that server chip makers were shipping half their product directly to large cloud operators, which do their own assembly at 1/3rd the cost of brand name systems). A large cloud operation can have a score of megadatacenters around the world, each with half a million or more server blades and each operated by a staff of 70-80 people ... each megadatacenter with the processing equivalent of possibly 5M max-configured mainframes.
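
Price/performance arithmetic from the figures above (illustrative only):

z196_price, z196_bips   = 30_000_000, 50
blade_price, blade_bips = 1815, 500
print("z196 : $%.0f per BIPS" % (z196_price / z196_bips))    # ~$600,000 per BIPS
print("blade: $%.2f per BIPS" % (blade_price / blade_bips))  # ~$3.63 per BIPS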

disclaimer: the last product we did at IBM was HA/6000, which started out for the NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when I start doing technical/scientific cluster scaleup with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scaleup with RDBMS vendors (Oracle, Sybase, Ingres, Informix, which had VAXCluster support in the same source base with Unix; I also do a distributed lock manager supporting VAXCluster semantics to ease the ATEX port). Early JAN92, have meeting with Oracle CEO where IBM/AWD Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan92, cluster scaleup is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work with anything that has more than four processors; we leave a few months later.

1993 mainframe/RS6000 (industry benchmark; no. program iterations compared to reference platform)
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS; 16CPU: 2BIPS; 128CPU: 16BIPS


The executive we had been reporting to moves over to head up Somerset/AIM (Apple, IBM, Motorola), doing the single-chip 801/risc power/pc ... also with the motorola 88k risc bus enabling multiprocessor configurations. However, the new generation of i86/Pentium had i86 instructions hardware-translated to RISC micro-ops for actual execution (negating the RISC throughput advantage compared to i86).
• 1999 single IBM PowerPC 440 hits 1,000MIPS
• 1999 single Pentium3 hits 2,054MIPS (twice PowerPC 440)
• 2000 z900, 16 processors, 2.5BIPS (156MIPS/proc)

• 2010 E5-2600, two XEON 8core chips, 500BIPS (30BIPS/proc)
• 2010 z196, 80 processors, 50BIPS (625MIPS/proc)


more trivia: note (akin to the 3270 controller channel busy cutting STL/SVL system throughput) 3090 had configured the number of channels for target throughput based on the assumption that the 3880 disk controller was similar to the 3830, but with 3mbyte/sec transfer ("data streaming", multiple byte transfers per end-to-end handshake). However, while the 3880 had special hardware for 3mbyte/sec data transfer, it had a really slow microprocessor for everything else ... which really increased channel busy. 3090 realized that they would have to significantly increase the number of channels (to offset the 3880 channel busy) in order to achieve the necessary system throughput, which required an additional TCM (3090 semi-facetiously claimed they were billing the 3880 group for the increase in 3090 manufacturing cost). Marketing eventually respins the big increase in the number of 3090 channels as the 3090 being a wonderful I/O machine.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
DASD, CKD, FBA, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd

misc recent posts mentioning "big iron" this century
https://www.garlic.com/~lynn/2025.html#74 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#21 Virtual Machine History
https://www.garlic.com/~lynn/2024e.html#130 Scalable Computing
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#97 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#85 Vintage DASD
https://www.garlic.com/~lynn/2023d.html#47 AI Scale-up
https://www.garlic.com/~lynn/2022h.html#113 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#12 What is IBM SNA?
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#71 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#111 Financial longevity that redhat gives IBM
https://www.garlic.com/~lynn/2022c.html#67 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#54 IBM Z16 Mainframe
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16

--
virtualization experience starting Jan1968, online at home since Mar1970

Big Iron Throughput

From: Lynn Wheeler <lynn@garlic.com>
Subject: Big Iron Throughput
Date: 09 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#86 Big Iron Throughput

Turn of the century, IBM financials had mainframe hardware at a few percent of IBM revenue (and dropping). A decade ago, IBM financials had mainframe hardware at a couple percent of IBM revenue (and still dropping), but the mainframe group was 25% of IBM revenue (and 40% of profit), nearly all software and services.

Note: late 80s, a senior disk engineer got a talk scheduled at a world-wide, annual, internal communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing the datacenters for more distributed computing friendly platforms. The disk division had come up with a number of solutions, but they were constantly being vetoed by the communication group (which had corporate strategic responsibility for everything that crossed the datacenter walls and was fiercely fighting off client/server and distributed computing). Turns out it wasn't just disks, but the whole mainframe datacenter business, and a couple years later, IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bells" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left the company, but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former president of AMEX as CEO to try and save the company.

communication group fighting off client/server and distributed computing posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Wang Terminals (Re: old pharts, Multics vs Unix)

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wang Terminals (Re: old pharts, Multics vs Unix)
Newsgroups: alt.folklore.computers
Date: Sun, 09 Feb 2025 14:54:57 -1000
Large IBM customers could have hundreds of thousands of IBM 3270s ... one of the issues was SNA/VTAM only supported 64K terminals ... so they needed to spread the terminals across multiple mainframes and invent "cross-domain" support, where a terminal "owned" by one mainframe could create an SNA session with a different mainframe.

Then customers started buying IBM/PCs with 3270 terminal emulation, getting both a mainframe terminal as well as some local computing capability ... part of the early explosion in IBM/PC sales. 30yrs of PC market share figures (IBM/PC & clone annual sales: 1981 0, 1984 2M, 1987 6M, 1990 16M, 1994 36M):
https://arstechnica.com/features/2005/12/total-share/

trivia: the original article was separate web pages/URLs; now there is a single webpage and the previous web URLs are redirected to displacements within it
The rise of the PC (1987-1990):

But the real winner of this era was the IBM PC platform. Sales kept increasing, and by 1990 PCs and clone sales had more than tripled to over 16 million a year, leaving all of its competitors behind. The platform went from a 55% market share in 1986 to an 84% share in 1990. The Macintosh stabilized at about 6% market share and the Amiga and Atari ST at around 3% each.

... snip ...

trivia: 1980, there was an effort to replace the many internal microprocessors with various 801/risc implementations ... all with a common programming environment. For various reasons most of them floundered. The 801/risc ROMP chip was supposed to be for the Displaywriter follow-on; when that was canceled, they decided to pivot to the Unix workstation market and got the company that had done the AT&T Unix port to IBM/PC as PC/IX to do one for ROMP ... which becomes PC/RT and AIX. The follow-on was RS/6000, which Wang resold

Wang Lab
https://en.wikipedia.org/wiki/Wang_Laboratories
Word processing market collapse
https://en.wikipedia.org/wiki/Wang_Laboratories#Word_processing_market_collapse
PCs and PC-based products
https://en.wikipedia.org/wiki/Wang_Laboratories#PCs_and_PC-based_products
Decline and fall
https://en.wikipedia.org/wiki/Wang_Laboratories#Decline_and_fall
In June 1991, Wang started reselling IBM computers, in exchange for IBM investing in Wang stock. Wang hardware strategy to re-sell IBM RS/6000s also included further pursuit of UNIX software.
... snip ...

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

posts mentioning 30yrs of PC market share:
https://www.garlic.com/~lynn/2024e.html#102 Rise and Fall IBM/PC
https://www.garlic.com/~lynn/2024e.html#16 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#2 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024c.html#36 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024b.html#47 OS2
https://www.garlic.com/~lynn/2024b.html#25 CTSS/7094, Multics, Unix, CP/67
https://www.garlic.com/~lynn/2024.html#4 IBM/PC History
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023c.html#22 IBM Downfall
https://www.garlic.com/~lynn/2022h.html#109 terminals and servers, was How convergent was the general use of binary floating point?
https://www.garlic.com/~lynn/2022h.html#41 Christmas 1989
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2019e.html#137 Half an operating system: The triumph and tragedy of OS/2
https://www.garlic.com/~lynn/2019e.html#28 XT/370
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2018f.html#49 PC Personal Computing Market
https://www.garlic.com/~lynn/2018b.html#103 Old word processors
https://www.garlic.com/~lynn/2017g.html#73 Mannix "computer in a briefcase"
https://www.garlic.com/~lynn/2017g.html#72 Mannix "computer in a briefcase"
https://www.garlic.com/~lynn/2014l.html#54 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2014f.html#28 upcoming TV show, "Halt & Catch Fire"
https://www.garlic.com/~lynn/2013n.html#80 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2011m.html#56 Steve Jobs passed away
https://www.garlic.com/~lynn/2010h.html#4 What is the protocal for GMT offset in SMTP (e-mail) header
https://www.garlic.com/~lynn/2009o.html#68 The Rise and Fall of Commodore
https://www.garlic.com/~lynn/2008r.html#5 What if the computers went back to the '70s too?
https://www.garlic.com/~lynn/2007v.html#76 Why Didn't Digital Catch the Wave?
https://www.garlic.com/~lynn/2007m.html#63 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM

--
virtualization experience starting Jan1968, online at home since Mar1970

Wang Terminals (Re: old pharts, Multics vs Unix)

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wang Terminals (Re: old pharts, Multics vs Unix)
Newsgroups: alt.folklore.computers
Date: Sun, 09 Feb 2025 17:52:15 -1000
Lars Poulsen <lars@cleo.beagle-ears.com> writes:
Wasn't RS/6000 what was in IBM's Internet routers at the time (1985-1995 IIRC)?

re:
https://www.garlic.com/~lynn/2025.html#88 Wang Terminals (Re: old pharts, Multics vs Unix)

RS/6000 didn't ship until 1990, PC/RT was used in some internet routers.
https://lastin.dti.supsi.ch/VET/sys/IBM/7012/rs6000.pdf

Early 80s, I had the HSDT project, T1 and faster computer links, and was working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers (trivia: IBM had 2701 telecommunication controllers in the 60s that supported T1 links, however issues with the move to SNA/VTAM in the 70s appeared to cap controller links at 56kbits/sec).

Then congress cuts the budget, some other things happened and eventually an RFP was released (in part based on what we already had running, including T1 links). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

What was initially deployed was PC/RT routers with 440kbit/sec cards driving links ... and to somewhat make it look like it met the RFP, ran with telco multiplexors and T1 trunks.

trivia: HSDT was having custom hardware built on the other side of the Pacific. On the Friday before I was to check up on some stuff, I received internal email from Raleigh (communication group) announcing a new internal (online conferencing) "forum" with the definitions:
low-speed: 9.6kbits/sec, medium speed: 19.2kbits/sec, high-speed: 56kbits/sec, very high-speed: 1.5mbits/sec

Monday morning, this was on the wall of a conference room (on the other side of the Pacific):
low-speed: <20mbits/sec, medium speed: 100mbits/sec, high-speed: 200mbits-300mbits/sec, very high-speed: >600mbits/sec

The last product we did at IBM was HA/6000, originally for NYTimes to move their newspaper system (ATEX) off DEC VAXcluster to RS/6000 (started 1988 before RS/6000 announced). I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix that had VAXCluster support in same source base with unix). The IBM S/88 product administrator then starts taking us around to their customers and has me write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain that they couldn't meet the objectives).

Early Jan1992 in meeting with Oracle CEO, IBM/AWD Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we couldn't work with anything with more than four processors (we leave IBM a few months later).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Social Media

From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Social Media
Date: 10 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#82 Online Social Media
https://www.garlic.com/~lynn/2025.html#83 Online Social Media

PROFS group was out picking up internal apps to wrap 3270 menus around and picked up a very early version of VMSG for the PROFS email client. Later the VMSG author tried to offer them a much enhanced version ... and the PROFS group tried to have him separated from IBM. The whole thing quieted down after the VMSG author demonstrated that every PROFS note/email had his initials in a non-displayed field (after that, the VMSG author only shared his source with me and one other person).

Well predating IBM/PC, 3270 emulation, HLLAPI ... the VMSG author also did the PARASITE and STORY CMS apps. PARASITE used the same VM 3270 pseudo device (as PVM) for creating a 3270 terminal. STORY did (HLLAPI-kind of) programming scripts. I had a terminal script to access (field engineering) RETAIN, download PUT Bucket... Old archived post with PARASITE/STORY information (remarkable aspect was the code was so efficient, it could run in the CMS "transient" area):
https://www.garlic.com/~lynn/2001k.html#35
and (field engineering) RETAIN Story
https://www.garlic.com/~lynn/2001k.html#36

trivia (upthread): the internal network started out as the CP67-based wide-area science center network centered in cambridge (RSCS/VNET), morphing into the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s ... about the time it was forced to convert to SNA/VTAM).

Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

other posts mentioning vmsg, profs, parasite, story, pvm
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2019d.html#108 IBM HONE
https://www.garlic.com/~lynn/2017g.html#67 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1
https://www.garlic.com/~lynn/2011o.html#30 Any candidates for best acronyms?
https://www.garlic.com/~lynn/2011m.html#44 CMS load module format
https://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Computers
Date: 11 Feb, 2025
Blog: Facebook
I took a two credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO (unit record front end for 709) for 360/30. Univ was getting a 360/67 for tss/360 replacing 709/1401 and temporarily got a 360/30 replacing the 1401 pending arrival of the 360/67. The univ shut down the datacenter over the weekend and I would have the place dedicated .... although 48hrs w/o sleep made monday classes hard. I was given a bunch of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error retry/recovery, storage management, etc. Within a few weeks I had a 360 assembler stand-alone monitor, loaded with the BPS loader. I then added an assembler option that either assembled the stand-alone monitor (took 30mins to assemble) or an os/360 program (system services, took 60mins to assemble, 5-6mins per DCB macro). When the 360/67 arrives, TSS/360 hadn't come to production, so it runs as a 360/65 with OS/360 and I'm hired fulltime responsible for OS/360.

Student Fortran ran under a second on 709 tape->tape. Initially on 360/65 OS/360 it ran over a minute. I install HASP and that cuts the time in half. I then start reorganizing STAGE2 SYSGEN to place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student Fortran never got any better than 709 until I install Univ. of Waterloo WATFOR.
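
Working the timing arithmetic backwards (a small illustrative sketch; only the "over a minute", cut-in-half, another-2/3rds and 12.9sec figures come from the text above, the intermediate values are merely implied):

final = 12.9                # secs after the STAGE2 SYSGEN dataset/PDS-member placement work
after_hasp = final * 3      # that work cut "another 2/3rds", leaving 1/3 -> ~38.7 secs
initial = after_hasp * 2    # HASP had already cut the original time in half -> ~77.4 secs
print(initial, after_hasp)  # ~77.4 ("over a minute") and ~38.7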

Then three people from CSC come out to install (virtual machine) CP67/CMS (3rd install after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my dedicated weekend 48hrs. First few months I rewrite a lot of CP67 pathlengths to improve running OS/360 in a virtual machine. Test stream ran 322secs on the real machine, but initially 856secs in a virtual machine (CP67 CPU 534secs); after a couple months I had CP67 CPU down to 113secs (from 534secs) .... and was invited to the "official" CP67 announce at the Houston SHARE conference (was also the HASP 1st project meeting).

Before I graduate I was hired fulltime into a small group in the Boeing CFO office to help with the creation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit. I think the Renton datacenter was the largest in the world (360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room), joke that Boeing was getting 360/65s like other companies got keypunch machines (sort of a precursor to cloud megadatacenters). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the room to install a 360/67 for me to play with when I wasn't doing other stuff). Then there was the disaster plan (Mt. Rainier might heat up and the resulting mud slide take out Renton) to replicate the Renton datacenter at the new 747 plant up in Everett.

Comment in this account of the end of ACS/360 (rumor was it was killed because IBM was afraid it would advance the state of the art too fast):
https://people.computing.clemson.edu/~mark/acs_end.html
In his book, Data Processing Technology and Economics, Montgomery Phister, Jr., reports that as of 1968:

• Of the 26,000 IBM computer systems in use, 16,000 were S/360 models (that is, over 60%).
• Of the general-purpose systems having the largest fraction of total installed value, the IBM S/360 Model 30 was ranked first with 12% (rising to 17% in 1969). The S/360 Model 40 was ranked second with 11% (rising to almost 15% in 1970).
• Of the number of operations per second in use, the IBM S/360 Model 65 ranked first with 23%. The Univac 1108 ranked second with slightly over 14%, and the CDC 6600 ranked third with 10%.

... snip ...

Renton did have one 360/75 that did classified work, with heavy black rope around the perimeter. When classified work was running, there were guards at the perimeter and heavy black velvet draped over the console lights and the 1403 printer areas where printed output was exposed.

In the early 80s, I was introduced to John Boyd and would sponsor his briefings. One of his stories was about being very vocal that the electronics across the trails wouldn't work. Possibly as a result, he was put in command of "spook base" (about the same time I was at Boeing). Boyd's biography claims that "spook base" was a $2.5B windfall for IBM (ten times Renton).
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html

some recent posts mentioning univ. 709/1401, MPIO, HASP, Fortran, Watfor, CSC, CP67, Boeing CFO, and BCS
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022.html#12 Programming Skills

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Suggestion Plan

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Suggestion Plan
Date: 11 Feb, 2025
Blog: Facebook
Early 1984, I submitted a suggestion about the requirement to apply a security classification to each dataset ... including, in the CMS environment, to each CMS file. I claimed that a personal CMS minidisk is more akin to an MVS PDS dataset and each of the CMS files is more akin to a PDS member. A conservative estimate, applied to every employee, was a $12M/yr difference in employee time between requiring a security classification for every CMS file versus for every personal CMS minidisk. It was rejected, but a couple months later a corporate memo went out adopting it.
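
A hedged sketch of how such an estimate might be built (every number below -- employee count, files per minidisk, minutes per classification, loaded hourly cost -- is an illustrative assumption; the text above only gives the $12M/yr conclusion):

# All parameters are illustrative assumptions, not figures from the suggestion.
employees          = 200_000   # assumed employee population
files_per_minidisk = 100       # assumed avg CMS files per personal minidisk
minutes_per_item   = 1         # assumed time to classify one item
loaded_cost_per_hr = 40.0      # assumed loaded employee cost per hour

def annual_cost(items_per_employee):
    hours = employees * items_per_employee * minutes_per_item / 60
    return hours * loaded_cost_per_hr

per_file     = annual_cost(files_per_minidisk)   # classify every CMS file
per_minidisk = annual_cost(1)                    # classify each minidisk once
print(f"difference ~ ${per_file - per_minidisk:,.0f}/yr")   # lands in the $12M/yr ballpark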

trivia: a co-worker at the cambridge science center was responsible for the CP67-based wide-area scientific center network centered in cambridge that morphs into the corporate internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s ... about the time it converted to SNA/VTAM). At the big 1jan1983 conversion of arpanet to internetworking, arpanet had approx. 100 IMP network controllers and 255 hosts, while the internal network was rapidly approaching 1000 nodes. Old archived post with some of the 1983 weekly internal network updates and a summary of corporate locations that added one or more new nodes (during 1983):
https://www.garlic.com/~lynn/2006k.html#8

Note: I was in the corporate dog house. In the late 70s & early 80s, I was blamed for online computer conferencing on the internal network; it really took off the spring of 1981, when I distributed a trip report of a visit to Jim Gray at Tandem; only about 300 actively participated but claims were that upwards of 25,000 were reading (folklore: when the corporate executive committee was told, 5of6 wanted to fire me). ... from IBMJargon ... copy here
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... snip ...

One of the results was officially sanctioned computer conferencing software and moderated sanctioned forums. Another was that a researcher was paid to sit in the back of my office for nine months and go with me to meetings ... studying how I communicated. They also got copies of my incoming and outgoing email (their analysis was that I exchanged email with an avg. of 273 people/week during the nine month study) and logs of all instant messages. The result was (IBM) research reports, papers, conference talks, books and a Stanford PhD (joint between language and computer AI, Winograd was advisor on the computer side).

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Public Facebook Mainframe Group

From: Lynn Wheeler <lynn@garlic.com>
Subject: Public Facebook Mainframe Group
Date: 12 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024e.html#69 Public Facebook Mainframe Group

closer to 50yrs (mid-70s); repeat from upthread of the 2005 article in a mainframe magazine (although they garbled some details)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/

2022 post: in 1972, Learson tried (& failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy; 20yrs later (1992), IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company (take-off on the "baby bells" breakup a decade earlier)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
We had already left IBM, but get a call from bowels of Armonk asking if we could help with the company breakup. Before we get started, the board brings in the former AMEX president to try and save the company.

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

old pharts, Multics vs Unix

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: old pharts, Multics vs Unix
Newsgroups: alt.folklore.computers
Date: Wed, 12 Feb 2025 15:15:36 -1000
Dan Espen <dan1espen@gmail.com> writes:
A 3270 starts out in overwrite mode, but you just hit the insert key to get into insert mode. There was a more primitive TSO Edit. Not nearly as good as ISPF edit. CMS users also had a full screen editor. (XEDIT?) The thing I remember about that is it started out looking horrible, but if you set a bunch of options it was pretty nice.

one of the 1st CMS fullscreen 3270 editors shipped was EDGAR ... distinctive in that the up/down commands were reversed; rather than logically moving the screen(/window) up/down the file, the file was logically moved up/down (aka "up" moved closer to the bottom of the file).

then in the demise of Future System (completely different than 370 and going to completely replace 370), there was a mad rush to get stuff back into the 370 product pipelines ... and the head of POK (high-end 370 mainframes) managed to convince corporate to kill the vm370/cms product, shut down the development group and transfer all the people to POK for MVS/XA. Eventually, Endicott manages to save the VM370/CMS product mission ... but had to recreate a development group from scratch.

In the meantime there were a large number of internal VM370 installations and we started to see significant work by internal datacenter people. One was the "RED" fullscreen editor. Then in the early days when Endicott thought it would develop its own fullscreen editor (XEDIT), I got into a little conflict with Endicott, suggesting that they should ship RED instead; RED was much more mature, had much more function, significantly better performance, etc. Endicott's response was that it was the RED author's fault that RED was so much better than XEDIT ... therefore it should be his responsibility to fix XEDIT.

Later when POK was trying to do ISPF they ran into the problem of the gov. requirement that, for IBM priced software, revenue had to cover original development as well as ongoing development/maintenance ... and there were a couple hundred people in the ISPF group ... nobody would pay that price ... and it took a lot of creative accounting to get the price to a point that people would pay.

some old RED email (in old a.f.c. archived posts)
https://www.garlic.com/~lynn/2006u.html#email781103
https://www.garlic.com/~lynn/2006u.html#email790523
https://www.garlic.com/~lynn/2006u.html#email790606
https://www.garlic.com/~lynn/2006u.html#email800311
https://www.garlic.com/~lynn/2006u.html#email800312
https://www.garlic.com/~lynn/2006u.html#email800429
https://www.garlic.com/~lynn/2006n.html#email810531
https://www.garlic.com/~lynn/2002p.html#email821122

some of above also references that after joining IBM one of my hobbies was enhanced production operating systems for internal datacenters, initially enhanced CP67, then enhanced "CSC/VM" at the science center and later "SJR/VM" at san jose research. also mentioned is having originally done CMSBACK in the late 70s (archive/backup) ... which later morphs into WDSF, ADSM, TSM, etc.

cambridge scientific center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal CP67, CSC/VM, SJR/VM distribution posts
https://www.garlic.com/~lynn/submisc.html#cscvm
cmsback, backup, archive posts
https://www.garlic.com/~lynn/submain.html#backup
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 12 Feb, 2025
Blog: Facebook
IBM AWD did their own cards for the PC/RT (PC/AT 16bit bus), including 4mbit T/R card. Then the communication group was fiercely fighting off client/server and distributed computing and were severely performance kneecapping PS2 microchannel cards. For RS/6000 with microchannel, AWD was told they couldn't do their own cards but had to use PS2 microchannel cards. It turns out that the PS2 16mbit token-ring microchannel card had lower card throughput than the PC/RT 4mbit token-ring card (joke that a PC/RT 4mbit token-ring server had higher throughput than a RS/6000 16mbit token-ring server).

The new Almaden Research bldg was heavily provisioned with CAT wiring, but it was found that RS/6000 $69 10mbit Ethernet cards over CAT wiring had higher throughput than the $800 16mbit token-ring cards. Also, for running TCP/IP, with the difference in cost between 300 Ethernet and 300 Token-ring cards ... it was possible to get a number of high-performance TCP/IP routers with channel interfaces and 16 Ethernet interfaces (also had T1 and T3 options) ... being able to configure as few as 4-5 workstations sharing the same Ethernet LAN. Also, star-wired CAT between the router box and workstation could run full-duplex for nearly the whole distance, minimizing collisions.
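
A rough illustration of the cost arithmetic (only the $69/$800 card prices and the 300-station count come from the text above; the router price is an assumption purely for illustration):

stations = 300
ethernet_card, token_ring_card = 69, 800           # per-station card prices
savings = stations * (token_ring_card - ethernet_card)
print(savings)                                     # 219,300 available for TCP/IP routers

assumed_router_cost = 25_000                       # assumed price of a channel-attached 16-port router
print(savings // assumed_router_cost)              # funds ~8 such routers at this assumed price;
# even 4-5 routers give 64-80 Ethernet ports, i.e. roughly 4-5 stations per Ethernet LAN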

The communication group tried to prevent mainframe TCP/IP support from being released. When that failed they said that since the communication group had corporate strategic ownership of everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbyte/sec throughput using nearly a whole 3090 processor. I then did the changes for RFC1044 and in some tuning tests at Cray Research between a Cray and a 4341, was able to get sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
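
A back-of-envelope sketch of the bytes-moved-per-instruction comparison (every number below -- the 3090 and 4341 MIPS ratings, the sustained throughput and the CPU fraction -- is a placeholder assumption chosen only to show the shape of the arithmetic; the actual measurements behind the ~500x figure aren't given here):

def bytes_per_instruction(throughput_bytes_sec, cpu_mips, cpu_fraction):
    # bytes transferred per CPU instruction spent in the TCP/IP path
    return throughput_bytes_sec / (cpu_mips * 1e6 * cpu_fraction)

# base product: ~44kbytes/sec consuming nearly a whole 3090 processor (MIPS assumed)
base = bytes_per_instruction(44_000, cpu_mips=10.0, cpu_fraction=1.0)

# RFC1044 path: sustained channel-speed transfer on a 4341 using a modest CPU
# fraction (throughput, MIPS and fraction all assumed for illustration)
rfc1044 = bytes_per_instruction(1_000_000, cpu_mips=1.2, cpu_fraction=0.4)

print(f"improvement ~ {rfc1044 / base:.0f}x")   # ~473x with these placeholders, same ballpark as the quoted ~500x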

1988, ACM SIGCOMM had paper analyzing 10mbit ethernet ... with 30 station configuration got 8.5mbits/sec effective LAN throughput. Putting all 30 stations in low-level device driver loop constantly transmitting minimum sized Ethernet packets, effective LAN throughput dropped to 8mbits/sec.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

posts mentioning ACM SIGCOMM Ethernet analysis
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024c.html#58 IBM Mainframe, TCP/IP, Token-ring, Ethernet
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2022f.html#19 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022b.html#84 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2017d.html#29 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2005h.html#12 practical applications for synchronous and asynchronous communication
https://www.garlic.com/~lynn/2002q.html#40 ibm time machine in new york times?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 13 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring

My wife was con'ed into being co-author of the response to a gov. agency request for a large, super-secure, campus-like environment, where she included 3-tier networking. We were then out doing customer executive presentations on TCP/IP, ethernet, 3-tier, high-performance routers and frequently getting misinformation arrows in the back from SNA & Token-ring forces.

I had been con'ed into helping Endicott in the mid-70s with microcode assists. The Endicott manager from the mid-70s was later the executive in charge of SAA, with a large top-floor corner office in Somers ... and we would periodically drop by his office to complain about how bad some of his people were.

trivia: Early 80s, got the HSDT project, T1 and faster computer links, both satellite and terrestrial, and some number of arguments with the communication group (in the 60s, IBM had the 2701 telecommunication controller that supported T1; the move to SNA/VTAM in the 70s seemed to cap links at 56kbits/sec). Was working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

3-tier, middle layer, saa posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
360/370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 13 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#96 IBM Token-Ring

disclaimer: in a prior life, my wife had been co-inventor of a token-passing IBM patent that was used in the S/1 chat-ring.

also, about the time I was trying to help the NSF director, an IBM branch office asks me if I would turn a NCP/VTAM emulator that a "baby bell" had done on Series/1 into a type-1 product; it "owned" all resources and used "cross-domain" protocol with the real mainframe VTAMs, had no single point of failure, significantly more function, much better price/performance, much higher performance, etc. Part of a presentation I gave at the Raleigh '86 SNA ARB meeting:
https://www.garlic.com/~lynn/99.html#67

and part of presentation that "baby bell" did at '86 Common user group meeting
https://www.garlic.com/~lynn/99.html#70

the Branch Office had lots of experience with communication group internal politics and tried their best to wall off anything the communication group might do; what happened next can only be described as truth is stranger than fiction.

trivia: the IMS "hot-standby" group thought it was really great because it could maintain "shadow" sessions with the hot-standby; while IMS "hot-standby" could fall-over very fast ... a large terminal configuration could take over an hour on large 3090 before all the VTAM sessions were back up and operational.

other trivia: Dallas E&S had published a comparison ... however their microchannel 16mbit t-r numbers didn't correspond with the AWD analysis showing the PS2 16mbit t-r microchannel cards had lower throughput than the PC/RT 4mbit t-r cards, and the ethernet data appeared as if it was based on early experimental 3mbit ethernet prior to the listen-before-transmit protocol.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Tom Watson Jr Talks to Employees on 1960's decade of success and the 1970s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Tom Watson Jr Talks to Employees on 1960's decade of success and the 1970s
Date: 13 Feb, 2025
Blog: Facebook
Computer History 1970 IBM Tom Watson Jr Talks to Employees on 1960's decade of success and the 1970s
https://www.youtube.com/watch?v=C9_KDvu2pL8

1972, CEO Learson tries (but fails) to block the bureaucrats, careerists and MBAs from destroying the Watson culture/legacy; 30yrs of management briefings 1958-1988, pg160-163 Learson Briefing and THINK magazine article
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
more detail over the next 20yrs
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

1992, IBM has one of the largest losses in the history of US corporations and was in the process of being re-orged into the 13 "baby blues" in preparation for breaking up the company (take off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

FAA And Simulated IBM Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: FAA And Simulated IBM Mainframe
Date: 14 Feb, 2025
Blog: Facebook
I didn't know Fox and the original FAA group at IBM, but after I left IBM, did a project with the company they formed (after they left IBM)
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514/

also late 90s, did some consulting for Steve Chen, CTO @ Sequent, and for Fundamental (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20241009084843/http://www.funsoft.com/
https://web.archive.org/web/20240911032748/http://www.funsoft.com/index-technical.html

... Sequent was Fundamental's platform of choice (and supposedly used by FAA) ... this was before IBM bought Sequent and shut it down.

a couple recent posts mentioning IBM FAA and sequent
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021i.html#20 FAA Mainframe
https://www.garlic.com/~lynn/2021.html#42 IBM Rusty Bucket

--
virtualization experience starting Jan1968, online at home since Mar1970

Clone 370 Mainframes

From: Lynn Wheeler <lynn@garlic.com>
Subject: Clone 370 Mainframes
Date: 15 Feb, 2025
Blog: Facebook
trivia: Amdahl had won the battle to make ACS 360-compatible ... then it was killed, and Amdahl leaves IBM (folklore is that they were afraid it would advance the state of the art too fast and IBM might lose control of the market) ... ref has some ACS/360 features that show up more than 20yrs later with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html

Then IBM starts the Future System effort, completely different than 370 and going to completely replace it. During this period, internal politics was killing off 370 efforts. Then when FS is finally killed there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick & dirty 3033&3081 efforts (claim is the lack of new IBM 370 products during this period is what gave the clone 370 makers their market foothold; also claimed is that it spawned a huge amount of IBM marketing FUD).
http://www.jfsowa.com/computer/memo125.htm

Supposedly, the final nail in the FS coffin was analysis by the IBM Houston Science Center ... that if 370/195 applications were redone for an FS machine made out of the fastest available technology, it would have the throughput of a 370/145 (about a 30 times slowdown).

Initially the 3081 was going to be multiprocessor only and the 2CPU 3081D aggregate MIPS was less than an Amdahl single processor. They quickly doubled the processor cache sizes for the 3081K, making aggregate MIPS about the same as an Amdahl single processor; however the (MVS) claim was that MVS 2CPU throughput was only 1.2-1.5 times a single processor (because of extra multiprocessor system overhead).

also trivia: Not long after Amdahl formed his company, he gave a talk in a large MIT auditorium and was grilled on what business case he used to get funding (that even if IBM completely walked away from 370, there was enough customer 370 software to keep him in business until the end of the century) and whether it was still a US company (combination of foreign investment and foreign manufacturing).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts mentioning ACS/360, clone 370, amdahl throughput, 3081, MIT auditorium talk
https://www.garlic.com/~lynn/2024g.html#32 What is an N-bit machine?
https://www.garlic.com/~lynn/2023f.html#72 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023e.html#69 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2021e.html#67 Amdahl
https://www.garlic.com/~lynn/2012n.html#32 390 vector instruction set reuse, was 8-bit bytes

--
virtualization experience starting Jan1968, online at home since Mar1970

Clone 370 Mainframes

From: Lynn Wheeler <lynn@garlic.com>
Subject: Clone 370 Mainframes
Date: 15 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#100 Clone 370 Mainframes

... real challenge for IBM Marketing FUD.

Initially, having no single-processor 3081 was also a huge problem for sales; IBM was concerned that the ACP/TPF market (ACP/TPF didn't have multiprocessor support) would shift entirely to Amdahl (the 2CPU 3081K had about the same aggregate processor MIPS as a single-processor Amdahl, but MVS on the 2CPU 3081K, with the same aggregate MIPS, got only .6-.75 times the throughput). Eventually IBM ships the 3083 (a 3081 with one of the processors removed; the 2nd processor was in the middle of the box and just removing it would have made the box top-heavy and prone to tip over, so the box had to be rewired to put the 1st processor in the middle).
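
To make the throughput arithmetic explicit (a minimal sketch; the absolute MIPS value is an assumption, only the "2CPU aggregate about the same as an Amdahl single processor" relationship and the 1.2-1.5 MVS multiprocessor factor come from the discussion above):

amdahl_1cpu_mips = 10.0                      # assumed Amdahl single-processor MIPS
ibm_3081k_cpu_mips = amdahl_1cpu_mips / 2    # 2-CPU 3081K aggregate ~= Amdahl single CPU

mvs_mp_low, mvs_mp_high = 1.2, 1.5           # MVS 2-CPU vs 1-CPU throughput (MP overhead)

# MVS on the 2-CPU 3081K relative to MVS on an Amdahl single processor
rel_low  = ibm_3081k_cpu_mips * mvs_mp_low  / amdahl_1cpu_mips
rel_high = ibm_3081k_cpu_mips * mvs_mp_high / amdahl_1cpu_mips
print(rel_low, rel_high)                     # -> 0.6 0.75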

I had a separate (but similar) problem. When I first joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters (the 1st, and long-time, customer was the online sales&marketing support HONE systems). Then in the decision to add virtual memory to all 370s and do VM370, some of the science center split off, taking over the Boston Programming Center on the 3rd flr (MIT Multics was on the 5th flr, CSC on the 4th, BPC on the 3rd, and the CSC machine room on the 2nd).

In the morph of CP67->VM370 lots of stuff was simplified and/or dropped (including multiprocessor support). In 1974, I start moving a bunch of stuff from CP67 to VM370R2, including the kernel reorg for multiprocessor support (but not the SMP support itself). Somehow AT&T longlines gets a copy of this CSC/VM, adding some number of features and propagating it around AT&T. I then add multiprocessor support to a VM370R3-based CSC/VM, initially for HONE (US HONE had consolidated all their datacenters in silicon valley, extended to the largest single-system image operation, shared-DASD with fall-over and load balancing) so they could add a 2nd processor to each system (trivia: with some sleight of hand, I was getting twice the throughput out of each 2CPU system).

In the early 80s, the IBM AT&T national marketing rep tracks me down wanting help moving all the internal AT&T CSC/VM systems to something that had multiprocessor support (same concern as ACP/TPF all moving to Amdahl).

... random trivia: when facebook 1st moved into silicon valley, it was into a new bldg built next door to the former US HONE consolidated datacenter.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Large IBM Customers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Large IBM Customers
Date: 15 Feb, 2025
Blog: Facebook
As an undergraduate, the univ had gotten a 360/67 for tss/360 replacing 709/1401 and I was hired fulltime responsible for os/360 (i.e. tss/360 never came to production, so the machine ran as a 360/65). Then before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I thought the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room, sort of a precursor to the current cloud megadatacenters (joke that Boeing got 360/65s like other companies got keypunch machines). Lots of politics between the Renton datacenter director and the CFO, who only had a 360/30 up at Boeing field (although they enlarge the machine room for a 360/67 for me to play with when I wasn't doing other stuff).

I was introduced to John Boyd in the early 80s, and used to sponsor his briefings. He told a lot of stories, including being very vocal that the electronics across the trail wouldn't work. Possibly as punishment, he was put in command of "spook base" (about the same time I'm at Boeing); claims are it had the largest air conditioned bldg in that part of the world. Boyd's biography has "spook base" as a $2.5B "windfall" for IBM (about ten times the Renton datacenter)
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

Note: In the 89/90 time-frame, the Commandant of the Marine Corps leverages Boyd for a make-over of the corps, at a time when IBM was desperately in need of a make-over. In 1992, IBM has one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for the breakup of the company (take-off on the "baby bells" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent posts mentioning univ, Boeing CFO, BCS, Renton, John Boyd and "spook base"
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024f.html#40 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024e.html#58 IBM SAA and Somers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023g.html#42 IBM Koolaid
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022e.html#13 VM/370 Going Away
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022b.html#10 Seattle Dataprocessing
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018f.html#35 OT: Postal Service seeks record price hikes to bolster falling revenues

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe dumps and debugging

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe dumps and debugging
Date: 15 Feb, 2025
Blog: Facebook
Early in the REX days (before it was renamed REXX and released to customers), I wanted to show that REX wasn't just another pretty scripting language ... I chose a large assembler application (dump problem analysis) with the objective of working half time over 3 months, re-implementing it with ten times the function and ten times the performance (little bit of sleight of hand with 100 assembler instructions). I finished early, so I started an automated library to search for common failure signatures. I thought it would be released to customers ... but for whatever reason it wasn't (it was in the early days of the OCO-wars and it included the ability to disassemble sections of code and format storage fragments using macro library source), even though it was in use by nearly every PSR and internal datacenter. I eventually get permission to give talks at user group meetings on how I did the implementation and within a couple months, similar, non-IBM versions started appearing.

dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

trivia: as an undergraduate, Univ. was getting a 360/67 for tss/360 to replace 709/1401 ... when the machine arrives, tss/360 wasn't production ... and I was hired fulltime responsible for OS/360 (360/67 running as 360/65). My first sysgen was OS9.5 MFT. Student Fortran ran under a second on 709, but initially over a minute on OS/360. I install HASP and that cuts the time in half. I start redoing STAGE2 SYSGEN for OS11 MFT, being able to run it in the production job stream and carefully reordering statements to place datasets and PDS members for optimized arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student Fortran never got better than 709 until I install Univ. of Waterloo WATFOR.

I had some 3rd shift time at the nearest IBM regional datacenter for MVT 15/16 ... and during the day wandered around the bldg and found an MVT debugging class and asked to sit in. Didn't work out; within 20mins the instructor asked me to leave, as I kept suggesting alternative(/better) ways.

HASP/ASP, JES2/JES3, NJI/NJE posts
https://www.garlic.com/~lynn/submain.html#hasp
some recent posts mentioning Stage2 SYSGEN, Student Fortran, WATFOR
https://www.garlic.com/~lynn/2025.html#26 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#8 IBM OS/360 MFT HASP
https://www.garlic.com/~lynn/2024g.html#98 RSCS/VNET
https://www.garlic.com/~lynn/2024g.html#69 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024g.html#62 Progenitors of OS/360 - BPS, BOS, TOS, DOS (Ways To Say How Old
https://www.garlic.com/~lynn/2024g.html#54 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#29 Computer System Performance Work
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024f.html#15 CSC Virtual Machine Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#98 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024e.html#2 DASD CKD
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe dumps and debugging

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe dumps and debugging
Date: 16 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#103 Mainframe dumps and debugging

more trivia: some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS, others went to the IBM Cambridge Scientific Center on the 4th and did virtual machines, the internal network, invented GML in 1969 (that morphs into ISO standard SGML a decade later and after another decade morphs into HTML at CERN), and other stuff. On the 3rd flr was the Boston Programming Center responsible for CPS. When IBM announced virtual memory for all 370s ... some of the scientific center spun off, taking over the BPC group on the 3rd flr to become the VM370 development group (when they outgrew the 3rd flr, they moved out to Burlington Mall, taking over the empty IBM SBC bldg).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

The last product we did started out as HA/6000 in 1988 (before RS/6000 was announced), for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (oracle, sybase, ingres, informix that had vaxcluster support in same source base with unix). The S/88 product administrator then starts taking us around to their customers and also has me do a section for the corporate continuous availability strategy document ... it gets pulled when both Rochester/AS400 and POK/(high-end mainframe) complain they couldn't meet the requirements.

Early Jan1992 have a meeting with Oracle CEO and IBM/AWD Hester tells Ellison we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

Giant Steps for IBM?

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Giant Steps for IBM?
Date: 17 Feb, 2025
Blog: Facebook
As a sophomore I took an intro to fortran/computers class and at the end of the semester was hired to rewrite 1401 MPIO in 360 assembler for the 360/30. Univ was getting a 360/67 replacing 709/1401 for tss/360 and got a 360/30 temporarily pending the 360/67. Within a year of taking the intro class, the 360/67 arrives and I'm hired fulltime responsible for OS/360 (tss/360 never came to production). Univ. library gets an ONR grant to do an online catalog and uses part of the money for a 2321 datacell ... was also selected as betatest for the original CICS product and CICS debugging and support was added to my tasks (1st problem: CICS didn't come up; turns out it had undocumented hard-coded BDAM options and the library had built files with a different set of BDAM options).
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent unit). I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed and boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the machine room and install a 360/67 for me to play with, when I'm not doing other stuff). Both Boeing and IBM tell the story that on 360 announce day, Boeing walks into the marketing rep's office and presents an order, making the rep the highest paid employee that year (in the days of straight commission). The next year, IBM introduces "quota" ... and at the end of January, Boeing gives him another large order, making his quota for the year. IBM then adjusts his quota ... and he leaves IBM. When I graduate, I join IBM instead of staying with the CFO.

Some of the MIT CTSS/7094 people go to the 5th flr to do Multics. Others go to the IBM science center on the 4th flr to do virtual machines, the internal network, lots of online apps, and invent GML in 1969 (a decade later it morphs into ISO standard SGML, and after another decade morphs into HTML at CERN). Comment about the CP67-based wide-area network (that morphs into the internal network, larger than arpanet/internet from just about the beginning until sometime mid/late 80s, about the time it was forced to convert to SNA/VTAM)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

... technology also used for the corporate sponsored univ BITNET
https://en.wikipedia.org/wiki/BITNET

co-worker responsible for CP67-based wide area network, internal network, bitnet, etc
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

After transferring to SJR on the west coast, I got to wander around silicon valley datacenters, including disk bldg14/engineering and bldg15/product.test across the street. They were running prescheduled, 7x24, stand-alone testing and mentioned that they had tried MVS, but it had a 15min MTBF (in that environment) requiring manual re-ipl. I offer to rewrite the I/O supervisor allowing any amount of on-demand, concurrent testing ... greatly improving productivity. Product test (bldg15) gets the 1st engineering 3033 (outside POK processor engineering) and since testing took only a percent or two of the CPU, we scrounge up a 3830 disk controller and 3330 string for a private online service.

Then bldg15 gets an engineering 4341 and somebody in one of the branches asks me to do benchmarks for a national lab looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). Then in the early 80s, large customers also start ordering hundreds of VM4341s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed departmental computing) ... inside IBM, conference rooms were becoming scarce because so many were converted to VM4341 rooms.

Also, early 80s, I'm introduced to John Boyd and would sponsor his briefings. He had a lot of stories including being vocal that the electronics across the trail wouldn't work. Possibly as punishment he is put in command of "spook base" (about the same time I'm at Boeing). Boyd biography says that "spook base" was $2.5B "windfall" for IBM (ten times Renton).
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

Boyd posts and web URLs:
https://www.garlic.com/~lynn/subboyd.html

89/90, the Commandant of the Marine Corps leverages Boyd for a make-over of the corps (in the quantico library lobby, with tributes to various prominent Marines, there is one for USAF retired Boyd) ... at the time, IBM was desperately in need of a make-over ... and in 1992, IBM has one of the largest losses in the history of US companies and was being re-organized into the 13 "baby blues" in preparation for breaking up the company (take-off on "baby bells" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

Note: 20yrs before (1972), Learson tries (and fails) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy ... more detail
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
Learson refs, pgs160-163
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Back to early 80s, also got HSDT project, T1 and faster computer links ... and some battles with the communication group (60s, IBM 2701 telecommunication controller that supported T1 links, but transition to SNA/VTAM in the 70s and associated issues capped controller links at 56kbits). We were working with the NSF Director and were supposed to get $20M to interconnect the NSF Supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.

news article, former co-worker battles trying to move IBM to TCP/IP:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

CICS/BDAM posts
https://www.garlic.com/~lynn/submain.html#cics
23jun1969 IBM Unbundling announce
https://www.garlic.com/~lynn/submain.html#unbundle
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning 4341, cluster supercomputing, distributed computing
https://www.garlic.com/~lynn/2025.html#38 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#26 Virtual Machine History
https://www.garlic.com/~lynn/2024g.html#81 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#55 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024f.html#95 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#70 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024e.html#46 Netscape
https://www.garlic.com/~lynn/2024e.html#16 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#85 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2024d.html#15 Mid-Range Market
https://www.garlic.com/~lynn/2024c.html#107 architectural goals, Byte Addressability And Beyond
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023g.html#61 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia

--
virtualization experience starting Jan1968, online at home since Mar1970

Giant Steps for IBM?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Giant Steps for IBM?
Date: 17 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?

When transferred to SJR (referenced in previous post upthread), also did some work on original sql/relational System/R (during the 70s started out on vm/145), with Jim Gray and Vera Watson. IMS group criticized it because it required double the disk space (for indexes) and a lot more I/O (following index to get records). Was able to do tech transfer to Endicott for SQL/DS ... while most of the corporation was pre-occupied with "EAGLE" (next great DBMS, follow-on to IMS). When EAGLE finally implodes, there was a request for how fast System/R could be ported to MVS ... which eventually ships as DB2 (originally for decision-support *ONLY*).

1988, Nick Donofrio approves HA/6000, initially for NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix, which had VAXCluster support in the same source base with Unix; I do a distributed lock manager with VAXCluster semantics to ease the ports; IBM Toronto was still a long way from having the portable, simple "Shelby" relational for OS2). Then the S/88 product administrator starts taking us around to their customers and gets me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/as400 and POK/mainframe complain they can't meet the objectives).

Early Jan92, we have meeting with Oracle CEO, IBM/AWD Hester tells Ellison we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later). Part of the issue was mainframe DB2 complaining that if we were allowed to go ahead, we would be at least 5yrs ahead of them.

Note 1993 rs6000/mainframe industry benchmark (number program iterations compared to reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS; 16-system: 2BIPS; 128-system: 16BIPS (arithmetic recapped in the sketch below)


1999 RISC & Intel, Dec2000 mainframe
• single IBM PowerPC 440 hits 1,000MIPS
• single Pentium3 hits 2,054MIPS (twice PowerPC 440)
• z900, 16 processors, 2.5BIPS (156MIPS/proc)
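
How the per-CPU and aggregate figures in the two lists above relate is just multiplication/division of the numbers as given; a small recap sketch (python, purely illustrative):

# 1993 benchmark figures from the list above
print(408 / 8)        # ES/9000-982: 51 MIPS per CPU
print(126 * 16)       # RS6000/990 16-system: ~2016 MIPS, i.e. ~2BIPS
print(126 * 128)      # 128-system: ~16128 MIPS, i.e. ~16BIPS
# Dec2000 z900 figures
print(2500 / 16)      # ~156 MIPS per processor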


IBM AWD did their own cards for PC/RT, including 4mbit token-ring card. For RS/6000 with microchannel, AWD was told it couldn't do its own cards, but had to use PS2 cards. The communication group was fiercely fighting off client/server and distributed computing and had severely performance kneecapped the microchannel cards. Turns out the PS2 microchannel 16mbit TR had lower card throughput than the PC/RT 4mbit TR (joke that PC/RT 4mbit TR server would have greater throughput than RS/6000 16mbit TR server).

ACM SIGCOMM 1988 article had analysis showing a 30-station ethernet network getting sustained aggregate 8.5mbits/sec ... with a device driver low-level loop in all 30 stations constantly sending minimum-sized packets, aggregate throughput only dropped to 8mbits/sec. Also $69 10mbit Ethernet (AMD LANCE and other chips) cards greatly outperform $800 16mbit TR microchannel. The published Dallas E&S comparison appeared to have been theoretical, possibly using data for the 3mbit Ethernet prototype before listen-before-transmit.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

a few posts mentioning ethernet and token-ring
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#19 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022b.html#84 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021b.html#45 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#17 IBM Kneecapping products
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring
https://www.garlic.com/~lynn/2017k.html#18 THE IBM PC THAT BROKE IBM
https://www.garlic.com/~lynn/2015d.html#41 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014m.html#128 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2013m.html#30 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013m.html#18 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013b.html#32 Ethernet at 40: Its daddy reveals its turbulent youth

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370 Virtual Storage

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370 Virtual Storage
Date: 18 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#107 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage

... addenda: major issue was that systems were getting faster much faster than disks were ... so newer systems needed an increasing number of tasks doing an increasing amount of concurrent I/O to keep systems busy and justified. An article I wrote in the early 80s was that since 360 started shipping in the 60s, disk relative system throughput had declined by an order of magnitude (aka systems got 40-50 times faster, but disks only got 3-5 times faster). A disk division executive took exception and assigned the division performance group to refute the claim ... but after a couple weeks they came back and basically said that I had slightly understated the problem. They then respun the analysis for a presentation about configuring disks & filesystems to improve system throughput (16Aug1984, SHARE 63, B874).
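
A quick recap of the arithmetic behind the "order of magnitude" claim, using mid-range values from the ranges above (python, purely illustrative):

# relative disk throughput = disk speedup / system speedup
system_speedup = 45   # systems got 40-50 times faster since 360 first shipped
disk_speedup = 4      # disks only got 3-5 times faster
print(disk_speedup / system_speedup)   # ~0.09, i.e. roughly a 10x relative decline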

trivia: it has been pointed out that current system memory latency (like cache miss) when measured in count of processor cycles is similar to the 60s disk latency when measured in count of 60s processor cycles (aka memory is the new disk). Can see this going back a couple decades when various system hardware designs started including out-of-order execution, branch prediction, speculative execution, hyperthreading, etc .... trying to keep execution unit fed with instructions overlapped with waiting on memory latency.

posts getting to play disk engineer in disk bldgs 14/engineering and 15/product-test
https://www.garlic.com/~lynn/subtopic.html#disk
old archive post citing pieces of B874
https://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please

other recent posts mentioning SHARE 63, B874:
https://www.garlic.com/~lynn/2024f.html#9 Emulating vintage computers
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#109 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#92 IBM DASD 3380
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#6 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022g.html#84 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#0 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
https://www.garlic.com/~lynn/2021j.html#78 IBM 370 and Future System
https://www.garlic.com/~lynn/2021g.html#44 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021e.html#33 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021.html#79 IBM Disk Division
https://www.garlic.com/~lynn/2021.html#59 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2021.html#17 Performance History, 5-10Oct1986, SEAS

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370 Virtual Storage

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370 Virtual Storage
Date: 18 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#107 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2025.html#107 IBM 370 Virtual Storage

Some of the MIT CTSS/7094 went to the 5th flr and did Multics, others went to the IBM Science Center on the 4th and did virtual machine (CP40 on 360/40 with virtual memory hardware mods, morphs into CP67 when 360/67 standard with virtual memory, later after decision to add virtual memory to all 370s, morphs into VM370), internal network (also used for corporate sponsored univ BITNET), invented GML in 1969 (morphs into SGML a decade later and after another decade morphs into HTML at CERN).

shortly after decision to add virtual memory to all 370s there was the Future System effort ... replace all 370s with computers that were totally different from 370 (internal politics during FS included killing 370 efforts and the lack of new 370s during the period is credited with giving the clone 370 makers their market foothold). When FS finally implodes there is mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts.
http://www.jfsowa.com/computer/memo125.htm

The head of POK/Poughkeepsie also convinces corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott manages to save the VM370 mission for the mid-range, but has to recreate a development group from scratch).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Process Control Minicomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Process Control Minicomputers
Date: 18 Feb, 2025
Blog: Facebook
There was a presentation at (mainframe user group) SHARE about Amoco Research (in Tulsa) that went from the 1800 to a few VM370 370/168s in the shortest elapsed time. Folklore was EDX (for Series/1) was done by a physics major summer hire at SJR. Joke was that RPS was a bunch of former Kingston OS/360 people transferred to Boca and trying to re-invent MFT.

CSC had a System/7 and a 2250m4 (i.e. 1130 w/2250; PDP-1 spacewar was ported to the 1130). Early 70s, the Science Center tried hard to get CPD to use the Series/1 Peachtree processor for the 3705 because it was significantly better than the UC processor.

Trivia: Early 80s, got HSDT, T1 (1.5mbits/sec) and faster computer links (and some battles with the communication group; note IBM had the 2701 controller in the 60s that supported T1, but going into the 70s and SNA/VTAM, issues appeared to cap controllers at 56kbit/sec links) and was working with NSF ... recent post
https://www.garlic.com/~lynn/2025.html#105

A consulting SE on a baby bell account and some number of S/1 people talk me into looking at turning a (baby bell) VTAM/NCP emulation done on Series/1 out as an IBM Type-1 product. The S/1 people were experienced dealing with the communication group and tried their best to wall them off ... but what was done to kill the effort could only be described as truth is stranger than fiction. Plan was to port to RIOS (the coming RS/6000) as soon as the current implementation was out the door. Part of the presentation (in archived post) I gave at the fall86 SNA ARB meeting in Raleigh (S/1 VTAM/NCP light-years ahead of 3725/NCP and host VTAM):
https://www.garlic.com/~lynn/99.html#67
... also from "baby bell" presentation at 86 "Common" user group meeting
https://www.garlic.com/~lynn/99.html#70

disclaimer: As undergraduate in the 60s, I was hired fulltime responsible for OS/360 (running on 360/67 as 360/65) and then CSC came out and installed (virtual machine) CP67 (3rd install after CSC itself and MIT Lincoln Labs) and I mostly got to play with it during my weekend dedicated time. CP67 had 1052 and 2741 terminal support with some automagic terminal type identification that could switch the terminal port scanner type (in IBM controller) with SAD CCW. Univ. had some ASCII TTY and I add ASCII terminal support including integrated with the automagic terminal type identification. I then wanted to have a single dial-in phone number ("hunt group") for all terminal types. Didn't quite work; while it was possible to switch the port scanner type, IBM had taken a short cut and hard-wired each line's baud rate.

This kicks off a univ. project to build our own IBM clone controller: build a channel interface board for an Interdata/3 programmed to emulate an IBM controller, with the addition that it could also do automatic line baud rate. This was upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the line interfaces. Interdata (and later Perkin-Elmer) was selling it as a clone controller and four of us are written up as responsible for (some part of) the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

CSC Posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
360/370 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

DOGE Claimed It Saved $8 Billion in One Contract. It Was Actually $8 Million

From: Lynn Wheeler <lynn@garlic.com>
Subject: DOGE Claimed It Saved $8 Billion in One Contract. It Was Actually $8 Million
Date: 19 Feb, 2025
Blog: Facebook
DOGE Claimed It Saved $8 Billion in One Contract. It Was Actually $8 Million. The biggest single line item on the website of Elon Musk's cost-cutting team included a big error.
https://www.nytimes.com/2025/02/18/upshot/doge-contracts-musk-trump.html

note in 90s ... congress passed a law that all agencies had to pass annual financial audits ... DOD so far is the only one that hasn't passed

Federal Financial Accountability
https://www.gao.gov/federal-financial-accountability

DOD's 2024 Audit Shows Progress Toward 2028 Goals, The Defense Department today released the results of its department wide fiscal year 2024 financial audit, the seventh such audit since 2018.
https://www.defense.gov/News/News-Stories/Article/Article/3967135/dods-2024-audit-shows-progress-toward-2028-goals/

late 90s was co-author for financial industry (x9) privacy standard ... had meetings with various gov privacy officers ... irs, cms/hipaa, etc

90s also had the fiscal responsibility act (spending couldn't exceed tax revenue ... on its way to eliminating all federal debt). 2002 congress lets the fiscal responsibility act lapse; 2010 CBO published study that 2003-2009, tax revenue was cut by $6T and spending increased by $6T (for $12T gap compared to fiscal responsibility act, first time taxes were cut to *NOT* pay for two wars) ... sort of confluence of special interests wanting huge tax cuts, military-industrial complex wanting huge spending increase and the Federal Reserve and Too-Big-To-Fail wanting huge debt (TBTF bailout done by Federal Reserve involved tens of trillions in ZIRP funds that were used to buy US Treasuries).

After turn of century, cousin to white house chief of staff (Card) was at the UN dealing with Iraq and given proof that the weapons of mass destruction (from Iran/Iraq war, supplied by the US) had all been deactivated ... supplied the proof to cousin and various other gov. officials ... then was locked up in a military mental institution. Published a book in 2010 (four years before the decommissioned WMDs were declassified)
https://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/

NY Times series from 2014: the decommissioned WMDs (tracing back to US from Iran/Iraq war) had been found early in the invasion, but the information was classified for a decade
http://www.nytimes.com/interactive/2014/10/14/world/middleeast/us-casualties-of-iraq-chemical-weapons.html

note the military-industrial complex had wanted a war so badly that corporate reps were telling former eastern bloc countries that if they voted for the IRAQ2 invasion in the UN, they would get membership in NATO and (directed appropriation) USAID (which can *ONLY* be used for purchase of modern US arms, aka additional congressional gifts to the MIC not in the DOD budget). From the law of unintended consequences, the invaders were told to bypass ammo dumps looking for WMDs; when they got around to going back, over a million metric tons had evaporated (showing up later in IEDs)
https://www.amazon.com/Prophets-War-Lockheed-Military-Industrial-ebook/dp/B0047T86BA/

fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
federal reserve chairman posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman
too big to fail, too big to prosecute, too big to jail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
zirp posts
https://www.garlic.com/~lynn/submisc.html#zirp
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds

--
virtualization experience starting Jan1968, online at home since Mar1970

Computers, Online, And Internet Long Time Ago

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Computers, Online, And Internet Long Time Ago
Date: 20 Feb, 2025
Blog: Facebook
A long time ago, 1966, I took a two credit hr intro to fortran/computers; at the end of the semester I was hired to rewrite 1401 MPIO in 360 assembler for 360/30. The univ was getting a 360/67 for tss/360, replacing 709/1401, and got a 360/30 replacing the 1401 temporarily pending availability of the 360/67. The datacenter shutdown on weekends and I had the place dedicated, although 48hrs w/o sleep made monday classes hard. I was given a pile of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. Within a few weeks, I had a 2000-card program in a card tray. I then did an assembler option that either assembled a stand-alone monitor (loaded with BPS loader) in 30mins or an OS/360 monitor (with DCB and system services macros) that assembled in 60mins (each DCB macro took 5-6mins).

Within a year of taking the intro class, the 360/67 arrives and I'm hired fulltime responsible for OS/360 (tss/360 didn't come to production fruition) ... and still get to keep my weekend dedicated time. My first sysgen was os/360 R9.5. Student fortran jobs used to take under a second on the 709 but were taking over a minute on OS/360; I install HASP and that cuts the time in half. I then start redoing STAGE2 SYSGEN with R11, run in the production system (w/HASP, instead of starter system), carefully ordering datasets and PDS members to optimize arm seek and multi-track search, which cuts another 2/3rds to 12.9secs. Student Fortran never got better than the 709 until I install Univ. of Waterloo Fortran.

CSC had come out to install (virtual machine) CP67 (3rd install after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my dedicated window. I rewrite a lot of the code to improve running OS/360 in a virtual machine. Test job stream ran 322secs on the real machine, but initially 856secs in virtual machine (CP67 CPU 534secs); after a couple months I had CP67 CPU down to 113secs (from 534secs).
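
The overhead arithmetic implicit in those numbers (a python sketch; only the CP67 CPU figures are given above, so the "after" elapsed time is an estimate on the assumption that virtual elapsed time is roughly bare-machine time plus CP67 CPU):

# CP67 virtual-machine overhead, from the figures above
bare_machine = 322        # secs, test job stream on the real machine
virtual_initial = 856     # secs, same stream run in a virtual machine
cp67_cpu_initial = 534    # secs of CP67 CPU (322 + 534 = 856: overhead == CP67 CPU)
cp67_cpu_after = 113      # secs of CP67 CPU after the rewrites
print(virtual_initial - bare_machine)   # 534
print(bare_machine + cp67_cpu_after)    # ~435 secs estimated elapsed (not in the post)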

CP67 had arrived with 1052 & 2741 terminal support with some automagic terminal type identification that would switch the port scanner type for each line. Univ had some number of ASCII terminals and I implement TTY support, integrated with dynamic terminal type. I then want to have a single dial-in number ("hunt group") for all terminal types, didn't quite work since IBM had taken a short cut and hardwired line speed for each port. This kicks off univ. project to do clone controller; build a channel interface board for Interdata/3 programmed to emulate IBM telecommunication controller with addition of port auto baud rate. It then is enhanced with Interdata/4 for the channel interface and clusters of Interdata/3s for the port interfaces. Interdata (later Perkin-Elmer) sells it as IBM clone controller and four of us are written up responsible for (some part of) IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

Before I graduate, I'm hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Renton datacenter is largest in the world, 360/65s arriving faster than they could be installed (boxes constantly staged in hallways around the machine room) ... sort of precursor to cloud megadatacenters. Lots of politics between Renton director and CFO, who only had a 360/30 up at Boeing field for payroll ... although they enlarge the machine room and install a 360/67 for me to play with when I'm not doing other stuff. When I graduate, I join IBM CSC (instead of staying with CFO) and get my first home dialup terminal.

Co-worker at CSC was responsible for the science center wide-area network (morphs into corporate internal network, larger than arpanet/internet from just about the beginning until sometime mid/late 80s) ... comment from one of the inventors of GML at the science center in 1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

Technology also used for the corporate sponsored Univ BITNET:
https://en.wikipedia.org/wiki/BITNET
and EARN in Europe
https://en.wikipedia.org/wiki/European_Academic_and_Research_Network
old email from person charged with setting up EARN
https://www.garlic.com/~lynn/2001h.html#email840320

In early 80s, I was introduced to John Boyd and would sponsor his briefings. He had lots of stories, one was about being very vocal that the electronics across the trail wouldn't work. Possibly as punishment, he was put in command of "spook base" (about the same time I'm at Boeing). One of his biographies says "spook base" was $2.5B "windfall" for IBM (ten times Renton).
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

Also early 80s, I get HSDT project, T1 and faster computer links and lots of arguments with the communication group. In the 60s, IBM had the 2701 that supported T1 links, but with the 70s transition to SNA/VTAM, issues seem to cap controllers at 56kbit links. Also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.

CSC co-worker trying to convert IBM to TCP/IP (rather than SNA/VTAM):
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
passed aug2020
https://en.wikipedia.org/wiki/Edson_Hendricks

Old archived post: at the big 1Jan1983 conversion to internetworking protocol, there were approx. 100 IMPs and 250 hosts, when the internal network was about to pass 1000 hosts; it lists corporate locations around the world that added one or more hosts during 1983
https://www.garlic.com/~lynn/2006k.html#8
we had a CSNET gateway operational fall1982
https://en.wikipedia.org/wiki/CSNET
... some CSNET email about transition to internetworking
https://www.garlic.com/~lynn/2000e.html#email821230
https://www.garlic.com/~lynn/2000e.html#email830202

one of the internal network issues was that corporate required all links be encrypted ... and there were various problems with various (world) gov agencies, especially when links crossed national borders (mid-80s, a major link encryptor company claimed the internal network had more than half of all link encryptors in the world). I also hated what I had to pay for T1 encryptors and really had problems finding encryptors faster than T1.

trivia: decade after GML was invented at science center, it morphs into ISO standard SGML, after another decade it morphs into HTML at CERN. 1st webserver in US was on (virtual machine) mainframe at SLAC (CERN sister institution):
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

clone telecommunication controller
https://www.garlic.com/~lynn/submain.html#360pcm
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
Boyd posts and/or web URLs
https://www.garlic.com/~lynn/subboyd.html

misc recent posts mentioning 709/1401, MPIO, 360/67, Boeing CFO, Renton datacenter
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#99 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security

--
virtualization experience starting Jan1968, online at home since Mar1970

2301 Fixed-Head Drum

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 2301 Fixed-Head Drum
Date: 20 Feb, 2025
Blog: Facebook
2301 "fixed-head drum" was similar to 2303, except it transferred four heads in parallel, four times the transfer rate, 1/4th the number of tracks, each track four times larger. CP67 format had 9 4k pages per pair of tracks, records 1-4 on first of pair, #5 spanned the end of the 1st and the start of the 2nd, followed by 6-9. Original CP67 did FIFO queuing of channel programs, and each page transfer was separate channel program. I redid queuing to ordered arm seek (for disks) and multiple queued page transfers per channel program (for same arm position or in case of 2301, all queued requests) optimized for max transfers per revolution. Original CP67 would do about 70-75 page operations per second with 2301, optimized, I could get close to 270/sec (9 transfers per two revolutions, at 60 revolutions/sec) under heavy load.
https://bitsavers.org/pdf/ibm/2820/A22-6895-2_2820_2301_Component_Descr_Sep69.pdf
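
A back-of-the-envelope recap of that throughput (python, just the arithmetic from the description above; the "one revolution per stand-alone transfer" bound is an assumption to show why FIFO tops out near the cited 70-75/sec):

# 2301 paging throughput, from the figures above
REV_PER_SEC = 60           # drum rotation rate
PAGES_PER_TWO_REVS = 9     # CP67 format: 9 4k pages per pair of tracks

# chained/ordered: all queued requests in one channel program, in rotational order
print(PAGES_PER_TWO_REVS * REV_PER_SEC / 2)   # ~270 page ops/sec

# original FIFO, one page per channel program: if each stand-alone transfer costs
# on the order of a revolution (avg latency + transfer + restart), throughput is
# bounded near the rotation rate -- consistent with the ~70-75/sec cited
print(REV_PER_SEC)                            # ceiling in the 60-75 ops/sec range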

Used 360 selector channels, i.e. the channel was busy from the time a channel program started until it was done. 370 introduced block-multiplexor channels and the set-sector channel command, which would disconnect from the channel while the device was rotating until it reached a specified sector and then attempt to reconnect.

2305 fixed-head disk: 2305-2 had capacity of 11.2mbytes and 1.5mbytes/sec transfer. 2305-1 had capacity of 5.4mbytes and transfer of 3mbytes/sec. 2305-1 had the same number of disk heads but pairs of heads per track offset 180degrees; avg rotational latency was a quarter revolution, and since alternate bytes were on opposite sides of the disk, it used pairs of channel cable sets to transfer pairs of bytes for an aggregate of 3mbytes/sec; 2305-1, pg3:
http://www.bitsavers.org/pdf/ibm/2835/GA26-1589-5_2835_2305_Reference_Oct83.pdf

Selector and block multiplexor had maximum channel distance of 200ft and did end-to-end handshake for each byte transferred. This is different from the later data-streaming block multiplexor channels doing 3mbyte/sec and increasing maximum channel distance to 400ft ... by transferring multiple bytes per end-to-end handshake.

recent post mentioning as undergraduate first redoing cp67 pathlengths for running os/360 in virtual machine, before moving on to rewriting other parts of cp67
https://www.garlic.com/~lynn/2025.html#111
some more (slightly older post)
https://www.garlic.com/~lynn/2024f.html#29

By 1980, there was no follow-on paging device product. For internal datacenters, IBM then contracted with a vendor for what they called "1655", electronic disks that would emulate a 2305 ... but had no rotational delay, similar to SSD but lost data w/o power. One of the issues was that while IBM had fixed-block disks, the company favorite-son batch operating system never supported anything other than CKD DASD ... so for their use it had to simulate an existing CKD 2305 running over 1.5mbyte I/O channels. However for other IBM systems that supported FBA ... 1655s could be configured as fixed-block disks running on 3mbyte/sec I/O "data-streaming" channels.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

some posts mentioning 1655:
https://www.garlic.com/~lynn/2024g.html#70 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024c.html#61 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2023g.html#84 Vintage DASD
https://www.garlic.com/~lynn/2022d.html#46 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2017e.html#36 National Telephone Day
https://www.garlic.com/~lynn/2017d.html#63 Paging subsystems in the era of bigass memory

--
virtualization experience starting Jan1968, online at home since Mar1970

2301 Fixed-Head Drum

From: Lynn Wheeler <lynn@garlic.com>
Subject: 2301 Fixed-Head Drum
Date: 21 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#112 2301 Fixed-Head Drum

trivia: when future system imploded, there was mad rush to get stuff back into the 370 product pipelines including kicking off the quick&dirty 303x and 3081 efforts in parallel
http://www.jfsowa.com/computer/memo125.htm

for 303x, they used 158 engine with just the integrated channel microcode for the 303x channel director. A 3031 was two 158 engines, one with just the 370 microcode and one with just the integrated channel microcode. A 3032 was 168 redone to use the channel director for external channels. A 3033 started out 168 logic remapped to 20% faster chips.

when I transferred from CSC to SJR on the west coast I got to wander around datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. They were doing 7x24, prescheduled stand-alone testing and had mentioned they had recently tried MVS, but it had 15min MTBF in that environment (requiring manual re-ipl). I offered to rewrite the I/O supervisor to be bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. Downside was they started blaming me any time they had a problem and I had to spend increasing amounts of time playing disk engineer and diagnosing their problems.

Then bldg15 gets engineering 3033 (first outside processor engineering in POK) and since disk testing only took a percent or two of CPU, we scrounge a 3830 disk controller and string of 3330 drives for our own private online service (and late 78 an engineering 4341). I do some channel timing tests (latency to fetch the next CCW in a channel program) with 145s, 158s, 168s, 303x and 4341. The 158 integrated-channel microcode (and therefore the 303x channel director) was the slowest of the bunch (158s and all 303x); the 4341 with some tweaking would even do 3mb/sec data-streaming.

I then write (internal IBM) research report of all the work and happened to mention the MVS 15min MTBF .... bringing down the wrath of the MVS organization on my head.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
getting to play disk engineering in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370 Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370 Virtual Memory
Date: 21 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#30 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#31 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#32 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#33 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory

Note: communication group was fiercely fighting off client/server and distributed computing and trying to block release of mainframe TCP/IP. When that was reversed they said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What ships got aggregate 44kbytes/sec using nearly a whole 3090 processor (and the port to MVS was worse). I then implement support for RFC1044 and in some tuning tests at Cray Research between a Cray and a 4341, get sustained 4341 channel media throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).
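
A sketch of how a "500 times" bytes-per-instruction improvement decomposes: it is the product of the throughput ratio and the (inverse) CPU-cost ratio. The concrete numbers below are illustrative placeholders, not figures from the post; only the 44kbytes/sec baseline and the 500x result are given above (python):

# bytes moved per instruction = throughput / instructions spent
# improvement = (new/old throughput) * (old/new instruction cost)
throughput_ratio = 25   # placeholder: e.g. ~1mbyte/sec sustained vs 44kbytes/sec
cpu_ratio = 20          # placeholder: nearly a whole 3090 CPU vs a modest slice of a 4341
print(throughput_ratio * cpu_ratio)   # ~500, the order of magnitude cited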

Early 90s, communication group hires a silicon valley contractor to implement TCP/IP support directly in VTAM and what he demo'ed had TCP running much faster than LU6.2. He was then told that everybody "knows" that LU6.2 runs much faster than a "proper" TCP/IP implementation, and they would only be paying for a proper implementation.

trivia1: late 80s, a univ. did a mainframe VTAM/LU6.2 comparison with common unix tcp; VTAM/LU6.2 had 160k instruction pathlength and 15 buffer copies while a common unix tcp implementation had 5k instruction pathlength and five buffer copies. I was on the XTP TAB (which the communication group tried to block) doing outboard protocol assist: used scatter/gather I/O (akin to mainframe channel-program data chaining) to do I/O directly from user space (eliminating buffer copies) and moved the CRC from header to trailer so that outboard handling could generate/check the CRC as data flowed through. I also provided the dynamic adaptive rate-based pacing that I had done originally in the early 80s for the HSDT project (T1 and faster computer links, both terrestrial and satellite; some amount of conflict with communication group; 60s IBM 2701 controller supported T1, however the 70s move to SNA/VTAM and issues capped controllers at 56kbit/sec links). One of my 1st HSDT long-haul satellite T1 links (1.5mbits/sec) was between IBM Los Gatos lab on the west coast and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
Kingston's E&S lab on the east coast. Clementi had a boat load of Floating Point Systems boxes (that included 40mbyte/sec disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems
I was shooting for multiple, full-duplex T1s (each at sustained @3mbits or @300kbytes/sec) on 4341 class machine.
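
A minimal host-side analogy (not the XTP/outboard implementation, just the two ideas from the paragraph above) of scatter/gather I/O plus a trailer CRC: header, payload and CRC trailer are handed to the kernel as separate buffers in one gathered write, so the payload is never copied into a contiguous frame, and the CRC sits at the end so it can be computed as the data streams past. Uses standard POSIX writev (via Python's os.writev, Unix-only) and zlib.crc32; the frame layout and names are made up for illustration:

import os, socket, struct, zlib

def send_frame(sock, payload: bytes) -> None:
    """Send <header><payload><crc trailer> with one gathered write (no payload copy)."""
    header = struct.pack("!HI", 0x0001, len(payload))   # made-up frame type + length
    crc = zlib.crc32(payload)                           # computed as the data "flows through"
    trailer = struct.pack("!I", crc)                    # CRC carried in trailer, not header
    # scatter/gather: kernel gathers the three buffers directly from user space
    os.writev(sock.fileno(), [header, payload, trailer])

if __name__ == "__main__":
    a, b = socket.socketpair()
    send_frame(a, b"page of data" * 100)
    print(len(b.recv(65536)), "bytes received")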

trivia2: VMCF shipped with VM370R3 in mid70s, then (still 70s) IUCV and SMSG ship. Much earlier, Pisa Scientific Center had done SPM for CP67, which was then ported to (internal) VM370 systems (a superset of the combination of VMCF, IUCV and SMSG). The standard RSCS/VNET supported SPM (but it only worked on internal VM370). Example is the 1980 multi-user client/server spacewar game that relied on SPM (and since RSCS/VNET supported it, clients didn't have to be on the same real machine as the server).

trivia3: mainframe Pascal had started out two people in the Los Gatos VLSI lab doing high-performance VLSI design tools using Metaware's TWS.

some of the MIT CTSS/7094 people went to the 5th flr to do Multics, others went to the IBM science center on the 4th flr and did virtual machines, the internal network, lots of online apps, and invented GML in 1969 (morphs into both SGML & HTML). When I joined the science center and looked at what Multics was doing on the 5th flr, I figured I could do a page-mapped filesystem for CP67/CMS (with lots of segment stuff), later ported to VM370/CMS. Later for VM370R3, a very small subset of the segment stuff (w/o page-mapped filesystem) was picked and released as DCSS ("DisContiguous Shared Segments").

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
page-mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

posts mentioning (CP) "SPM" and/or (CMS) "SPMS" app
https://www.garlic.com/~lynn/2024g.html#97 CMS Computer Games
https://www.garlic.com/~lynn/2024d.html#43 Chat Rooms and Social Media
https://www.garlic.com/~lynn/2024b.html#82 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#112 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#44 Adventure Game
https://www.garlic.com/~lynn/2022f.html#94 Foreign Language
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#2 IBM Games
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#81 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#33 CMSBACK & VMFPLC
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2021h.html#78 IBM Internal network
https://www.garlic.com/~lynn/2021c.html#11 Air Force thinking of a new F-16ish fighter
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use
https://www.garlic.com/~lynn/2020.html#46 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2018e.html#104 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017k.html#37 CMS style XMITMSG for Unix and other platforms
https://www.garlic.com/~lynn/2016h.html#5 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016c.html#1 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016b.html#17 IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?
https://www.garlic.com/~lynn/2015g.html#99 PROFS & GML
https://www.garlic.com/~lynn/2015d.html#9 PROFS
https://www.garlic.com/~lynn/2014k.html#48 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014g.html#93 Costs of core
https://www.garlic.com/~lynn/2014e.html#48 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013j.html#42 1969 networked word processor "Astrotype"
https://www.garlic.com/~lynn/2013j.html#38 1969 networked word processor "Astrotype"
https://www.garlic.com/~lynn/2013i.html#27 RBS Mainframe Meltdown: A year on, the fallout is still coming
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012l.html#36 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012j.html#7 Operating System, what is it?
https://www.garlic.com/~lynn/2012e.html#64 Typeface (font) and city identity
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2012d.html#24 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2011i.html#66 Wasn't instant messaging on IBM's VM/CMS in the early 1980s
https://www.garlic.com/~lynn/2011g.html#56 VAXen on the Internet
https://www.garlic.com/~lynn/2011g.html#49 My first mainframe experience
https://www.garlic.com/~lynn/2011g.html#45 My first mainframe experience
https://www.garlic.com/~lynn/2010m.html#28 CSC History
https://www.garlic.com/~lynn/2010k.html#33 Was VM ever used as an exokernel?
https://www.garlic.com/~lynn/2010h.html#0 What is the protocal for GMT offset in SMTP (e-mail) header time-stamp?
https://www.garlic.com/~lynn/2009n.html#67 Status of Arpanet/Internet in 1976?
https://www.garlic.com/~lynn/2008o.html#73 Addressing Scheme with 64 vs 63 bits
https://www.garlic.com/~lynn/2008g.html#41 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2008g.html#22 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2007o.html#68 CA to IBM TCP Conversion
https://www.garlic.com/~lynn/2007k.html#25 IBM 360 Model 20 Questions
https://www.garlic.com/~lynn/2007f.html#14 more shared segment archeology
https://www.garlic.com/~lynn/2007.html#11 vm/sp1
https://www.garlic.com/~lynn/2006w.html#16 intersection between autolog command and cmsback (more history)
https://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2006t.html#47 To RISC or not to RISC
https://www.garlic.com/~lynn/2006k.html#51 other cp/cms history
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
https://www.garlic.com/~lynn/2001b.html#32 z900 and Virtual Machine Theory

--
virtualization experience starting Jan1968, online at home since Mar1970

2301 Fixed-Head Drum

From: Lynn Wheeler <lynn@garlic.com>
Subject: 2301 Fixed-Head Drum
Date: 22 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#112 2301 Fixed-Head Drum
https://www.garlic.com/~lynn/2025.html#113 2301 Fixed-Head Drum

part of the demise of the disk division: Late 80s, a senior disk engineer got a talk scheduled at the internal annual world-wide communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing data fleeing datacenters to more distributed-computing-friendly platforms, with a drop in disk sales. They had come up with a number of solutions that were constantly being vetoed by the communication group (with their corporate strategic responsibility for everything that crossed datacenter walls, fiercely fighting off client/server and distributed computing). The GPD/Adstar software executive's partial work-around was investing in distributed computing startups that would use IBM disks; he would periodically ask us to drop by his investments to offer any help.

Not just datacenter DASD. I was introduced to John Boyd in early 80s and would sponsor his briefings. In 1989-1990, the commandant of the marine corps leveraged Boyd for a corps make-over (when IBM was desperately in need of make-over). 1992, IBM has one of the largest losses in history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company (take-off on the "baby bells" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
Former AMEX president before becoming IBM CEO, posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
John Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

CMS 3270 Multi-user SPACEWAR Game

From: Lynn Wheeler <lynn@garlic.com>
Subject: CMS 3270 Multi-user SPACEWAR Game
Date: 22 Feb, 2025
Blog: Facebook
There was a CMS 3270-terminal, client/server multi-user spacewar game (PLI CSP/MFF) that appeared in 1980 and used "SPM" for communication (sort of a superset of the combination of VMCF, IUCV, & SMSG; even tho it was never released to customers, the product RSCS/VNET supported it, so players didn't have to be on the same machine as the server; SPM was originally implemented for CP67 at IBM Pisa Scientific Center ... and early on ported to "internal" VM370 systems). Almost immediately robot players appeared (beating humans) and the server was modified to increase power use non-linearly as intervals between client commands dropped below human thresholds, somewhat leveling the playing field (a small sketch of the idea appears below). The internal network (technology also used for the corporate sponsored univ. BITNET) started as the CP67-based scientific center wide-area network ... reference by one of the CSC members (who invented GML in 1969, precursor to SGML & HTML)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc
https://www.garlic.com/~lynn/submain.html#sgml
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

posts mentioning cms 3270 client/server spacewar game
https://www.garlic.com/~lynn/2024g.html#97 CMS Computer Games
https://www.garlic.com/~lynn/2024d.html#43 Chat Rooms and Social Media
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#33 CMSBACK & VMFPLC
https://www.garlic.com/~lynn/2021c.html#11 Air Force thinking of a new F-16ish fighter
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use
https://www.garlic.com/~lynn/2020.html#46 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2018e.html#104 The (broken) economics of OSS
https://www.garlic.com/~lynn/2014e.html#48 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2011g.html#45 My first mainframe experience

--
virtualization experience starting Jan1968, online at home since Mar1970

Consumer and Commercial Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Consumer and Commercial Computers
Date: 23 Feb, 2025
Blog: Facebook
note there is a bigger difference between the various laptop/desktop intel-based systems and the industrial blades used in cloud megadatacenters than there was between 370/115s and 370/195s. A large cloud operation will have a dozen or more megadatacenters around the world, each megadatacenter with half a million or more industrial blades (each blade can have more processor power than a max-configured mainframe) and enormous industrial automation ... staffed with only 70-80 people.

trivia: 1988, IBM branch office asks if I can help LLNL (national lab) get some serial stuff they were working with, standardized ... which quickly becomes Fibre Channel Standard (FCS, including some channel-extender support I did for STL and IMS DBMS group in 1980, initially 1gbit transfer, full-duplex, aggregate 200mbyte/sec). Then POK gets some of their serial stuff shipped in 90s with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec).

Then some POK engineers become involved in FCS and define a heavy-weight protocol that greatly reduces throughput, which eventually ships as FICON. The most recent public benchmark I can find is z196 "PEAK I/O" getting 2M IOPS using 104 FICON (i.e. about 20K IOPS per FICON). About the same time, a FCS was announced for E5-2600 server blades claiming over a million IOPS (two FCS having higher throughput than 104 FICON). Also, IBM pubs recommend that SAPs (system assist processors that do the actual I/O) be held to 70% CPU (approx. 1.5M IOPS).

Also, this was before IBM sells off its E5-2600 server business. A max-configured z196 benchmarked at 50BIPS while an E5-2600 server blade benchmarked at 500BIPS (ten times z196; benchmarks were number of program iterations compared to the number done by the benchmark reference platform). At the time, IBM's E5-2600 base list price was $1815 ($3.63/BIPS) while z196 was $30M ($600,000/BIPS). Then industry press said that the server chip makers were shipping half their product directly to large cloud operations that assemble their own blades at 1/3rd the cost of brand-name servers (aka $605, $1.21/BIPS), and IBM sells off its E5-2600 server business.
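
Back-of-envelope, the ratios quoted above work out as follows; a small sketch (Python, numbers taken straight from the two paragraphs, nothing else assumed):

# per-channel and per-BIPS ratios from the figures above
z196_peak_iops = 2_000_000      # z196 "PEAK I/O" benchmark
z196_ficon     = 104            # FICON used for that benchmark
fcs_iops       = 1_000_000      # claimed for a single FCS on E5-2600 blades
print(z196_peak_iops / z196_ficon)     # ~19,230 IOPS per FICON
print(2 * fcs_iops >= z196_peak_iops)  # two FCS match the whole 104-FICON peak

z196_bips, z196_price = 50, 30_000_000
e5_bips, e5_price     = 500, 1815
print(z196_price / z196_bips)          # $600,000/BIPS
print(e5_price / e5_bips)              # $3.63/BIPS
print(e5_price / 3 / e5_bips)          # ~$1.21/BIPS at 1/3rd brand-name cost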

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Consumer and Commercial Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Consumer and Commercial Computers
Date: 23 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#117 Consumer and Commercial Computers

I was on the financial industry standards body X9 in the 90s ... including secure financial transaction standards. Part of it included meeting the EU privacy standards ... which included cards being as private as cash at point-of-sale ... no name, just account number. However, banks were still subject to the feds' "know your customer" (and money laundering) mandates ... a tight coupling between person and account. Then things got confused over whether banks became responsible for point-of-sale and some other kinds of fraud (or merchants, or consumers).

X9.59 standard
https://www.garlic.com/~lynn/subpubkey.html#privacy
and
https://www.garlic.com/~lynn/x959.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Consumer and Commercial Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Consumer and Commercial Computers
Date: 23 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#117 Consumer and Commercial Computers
https://www.garlic.com/~lynn/2025.html#118 Consumer and Commercial Computers

1988 got project to do HA/6000 product, originally for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when I start doing technical/scientific cluster scale-up with national labs (including LANL, NCAR, and LLNL with its FCS work and supercomputer "LINCS" filesystem) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with Unix (and do a distributed lock manager with VAXCluster API semantics to ease the ports). I also coin the terms disaster survivability and geographic survivability when out marketing HA/CMP. The IBM S/88 product administrator then starts taking us around to their customers and gets me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain that they couldn't meet the objectives).

Early Jan1992, in a meeting with the Oracle CEO, IBM/AWD executive Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later). Possibly contributing was that mainframe DB2 had been complaining that if we were allowed to continue, it would be years ahead of them.

Turns out there has been large overlap between many commercial and scientific cluster scale-up configurations. Places like Amazon are all megadatacenters and mostly commercial (although some of the commercial megadatacenters have been known to provide the ability to use a credit card to automagically spin up a large number of blades for a supercomputer that has benchmarked in the top 100 in the world).

Note 1993 rs6000/mainframe industry benchmark (number program iterations compared to reference platform)

• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS; 16-system: 2BIPS; 128-system: 16BIPS

The executive we had been reporting to went over to head up AIM (apple, ibm, motorola) Somerset, to do the single-chip 801/RISC ... and also adopt the Motorola 88K bus to support large tightly-coupled shared-memory configurations. Also, the new i86/Pentium generation had i86 instructions hardware-translated on the fly to pipelined RISC micro-ops for actual execution (negating the RISC throughput advantage compared to i86). 1999:

• single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000 z900 processor)
• single Pentium3 hits 2,054MIPS (twice PowerPC and 13 times each Dec2000 z900 processor).

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS (1000MIPS/proc), Sep2019
z16, 200 processors, 222BIPS (1111MIPS/proc), Sep2022

Earlier mainframe numbers are from actual industry benchmarks; more recent numbers are inferred from IBM pubs giving performance relative to previous generations.
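
The MIPS/processor figures in the list above are just aggregate BIPS divided by processor count; a quick sketch (Python, figures from the list; a couple of entries in the list round slightly differently):

# (name, processors, aggregate BIPS) from the list above
systems = [
    ("z900", 16, 2.5), ("z990", 32, 9), ("z9", 54, 18), ("z10", 64, 30),
    ("z196", 80, 50), ("EC12", 101, 75), ("z13", 140, 100),
    ("z14", 170, 150), ("z15", 190, 190), ("z16", 200, 222),
]
for name, procs, bips in systems:
    print(f"{name}: {bips * 1000 / procs:.0f} MIPS/processor")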

Trivia: when I transferred to San Jose Research in the 70s, I did some work on the original SQL/relational System/R with Jim Gray and Vera Watson. Then I was involved in tech transfer ("under the radar" while the company was preoccupied with the next, new, great DBMS, "EAGLE") to Endicott for SQL/DS. Then when "EAGLE" finally implodes, there is a request for how fast System/R could be ported to MVS, which eventually ships as DB2 (originally for decision support only).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
original sql/relational, system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

Amazon example

Amazon Datacenters, over 100, covering 31 AWS Regions, 99 Availability zones, over 400 edge locations/local zones, and 245 countries and territories
https://aws.amazon.com/compliance/data-center/data-centers/
AWS Global Infrastructure
https://aws.amazon.com/about-aws/global-infrastructure/
AWS (Server) Config
https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html
Example for configuration types; optimization: compute, memory, accelerated, storage, HPC
https://aws.amazon.com/ec2/instance-types/

--
virtualization experience starting Jan1968, online at home since Mar1970

Microcode and Virtual Machine

From: Lynn Wheeler <lynn@garlic.com>
Subject: Microcode and Virtual Machine
Date: 23 Feb, 2025
Blog: Facebook
Most IBM mainframes ran microcode that emulated 360, 370, etc. The 155/165, 158/168, 3031/3032/3033 used "horizontal microcode" ... each microcode instruction controlled a few concurrent operations. These machines measured their throughput as the average number of processor cycles between completed 370 instructions. The 165 completed a 370 instruction on average every 2.1 cycles, improved for the 168 to an average of every 1.6 cycles, and improved again for the 3033 to an average of one 370 instruction per processor cycle (an individual 370 instruction's elapsed time may take longer because some of its processing overlapped with processing other instructions).

Virtual machine CP40, CP67, and VM370 started out running virtual machines in problem mode; a supervisor instruction would result in a privileged-operation interrupt into the virtual machine kernel, which would check whether the virtual machine was in virtual supervisor mode and, if so, emulate the supervisor instruction according to virtual machine rules. Then there was a modification to 370/158 microcode for VMASSIST: load the VMBLOK pointer into CR6 and enter virtual machine mode; the microcode then, for some supervisor instructions, checked whether the virtual machine was in virtual supervisor mode and, if so, "executed" the supervisor instruction according to virtual machine rules.

Then VMASSIST was made available for 168 at (substantial) extra cost.
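
A minimal sketch of the trap-and-emulate flow described above, plus the VMASSIST variation (Python, purely illustrative; the instruction names in ASSISTED_OPS are just examples, and real VMASSIST was 158/168 microcode keyed off the VMBLOK pointer in CR6, not code like this):

ASSISTED_OPS = {"LPSW", "SSM"}   # hypothetical subset handled by the assist

def privileged_op(vm, op, assist_enabled=False):
    # with the assist on, some supervisor instructions are executed per
    # virtual machine rules without ever interrupting into the VM kernel
    if assist_enabled and op in ASSISTED_OPS and vm["virtual_supervisor"]:
        return f"assist executes {op} per virtual machine rules"
    # otherwise: privileged-operation interrupt into the virtual machine kernel
    if vm["virtual_supervisor"]:
        return f"kernel emulates {op} per virtual machine rules"
    return f"kernel reflects a privileged-op exception for {op} to the guest"

guest = {"virtual_supervisor": True}
print(privileged_op(guest, "LPSW"))                      # kernel emulates
print(privileged_op(guest, "LPSW", assist_enabled=True)) # assist handles it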

When the Future System effort imploded there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm

The 158 engine's integrated-channel microcode was used for the 303x (external) channel director. A 3031 was two 158 engines, one with just the integrated-channel microcode and a 2nd with just the 370 microcode. A 3032 was a 168 redone to use the 303x channel director for external channels. A 3033 started out as 168 logic remapped to 20% faster chips.

Also with the demise of FS, the head of POK convinced corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott managed to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch). Note some of the POK people created VMTOOL, a virtual machine XA/370 environment for 3081 MVS/XA testing (never intended for production and/or customer use). VMTOOL started off needing a special XA/370 instruction (SIE) just to enter virtual machine mode ... and then added being able to do most supervisor instructions according to virtual machine rules (akin to VMASSIST). One of the issues was this required more new microcode than the 3081 had space available ... so it had to "page" microcode when entering/exiting virtual machine mode (further limiting its usefulness and performance to testing).

Then after MVS/XA ships, customers weren't converting as planned. Amdahl was having better success, being able to run both MVS & MVS/XA concurrently with their purely microcoded hypervisor (aka "multiple domain" support) ... similar to 3090 LPAR/PRSM nearly a decade later. It was then decided to release VMTOOL, 1st as VM/MA (migration aid) and then VM/SF (system facility), and then there was a request for a large couple-hundred-person group to bring VMTOOL up to the feature, function, and performance of VM/370 (modulo the 3081 SIE microcode still needing to be "paged").

Also with FS imploding, got talked into helping with:

1) a 370 16-processor tightly-coupled shared-memory machine, and we con the 3033 processor engineers into helping in their spare time. Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite batch system ("MVS") had "effective" 16-processor support (IBM docs of the period said "MVS" two-processor support only had 1.2-1.5 times the throughput of a single processor, and the multiprocessor overhead increased as the # of processors increased; POK doesn't ship a 16-processor machine until after the turn of the century). Then the head of POK invites some of us to never visit POK again and directs the 3033 processor engineers: heads down and no distractions.

trivia: one of the reasons I got sucked into the 16-processor effort was that, after joining IBM, one of my hobbies was production operating systems for internal datacenters, and CP67 for the online sales&marketing support HONE was the first customer. Then, in the decisions to add virtual memory to all 370s and do the VM370 product, some of the science center people split off and take over the IBM Boston Programming Center on the 3rd flr. In the morph of CP67->VM370 lots of stuff got simplified or dropped (including multiprocessor support). I then start adding stuff back in for my internal CSC/VM ... and initially, for US HONE (all the US HONE datacenters had been consolidated in silicon valley), add multiprocessor back into a VM370R3-based system so US HONE can add a 2nd processor to each system (and I was getting twice, or sometimes slightly better than twice, the throughput with the 2CPU systems). Note when facebook 1st moved into silicon valley, it was into a new bldg built next to the former US HONE datacenter.

more trivia: I kept in touch with the 3033 processor engineers, and once the 3033 was out the door, they start on trout/3090 ... old archived email from the trout/3090 work about their SIE implementation being done for performance & production use ("811" refers to the original XA/370 documents being published with a Nov78 date).
https://www.garlic.com/~lynn/2006j.html#email810630

2) the Endicott 138/148 (and later 4300s) VM ECPS microcode assist ... rewriting parts of VM370 directly in native microcode. These engines averaged ten native microcode instructions for every emulated 370 instruction (akin to the Hercules implementation), so moving code to native microcode turned out to average a ten-times speed-up. I was told to find the 6kbytes of highest-executed VM370 instruction paths for recoding in native "microcode" (for the 10 times speedup). From the initial analysis, 6kbytes accounted for 79.55% of VM370 execution time (old archived posting):
https://www.garlic.com/~lynn/94.html#21
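
A minimal sketch of that kind of analysis (Python; the path names, sizes, and percentages below are made up for illustration, only the 6kbyte budget comes from the text): sort kernel paths by measured CPU time and accumulate the hottest ones that fit the microcode budget.

# hypothetical profile: (kernel path, bytes of code, % of total VM370 CPU time)
profile = [
    ("DISPATCH", 1200, 22.0), ("FREE/FRET", 1000, 18.5),
    ("UNTRANS", 900, 14.0), ("CCW-translate", 1400, 13.0),
    ("PAGE-fault", 1100, 8.0), ("VIRTUAL-IO", 800, 4.0),
    ("everything else", 40000, 20.5),
]
BUDGET = 6 * 1024            # 6kbytes of available microcode space
picked, size, pct = [], 0, 0.0
for name, nbytes, cpu_pct in sorted(profile, key=lambda p: -p[2]):
    if size + nbytes > BUDGET:
        continue             # too big to fit in what's left of the budget
    picked.append(name)
    size += nbytes
    pct += cpu_pct
print(picked, size, f"{pct:.2f}% of CPU time")  # candidates for native microcode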

trivia: Endicott then wants to pre-install VM370 on every 138 and 148 shipped, but with POK actively trying to kill (all) VM370, corporate vetoes the idea. POK executives were also going around internal datacenters trying to strong-arm them (including HONE) to move off all VM370s (to MVS).

other trivia: systems were getting faster much faster than disks were ... so to keep bigger & bigger systems busy and justified, they needed to be running larger numbers of concurrent (mostly disk I/O bound) operations. Early last decade I was asked to track down the decision to add virtual memory to all 370s. Turns out MVT storage management was so bad that regions were being specified four times larger than used. As a result, a normal 1mbyte 370/165 only ran four regions (insufficient to keep the system busy and justified). Remapping MVT into a 16mbyte virtual memory allowed increasing the number of concurrent regions by a factor of four with little or no paging (capped at 15 because of the 4-bit storage protect key) ... similar to running MVT in a CP67 16mbyte virtual machine. Ludlow was doing the initial implementation on a 360/67; the biggest software effort was SVC0/EXCP, which was now getting channel programs built with virtual addresses and needed to make copies of the channel programs, replacing the virtual addresses with real ones (the same thing that CP67 has to do, and he borrows CP67's CCWTRANS for crafting into EXCP).
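
A minimal sketch of the CCWTRANS-style job described above (Python, illustrative only, nothing like the real CP67 or OS/VS2 code; it also ignores CCW data areas that cross page boundaries): copy the channel program, swap virtual data addresses for real ones, and pin the pages for the duration of the I/O.

PAGE = 4096

def translate_ccws(ccws, page_table, pin_page):
    # ccws: list of (opcode, virtual_address, length)
    # page_table: virtual page number -> real page number
    # pin_page: callback that fixes a real page in storage for the I/O
    shadow = []
    for op, vaddr, length in ccws:
        vpn, offset = divmod(vaddr, PAGE)
        real_page = page_table[vpn]      # real code would fault the page in
        pin_page(real_page)              # keep it resident while the channel runs
        shadow.append((op, real_page * PAGE + offset, length))
    return shadow                        # the channel is given the shadow copy

pinned = []
print(translate_ccws([("READ", 0x10000 + 256, 80)], {0x10: 0x2A}, pinned.append))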

To get past the (storage protect) 15-region limit, VS2/MVS moves to giving each region its own 16mbyte virtual address space. However, OS/360 history is heavily pointer-passing APIs, and an 8mbyte image of the MVS kernel is mapped into every virtual address space (leaving 8mbytes). Then subsystems are given their own separate 16mbyte virtual address spaces and, to make it all work, a one-mbyte Common Segment Area is mapped into every 16mbyte virtual address space ... leaving 7mbytes. However, CSA requirements were increasing as systems got larger, and by 3033, the CSA (renamed Common System Area) was pushing 5-6mbytes, leaving 2-3mbytes and threatening to increase to 8mbytes (leaving *ZERO*). The problem was placing enormous pressure on getting MVS/XA, XA/370, 31-bit, etc ... out the door (before the whole paradigm imploded).
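
The squeeze described above is simple arithmetic; a quick sketch (Python, sizes straight from the paragraph):

addr_space = 16   # mbytes of 24-bit addressing
kernel     = 8    # MVS kernel image mapped into every address space
for csa in (1, 5, 6, 8):                 # Common Segment/System Area growth
    private = addr_space - kernel - csa  # what's left for the application
    print(f"CSA {csa}mbyte -> {private}mbyte left for the application")
# 1 -> 7mbytes, 5-6 -> 2-3mbytes, 8 -> *ZERO*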

In the early 80s, I write a tome that the relative system throughput of disks had declined by an order of magnitude since 360 was announced (system throughput had increased 40-50 times, but disk throughput only 3-5 times). A GPD(/Adstar) executive took exception and assigned the division performance group to refute the claim; but after a couple of weeks they basically came back and said I had slightly understated the problem. They then respin the analysis into a SHARE presentation (16Aug1984, SHARE 63, B874) on how to configure disks (& filesystems) for improved system throughput.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal CP67L, CP67H, CP67I, CP67SJ, CSC/VM and/or SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
getting to play disk engineer in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

some posts mentioning ECPS, SIE, LPAR/PRSM, and/or Amdahl Hypervisor
https://www.garlic.com/~lynn/2025.html#19 Virtual Machine History
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024.html#63 VM Microcode Assist
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017b.html#37 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2016b.html#78 Microcode
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2010m.html#74 z millicode: where does it reside?
https://www.garlic.com/~lynn/2007g.html#72 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006n.html#44 Any resources on VLIW?

--
virtualization experience starting Jan1968, online at home since Mar1970

Clone 370 System Makers

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Clone 370 System Makers
Date: 24 Feb, 2025
Blog: Facebook
Early 70s, IBM has the "Future System" project (completely different from 370 and going to completely replace it; internal politics during FS was killing off 370 efforts). The lack of new 370s during the FS period is credited with giving the clone 370 makers (including Amdahl) their market foothold (also, IBM marketing had to highly develop their FUD marketing skills).

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, and the online sales&marketing support HONE systems were long-time customers. I also got to continue to attend SHARE and visit lots of customer sites. The director of one of the large commercial financial datacenters on the east coast liked me to drop by and talk technology. At one point the IBM branch manager horribly offended the customer and in retaliation they ordered an Amdahl system (a single Amdahl in a large sea of blue). Up until then, Amdahl had been selling into the technical/scientific/univ market and this would be the first commercial "true blue" install. I was asked to go onsite for 6-12 months (to help obscure why the Amdahl was being ordered). I talk it over with the customer and then decline IBM's offer. I was then told the branch manager was a close sailing buddy of the IBM CEO and if I didn't do it, I could say goodbye to career, promotions, and raises.

When FS implodes there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081.
http://www.jfsowa.com/computer/memo125.htm

The 3081 was going to be multiprocessor only, and the Amdahl single-processor machine had higher MIPS than the aggregate MIPS of the two-processor 3081D. IBM fairly quickly doubles the processor cache sizes and comes out with the two-processor 3081K, about the same aggregate MIPS as the Amdahl single processor. However, MVS throughput was still much less: IBM documents that MVS two-processor support only has 1.2-1.5 times the throughput of a (same model) single processor (MVS multiprocessor overhead), even with the two-processor 3081K's aggregate MIPS the same as the Amdahl single processor's.
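
A quick sketch of why matching aggregate MIPS still left MVS behind the Amdahl single processor (Python, using only the ratios quoted above and assuming MVS single-processor throughput scales with MIPS):

amdahl_1cpu = 1.0                      # take Amdahl single-processor MIPS as 1.0
k3081_proc  = amdahl_1cpu / 2          # each 3081K processor is about half that
for mp_factor in (1.2, 1.5):           # IBM-documented MVS two-processor factor
    mvs_3081k = k3081_proc * mp_factor # effective MVS throughput on 2-CPU 3081K
    print(f"MVS on 3081K ~{mvs_3081k:.2f} vs Amdahl single processor {amdahl_1cpu:.1f}")
# i.e. roughly 0.60-0.75 of the Amdahl, even at equal aggregate MIPS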

IBM was also concerned that the whole ACP/TPF market might move to Amdahl (because the ACP/TPF system didn't have multiprocessor support and only ran on a single processor). First IBM did some unnatural things to VM370 to improve ACP/TPF throughput running in a single-processor virtual machine on 3081 ... but it degraded the throughput of nearly every other VM370 multiprocessor customer. Eventually IBM ships the 3083, a 3081 with one of the processors removed.

The aggregate MVS multiprocessor throughput (only 1.2-1.5 times a single processor) was also an issue in another case. After the FS crash I was asked to help with a 16-processor 370, and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite-son operating system ("MVS") had (effective) 16-processor support (POK doesn't ship a 16-processor machine until after the turn of the century) and he invites some of us to never visit POK again ... and the 3033 processor engineers are directed heads down and no distractions.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal cp67l, cp67h, cp67i, cp67sjr, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Clone 370 System Makers

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Clone 370 System Makers
Date: 25 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#121 Clone 370 System Makers

I transferred from the Cambridge Scientific Center to IBM San Jose Research on the west coast. I was still allowed to play with operating systems, because lots of internal IBM datacenters ran my CSC/VM systems, which then became SJR/VM production systems.

I was allowed to wander around IBM and non-IBM datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. They were doing 7x24, prescheduled, stand-alone testing and had mentioned they had recently tried MVS, but it had a 15min MTBF in that environment (requiring manual re-ipl). I offered to rewrite the I/O supervisor to be bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. The downside was they started blaming me any time they had a problem, and I had to spend increasing amounts of time playing disk engineer and diagnosing their problems. I then write an (internal) Research Report on the work and happened to mention the MVS 15min MTBF ... bringing the wrath of the MVS group down on my head.

Also, in the late 70s and early 80s I was blamed for online computer conferencing on the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s ... about the time the internal network was forced to convert to SNA/VTAM ... it had started out as the science center wide-area network). It really took off in the spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem (only about 300 actively participated but claims were that 25,000 were reading). Folklore is that when the corporate executive committee was told, 5of6 wanted to fire me (still being told no career, promotions, raises). Was told that with the corporate executive committee wanting to fire me, I could never be made an IBM Fellow, but if I kept my head down ... they could divert funding my way for doing projects. From IBM Jargon (copy here)
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf
Tandem Memos -n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... snip ...

I then got HSDT project funding, T1 and faster computer links (both terrestrial and satellite), with lots of conflict with the communication group (note IBM had the 2701 controller in the 60s that supported T1, but the move to SNA/VTAM in the 70s and associated issues would cap telecommunication controllers at 56kbits/sec). Was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputing centers; congress cuts the budget, some other things happened, and finally an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (lots of reasons inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

Some more about "Tandem Memos" in this post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
note 1972, Learson tries (but fails) to block bureaucrats, careerists and MBAs from destroying Watson culture/legacy. refs pg160-163
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

20 yrs later, IBM has one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company (take-off on "baby bell" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal cp67l, cp67h, cp67i, cp67sjr, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

PowerPoint snakes

From: Lynn Wheeler <lynn@garlic.com>
Subject: PowerPoint snakes
Date: 25 Feb, 2025
Blog: Facebook
Ex-Intel exec, Raja Koduri, blames the bureaucratic 'PowerPoint snakes' within the company for its current issues: 'These processes multiply and coil around engineers'
https://www.pcgamer.com/hardware/ex-intel-exec-raja-koduri-blames-the-bureaucratic-powerpoint-snakes-within-the-company-for-its-current-issues-these-processes-multiply-and-coil-around-engineers/

some other powerpoint

Universities should ban PowerPoint -- It makes students stupid and professors boring
https://www.businessinsider.com/universities-should-ban-powerpoint-it-makes-students-stupid-and-professors-boring-2015-6

this from 2006 (death by powerpoint)
http://armsandinfluence.typepad.com/armsandinfluence/2006/08/death_by_powerp.html
over the last decade or two there have been ongoing discussions about how bad powerpoint is in the military ... especially its prepared/static nature (including flowcharting static enemy encounters)
https://web.archive.org/web/20120907035856/http://smallwarsjournal.com/jrnl/art/how-powerpoint-stifles-understanding-creativity-and-innovation-within-your-organization
http://smallwarsjournal.com/blog/dilbert-leads-the-coin-fight
http://smallwarsjournal.com/blog/wired-magazine-microsoft-helps-the-army-avoid-death-by-powerpoint

in the last few years there have been periodic conference directives banning powerpoint presentations
https://www.forbes.com/sites/work-in-progress/2014/11/14/six-ways-to-avoid-death-by-powerpoint/#542bd52a64d4

Call Sign Chaos
https://www.amazon.com/Call-Sign-Chaos-Learning-Lead-ebook/dp/B07SBRFVNH/
pg216/loc3041-43:
PowerPoint is the scourge of critical thinking. It encourages fragmented logic by the briefer and passivity in the listener. Only a verbal narrative that logically connects a succinct problem statement using rational thinking can develop sound solutions. PowerPoint is excellent when displaying data; but it makes us stupid when applied to critical thinking.
... snip ...

and (before powerpoint) Learson trying (and failing) to block bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
refs pg160-163
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some past refs:
https://www.garlic.com/~lynn/2022f.html#36 Death By Powerpoint
https://www.garlic.com/~lynn/2021d.html#41 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021.html#15 Death by Powerpoint

--
virtualization experience starting Jan1968, online at home since Mar1970

The joy of FORTRAN

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: The joy of FORTRAN
Newsgroups: alt.folklore.computers, comp.os.linux.misc
Date: Tue, 25 Feb 2025 16:59:08 -1000
John Levine <johnl@taugh.com> writes:
The VAX was developed over a decade later, when they put thousands of transistors on each logic chip and thousands of bits in each memory chip. It suffered from a severe case of second system syndrome, where they started from the elegant PDP-11 and added every feature a programmer could ever possibly want, with less than fabulous performance to match. There's a reason that the VAX inspired RISC systems.

I've claimed that John Cocke did RISC/801
https://en.wikipedia.org/wiki/John_Cocke_(computer_scientist)
https://www.ibm.com/history/john-cocke
The effort to develop RISC began in 1974, when IBM tasked Cocke and a team of researchers with creating an exchange controller to automate telephone switching -- phone calls were then largely handled by human operators who plugged cords into switchboards. Although IBM canceled the controller project in 1975, the team's efforts morphed into the creation of the first prototype computer that used RISC. The new system's power and efficiency became foundational to computer evolution up to the present day.

as counter to the Future System complexity, future system
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

folklore is that the complexity of "Future System" was a countermeasure to the clone/compatible competition. FS was completely different from 370 and was going to completely replace it, and internal politics during FS was killing off 370 efforts (the lack of new 370s during the period is credited with giving clone 370 makers, including Amdahl, their market foothold).

All during FS, I continued to work on 370 and would periodically ridicule FS, including an analogy with a long-playing cult film down in central sq (lots of blue-sky stuff going on with little idea how it might be implemented). One of the last nails in the FS coffin was analysis by the IBM Houston Scientific Center that if 370/195 applications were redone for an FS machine made out of the fastest technology available, it would have the throughput of a 370/145 (about a 30 times slowdown).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

The joy of FORTRAN

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: The joy of FORTRAN
Newsgroups: alt.folklore.computers, comp.os.linux.misc
Date: Tue, 25 Feb 2025 17:48:55 -1000
Lynn Wheeler <lynn@garlic.com> writes:
All during FS, I continued to work on 370 and would periodically ridicule FS, including analogy with long playing cult film down in central sq (lots of blue sky stuff going on with little idea on how it might be implemented). One of the last nails in the FS coffin was analysis by the IBM Houston Scientific Center that if 370/195 applications were redone for FS machine made out of the fastest technology available, it would have throughput of 370/145 (about 30 times slowdown).

re:
https://www.garlic.com/~lynn/2025.html#124 The joy of FORTRAN

late 70s, there was a plan to replace the large myriad of different internal CISC microprocessors, architectures, and programming (controllers, low & mid-range 370s, the as400 follow-on to s/38, etc) with a common RISC and common software programming. For various reasons all these floundered and things returned to doing custom CISC ... and some number of the engineers left for other vendors.

RISC ROMP was going to be for the DISPLAYWRITER follow-on running CP.r and PL.8 ... but it got canceled (a lot of that market was moving to PCs). It was then decided to pivot to the unix workstation market and they got the company that had done the AT&T UNIX port to the IBM/PC for PC/IX to do "AIX" for the PC/RT (the follow-on was multi-chip RIOS for RS/6000).

IBM 801
https://en.wikipedia.org/wiki/IBM_801
The First RISC: John Cocke and the IBM 801
https://news.ycombinator.com/item?id=33055361

Computer Chronicles Revisited 65 -- The IBM RT PC
https://www.smoliva.blog/post/computer-chronicles-revisited-065-ibm-rt-pc/
... note in above: Birnbaum had been head of (IBM Research) Yorktown Computer Science and some people that had been working on 801 in the late 70s left for HP Labs ... and I got email asking if I might be joining them.

I was in San Jose Research, but also had offices&labs out at the Los Gatos VLSI lab that was doing "Blue Iliad" (1st 32bit 801/risc chip ... large, hot chip, never came to fruition).

Before RS/6000 announce/ship, IBM executive Nick Donofrio approved the HA/6000 product, originally for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs (LANL, NCAR, LLNL, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with unix (I do a distributed lock manager that supports VAXCluster API semantics to simplify the ports). Then the IBM S/88 product administrator starts taking us around to their customers and also gets me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain that they can't meet the requirements).

Early Jan1992, have a meeting with the Oracle CEO; IBM/AWD executive Hester tells Ellison that we would have 16-system clusters mid92 and 128-system clusters ye92. Then late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

The Paging Game

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Paging Game
Date: 26 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#50 The Paging Game
https://www.garlic.com/~lynn/2025.html#51 The Paging Game

After transferring from the science center to San Jose Research on the west coast, got to wander around lots of (IBM & non-IBM) datacenters in silicon valley, including TYMSHARE ... also saw them at the monthly BAYBUNCH meetings hosted at Stanford SLAC. TYMSHARE started offering their CMS-based online computer conferencing free to the (mainframe user group) SHARE in Aug1976 as VMSHARE, archives:
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE to get a monthly tape dump/copy of the VMSHARE (and later PCSHARE) files for putting up on the internal network and systems (the biggest problem was lawyers concerned that internal employees would be contaminated by exposure to unfiltered customer information). On one such visit, they demonstrated ADVENTURE, which they had found on the Stanford SAIL (stanford ai lab) PDP10 and ported to VM370/CMS. I got a copy for putting up on internal systems. I would send source to anybody that proved they got all points. Within a short time, versions with more points as well as PLI versions appeared
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure

commercial (virtual machine based) online services
https://www.garlic.com/~lynn/submain.html#online

recent posts mentioning TYMSHARE, VMSHARE, and ADVENTURE
https://www.garlic.com/~lynn/2024g.html#97 CMS Computer Games
https://www.garlic.com/~lynn/2024g.html#45 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#125 Adventure Game
https://www.garlic.com/~lynn/2024f.html#11 TYMSHARE, Engelbart, Ann Hardy
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#139 RPG Game Master's Guide
https://www.garlic.com/~lynn/2024c.html#120 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#43 TYMSHARE, VMSHARE, ADVENTURE
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games

--
virtualization experience starting Jan1968, online at home since Mar1970

3270 Controllers and Terminals

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3270 Controllers and Terminals
Date: 26 Feb, 2025
Blog: Facebook
1980, STL (since renamed SVL) was bursting at the seams and moving 300 people (& 3270s) from the IMS DBMS group to an offsite bldg (with dataprocessing back to the STL datacenter). They had tried "remote 3270" but found the human factors unacceptable. I get con'ed into doing channel-extender support, placing 3270 channel-attached controllers at the offsite bldg, with no perceived difference in human factors. A side-effect was that STL had spread 3270 controllers across the mainframe channels with the 3830 disk controllers; placing the channel-extender boxes directly on the real IBM channels reduced channel-busy (for the same amount of 3270 terminal traffic) and improved system throughput by 10-15%. There was consideration of moving all 3270 controllers to channel-extenders to improve the throughput of all their 370 systems (eliminating the high channel busy from the 3270 channel-attached controllers).

The 3272 (& 3277) had .086sec hardware response. Then the 3274/3278 was introduced with lots of the 3278 hardware moved back to the 3274 controller, cutting 3278 manufacturing costs but significantly driving up coax protocol chatter ... increasing hardware response to .3sec-.5sec depending on the amount of data (in the period, studies were showing that .25sec response improved productivity). Letters to the 3278 product administrator complaining about interactive computing got a response that the 3278 wasn't intended for interactive computing but data entry (sort of an electronic keypunch).

With the 3277's .086sec hardware response, hitting .25sec overall required .164sec system response. The joke about the 3278 was that a time machine was required to transmit responses into the past (in order to get .25sec response). I had several internal SJR/VM systems with .11sec system response.
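
A quick sketch of that quarter-second arithmetic (Python, times from the paragraphs above):

target = 0.25                    # studies: quarter-second response improved productivity
for name, hw in (("3272/3277", 0.086), ("3274/3278 best", 0.3), ("3274/3278 worst", 0.5)):
    budget = target - hw         # system response allowed and still hit the target
    print(f"{name}: system response budget {budget:+.3f} sec")
# 3277 leaves .164sec (and .11sec SJR/VM system response fit inside that);
# the 3278 budgets go negative, hence the time-machine joke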

The 3270 did have a half-duplex problem: if typing away and you hit a key just as the screen was being updated, the keyboard would lock up and you would have to stop and hit reset before continuing. Yorktown had FIFO boxes made for the 3277: unplug the keyboard from the screen, plug in the FIFO box, and plug the keyboard into the FIFO box (it would hold chars whenever the screen was being written, eliminating the lockup problem).

Later, IBM/PC 3277 emulator cards had 4-5 times the upload/download throughput of 3278 emulator cards.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

past posts mentioning 3272/3277 at .086sec and 3274/3278 at .3sec-.5sec
https://www.garlic.com/~lynn/2025.html#75 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024f.html#12 3270 Terminals
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023f.html#78 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023e.html#0 3270
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2022h.html#96 IBM 3270
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2018d.html#32 Walt Doherty - RIP
https://www.garlic.com/~lynn/2017e.html#26 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2014h.html#106 TSO Test does not support 65-bit debugging?
https://www.garlic.com/~lynn/2014g.html#23 Three Reasons the Mainframe is in Trouble

--
virtualization experience starting Jan1968, online at home since Mar1970

The Paging Game

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Paging Game
Date: 26 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#50 The Paging Game
https://www.garlic.com/~lynn/2025.html#51 The Paging Game
https://www.garlic.com/~lynn/2025.html#126 The Paging Game

Needed a two-processor 360/67 to get 2mbytes (2mbytes would be 512 pageable pages before fixed storage requirements). When I graduated and joined the IBM science center, I updated their CP/67 768kbyte 360/67 (104 pageable pages after fixed kernel/storage requirements, 192-88=104) with a bunch of stuff I had done as an undergraduate, getting 75-80 users (mixed interactive, compute, i/o).

The joke was that TSS/360 claimed wonderful 360/67 multiprocessor support, a two-processor system getting 3-4 times the throughput of a single processor. However, TSS/360 had a hugely bloated kernel, so a 2CPU, 2mbyte (512 page) machine had 3-4 times the available paging space of a 1CPU, 1mbyte (256 page) machine ... aka a single-processor machine would have maybe 80 pageable pages while a two-processor machine would have more like 320.
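
A quick sketch of the pageable-page arithmetic behind the joke (Python; the CP/67 numbers are from the previous paragraph, the TSS/360 fixed-kernel page counts are just back-solved from the 80 and 320 figures above):

PAGE_KB = 4
def pageable(real_kb, fixed_pages):
    return real_kb // PAGE_KB - fixed_pages

print(pageable(768, 88))     # CP/67 on 768kbyte 360/67: 192-88 = 104 pageable pages
print(pageable(1024, 176))   # TSS/360 1CPU/1mbyte: ~80 pageable pages
print(pageable(2048, 192))   # TSS/360 2CPU/2mbyte: ~320 pageable pages, ~4x the paging space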

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
paging, workset, LRU replacement, etc posts
https://www.garlic.com/~lynn/subtopic.html#clock

--
virtualization experience starting Jan1968, online at home since Mar1970

The Paging Game

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Paging Game
Date: 27 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#50 The Paging Game
https://www.garlic.com/~lynn/2025.html#51 The Paging Game
https://www.garlic.com/~lynn/2025.html#126 The Paging Game
https://www.garlic.com/~lynn/2025.html#128 The Paging Game

The 23Jun1969 unbundling announcement started to charge for (application) software, SE services, maint., etc (but they managed to make the case that kernel software would still be free). Then in the early 70s came the decision to add virtual memory to all 370s. Early last decade I was asked to track down the decision and found a staff member to the executive making the decision: basically, MVT storage management was so bad that region sizes needed to be specified four times larger than used ... as a result a typical 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the machine busy and justified. Going to a single 16mbyte virtual memory allowed the number of concurrent regions to be increased by a factor of four (capped at 15 because of the 4-bit storage protect keys) with little or no paging (similar to running MVT in a CP67 16mbyte virtual machine). Then, because (VS2/SVS) 15 still wasn't enough, it moved to giving each region its own 16mbyte virtual address space (VS2/MVS).

Overlapping with this was the appearance of the "Future System" project, completely different from 370 and going to completely replace it (internal politics during FS was starting to kill off 370 efforts; the claim is that the lack of new 370s during the FS period is what gave the clone 370 system makers their market foothold). Then when FS finally implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 and 3081.
http://www.jfsowa.com/computer/memo125.htm

Also with the rise of clone 370 makers, there was a decision to start charging for kernel software. When I joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters, and I continued to work on 370 stuff all during FS, including periodically ridiculing what FS was doing. Then my paging and dynamic adaptive resource manager ("wheeler" scheduler) was tasked as the guinea pig (for kernel software charging); lots of it was stuff I had done as an undergraduate before joining IBM. There was a performance expert from corporate (heavily steeped in MVS) that reviewed it and said he wouldn't sign off on release because it didn't have any manual tuning parameters (like the huge array of parameters in MVS, the current "state of the art"). I tried to explain dynamic adaptive, but it fell on deaf ears. So I packaged some manual tuning parameters ... labeled SRM (as part of an MVS joke) ... the dynamic adaptive stuff was packaged as "STP" (... the racer's edge from TV advertisements). Full description, formulas, and source code were distributed ... but few recognized the joke, from Operations Research ... the SRM stuff had fewer degrees of freedom than the STP stuff ... so dynamic adaptive could compensate for any manual setting.

Then, with the transition to full kernel charging, began the OCO-wars (object code only).

dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
23jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Social Media

From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Social Media
Date: 27 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#82 Online Social Media
https://www.garlic.com/~lynn/2025.html#83 Online Social Media
https://www.garlic.com/~lynn/2025.html#90 Online Social Media

trivia: In the wake of implosion of "FUTURE SYSTEM"
http://www.jfsowa.com/computer/memo125.htm

there was a mad rush to get things back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 in parallel. The head of POK also convinced corporate to kill the VM370 product, shutdown the development group (some of the science center CP67/CMS people had split off, taking over the IBM Boston Programming Center on the 3rd flr; when they outgrew the 3rd flr, they moved out to the empty former IBM SBC bldg at Burlington Mall) and transfer all the people to POK for MVS/XA. They weren't going to tell the people until the very last minute, to minimize the number that might be able to escape into the local boston/rt128 area ... however, it managed to leak and some number managed to escape (this was in the infancy of VAX and the joke was that the head of POK was a major contributor to VMS).

Endicott eventually manages to save the VM370 product mission (for the mid-range), but had to recreate a development group from scratch. The technology from the CP67-based science center wide-area network (that morphed into the internal network) was also used for the corporate-sponsored univ BITNET.
https://en.wikipedia.org/wiki/BITNET

Future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
virtualization experience starting Jan1968, online at home since Mar1970

The joy of FORTRAN

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: The joy of FORTRAN
Newsgroups: alt.folklore.computers, comp.os.linux.misc
Date: Thu, 27 Feb 2025 14:03:51 -1000
scott@slp53.sl.home (Scott Lurndal) writes:
COBOL, in its day, was the superior choice for business applications. Several 4GL environments were built around COBOL or autogenerated COBOL applications.

re:
https://www.garlic.com/~lynn/2024e.html#142 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#144 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#145 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#2 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#7 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#16 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#17 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#124 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#125 The joy of FORTRAN

In the 60s, there were a couple of CP67/CMS commercial spin-offs of the science center. One was NCSS
https://en.wikipedia.org/wiki/National_CSS
later bought by Dun & Bradstreet

NCSS ref also mentions NOMAD (4th gen software)
https://en.wikipedia.org/wiki/Nomad_software
predates the original SQL/relational System/R done on a VM370 system at San Jose Research

http://www.decosta.com/Nomad/tales/history.html
One could say PRINT ACROSS MONTH SUM SALES BY DIVISION and receive a report that would have taken many hundreds of lines of Cobol to produce. The product grew in capability and in revenue, both to NCSS and to Mathematica, who enjoyed increasing royalty payments from the sizable customer base. FOCUS from Information Builders, Inc (IBI), did even better, with revenue approaching a reported $150M per year. RAMIS moved among several owners, ending at Computer Associates in 1990, and has had little limelight since. NOMAD's owners, Thomson, continue to market the language from Aonix, Inc. While the three continue to deliver 10-to-1 coding improvements over the 3GL alternatives of Fortran, Cobol, or PL/1, the movements to object orientation and outsourcing have stagnated acceptance.
... snip ...

other history
https://en.wikipedia.org/wiki/Ramis_software
When Mathematica (also) makes Ramis available to TYMSHARE for their VM370/CMS-based commercial online service, NCSS does their own version
https://en.wikipedia.org/wiki/Nomad_software
and then follow-on FOCUS from IBI
https://en.wikipedia.org/wiki/FOCUS
Information Builders's FOCUS product began as an alternate product to Mathematica's RAMIS, the first Fourth-generation programming language (4GL). Key developers/programmers of RAMIS, some stayed with Mathematica others left to form the company that became Information Builders, known for its FOCUS product
... snip ...

4th gen programming language
https://en.wikipedia.org/wiki/Fourth-generation_programming_language

from another CP67/CMS spinoff in the 60s ... this mentions the "first financial language" (FFL) at IDC
https://www.computerhistory.org/collections/catalog/102658182
as an aside, a decade later, the person doing FFL joins with another to form a startup and does the original spreadsheet
https://en.wikipedia.org/wiki/VisiCalc

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
commercial, online virtual machine offerings
https://www.garlic.com/~lynn/submain.html#online
system/R posts
https://www.garlic.com/~lynn/submain.html#systemr

a few posts mentioning ncss, mathematica, ramis, nomad
https://www.garlic.com/~lynn/2024g.html#9 4th Generation Programming Language
https://www.garlic.com/~lynn/2024b.html#17 IBM 5100
https://www.garlic.com/~lynn/2023g.html#64 Mainframe Cobol, 3rd&4th Generation Languages
https://www.garlic.com/~lynn/2023.html#13 NCSS and Dun & Bradstreet
https://www.garlic.com/~lynn/2022f.html#116 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#49 4th generation language
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021g.html#23 report writer alternatives
https://www.garlic.com/~lynn/2021f.html#67 RDBMS, SQL, QBE
https://www.garlic.com/~lynn/2019d.html#16 The amount of software running on traditional servers is set to almost halve in the next 3 years amid the shift to the cloud, and it's great news for the data center business
https://www.garlic.com/~lynn/2019d.html#4 IBM Midrange today?
https://www.garlic.com/~lynn/2018d.html#3 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2018c.html#85 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2017j.html#39 The complete history of the IBM PC, part two: The DOS empire strikes; The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS
https://www.garlic.com/~lynn/2017j.html#29 Db2! was: NODE.js for z/OS
https://www.garlic.com/~lynn/2017.html#28 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2014i.html#32 Speed of computers--wave equation for the copper atom? (curiosity)
https://www.garlic.com/~lynn/2013f.html#63 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013c.html#56 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012d.html#51 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2010n.html#21 What non-IBM software products have been most significant to the mainframe's success
https://www.garlic.com/~lynn/2007e.html#37 Quote from comp.object
https://www.garlic.com/~lynn/2006k.html#37 PDP-1

--
virtualization experience starting Jan1968, online at home since Mar1970

The joy of FORTRAN

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: The joy of FORTRAN
Newsgroups: alt.folklore.computers, comp.os.linux.misc
Date: Thu, 27 Feb 2025 15:10:30 -1000
re:
https://www.garlic.com/~lynn/2025.html#124 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#125 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#131 The joy of FORTRAN

RAMIS
https://en.wikipedia.org/wiki/RAMIS_(software)
RAMIS was initially developed in the mid 1960s by the company Mathematica on a consulting contract for a marketing study by a team headed by Gerald Cohen[1] and subsequently further developed and marketed as a general purpose data management and analysis tool. In the late 1960s Cohen fell out with the management of Mathematica and left to form his own company.
... snip ...

History of SQL
https://docs.oracle.com/cd/B13789_01/server.101/b10759/intro001.htm
Dr. E. F. Codd published the paper, "A Relational Model of Data for Large Shared Data Banks", in June 1970 in the Association of Computer Machinery (ACM) journal, Communications of the ACM. Codd's model is now accepted as the definitive model for relational database management systems (RDBMS). The language, Structured English Query Language (SEQUEL) was developed by IBM Corporation, Inc., to use Codd's model. SEQUEL later became SQL (still pronounced "sequel"). In 1979, Relational Software, Inc. (now Oracle) introduced the first commercially available implementation of SQL. Today, SQL is accepted as the standard RDBMS language.
... snip ...

RAMIS dates from the mid-60s, SQL something like a decade later; System/R started in 1974 on VM370/CMS,
https://en.wikipedia.org/wiki/IBM_System_R
"Phase Zero" of the project, which occurred during 1974 and-most of 1975, involved the development of the SQL user interface
... snip ...

As an undergraduate in the 60s I did a lot of work on CP67/CMS at the univ and at Boeing; then when I graduated, I joined the science center and did a lot more work on CP67/CMS and then VM370/CMS. I then transferred out to San Jose Research and did work on System/R with Jim Gray and Vera Watson. There was lots of opposition inside the company, but while the focus was on the next great, new DBMS "EAGLE", we managed to do tech transfer to Endicott ("under the radar") for SQL/DS. Then after "EAGLE" imploded, there was a request for how fast System/R could be ported to MVS ... eventually released as DB2 (for decision/support *ONLY*).

When Jim departs for Tandem in fall of 1980, he tries foisting off lots of stuff on me. In spring 1981, I distribute a trip report of a visit to Jim at Tandem, which kicks off internal online computer conferencing (folklore is that when the corporate executive committee was told, 5 of 6 wanted to fire me). From IBMJargon ... copy here
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... snip ...

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

The joy of FORTRAN

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: The joy of FORTRAN
Newsgroups: alt.folklore.computers, comp.os.linux.misc
Date: Fri, 28 Feb 2025 13:08:14 -1000
"Kerr-Mudd, John" <admin@127.0.0.1> writes:
EBCDIC text (+control codes) saved on a PC floppy!

trivia: 360s were originally going to be ASCII machines ... but the ASCII unit record gear wasn't ready ... so they were going to use BCD unit record gear with EBCDIC temporarily ... but it didn't quite turn out that way; the "biggest computer goof ever" (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
other refs:
https://web.archive.org/web/20180402200149/http://www.bobbemer.com/HISTORY.HTM
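
A quick way to see the ASCII/EBCDIC split the Bemer pages talk about; a minimal Python sketch, assuming the standard cp037 codec (one common EBCDIC code page, so the exact byte values are code-page dependent):

# Compare ASCII and EBCDIC (code page 037) encodings of the same text.
text = "HELLO 360"
ascii_bytes  = text.encode("ascii")    # 'H'=0x48, space=0x20, '0'=0x30
ebcdic_bytes = text.encode("cp037")    # 'H'=0xC8, space=0x40, '0'=0xF0
print(ascii_bytes.hex())    # 48454c4c4f20333630
print(ebcdic_bytes.hex())   # c8c5d3d3d640f3f6f0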

recent cobol refs:
https://www.garlic.com/~lynn/2025.html#62 Grace Hopper, Jean Sammat, Cobol
https://www.garlic.com/~lynn/2025.html#131 The joy of Fortran

also Cobol trivia:
https://en.wikipedia.org/wiki/Bob_Bemer
He served on the committee which amalgamated the design for his COMTRAN language with Grace Hopper's FLOW-MATIC and thus produced the specifications for COBOL.

and
https://en.wikipedia.org/wiki/COMTRAN

other recent posts mentioning Bob Bemer
https://www.garlic.com/~lynn/2024g.html#84 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#97 COBOL history, Article on new mainframe use
https://www.garlic.com/~lynn/2024e.html#2 DASD CKD
https://www.garlic.com/~lynn/2024d.html#107 Biggest Computer Goof Ever
https://www.garlic.com/~lynn/2024d.html#105 Biggest Computer Goof Ever
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#14 Bemer, ASCII, Brooks and Mythical Man Month
https://www.garlic.com/~lynn/2024b.html#113 EBCDIC
https://www.garlic.com/~lynn/2024.html#102 EBCDIC Card Punch Format

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 28 Feb, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#96 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#97 IBM Token-Ring

It wasn't just Token-Ring versus Ethernet ... but also SNA/VTAM versus TCP/IP, and dumb terminal emulation versus client/server and distributed computing.

I was once told that part of the performance kneecapping of the PS/2 microchannel cards was a design point of 300 stations on a single LAN, limited to dumb terminal emulation traffic.
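
A minimal sketch of that design-point arithmetic, assuming the nominal 4 and 16 mbit/sec Token-Ring media rates (only the 300-station figure is from the claim above); the per-station share works out to dumb-terminal-class bandwidth:

# Per-station share of a shared-media LAN at the claimed 300-station design
# point -- roughly dumb-terminal-emulation bandwidth, far below what
# client/server and distributed computing traffic would want.
STATIONS = 300
for ring_bps in (4_000_000, 16_000_000):    # nominal Token-Ring media rates
    share_kbit = ring_bps / STATIONS / 1000
    print(f"{ring_bps // 1_000_000} mbit ring / {STATIONS} stations "
          f"= ~{share_kbit:.1f} kbit/s per station")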

Also note that in the 60s, IBM had the 2701 controller supporting T1 (1.5mbits/sec); the transition to SNA/VTAM in the 70s seemed to cap controllers at 56kbit/sec links. When I got the HSDT project in the early 80s (T1 and faster links, both terrestrial and satellite), there were lots of conflicts with the communication group.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
posts mentioning communication group fighting off client/server & distributed computing
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

3270 Controllers and Terminals

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3270 Controllers and Terminals
Date: 01 Mar, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#127 3270 Controllers and Terminals

side note: in the 1960s IBM had 2701 controllers that supported T1 (1.5mbits/sec) ... however in the 70s, with the move to SNA/VTAM, 37x5 controllers seemed to be capped at 56kbit/sec links. In the early 80s I got the HSDT project, T1 and faster computer links (both terrestrial and satellite) ... and some amount of conflict with the communication group.

Mid-80s, the communication group generated an analysis of why customers wouldn't need T1 until sometime well into the 90s. It basically looked at customers with 37x5 "fat pipes" .... parallel 56kbit links treated as a single logical link .... and found that the number of customer "fat pipe" configurations with 2, 3, 4, 5, etc. links dropped to zero by seven links. What they didn't know (or didn't want to include for the corporate executive committee) was that the typical telco tariff for a T1 link was about the same as for 5 or 6 56kbit links, so customers needing more capacity simply jumped straight to T1. A trivial HSDT survey of customers found 200 T1 installations ... they had just moved to non-IBM boxes.
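
Back-of-the-envelope arithmetic behind that point, as a small Python sketch (the 5-6 link tariff crossover is from the survey above; actual tariffs varied by telco and distance):

# Aggregate capacity of N parallel 56kbit/s links versus one T1; since the
# telco tariff for a T1 was about the same as 5 or 6 such links, heavier
# users simply left the 37x5 "fat pipe" statistics for (non-IBM) T1 gear.
T1_BPS = 1_544_000          # T1 line rate
LINK_BPS = 56_000           # single 56kbit/s link

for n in range(1, 8):
    agg = n * LINK_BPS
    print(f"{n} x 56kbit = {agg // 1000:3d} kbit/s "
          f"({agg / T1_BPS:5.1%} of a T1)")
# At the 5-6 link price point, a T1 delivers roughly 4.5-5.5x the capacity.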

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

recent posts mentioning "fat pipes" and/or 60s "2701" controllers
https://www.garlic.com/~lynn/2025.html#134 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#122 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#114 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#109 IBM Process Control Minicomputers
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#96 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#89 Wang Terminals (Re: old pharts, Multics vs Unix)
https://www.garlic.com/~lynn/2025.html#83 Online Social Media
https://www.garlic.com/~lynn/2025.html#54 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#35 IBM ATM Protocol?
https://www.garlic.com/~lynn/2025.html#33 IBM ATM Protocol?
https://www.garlic.com/~lynn/2025.html#31 On-demand Supercomputer
https://www.garlic.com/~lynn/2025.html#12 IBM APPN
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2025.html#2 IBM APPN
https://www.garlic.com/~lynn/2024g.html#99 Terminals
https://www.garlic.com/~lynn/2024g.html#77 Early Email
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024g.html#61 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2024g.html#50 Remote Satellite Communication
https://www.garlic.com/~lynn/2024g.html#40 We all made IBM 'Great'
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#13 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024g.html#8 IBM Transformational Change
https://www.garlic.com/~lynn/2024f.html#119 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#116 NASA Shuttle & SBS
https://www.garlic.com/~lynn/2024f.html#111 Japan Technology
https://www.garlic.com/~lynn/2024f.html#105 NSFnet
https://www.garlic.com/~lynn/2024f.html#92 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#82 IBM Registered Confidential and "811"
https://www.garlic.com/~lynn/2024f.html#73 IBM 2250 Hypertext Editing System
https://www.garlic.com/~lynn/2024f.html#60 IBM 3705
https://www.garlic.com/~lynn/2024f.html#48 IBM Telecommunication Controllers
https://www.garlic.com/~lynn/2024f.html#39 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#35 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#125 ARPANET, Internet, Internal Network and DES
https://www.garlic.com/~lynn/2024e.html#95 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024e.html#94 IBM TPF
https://www.garlic.com/~lynn/2024e.html#91 When Did "Internet" Come Into Common Use
https://www.garlic.com/~lynn/2024e.html#71 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#63 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#60 Early Networking
https://www.garlic.com/~lynn/2024e.html#46 Netscape
https://www.garlic.com/~lynn/2024e.html#34 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024c.html#24 TDM Computer Links
https://www.garlic.com/~lynn/2024b.html#112 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#83 SNA/VTAM
https://www.garlic.com/~lynn/2024.html#70 IBM AIX

--
virtualization experience starting Jan1968, online at home since Mar1970

