List of Archived Posts

2023 Newsgroup Postings (11/21 - 12/31)

CSC, HONE, 23Jun69 Unbundling, Future System
Vintage TSS/360
Vintage TSS/360
Vintage Future System
Vintage Future System
Vintage Future System
Vintage Future System
Vintage 3880-11 & 3880-13
Vintage Future System
Viruses & Exploits
370/125 VM/370
Vintage Future System
Vintage Future System
Vintage Future System
How Finance Capitalism Ruined the World
Vintage IBM 4300
370/125 VM/370
Vintage Future System
Vintage X.25
OS/360 Bloat
IBM Educational Support
Vintage X.25
Vintage Cray
Vintage 3081 and Water Cooling
Vintage ARPANET/Internet
Vintage Cray
Vintage 370/158
Another IBM Downturn
IBM FSD
Another IBM Downturn
Vintage IBM OS/VU
Mainframe Datacenter
Storage Management
Interest Payments on the Ballooning Federal Debt vs. Tax Receipts & GDP: Not as Bad as in 1982-1997, but Getting There
Storage Management
Vintage TSS/360
Timeslice, Scheduling, Interdata
AL Gore Invented The Internet
Computer "DUMPS"
Vintage Mainframe
Vintage Mainframe
Vintage Mainframe
IBM Koolaid
Wheeler Scheduler
Amdahl CPUs
Wheeler Scheduler
Amdahl CPUs
Mainframe Printer
Vintage Mainframe
REXX (DUMRX, 3092, VMSG, Parasite/Story)
Vintage Mainframe
Vintage Mainframe
Why True Democratic Systems Are Incompatible with Class-Based Orders.....Like Capitalism
Vintage 2321, Data Cell
REX, REXX, and DUMPRX
AUSMINIUM
Future System, 115/125, 138/148, ECPS
Future System, 115/125, 138/148, ECPS
Multiprocessor
IBM Downfall
PDS Directory Multi-track Search
PDS Directory Multi-track Search
Silicon Valley Mainframes
CP67 support for 370 virtual memory
Mainframe Cobol, 3rd&4th Generation Languages
Vintage Mainframe
2540 "Column Binary"
Waiting for the reference to Algores creation documents/where to find- what to ask for
Assembler & non-Assembler For System Programming
Assembler & non-Assembler For System Programming
MVS/TSO and VM370/CMS Interactive Response
MVS/TSO and VM370/CMS Interactive Response
MVS/TSO and VM370/CMS Interactive Response
MVS/TSO and VM370/CMS Interactive Response
MVS/TSO and VM370/CMS Interactive Response
The Rise and Fall of the 'IBM Way'. What the tech pioneer can, and can't, teach us
Another IBM Downturn
MVT, MVS, MVS/XA & Posix support
MVT, MVS, MVS/XA & Posix support
MVT, MVS, MVS/XA & Posix support
MVT, MVS, MVS/XA & Posix support
MVT, MVS, MVS/XA & Posix support
Cloud and Megadatacenter
MVT, MVS, MVS/XA & Posix support
Vintage DASD
Vintage DASD
Shared Memory Feature
Mainframe Performance Analysis
MVS/TSO and VM370/CMS Interactive Response
Shared Memory Feature
Has anybody worked on SABRE for American Airlines
Vintage Christmas Tree Exec, Email, Virus, and phone book
Shared Memory Feature
Why Nations Fail
IBM Downturn and Downfall
Vintage 370 Clones, Amdahl, Fujitsu, Hitachi
Vintage 370 Clones, Amdahl, Fujitsu, Hitachi
Shared Memory Feature
Shared Memory Feature
VM Mascot
VM Mascot
VM Mascot
VM Mascot
More IBM Downfall
More IBM Downfall
VM Mascot
Shared Memory Feature
Cluster and Distributed Computing

CSC, HONE, 23Jun69 Unbundling, Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 21 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System

... trivia: I was blamed for online computer conferencing (precursor to social media) on the internal network in the late 70s & early 80s; it really took off spring 1981 when I distributed a trip report of a visit to Jim Gray at Tandem; only about 300 were actively participating, but claims were that upwards of 25,000 were reading. Folklore is that when the corporate executive committee was told, 5of6 wanted to fire me. Afterwards, officially supported software and moderated, sanctioned forums were created ... also a researcher was paid to study how I communicated, sat in the back of my office for nine months taking notes on my conversations, face-to-face, phone, etc ... also got logs of all my instant messages and copies of all my incoming&outgoing email. The material was used for reports, conference papers&talks, books and a Stanford PhD (joint with language and computer AI).

I had drafted several reports for publication, and was told that they had to be worked over by the plant site senior tech editor before release. He periodically said that he was continually being told that they needed more work before publication. Finally in the late 80s, he was retiring and contacted me to turn over all the files (that were never allowed to be released for publication).

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

a couple of other posts specifically mentioning the tech editor I worked with
https://www.garlic.com/~lynn/2006e.html#14 X.509 and ssh
https://www.garlic.com/~lynn/2019b.html#5 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2022g.html#5 IBM Tech Editor

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage TSS/360

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage TSS/360
Date: 22 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#66 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360

... lots of univ. were sold (virtual memory) 360/67s for tss/360 ... tss/360 never really came to production fruition ... and so many places just used them as 360/65s for os/360. Stanford (ORVYL) and univ of mich. (MTS) did virtual memory systems for the 360/67 (later stanford ported the wylbur editor to MVS). Earlier, IBM Cambridge Science Center had modified a 360/40 for virtual memory and implemented CP/40 ... later when the 360/67 standard with virtual memory became available, CP/40 morphs into CP/67 (I was an undergraduate at a univ that had one of the 360/67s and a fulltime employee responsible for os/360, then CSC came out to install CP67, 3rd installation after CSC itself and MIT Lincoln Labs ... I mostly played with it during my 48hr weekend dedicated time).

One of the TSS/360 SEs was still around and I would periodically have to share my weekend time. Not long after CP67 was installed we do a synthetic Fortran edit, compile and exec benchmark. CP67/CMS with 35 users has better throughput and interactive response than TSS/360 with four users.

Note that science center originally wanted 360/50 to modify, adding hardware virtual memory ... but couldn't get any since all extra 360/50s were going to FAA ATC project ... so had to settle for 360/40. They claim that was fortunate, since it was easier to implement for 360/40 (than it would have been for 360/50). They then implement CP40/CMS ... which morphs into CP67/CMS when 360/67 standard with virtual memory becomes available. Comeau's history of CP40 at SEAS 82:
https://www.garlic.com/~lynn/cp40seas1982.txt
other history
https://www.leeandmelindavarian.com/Melinda#VMHist
other IBM mainframe history
https://en.wikipedia.org/wiki/History_of_IBM_mainframe_operating_systems

In the late 60s, there were 12 people in the CP67/CMS group (and two CP67 commercial online service bureau spin-offs from the science center) with many more 360/67s running CP/67 than TSS/360 ... and at the time the TSS/360 organization had 1200 people. The TSS/360 organization also knew how to spin detail, at one point saying that TSS/360 had fantastic multiprocessor support because a two processor system had 3.9 times the throughput of a single processor. In fact, the TSS/360 kernel and applications were extremely bloated and a one mbyte single processor system would page thrash, while a 2mbyte two processor system was just beginning to have a modest amount of memory available for applications (but neither had better throughput than CP67).

At the univ. I spend some time optimizing CP67 for running OS/360 in a virtual machine; the student FORTGCLG jobstream benchmark initially ran 322secs (optimized OS360, originally ran 2000secs) on the bare machine and 856secs under CP67 (534 CP67 CPU secs). Within a few months I had it down to 434secs under CP67 (113 CP67 CPU secs). Then I did a lot of work on scheduling and page replacement algorithms and the I/O system. Originally I/O queuing was FIFO and did a single page transfer per I/O. I modified disk queuing for ordered seek and would chain multiple queued page transfers per I/O (for the same disk cylinder, and for all queued 2301 requests, optimized to maximize transfers per rotation). Originally the 2301 peaked around 70 page transfers/sec ... optimized, it could peak around 270/sec, nearly channel speed. Archived post with part of an old SHARE presentation describing some of the univ. os/360 and cp/67 work:
https://www.garlic.com/~lynn/94.html#18
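
Purely as illustration (this is not the actual CP67 code; the structures and names are made up), a minimal C sketch of the two ideas just described -- ordered-seek queuing instead of FIFO, plus pulling every queued request for the same cylinder so the transfers can be chained into one channel program:

  #include <stddef.h>

  struct preq {                 /* one queued page transfer */
      int cyl;                  /* target cylinder */
      struct preq *next;
  };

  /* ordered-seek insert: keep the queue sorted by cylinder (the real code
     ordered relative to the current arm position/direction) instead of FIFO */
  void enqueue(struct preq **q, struct preq *r)
  {
      while (*q && (*q)->cyl <= r->cyl)
          q = &(*q)->next;
      r->next = *q;
      *q = r;
  }

  /* dequeue the first request plus all other queued requests for the same
     cylinder -- the returned chain would become one multi-page channel
     program instead of one I/O per page */
  struct preq *dequeue_cylinder(struct preq **q)
  {
      struct preq *chain = *q, *p = chain;
      if (!chain)
          return NULL;
      while (p->next && p->next->cyl == chain->cyl)
          p = p->next;
      *q = p->next;             /* remove the whole same-cylinder run */
      p->next = NULL;
      return chain;
  }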

Note that student fortran jobs initially ran in under a second on the 709. The 360/67 (running OS/360) replaced the 709/1401; initially student jobs ran over a minute and I install HASP, cutting the time in half. I then redo the STAGE2 SYSGEN to 1) run in the production jobstream and 2) order statements for placement of datasets and PDS members for optimized arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student fortran jobs never got better than the 709 until I install Univ. of Waterloo WATFOR.

trivia: by the time I graduate and join the science center, Comeau had transferred to Gaithersburg and during "Future System" was an "owner" of one of the 13(or 14?) FS "sections" and my (future) wife reported to him. During FS, I continued to work on 370 and would periodically ridicule FS. I do a page-mapped filesystem for CP67/CMS and would claim that I learned what not to do from TSS/360 ... while FS would say they were doing a "single-level-store" (much of it from TSS/360, but most of the FS people didn't really appear to know what they were doing, which was later corroborated by my wife). more FS
http://www.jfsowa.com/computer/memo125.htm
and in this 360/67 thread comment
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
scheduling, dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement, paging posts
https://www.garlic.com/~lynn/subtopic.html#clock
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
page-mapped CMS filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
using page-mapped filesystems for shared memory objects
https://www.garlic.com/~lynn/submain.html#adcon
HASP/ASP, JES2/JES3, NJI/NJE posts
https://www.garlic.com/~lynn/submain.html#hasp

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage TSS/360

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage TSS/360
Date: 22 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#66 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#1 Vintage TSS/360

... trivia: Les and my wife have commented that there was possibly only one other person in the FS organization who they felt knew what he was doing. Later he was on the west coast and involved in "811" (370/xa, for the documents' nov1978 publication date). He also did the work adapting a subset of the 811 access registers for 3033 dual address space mode (so subsystems could directly access application space w/o needing CSA), and also worked on "blue iliad" (the original 32bit 801/risc chip ... although it never came to production) ... before leaving for HP labs (where he was involved in HP Snake & later Itanium).

Note that pending MVS/XA (811 & >16mbytes), MVS was starting to hit a brick wall. OS360 had a heavily pointer-passing API. The initial mapping of MVT to VS2 (370 virtual memory) was very similar to running MVT in a CP67 16mbyte virtual machine, where all system services easily accessed all application addresses. The problem shows up in the move to MVS with a different address space for each application. First, an 8mbyte image of the MVS kernel was mapped into every application 16mbyte address space (still easily accessing application addresses). The problem shows up when the MVS subsystems are moved into their own (private) address spaces, with no direct way of accessing application addresses. To solve this, a reserved 1mbyte segment was mapped into every address space, for allocating storage for passing information back&forth between subsystems and applications, called the "common segment area" (CSA). Now CSA space requirements were somewhat proportional to the number of concurrent applications and subsystems ... and as systems got larger, CSA matures into the multiple segment "common system area". By the 3033 time-frame, lots of customers had MVS with 5-6 mbyte CSA (threatening to become 8mbytes, which with the 8mbyte kernel area, would leave nothing in the 16mbytes for application use). Dual-address space mode alleviated some of the pressure on CSA becoming 8mbytes ... pending 370/xa with 31bit addressing.
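
Rough arithmetic for the 3033-era squeeze just described (sizes from the paragraph above):

  16mbyte address space
   - 8mbyte MVS kernel image mapped into every address space
   - 5-6mbyte CSA (trending toward 8mbytes)
   = 2-3mbytes (trending toward zero) left for the application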

... other trivia: Burlington (Vt) VLSI had a 7mbyte Fortran chip design application with multiple dedicated MVS systems specially configured with a 1mbyte CSA ... so it just barely fit ... but any changes were constantly bumping their head against the MVS 7mbyte brick wall. In discussions, they couldn't quite get it to run with the CMS 64kbyte OS-simulation. Then the IBM Los Gatos lab shows that another 12kbytes of OS-simulation would get it running and they would have nearly the full 16mbyte address space (minus 128kbytes). It then became a political, not a technical issue; the head of POK had recently convinced corporate to kill the VM370 product, shut down the development group and transfer all the people to POK for MVS/XA (supposedly otherwise, the MVS/XA schedule would slip ... with large MVS systems on the verge of taking all of the 16mbytes, they desperately needed MVS/XA's >16mbytes). Note: Endicott eventually saves the VM370 product mission (supposedly just for the mid-range), but has to recreate a development organization from scratch.

some recent posts mentioning CSA
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#36 "The Big One" (IBM 3033)
https://www.garlic.com/~lynn/2023d.html#28 Ingenious librarians
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#93 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#69 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#49 IBM 3033 Personal Computing
https://www.garlic.com/~lynn/2022b.html#19 Channel I/O
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021i.html#67 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021i.html#17 Versatile Cache from IBM
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2020.html#36 IBM S/360 - 370

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Future System
Date: 22 Nov, 2023
Blog: Facebook
more detail
http://www.jfsowa.com/computer/memo125.htm

At least the Multics & TSS/360 single-level-stores predated FS. The FS failure contributed to giving single-level-store (and other page-mapped architecture designs) a bad name for performance .... so when Simpson (of HASP fame) did an MFT virtual memory with page-mapped architecture ("RASP"), it wasn't picked up.

One of the final nails in the FS coffin was a study by the Houston Science Center showing that if Eastern's 370/195 System/One ACP/TPF was remapped to an FS machine made out of the fastest technology available, it would have the throughput of a 370/145 (about a 30 times slowdown).

Single-level-store however simplified a lot of data processing tasks and Rochester did an initial implementation for the S/38 (a decade before AS/400) for the low-end market ... where there was significant throughput headroom between the low-end requirements and the available technology. The initial S/38 started with a single disk drive .... and then merged any additional disk drives into a single filesystem (with a file's records possibly scattered across multiple drives), resulting in backups and restores treating all disks as a single filesystem (which failed to scale).

... trivia: this claims
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

that the major motivation for FS was clone controllers ... FS was going to make the integration of the system/controller I/O interface so tightly coupled and complex that clone controller makers couldn't keep up. By the law of unintended consequences, trying to address clone controllers gave rise to clone 370s, i.e. during FS, 370 projects were being killed and the lack of new 370s is credited with giving the clone 370 systems their market foothold.

After ACS/360 was killed (in the 60s; the claim was that executives were afraid it would advance the state-of-the-art too fast and IBM would lose control of the market), Amdahl leaves to do clone systems
https://people.computing.clemson.edu/~mark/acs_end.html

the above also mentions that some of the ACS/360 features don't show up until ES/9000 (in the 90s)

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
paged-mapped (CMS filesystem) posts
https://www.garlic.com/~lynn/submain.html#mmap
work on clone (plug-compatible) controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
more downfall
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Future System
Date: 23 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
and related TSS/360
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#66 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#1 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360

long winded warning:

a decade ago, I was asked to track down the decision to add virtual memory to all 370s, including a reference to Boeing Huntsville adding some virtual memory support to MVT13 running on 360/67s (I was an undergraduate but had been hired fulltime into a small group in the Boeing CFO office to help with the formation of BCS, consolidating all dataprocessing into an independent business unit). I thought the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Somebody recently commented that Boeing was getting 360/65s like other companies were getting keypunches.

Endicott proposes joint development with Cambridge, modifying CP67 to simulate 370 virtual memory virtual machines ("CP67H"). Then a version of CP67 is modified to run on 370 virtual memory and "CP67I" is regularly running in "CP67H" 370 virtual machines a year before the first engineering 370 is operational with virtual memory (running "CP67I" is then used to validate that 370/145 machine). Three engineers from San Jose come out and add block mux channel support and 3330&2305 device drivers to "CP67I" ... resulting in "CP67SJ", which is regularly running on most of the engineering 370 virtual memory machines.

A few people split off from Cambridge, move to the 3rd flr and take over the IBM Boston Programming Center, forming the VM370 development group. When they outgrow their side of the 3rd flr (the other side is listed as a law firm on the bldg directory but was a 3letter gov. agency), they move out to the vacant IBM SBC bldg at Burlington Mall on rt128.

An IBM 370 virtual memory document leaks to the industry press before virtual memory is announced and there is a hunt for the leak. Eventually all internal copy machines are modified to add each machine's identifier to every page copied. That motivates the FS project to restrict documents to soft copy only. A highly modified VM370, with lots of security added, allows only specifically identified 3270 terminals and special logons configured for limited use, only reading the FS soft copy documents ... these systems are deployed around the company.

In the morph of CP67->VM370, lots of stuff is dropped and/or simplified. One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters (and what became the world-wide online sales&marketing HONE systems were long time customers), starting with CP67. In 1974, I start moving lots of stuff to a VM370 base and am able to do lots of testing in a CP67H 370 virtual machine ... but eventually I need a real machine to do some testing. I get some weekend shifts on a 370 out at the VM370 development group. I arrive friday afternoon to check out everything setup for the weekend. They are proudly showing off one of the special VM370 machines with the FS documents and say that the security is so good that I could be left alone in the machine room for the whole weekend and even I wouldn't be able to break the security. Normally I ignore such stuff, but they were getting a little tiresome, so I say it takes 5mins ... I have them disable all access to the system from outside the machine room and then use the front panel to make a simple core patch ... and instantly everything typed for a password is accepted as valid (I ran into something almost identical around the turn of the century doing some stuff in the financial industry).

disclaimer: IBM had got a new CSO about this time, formerly from gov. service and head of the presidential detail (60yrs ago he had been head of the 3rd shift detail; at Dallas, they had already packed up and were at the next city on the schedule), and I was asked to spend time with him talking about computer security.

At the start of 1975, Boeblingen gets me to work on a 370/125 5-processor machine and Endicott gets me to work on ECPS for the 138/148. Endicott then complains that the 125 5-processor throughput would overlap the 138/148 and in the escalation meeting, I have to argue both sides, but Endicott gets the 125 multiprocessor machine canceled.

After the demise of the 370/125 five processor machine, I get roped into a 16-processor 370 and we conned the 3033 processor engineers into working on it in their spare time (a lot more interesting than mapping 168-3 logic to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before POK's favorite son operating system (MVS) had effective 16-way support ... and some of us were invited to never visit POK again and the 3033 processor engineers were told heads down on 3033 and no distractions ... once the 3033 was out the door, they start on trout/3090 (and they would have me sneak back into POK). Note: POK doesn't ship a 16-way until after the turn of the century (over 20yrs later).

After transferring to SJR, I'm allowed to wander around IBM and non-IBM datacenters in silicon valley, including disk engineering & product test (bldgs 14&15) across the street. They are doing 7x24, prescheduled, stand-alone testing and mentioned that they had tried MVS but it had a 15min MTBF (requiring manual re-ipl) in that environment. I offer to rewrite the I/O supervisor, making it bullet proof and never fail, so they can do any amount of on-demand, concurrent testing, greatly improving productivity. One of the engineers gets a patent on RAID. Now the S/38 scales up so badly as the number of disks increases, and a single disk failure is so traumatic, that the S/38 becomes the 1st RAID adopter.

this has some reference to the mad rush, when FS implodes, to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel
http://www.jfsowa.com/computer/memo125.htm

Sowa trivia: most of Sowa's web site is about semantic network technology. Also after transferring to SJR, I'm doing some work with Jim Gray and Vera Watson on the original SQL/relational implementation, System/R RDBMS. Sowa is then in STL and working with the Los Gatos VLSI tools group on a semantic network dbms implementation and I get roped into helping with that too (LSG lets me have part of a wing with offices and labs as part of the deal).

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
370/125 5-processor machine posts
https://www.garlic.com/~lynn/submain.html#bounce
smp, multiprocessor, tightly-coupled
https://www.garlic.com/~lynn/subtopic.html#smp
archived post detailing ECPS analysis
https://www.garlic.com/~lynn/94.html#21
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
hone posts
https://www.garlic.com/~lynn/subtopic.html#hone
getting to play disk engineer in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

some posts mentioning cp67h, cp67i, cp67sj
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2019c.html#90 DNS & other trivia
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2018e.html#49 DEC introduces PDP-6 [was Re: IBM introduces System/360]
https://www.garlic.com/~lynn/2018e.html#45 DEC introduces PDP-6 [was Re: IBM introduces System/360]
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2013.html#71 New HD
https://www.garlic.com/~lynn/2012k.html#62 Any cool anecdotes IBM 40yrs of VM
https://www.garlic.com/~lynn/2011b.html#72 IBM Future System
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2010g.html#31 Mainframe Executive article on the death of tape
https://www.garlic.com/~lynn/2010e.html#23 Item on TPF
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2009s.html#3 "Portable" data centers
https://www.garlic.com/~lynn/2007i.html#16 when was MMU virtualization first considered practical?
https://www.garlic.com/~lynn/2006w.html#3 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006q.html#1 Materiel and graft
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future

some posts mentioning 3rd flr, boston programming center, burlington mall
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2022f.html#83 COBOL and tricks
https://www.garlic.com/~lynn/2022f.html#80 COBOL and tricks
https://www.garlic.com/~lynn/2022d.html#67 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2021j.html#77 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021h.html#65 CSC, Virtual Machines, Internet
https://www.garlic.com/~lynn/2018f.html#72 Jean Sammet — Designer of COBOL – A Computer of One's Own – Medium
https://www.garlic.com/~lynn/2016d.html#44 PL/I advertising
https://www.garlic.com/~lynn/2016d.html#34 The Network Nation, Revised Edition
https://www.garlic.com/~lynn/2015h.html#64 [CM] Coding with dad on the Dragon 32
https://www.garlic.com/~lynn/2015f.html#84 Miniskirts and mainframes
https://www.garlic.com/~lynn/2014f.html#4 Another Golden Anniversary - Dartmouth BASIC
https://www.garlic.com/~lynn/2013f.html#70 How internet can evolve
https://www.garlic.com/~lynn/2012n.html#26 Is there a correspondence between 64-bit IBM mainframes and PoOps editions levels?
https://www.garlic.com/~lynn/2012k.html#84 Did Bill Gates Steal the Heart of DOS?
https://www.garlic.com/~lynn/2012k.html#33 Using NOTE and POINT simulation macros on CMS?
https://www.garlic.com/~lynn/2011h.html#69 IBM Mainframe (1980's) on You tube
https://www.garlic.com/~lynn/2011g.html#8 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011f.html#39 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011d.html#52 Maybe off topic
https://www.garlic.com/~lynn/2011.html#18 IBM Future System
https://www.garlic.com/~lynn/2010q.html#41 Old EMAIL Index
https://www.garlic.com/~lynn/2010q.html#9 EXTERNAL: Re: Problem with an edit command in tso
https://www.garlic.com/~lynn/2010p.html#42 Which non-IBM software products (from ISVs) have been most significant to the mainframe's success?
https://www.garlic.com/~lynn/2010e.html#14 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2009c.html#14 Assembler Question
https://www.garlic.com/~lynn/2007v.html#96 source for VAX programmers
https://www.garlic.com/~lynn/2007u.html#40 Computer language history
https://www.garlic.com/~lynn/2007q.html#0 A question for the Wheelers - Diagnose instruction
https://www.garlic.com/~lynn/2007l.html#58 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2007g.html#39 Wylbur and Paging
https://www.garlic.com/~lynn/2006s.html#1 Info on Compiler System 1 (Univac, Navy)?
https://www.garlic.com/~lynn/2006r.html#41 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006m.html#28 Mainframe Limericks
https://www.garlic.com/~lynn/2005s.html#35 Filemode 7-9?

some posts mentioning 16-processor, 3033, trout, 3090
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#91 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#31 3081 TCMs
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#11 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#9 Cloud Timesharing
https://www.garlic.com/~lynn/2021h.html#51 OoO S/360 descendants

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Future System
Date: 23 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System

I'm an undergraduate but am hired fulltime into a small group in the Boeing CFO office to help with the formation of BCS, consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities. I think the Renton datacenter is the largest in the world, new 360/65s constantly arriving; lots of politics between the Renton director and the CFO, who only has a 360/30 up at Boeing Field for payroll (although they enlarge the machine room for a 360/67 for me to play with when I'm not doing other stuff).

IBM had sold Boeing Huntsville a large duplex 360/67 for TSS/360 with several 2250M1s for CAD/CAM. TSS/360 never came to production, so they ran it as two 360/65s with OS/360. Long running CAD/CAM exacerbated the MVT storage management problems (that were the motivation for adding virtual memory to all 370s) .... and Boeing added simple virtual memory support to MVT13 (no paging, but used for address reorg as a countermeasure to the MVT problems) ... not nearly as much as I describe Ludlow doing offshift on a 360/67 for the initial VS2/SVS.
https://www.garlic.com/~lynn/2011d.html#73

Summer 1969, the Huntsville duplex 360/67 is moved to Seattle.

note: Renton is something like a couple hundred million in 360s. In the early 80s, I'm introduced to John Boyd and would sponsor his briefings at IBM. One of his stories is being vocal that the electronics across the trail wouldn't work, and possibly as punishment he is put in command of "spook base" about the same time I'm at Boeing. Spook base ref (gone 404) ... shows some large scopes and mentions 2250s, but the scopes shown obviously aren't 2250s
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
other detail
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Boyd biography has "spook base" a $2.5B windfall for IBM (ten times renton).

Boyd posts and URLs:
https://www.garlic.com/~lynn/subboyd.html

a couple recent posts mentioning Boeing Huntsville
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Future System
Date: 23 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System

As an undergraduate, I do a lot of work on both os360 and CP67 ... including paging and scheduling algorithm work on CP67. I do dynamic adaptive resource management for CP67, which IBM ships in the product. After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters ... initially for CP67. In the morph of CP67->VM370, they drop and/or simplify lots of stuff (including multiprocessor support) and then in 1974, I start adding stuff back into VM370 for the production "CSC/VM" for internal datacenters.

The 23jun69 unbundling announcement included starting to charge for (application) software (but they managed to make the case that kernel software was still free). After the FS implosion, the mad rush to get stuff back into the 370 product pipelines contributes to a lot of my stuff being picked up for VM370 release 3. The lack of new 370 products during FS contributes to the clone 370 makers getting a market foothold and apparently motivates the decision to start charging for kernel software; my dynamic adaptive resource manager was selected as the guinea pig for charged-for kernel software (and I get to spend time with business planners and lawyers on kernel software charging policy and practice). I manage to cram a bunch of different stuff into the charged-for release, including the kernel re-org for multiprocessor operation ... but not the actual multiprocessor support.

sidenote: a decade ago I was asked to track down the decision to add virtual memory to all 370s and I found the staff member to the executive making the decision ... archived post with parts of the email exchange (mentions moving resources around and bringing in os360 resources)
https://www.garlic.com/~lynn/2011d.html#73

basically MVT storage management was so bad that regions had to be specified four times larger than actually used, so a typical 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the system busy and justified. Going to virtual memory would allow increasing the number of concurrently running regions by a factor of four with little or no paging. The 370/165 engineers then started whining that if they had to support the full 370 virtual memory architecture, it would slip the virtual memory announcement by six months. Eventually it was decided that everybody would strip back to the 165 subset (features were removed from models that already had the full implementation and any software using the full architecture had to be redone for the subset).
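
Roughly, the arithmetic behind that claim (all numbers implied by the paragraph above):

  1mbyte real / 4 concurrent regions        = ~256kbytes specified per region
  only ~1/4 of each region actually touched = ~64kbytes per region
  with virtual memory, back each region's ~64kbytes of working set in real
  storage: 1mbyte / 64kbytes                = ~16 concurrent regions (4x more)
                                              with little or no paging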

The archived post also mentions that MVS V3 was supposed to be the "glide-path" to FS. I got visits from some of the group but I was pretty caustic about what they were doing (besides ridiculing FS in general, drawing comparison with a long running cult film down in central sq). Then, for my charged-for kernel product, I got audited by a corporate hdqtrs expert (who was apparently heavily infused with MVS). He said he wouldn't sign off on the release w/o static manual tuning parameters (because everybody knew that lots of manual tuning parameters were the current state of the art). I tried to explain dynamic adaptive resource management (done as an undergraduate in the 60s) to no avail. Note in this period MVS had a large array of static manual tuning parameters and IBM was giving SHARE presentations about system throughput benchmarks with a variety of different static settings. So I added some static manual tuning knobs as a joke .... descriptions and formulas in the documentation and code clearly showed how the static values were being used. What few people recognized was "degrees of freedom" (from operations research): the dynamic adaptive code had greater degrees of freedom than the manual static parameters ... and would compensate for any manual setting.

Note the kernel charged-for policy during the transition was for new code only, except new code directly supporting hardware (until into the 80s, when the transition was complete and all kernel software would be charged for ... which was followed by the OCO-wars, customers complaining about IBM's object-code-only and no source). This caused a problem with VM370 release 4, when they decided to release multiprocessor support ... which required the kernel reorganization I had included in my charged-for product. Eventually they decided to move 80%-90% of the code from my charged-for product into the free base release 4 (to support the release of the multiprocessor support).

... other multiprocessor trivia: the world-wide, online, sales&marketing support HONE system consolidated the US HONE datacenters in Palo Alto in the mid-70s (trivia: when facebook 1st moves into silicon valley, it is into a new bldg built next door to the former US HONE datacenter). The systems are enhanced with loosely-coupled, single-system-image operation with load-balancing and fall-over across the complex ... eventually a max configuration of eight large POK systems. I then put multiprocessor support into release3-based CSC/VM, initially so they can add a 2nd processor to each system (easily beating the max ACP/TPF config, since ACP/TPF didn't have multiprocessor support until much later). The head of POK had convinced corporate to kill VM370, shut down the development group and move all the people to POK (or supposedly MVS/XA wouldn't ship on time) ... note Endicott does finally manage to save the VM370 development mission (supposedly for the mid-range), but has to rebuild a development group from scratch.

Then we start finding POK periodically trying to browbeat HONE into porting everything to MVS (claiming that VM370 would no longer support the high-end POK machines). HONE is forced to go through a couple of efforts attempting MVS ports that failed. Then somebody decides that the reason HONE can't port to MVS is because they are running my enhanced system. HONE is then directed to move to a vanilla 370 system (because what would they do if I was hit by a bus) ... apparently assuming that once moved to standard VM370, it could then be ported to MVS.

recent posts about MVS looming brickwall with CSA (& requirement to ship MVS/XA and >16mbyes)
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
23jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
hone posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage 3880-11 & 3880-13

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage 3880-11 & 3880-13
Date: 23 Nov, 2023
Blog: Facebook
The 3880-11 was a 4k block "page" cache; the 3880-13 was a full track cache.

I pointed out that the -11&-13 caches were too small to provide any significant throughput improvement.

3880-13 performance numbers claimed a 90% "hit rate" on sequential reads of a dataset formatted 10 records/track ... i.e. the first read for a track brought the whole track into the cache ... then the next nine sequential reads all "hit" the remaining records of the track in the cache. I pointed out that if the application was changed to do full track reads, the "hit rate" would go from 90% to zero.
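
The arithmetic behind that claim (numbers from the paragraph above):

  10 records/track, read sequentially one record at a time:
    1 miss (loads the track into cache) + 9 hits  ->  9/10 = 90% hit rate
  same data read as full-track I/Os:
    1 miss per track, no follow-on record reads   ->  0% hit rate
  (same data moved either way; the 90% is an artifact of the access
   pattern, not evidence of a throughput improvement)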

At the time the 3880-11 was frequently used for paging in systems that had 32mbytes to 64mbytes of main memory. I pointed out that, the way paging worked, if a page wasn't in main memory it would be a 4k page read, (usually) first coming into the cache and then brought into main memory. With main memory larger than the disk cache, it was nearly impossible for a page that was in the cache memory to not also be in main memory (which meant that there would almost never be a request to read a record that was a cache "hit"). To make the 3880-11 useful, you needed to play some special games to make sure that a page in main memory wasn't also in cache memory. I wrote some code for that, but it wasn't used (the 3880-11 needed to augment main memory, not duplicate it).

At San Jose Research in the early 80s, we wrote a highly efficient disk record I/O monitoring/capture program that recorded all records read&written. That was used to drive a cache emulation application where we could vary the cache performance, the amount of cache and how it was configured ... i.e. cache/disk, cache/controller, cache/channel, cache/system. Cache/controller would be like the 3880 caches. Cache/system could be something like 3090 "expanded store". We also used it to drive hierarchical file system emulation ... and found things like some applications that used groups of files only on a daily, weekly, monthly, or annual basis.
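
A minimal sketch (in C, purely illustrative -- not the SJR tool; the trace format and names are made up) of the trace-driven emulation idea: replay a trace of disk record references against an LRU cache of a chosen size and report the hit rate. Different cache placements (disk, controller, channel, system) would just be different groupings of the trace and different capacities per run.

  /* toy trace-driven LRU cache emulator: reads one record id per line
     from stdin, cache size (in records) given as the first argument */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define MAX_SLOTS 65536

  static unsigned long cache[MAX_SLOTS];   /* index 0 = most recently used */
  static int capacity, used;

  /* return 1 on hit, 0 on miss; maintains strict LRU order */
  static int reference(unsigned long key)
  {
      for (int i = 0; i < used; i++)
          if (cache[i] == key) {                      /* hit: move to front */
              memmove(&cache[1], &cache[0], i * sizeof cache[0]);
              cache[0] = key;
              return 1;
          }
      if (used < capacity)                            /* miss: insert at front, */
          used++;                                     /* evicting LRU if full */
      memmove(&cache[1], &cache[0], (used - 1) * sizeof cache[0]);
      cache[0] = key;
      return 0;
  }

  int main(int argc, char **argv)
  {
      unsigned long key, refs = 0, hits = 0;
      capacity = (argc > 1) ? atoi(argv[1]) : 1024;   /* vary per run */
      if (capacity < 1) capacity = 1;
      if (capacity > MAX_SLOTS) capacity = MAX_SLOTS;
      while (scanf("%lu", &key) == 1) {               /* replay the trace */
          refs++;
          hits += reference(key);
      }
      printf("slots=%d refs=%lu hits=%lu hit-rate=%.1f%%\n",
             capacity, refs, hits, refs ? 100.0 * hits / refs : 0.0);
      return 0;
  }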

Had some heated email exchanges with the head of the cache organization in Tucson.

I had done lots of work on page replacement algorithms as an undergraduate in the 60s ... I used the record tracker and cache emulation (set up to simulate main memory paging) to validate what I had concluded in the 60s. Part of it involved contrasting my 60s "global" LRU with the 60s "local" LRU page replacement published in CACM. The posts below mention Jim Gray at Dec81 ACM SIGOPS asking me to help a co-worker with his Stanford PhD on "global" LRU ... which some of the 60s "local" LRU forces were attempting to block.
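
For readers unfamiliar with the terms, a minimal textbook-style sketch (in C, purely illustrative -- not the CP67 code; the frame table layout and names are made up) of the "clock" approximation to global LRU that the #clock posts below discuss: the hand sweeps every real-storage frame regardless of which address space owns it, which is what makes the replacement "global" rather than "local".

  #include <stdbool.h>

  #define NFRAMES 1024

  struct frame {
      bool in_use;
      bool referenced;   /* hardware reference bit, reset as the hand passes */
      int  page_id;      /* which virtual page currently occupies this frame */
  };

  static struct frame frames[NFRAMES];
  static int hand;       /* the clock hand, sweeping all frames globally */

  /* pick a frame to steal: sweep past recently-referenced frames (clearing
     their reference bits) and take the first frame not referenced since
     the hand last passed it */
  int select_replacement(void)
  {
      for (;;) {
          struct frame *f = &frames[hand];
          int victim = hand;
          hand = (hand + 1) % NFRAMES;
          if (!f->in_use)
              return victim;          /* free frame, use it directly */
          if (f->referenced)
              f->referenced = false;  /* give it one more sweep of grace */
          else
              return victim;          /* not referenced since last pass */
      }
  }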

page replacement posts
https://www.garlic.com/~lynn/subtopic.html#clock

Posts mentioning at Dec81 ACM SIGOPS getting asked to help with Stanford Phd
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2022h.html#56 Tandem Memos
https://www.garlic.com/~lynn/2022f.html#119 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#45 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2017d.html#66 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#52 Some IBM Research RJ reports
https://www.garlic.com/~lynn/2016g.html#40 Floating point registers or general purpose registers
https://www.garlic.com/~lynn/2016e.html#2 S/360 stacks, was self-modifying code, Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#0 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015c.html#48 The Stack Depth
https://www.garlic.com/~lynn/2014l.html#22 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2014i.html#98 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2014g.html#97 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2014e.html#14 23Jun1969 Unbundling Announcement
https://www.garlic.com/~lynn/2013k.html#70 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013i.html#30 By Any Other Name
https://www.garlic.com/~lynn/2013f.html#42 True LRU With 8-Way Associativity Is Implementable
https://www.garlic.com/~lynn/2013d.html#7 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012m.html#18 interactive, dispatching, etc
https://www.garlic.com/~lynn/2010m.html#5 Memory v. Storage: What's in a Name?
https://www.garlic.com/~lynn/2010l.html#23 OS idling

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Future System
Date: 24 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System

as referenced in linkedin post,
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

FS (failing) significantly accelerated the rise of the bureaucrats, careerists, and MBAs .... From Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... joke became "wild ducks were tolerated as long as they fly in formation" ... of course I continued to work on 360&370 all during FS and periodically ridicule it (which wasn't a career enhancing activity so many careerists had dedicated themselves to it). After "Tandem Memos" I was told that they were never going to be able to make me a fellow (with 5of6 of the executive committee wanted to fire me) ... but if I were to keep a low profile, they could funnel funding my way as if I was one.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
"tandem memos" and online computer conferencing
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

Viruses & Exploits

From: Lynn Wheeler <lynn@garlic.com>
Subject: Viruses & Exploits
Date: 24 Nov, 2023
Blog: Facebook
note m'soft enabled visual basic for automagic execution in data files ... for stand-alone machines and small, safe business networks. However, at the Jan1996 Moscone MSDC event all the banners said "Internet" ... but the constant refrain in all the sessions was "preserve your investment" ... aka all the (visual) basic code embedded in data files would continue to be automagically executed (m'soft network support would just be extended to the internet w/o any security countermeasures) ... resulting in an explosion in viruses and exploits ... also giving rise to a new anti-virus industry trying to recognize data patterns in files coming over the internet ... trivial pattern changes resulting in the number of patterns exploding to hundreds of thousands and then to millions.

Risk, Fraud, Exploits, Threats, Vulnerabilities
https://www.garlic.com/~lynn/subintegrity.html#fraud

a few past posts mentioning Jan1996 Moscone MSDC
https://www.garlic.com/~lynn/2016d.html#79 Is it a lost cause?
https://www.garlic.com/~lynn/2016d.html#69 Open DoD's Doors To Cyber Talent, Carter Asks Congress
https://www.garlic.com/~lynn/2015c.html#87 On a lighter note, even the Holograms are demonstrating
https://www.garlic.com/~lynn/2010g.html#66 What is the protocal for GMT offset in SMTP (e-mail) header
https://www.garlic.com/~lynn/2010c.html#63 who pioneered the WEB

--
virtualization experience starting Jan1968, online at home since Mar1970

370/125 VM/370

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370/125 VM/370
Date: 24 Nov, 2023
Blog: Facebook
I was asked into a 125 customer account in downtown NYC (a foreign shipping company) to get VM running. It turns out the DMKCPI boot did an MVCL for 16mbytes to clear core, and the program check at the end of storage would give the last address processed (it caught the program check and proceeded with that as the size of memory). The 360 instruction convention was to check an instruction's starting and ending argument addresses for availability up front ... if not available, immediately program check. Most 370 instructions continued to work the same way, except for the incremental-execution instructions, CLCL and MVCL ... which check addresses incrementally as they execute. The 115&125 had a microcode "bug" because they checked all instruction addresses according to the 360 rules ... including for CLCL&MVCL ... so the VM370 DMKCPI MVCL appeared to show zero storage.
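
For illustration only (this is not the DMKCPI assembler; a rough C analogue assuming a flat address space and a catchable access fault), the general sizing technique being described -- touch storage until it faults and take the last good boundary as the installed storage size:

  #include <setjmp.h>
  #include <signal.h>

  static sigjmp_buf probe_env;

  static void probe_fault(int sig)      /* the "program check" handler */
  {
      (void)sig;
      siglongjmp(probe_env, 1);
  }

  /* touch one byte per 'page' starting at 'base'; return the offset of the
     first page that faults (i.e. the amount of storage actually present),
     up to 'limit' */
  static unsigned long probe_storage(volatile unsigned char *base,
                                     unsigned long limit, unsigned long page)
  {
      static volatile unsigned long size;   /* must survive the longjmp */
      signal(SIGSEGV, probe_fault);
      size = 0;
      if (sigsetjmp(probe_env, 1) == 0) {
          while (size < limit) {
              (void)base[size];             /* faulting here ends the scan */
              size += page;
          }
      }
      return size;                          /* last good boundary */
  }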

I have a long winded comment reply in the (public) Internet Old Farts group ... about doing the 5-processor 125 VM370 support:
https://www.facebook.com/groups/internetoldfarts/posts/832602538386966/?comment_id=833375878309632&reply_comment_id=884070239906862

also part of a similar 125 post in a facebook private group:

After "Future System" imploded, the 125 group cons me into doing multiprocessor implementation. The Boeblingen got their hands slapped for the 115/125 design ... which had nine memory bus positions for microprocessors. The 115 had all identical microprocessors with different microprogramming; the microprocessor with 370 microprogramming got about 80KIPS 370. The 125 was the same, except the microprocessor running 370 microcode was 50% faster (about 120KIPS 370). I would support up to five microprocessor running 370 microcode (leaving four positions for microprocessors running controller microcode).

... aka early 1975 ... at approx. the same time, Endicott asked me to do the work for 138/148 ECPS .... then Endicott complained that the throughput of a five processor 125 would overlap the 138/148 throughput and got the 5-processor 125 canceled.

5-processor 370/125 posts
https://www.garlic.com/~lynn/submain.html#bounce
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

some of the archived posts mentioning 370/125
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#114 Copyright Software
https://www.garlic.com/~lynn/2023f.html#57 Vintage IBM 370/125
https://www.garlic.com/~lynn/2023e.html#50 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#61 mainframe bus wars, How much space did the 68000 registers take up?
https://www.garlic.com/~lynn/2023c.html#106 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2023.html#47 370/125 and MVCL instruction

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Future System
Date: 24 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#8 Vintage Future System

One of the things claimed for FS was something like object-oriented addressing in microcode, taking 4-6 different storage accesses to get to the actual value ... part of the Houston Science Center claim that if the 370/195 Eastern Airlines "System/One" (ACP/TPF) was redone for an FS machine made out of the fastest technology available, it would have the throughput of a 370/145 (something like a 30 times slowdown).

There are multiple descriptions, from the end of the century, of Intel I86 implementations moving to hardware decode of I86 instructions into risc micro-ops for execution ... in part negating the throughput difference between I86 and RISC. FS was pretty serialized ... while the I86 instruction fetch through micro-op execution is heavily parallel and out-of-order.

Starting with the z10->z196 transition, there began to be hints of similar work going on for the mainframe. Note the industry standard benchmark (i.e. iterations compared to a 370/158-3 assumed to be one MIPS) for a max configured z196 is claimed to be 50BIPS. By comparison an E5-2600 blade (from the same era), using the same benchmark, is claimed to be 500BIPS (ten times z196). A cloud operator will have possibly a dozen megadatacenters or more around the world, each with half a million or more E5-2600 blades (or something similar), aka each megadatacenter with the processing of 5million max configured z196s.
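
The arithmetic behind that last figure (numbers from the paragraph above):

  500 BIPS/blade * 500,000 blades                = 250,000,000 BIPS/megadatacenter
  250,000,000 BIPS / 50 BIPS per max-config z196 = 5,000,000 z196 equivalents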

I attended a talk by Amdahl in a large MIT auditorium in the early 70s, after he left IBM and founded his company ... and he was asked what business case he used to get backing for his company. He said something about even if IBM was to completely walk away from 370, there was sufficient customer 370 software to keep him in business until the end of the century. It sounded a lot like he knew about FS ... but asked about it in later years, he claimed no knowledge.

trivia: ACS in the 60s wasn't 360 compatible but Amdahl managed to make the case for ACS/360 compatibility ... before it was shut down (& Amdahl left the company).
https://people.computing.clemson.edu/~mark/acs_end.html
... possibly Amdahl knew more about the customer base than the IBM executives behind FS.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

more recent posts mentioning risc micro-ops
https://www.garlic.com/~lynn/2022g.html#85 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#82 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022b.html#64 Mainframes
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#45 Mainframe MIPS
https://www.garlic.com/~lynn/2021d.html#55 Cloud Computing
https://www.garlic.com/~lynn/2021b.html#66 where did RISC come from, Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2021.html#1 How an obscure British PC maker invented ARM and changed the world
https://www.garlic.com/~lynn/2019c.html#48 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#62 instruction clock speed
https://www.garlic.com/~lynn/2018e.html#28 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2016h.html#98 A Christmassy PL/I tale
https://www.garlic.com/~lynn/2016f.html#97 ABO Automatic Binary Optimizer
https://www.garlic.com/~lynn/2016d.html#68 Raspberry Pi 3?
https://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015h.html#101 This new 'skyscraper' chip could make computers run 1,000 times faster
https://www.garlic.com/~lynn/2015h.html#81 IBM Automatic (COBOL) Binary Optimizer Now Availabile
https://www.garlic.com/~lynn/2015c.html#110 IBM System/32, System/34 implementation technology?
https://www.garlic.com/~lynn/2015.html#44 z13 "new"(?) characteristics from RedBook

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Future System

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Future System
Date: 25 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#8 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#11 Vintage Future System

Late 80s, a senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing to more distributed-computing-friendly platforms. The disk division had come up with solutions that were constantly being vetoed by the communication group (with their corporate responsibility for everything that crossed datacenter walls). The communication group stranglehold on datacenters wasn't just affecting disks, and a couple years later IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex who (mostly) reverses the breakup (although it wasn't long before the disk division is gone).

Communication group stranglehold and dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
more downfall
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

The low-hanging fruit had already been moved off mainframes ... then in the mid-90s, one of the major remaining mainframe customers, the financial industry, spends billions of dollars on things like straight-through processing (real-time transactions from the 70s&80s were just being queued for financial processing during the overnight batch window ... eliminating some of the last remaining mainframe batch processing). They were reimplementing it on large numbers of "killer micros" ... using industry standard parallel processing libraries. Some of us pointed out that the parallel processing libraries they were using had 100 times the overhead of batch cobol ... but we were ignored until some of the early deployments went up (down?) in flames.

Turn of the century, there were financials showing that mainframe hardware represented a few percent of IBM revenue (and dropping). By the z12 time-frame, it was a couple percent and still dropping, but the mainframe group was 25% of revenue (nearly all software and services) and 40% of profit.

Before z12, I got involved with a business organization that developed a financial specification language that "compiled" down to fine-grain SQL statements, taking advantage of the significant parallel cluster performance work by RDBMS vendors (including IBM). It initially saw significant interest from industry bodies and organizations (in part because it significantly reduced software development & maintenance). Then it hit a brick wall, with organizations finally telling us that executives still bore the scars from the 90s failures and weren't planning on trying it again. The financial specification language demonstrated being able to do real-time straight-through processing at higher rates than seen by the largest institutions, with a cluster of six SQL Server systems (each with four processors).

posts mentioning straight through processing and overnight batch window
https://www.garlic.com/~lynn/2022g.html#69 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#3 Final Rules of Thumb on How Computing Affects Organizations and People
https://www.garlic.com/~lynn/2021k.html#123 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021g.html#18 IBM email migration disaster
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#11 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2018f.html#85 Douglas Engelbart, the forgotten hero of modern computing
https://www.garlic.com/~lynn/2018c.html#33 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017j.html#37 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017j.html#3 Somewhat Interesting Mainframe Article
https://www.garlic.com/~lynn/2017h.html#32 OFF TOPIC: University of California, Irvine, revokes 500 admissions
https://www.garlic.com/~lynn/2017f.html#11 The Mainframe vs. the Server Farm: A Comparison
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017d.html#39 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017c.html#63 The ICL 2900
https://www.garlic.com/~lynn/2017.html#82 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#72 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016g.html#23 How to Fix IBM
https://www.garlic.com/~lynn/2016d.html#84 The mainframe is dead. Long live the mainframe!
https://www.garlic.com/~lynn/2016b.html#48 Windows 10 forceful update?
https://www.garlic.com/~lynn/2016.html#25 1976 vs. 2016?
https://www.garlic.com/~lynn/2015h.html#2 More "ageing mainframe" (bad) press
https://www.garlic.com/~lynn/2015.html#78 Is there an Inventory of the Inalled Mainframe Systems Worldwide
https://www.garlic.com/~lynn/2014m.html#170 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014m.html#119 Holy Grail for parallel programming language
https://www.garlic.com/~lynn/2014m.html#71 Decimation of the valuation of IBM
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014e.html#10 Can the mainframe remain relevant in the cloud and mobile era?
https://www.garlic.com/~lynn/2014c.html#90 Why do bank IT systems keep failing ?
https://www.garlic.com/~lynn/2014c.html#22 US Federal Reserve pushes ahead with Faster Payments planning
https://www.garlic.com/~lynn/2013o.html#80 "Death of the mainframe"
https://www.garlic.com/~lynn/2013m.html#35 Why is the mainframe so expensive?
https://www.garlic.com/~lynn/2013i.html#49 Internet Mainframe Forums Considered Harmful
https://www.garlic.com/~lynn/2013h.html#42 The Mainframe is "Alive and Kicking"
https://www.garlic.com/~lynn/2013g.html#50 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013f.html#57 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013c.html#84 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013b.html#42 COBOL will outlive us all
https://www.garlic.com/~lynn/2012n.html#24 System/360--50 years--the future?
https://www.garlic.com/~lynn/2012n.html#18 System/360--50 years--the future?
https://www.garlic.com/~lynn/2012l.html#47 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012l.html#31 X86 server
https://www.garlic.com/~lynn/2012j.html#77 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012j.html#69 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012f.html#36 Time to competency for new software language?
https://www.garlic.com/~lynn/2012f.html#24 Time to competency for new software language?
https://www.garlic.com/~lynn/2012f.html#0 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012e.html#49 US payments system failing to meet the needs of the digital economy
https://www.garlic.com/~lynn/2011p.html#8 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2011o.html#9 John R. Opel, RIP
https://www.garlic.com/~lynn/2011n.html#23 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2011n.html#10 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011k.html#70 New IBM Redbooks residency experience in Poughkeepsie, NY
https://www.garlic.com/~lynn/2011i.html#52 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#93 Itanium at ISSCC
https://www.garlic.com/~lynn/2011e.html#91 Mainframe Fresher
https://www.garlic.com/~lynn/2011e.html#19 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#15 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011c.html#35 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011.html#42 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows
https://www.garlic.com/~lynn/2011.html#19 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010m.html#37 A Bright Future for Big Iron?
https://www.garlic.com/~lynn/2010m.html#13 Is the ATM still the banking industry's single greatest innovation?
https://www.garlic.com/~lynn/2010l.html#14 Age
https://www.garlic.com/~lynn/2010k.html#3 Assembler programs was Re: Delete all members of a PDS that is allocated
https://www.garlic.com/~lynn/2010i.html#41 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010h.html#47 COBOL - no longer being taught - is a problem
https://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010b.html#16 How long for IBM System/360 architecture and its descendants?
https://www.garlic.com/~lynn/2010.html#77 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2009q.html#67 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009o.html#81 big iron mainframe vs. x86 servers
https://www.garlic.com/~lynn/2009m.html#81 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2009l.html#57 IBM halves mainframe Linux engine prices
https://www.garlic.com/~lynn/2009i.html#21 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009h.html#2 z/Journal Does it Again
https://www.garlic.com/~lynn/2009h.html#1 z/Journal Does it Again
https://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2009d.html#14 Legacy clearing threat to OTC derivatives warns State Street
https://www.garlic.com/~lynn/2009c.html#43 Business process re-engineering
https://www.garlic.com/~lynn/2009.html#87 Cleaning Up Spaghetti Code vs. Getting Rid of It
https://www.garlic.com/~lynn/2008r.html#7 If you had a massively parallel computing architecture, what unsolved problem would you set out to solve?
https://www.garlic.com/~lynn/2008p.html#30 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technolgies?
https://www.garlic.com/~lynn/2008p.html#26 What is the biggest IT myth of all time?
https://www.garlic.com/~lynn/2008h.html#56 Long running Batch programs keep IMS databases offline
https://www.garlic.com/~lynn/2008h.html#50 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008g.html#55 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2008d.html#89 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008d.html#87 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008d.html#31 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#30 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008b.html#74 Too much change opens up financial fault lines
https://www.garlic.com/~lynn/2007v.html#81 Tap and faucet and spellcheckers
https://www.garlic.com/~lynn/2007v.html#64 folklore indeed
https://www.garlic.com/~lynn/2007v.html#19 Education ranking
https://www.garlic.com/~lynn/2007u.html#61 folklore indeed
https://www.garlic.com/~lynn/2007u.html#44 Distributed Computing
https://www.garlic.com/~lynn/2007u.html#19 Distributed Computing
https://www.garlic.com/~lynn/2007m.html#36 Future of System/360 architecture?
https://www.garlic.com/~lynn/2007l.html#15 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2006s.html#40 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/aadsm28.htm#35 H2.1 Protocols Divide Naturally Into Two Parts
https://www.garlic.com/~lynn/aadsm28.htm#14 Break the rules of governance and lose 4.9 billion

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Future System

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Future System
Date: 25 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#8 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#11 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#12 Vintage Future System

disclaimer1: when I transferred to SJR, I worked with Jim Gray and Vera Watson on the original SQL/relational server, System/R ... One of the System/R joint partners was BofA, getting 60 VM/4341s for distributed operation. Then I was involved in the tech transfer ("under the radar" while the company was focused on EAGLE, the next great DBMS) to Endicott for SQL/DS (when EAGLE implodes, there is a request for how long it would take to port System/R to MVS .... eventually ships as DB2, originally for decision-support only).

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

Later, Nick Donofrio stops by Austin and the local executives were out of town. My wife prepares some hand-drawn charts and estimates to do HA/6000 for the NYTimes, to move their newspaper system (ATEX) off VAXCluster to RS/6000 ... which he approves. Working with the four RDBMS vendors, Oracle, Sybase, Informix, and Ingres, that have VAXCluster support in the same source base with unix ... I do an API with VAXCluster semantics to ease the port.

I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
after starting to do technical/scientific cluster scale-up with the national labs and commercial cluster scale-up with the RDBMS vendors. Early Jan1992, in a meeting with the Oracle CEO, AWD tells them that we will have 16-system clusters by mid-92 and 128-system clusters by ye-92. In Jan92, I'm giving pitches to FSD about the HA/CMP work with national labs and at the end of Jan, FSD tells the IBM Kingston Supercomputer group that they are going with HA/CMP. Almost immediately, cluster scale-up is transferred for announce as a cluster supercomputer for technical/scientific only and we are told we can't work on anything with more than four processors (we leave IBM a few months later).

Mixed in with all of this, the mainframe DB2 group was complaining that if we were allowed to proceed, it would be at least 5yrs ahead of them. Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
cluster supercomputer for technical/scientific only
https://www.garlic.com/~lynn/2001n.html#6000clusters1
more news 11may1992, IBM "caught" by surprise
https://www.garlic.com/~lynn/2001n.html#6000clusters2

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

disclaimer2: after leaving IBM, I was brought into the largest airline res system to look at the ten things that they can't do. They initially focus on ROUTES ... which represented about 25% of mainframe processing ... and they list ten impossible things they can't do. I'm given lots of detail and leave with a complete softcopy of OAG (all commercial scheduled flights in the world).

I spend about a month reimplementing ROUTES from scratch. I've claimed that the original ACP implementation has 60s technology trade-offs made for the mainframe; starting from scratch, I can make totally different trade-offs and have it running 100 times faster. I then spend another month to do the ten impossible things ... which cuts it to only about 10 times faster (although many interactions now do multiple things which previously each required a sequence of several human interactions). I'm then able to demonstrate it and take it through its paces ... also demonstrate that a cluster of ten RS/6000 990s could handle all ROUTE transactions in the world, for all airlines in the world.

Also because of the 60s trade-offs, the existing ROUTES implementation had a staff of several hundred people; with the different trade-offs, the necessary staff is cut by 95% ... which is possibly why the hand-wringing started. They eventually tell me that they hadn't actually planned that I would do the ten impossible things, they just wanted to be able to tell the parent company's board that I was working on it (for at least the following five years; apparently somebody on the board was a former IBMer that I had known in our former lives). Obviously they then weren't going to let me anywhere close to "FARES" (which represented about 40% of the mainframe processing).

industry benchmark, number of iterations compared to 370/158:
(1993) 990:126MIPS (ten 990s: 1.26BIPS);
(1993) eight processor ES/9000-982: 408MIPS
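
(a minimal sketch of the comparison implied by those numbers, in Python; figures are the 1993 benchmark numbers listed above)

mips_990 = 126                          # single RS/6000 990
cluster_mips = 10 * mips_990            # ten-990 cluster: 1,260 MIPS (1.26 BIPS)
mips_es9000_982 = 408                   # eight-processor ES/9000-982
print(cluster_mips / mips_es9000_982)   # ~3.1 ... cluster roughly 3x the largest 1993 mainframe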


some posts mentioning being asked to redo some of airline res apps
https://www.garlic.com/~lynn/2022h.html#58 Model Mainframe
https://www.garlic.com/~lynn/2021f.html#8 Air Traffic System
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System
https://www.garlic.com/~lynn/2016f.html#109 Airlines Reservation Systems
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF
https://www.garlic.com/~lynn/2013g.html#87 Old data storage or data base
https://www.garlic.com/~lynn/2007g.html#22 Bidirectional Binary Self-Joins

--
virtualization experience starting Jan1968, online at home since Mar1970

How Finance Capitalism Ruined the World

From: Lynn Wheeler <lynn@garlic.com>
Subject: How Finance Capitalism Ruined the World
Date: 25 Nov, 2023
Blog: Facebook
How Finance Capitalism Ruined the World - Dr. Michael Hudson & Dr. Steve Keen
https://www.nakedcapitalism.com/2023/11/how-finance-capitalism-ruined-the-world-dr-michael-hudson-dr-steve-keen.html
Here their deep dive focuses on how finance capital amassed more power and influence, to the detriment not just of ordinary citizens but even many businesses.

... snip ...

... note in the economic mess after the turn of the century ... there were claims that the financial industry tripled in size as percent of (US) GDP ... for no apparent benefit.

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess

some posts specifically mentioning the financial industry tripling in size as a percent of GDP
https://www.garlic.com/~lynn/2022g.html#89 Five fundamental reasons for high oil volatility
https://www.garlic.com/~lynn/2022g.html#29 The Financial Industry is a Lot Bigger than a Giant Vampire Squid
https://www.garlic.com/~lynn/2017h.html#24 OFF TOPIC: University of California, Irvine, revokes 500 admissions
https://www.garlic.com/~lynn/2017h.html#9 Corporate Profit and Taxes
https://www.garlic.com/~lynn/2017g.html#100 Why CEO pay structures harm companies
https://www.garlic.com/~lynn/2016g.html#88 Finance Is Not the Economy
https://www.garlic.com/~lynn/2013d.html#67 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013b.html#35 Adair Turner: A New Debt-Free Money Advocate
https://www.garlic.com/~lynn/2012o.html#73 These Two Charts Show How The Priorities Of US Companies Have Gotten Screwed Up
https://www.garlic.com/~lynn/2011j.html#24 rating agencies
https://www.garlic.com/~lynn/2011h.html#22 Is BitCoin a triple entry system?
https://www.garlic.com/~lynn/2011b.html#42 Productivity And Bubbles
https://www.garlic.com/~lynn/2011.html#80 Chinese and Indian Entrepreneurs Are Eating America's Lunch
https://www.garlic.com/~lynn/2010o.html#59 They always think we don't understand
https://www.garlic.com/~lynn/2010k.html#24 Snow White and the Seven Dwarfs
https://www.garlic.com/~lynn/2010i.html#47 "Fraud & Stupidity Look a Lot Alike"

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM 4300

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM 4300
Date: 25 Nov, 2023
Blog: Facebook
when I first transferred to SJR, I got to wander around most IBM and non-IBM datacenters in silicon valley ... including disk engineering and product test (bldg14&15) across the street. They were doing prescheduled, 7x24, stand-alone testing and said they had recently tried MVS, but it had a 15min MTBF (requiring manual re-ipl). I offered to rewrite the I/O supervisor so it was bullet proof and never failed, allowing any amount of on-demand, concurrent testing, greatly improving productivity. Bldg15 product test got the first engineering 3033 outside the POK flr, and since testing only took a percent or two of the processor, we then scrounge a string of 3330s and a 3830 controller and set up our own private online service ... including running 3270 coax under the road to my office in bldg28.

Then bldg15 gets an engineering 4341 and I have more 4341 test time than anybody in Endicott. Eventually national lab marketing hears about it and in Jan1979 (before 1st customer ship) cons me into running a benchmark for a national lab looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). Turns out a small 4341 cluster has higher (aggregate) throughput than a 3033, significantly cheaper, with much less floor space, power, and cooling requirements. Also, while the 303xs (3031 was two 158 engines, one with 370 mcode, the other with integrated channel mcode; 3032 was a 168 reworked to use the channel director for external channels) were stuck with the (slow) 158 engine running integrated channel microcode for external channels (channel director), with slight tweaks of 4341 microcode they could be used for 3mbyte/sec 3880/3380 testing.

posts getting to play disk engineers in bldg 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

The 4300s competed in the same mid-range market as the DEC VAX and sold in about the same numbers for small unit orders. However, large corporations started ordering hundreds at a time for placing out in departmental areas ... sort of the leading edge of the coming distributed computing tsunami. Inside IBM, departmental conference rooms were disappearing since so many were being converted to vm/4341 rooms. MVS looked at the distributed computing numbers ... but the only non-datacenter disk was the 3370 FBA. Eventually they come out with CKD simulation on the 3370 as the 3375. However, it didn't do MVS much good ... the distributed VM4341s were running dozens of systems per support person, while MVS still required a large number of support and operational people per system.

a few recent posts mentioning 4300 (cluster supercomputer and distributed) tsunamis
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2023.html#1 IMS & DB2

This is a decade of VAX sales, sliced & diced by model, year, US/non-US ... and the small-unit-order 4300 sales were similar. However, by the mid-80s the mid-range market was starting to shift to large PCs and workstations. IBM had expected that the 4361/4381 would continue the same explosion in sales as the 4331/4341, but the market had started to shift.
https://www.garlic.com/~lynn/2002f.html#0

The big San Jose GPD MVS datacenter was starting to burst at the seams, and a major application was a large microcode development system that only ran on MVS ... it didn't quite run on CMS (with its 64kbytes of OS/360 simulation) ... so their distributed 4341 plan was to deploy the mcode development system out in the departments on MVS 4341s. They looked at MVS CPU use for mapping to the MVS 4341s. However, the numbers they looked at were the MVS "capture CPU" (which turned out to be around 40%; the other "non-capture" 60% was MVS "overhead", a big chunk of it VTAM) ... so it wasn't quite as rosy as they were expecting. Then the Los Gatos VLSI tools group wrote another 12kbytes of OS360 simulation to get the mcode development system running on CMS (effectively giving them nearly all the 4341 CPU) ... the move to CMS also enabled some enhancements that hadn't been possible on MVS (including that MVS structures took up a good part of a 16mbyte address space, while with CMS they could get almost a full 16mbyte address space, less maybe 128kbytes).
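
(a minimal sketch of why the capture ratio mattered for the 4341 sizing, in Python; assumes the quoted 40%/60% split applies uniformly)

capture_ratio = 0.40       # fraction of MVS CPU attributed to the application ("capture CPU")
print(1 / capture_ratio)   # 2.5 total CPU-seconds consumed per captured CPU-second
# i.e. sizing MVS 4341s from "capture CPU" alone under-estimates CPU needs by ~2.5x,
# while CMS (giving the application nearly the whole machine) avoids most of that overhead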

trivia: I did an internal-only research report on the work for the engineering and product test labs and happened to mention the MVS 15min MTBF ... bringing down the wrath of the MVS organization on my head. Later, when 3380s were about to ship, FE had a test suite of 57 simulated errors that they expected to occur .... MVS was failing in all 57 cases, and in 2/3rds of the cases there was no evidence of what caused the system failure ... I didn't feel so badly.

Note that when the 3033 was out the door, the processor engineers started on trout/3090. The 3081 "service processor" was a UC microprocessor with a roll-your-own system. For the 3090, they decided that the service processor would be a 4331 running a highly modified version of VM370 release 6 ... and all the service screens were done in CMS IOS3270. Before it ships, the 4331 is upgraded and the "3092" becomes a pair of vm/cms 4361s, each running off 3370 FBA ...

Regions and larger branch offices started getting VMIC 4341s (information systems) in the mid-80s, augmenting the online sales&marketing support (VM) HONE systems.

posts referencing HONE
https://www.garlic.com/~lynn/subtopic.html#hone

a couple recent posts mentioning CMS OS/360 simulation
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023.html#87 IBM San Jose

recent posts discussing CSA and disappearing application space:
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia

--
virtualization experience starting Jan1968, online at home since Mar1970

370/125 VM/370

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 370/125 VM/370
Date: 26 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#10 370/125 VM/370

The 370 115/125 design/implementation was for a nine-position memory bus ... but most of the positions were never used ... so the suggestion was to use them for (additional) processors.

Note that right after it got canceled, I got roped into a 16-processor 370 effort and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before the POK favorite-son operating system (MVS) had effective 16-way support (POK doesn't ship a 16-way until after the turn of the century). The head of POK then invites some of us to never visit POK again and directs the 3033 processor engineers, heads down on 3033, don't be distracted.

A decade later I got asked to participate in SCI (started by a person at Stanford SLAC). There were comments that the 801/risc chips (ROMP, RIOS) were never multiprocessor because of the experience with the performance penalty paid for 370 strong memory (& cache) consistency (rumors that 168 cache consistency even had to be strengthened for MVS after introduction of the 2-processor machines). SCI had slightly weaker memory/cache consistency and was used by Sequent and Data General for 256-processor machines (using 64 four-processor, shared-cache i86 boards) and by Convex for a 128-processor machine using HP Snake (risc) two-processor, shared-cache boards ... and several others (trivia: late 90s, I did some consulting for Steve Chen when he was CTO at Sequent ... before IBM bought Sequent and shut them down).
https://www.slac.stanford.edu/pubs/slacpubs/5000/slac-pub-5184.pdf
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface

It was about the same time that the IBM branch office asked if I could help LLNL get some serial stuff standardized, which quickly becomes the fibre-channel standard (FCS, including some stuff I had done in 1980), initially 1gbit, full-duplex, aggregate 200mbyte/sec. A couple yrs later, the serial stuff that POK had been playing with since 1980 is finally announced with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy-weight protocol that significantly cuts the throughput, which eventually is announced as FICON. The most recently published benchmark I can find is "Peak I/O" for the z196, getting 2M IOPS using 104 FICON (running over 104 FCS) ... about the same time an FCS was announced for E5-2600 blades claiming a million IOPS (two such FCS have higher throughput than 104 FICON). However, there were also articles that the SAPs (system assist processors that actually do the I/O) should be kept to no more than 70% CPU ... which would have capped I/O around 1.5M IOPS (rather than the peak 2M).
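
(a minimal sketch of that comparison, in Python, using only the figures quoted above)

ficon_links = 104
z196_peak_iops = 2_000_000            # z196 "Peak I/O" benchmark over 104 FICON
print(z196_peak_iops / ficon_links)   # ~19,230 IOPS per FICON link
single_fcs_iops = 1_000_000           # claimed for one FCS on an E5-2600 blade
print(2 * single_fcs_iops)            # two FCS match/exceed all 104 FICON combined
print(0.70 * z196_peak_iops)          # ~1.4M IOPS if SAPs are held to 70% CPU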

5processor 370/125 posts
https://www.garlic.com/~lynn/submain.html#bounce
smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Future System
Date: 26 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#8 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#11 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#12 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#13 Vintage Future System

... 16-bit "Iliad" 801/risc chip for (low & mid-range) microprogramed processors. I helped with white paper for the 4361/4381 (follow-on to 4331/4341) that chips had advanced to state that nearly whole 370 can be done in circuits (i.e. the low/mid-range 370s had been running avg. of ten native instructions per 370 instruction) ... Boeblingen at the time had 3-chip 370 "ROMAN" with the throughput of 370/168-3. For various reason, the 801/risc efforts (4300s, as/400, controllers, etc) imploded and some returned to CISC microprogrammed microprocessors (and 4300 to silicon) ... and saw some of the 801/risc engineers going to risc efforts at other vendors.

ROMP 801/risc "chip" was suppose to have been for displaywriter follow-on, when that got canceled and decided to retarget for the unix workstation market and got the company than had done unix PC/IX for IBM/PC to do unix port for ROMP

The Los Gatos lab was doing the 1st 32-bit 801/risc, "Blue Iliad" (never came to production). In the mid-80s I had a proposal to cram racks with as many "Blue Iliad" and ROMAN chips as possible (the big problem was heat dissipation). A few years later I got involved in seeing how many RIOS (RS/6000) chipsets I could cram into a rack and how many I could tie together. It had started out as HA/6000 for the NYTimes, but I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
The executive we reported to then went over to head up Somerset (AIM, for power/pc).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

some posts mentioning ROMAN and Blue Iliad
https://www.garlic.com/~lynn/2022c.html#77 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021b.html#50 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021.html#65 Mainframe IPL
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2012l.html#82 zEC12, and previous generations, "why?" type question - GPU computing
https://www.garlic.com/~lynn/2011m.html#24 Supervisory Processors
https://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2010l.html#42 IBM zEnterprise Announced
https://www.garlic.com/~lynn/2006n.html#37 History: How did Forth get its stacks?
https://www.garlic.com/~lynn/2002l.html#27 End of Moore's law and how it can influence job market

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage X.25

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage X.25
Date: 26 Nov, 2023
Blog: Facebook
For a while I reported to the same executive as the person responsible for AWP164 (which becomes APPN) ... I would chide him to come work on real networking (TCP/IP) because the SNA folks were never going to appreciate him. When it came time to announce APPN, the SNA group non-concurred ... eventually the escalation rewrote the APPN announcement letter to carefully NOT imply any relationship between APPN and SNA.

My wife did a short stint as chief architect for Amadeus (the European airline res system built off the old Eastern Airlines System/One). She didn't remain long; she sided with Europe on X.25 (rather than SNA) and the communication group had her replaced. It didn't do them much good, since Europe went with X.25 anyway, and their replacement was subsequently replaced.

Note that starting in the early 80s, I had the HSDT project (T1 and faster computer links, both terrestrial and satellite, with lots of grief from the communication group since the fastest they supported was 56kbits) and was supposed to get $20M from the NSF director to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen, and eventually an RFP is released (in part based on what we already had running). From the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

some recent Amadeus posts
https://www.garlic.com/~lynn/2023f.html#9 Internet
https://www.garlic.com/~lynn/2023d.html#80 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#35 Eastern Airlines 370/195 System/One
https://www.garlic.com/~lynn/2023c.html#48 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#47 IBM ACIS
https://www.garlic.com/~lynn/2023c.html#8 IBM Downfall
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022h.html#97 IBM 360
https://www.garlic.com/~lynn/2022h.html#10 Google Cloud Launches Service to Simplify Mainframe Modernization
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#76 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#75 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System

--
virtualization experience starting Jan1968, online at home since Mar1970

OS/360 Bloat

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: OS/360 Bloat
Date: 26 Nov, 2023
Blog: Facebook
I took a 2 credit hr intro to fortran/computers and at the end of the semester was hired to port 1401 MPIO to the 360/30. The univ had a 709 (tape->tape) with a 1401 unit record front-end. The univ had been sold a 360/67 for tss/360 (replacing the 709/1401) and pending arrival of the 360/67, a 360/30 temporarily replaced the 1401 (for gaining 360 experience; the 360/30 had 1401 emulation and could have continued to run 1401 MPIO, so I guess I was part of getting 360 experience). Then within a year of taking the intro class, the 360/67 came in and I was hired fulltime, responsible for OS/360 (TSS/360 never came to production fruition). Student fortran had run under a second on the 709; initially student fortgclg ran over a minute. I install HASP, which cuts the time in half. I then redo STAGE2 SYSGEN to 1) be able to run in the production jobstream and 2) place/order datasets and PDS members to optimize disk arm seek and multi-track search, cutting another 2/3rds to 12.9secs. It never got better than the 709 until I install Univ. of Waterloo WATFOR.
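
(working the quoted reductions backwards, a minimal Python sketch showing the numbers are consistent)

after_sysgen = 12.9            # secs, after dataset/PDS placement optimization
after_hasp = after_sysgen * 3  # placement work cut another 2/3rds: ~38.7 secs before it
initial = after_hasp * 2       # HASP had cut the time in half: ~77 secs originally
print(after_hasp, initial)     # consistent with student fortgclg initially running over a minute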

Note: OS/360 bloat is largely responsible for the CICS design. At startup it would do all its file opens and obtain a large amount of storage, and then while running it would do its own (simulated) file open/close and storage allocation/deallocation (minimizing as much as possible the use of system services).

CICS &/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

Then before I graduate, I'm hired into a small group in the Boeing CFO's office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment). I think the Renton datacenter was possibly the largest in the world: a couple hundred million (dollars) in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room (somebody recently commented that Boeing was getting 360/65s like other organizations got keypunches).

Boeing Huntsville had gotten a two-processor/duplex 360/67 with several 2250M1s for CAD/CAM ... but settled on running it as two 360/65s with OS/360. Long-running jobs like CAD/CAM made the MVT storage management problems even worse, and they modified MVT13 to run in virtual memory mode to compensate (it didn't do any paging but could reorganize storage addresses). Summer 1969, the duplex was moved to Seattle.

some posts mentioning 709, 1401, mpio, watfor, boeing cfo, renton
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

This could be considered a precursor to the decision to add virtual memory to all 370s ... which a decade ago I was asked to track down. Basically MVT storage management was so bad that region sizes had to be specified four times larger than actually used, so a typical 1mbyte 370/165 would only run four concurrent regions, insufficient to keep the system busy and justified. Going to virtual memory would allow the number of concurrently running regions to be increased by a factor of four with little or no paging. Archived post with pieces of the virtual memory decision email:
https://www.garlic.com/~lynn/2011d.html#73
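
(a minimal sketch of the reasoning, in Python; ignores fixed kernel requirements for simplicity)

real_kb = 1024                         # typical 1mbyte 370/165
regions_real = 4                       # concurrent regions that fit without virtual memory
specified_kb = real_kb / regions_real  # ~256KB specified per region
touched_kb = specified_kb / 4          # only about a quarter actually used
regions_virtual = 4 * regions_real     # 16 regions defined in virtual address space
print(regions_virtual * touched_kb)    # ~1024KB actually touched: still fits, little or no paging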

And if you thought OS/360 was bloated ... both the TSS/360 kernel and applications really were. They had some spin masters who would say that TSS/360 multiprocessor support was the best there was (MVT 65/MP and MVS MP would claim a two-processor system was 1.2-1.5 times a single processor) ... TSS/360 would claim two processors were 3.9 times a single processor. However, a one-mbyte, one-processor 360/67 had hardly anything left for applications after fixed kernel requirements ... it wasn't until you got to two mbytes (available with two processors) that there started to be reasonable memory for running applications (the 3.9 times was purely with respect to 1mbyte, one-processor tss/360 ... no comparison with other kinds of systems).
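
(a minimal sketch of why the 3.9 times figure says more about memory than about multiprocessor efficiency; the kernel size used here is purely hypothetical, for illustration only)

kernel_kb = 768              # HYPOTHETICAL fixed TSS/360 kernel requirement, for illustration
app_1mb = 1024 - kernel_kb   # application memory left on a 1mbyte, one-processor 360/67
app_2mb = 2048 - kernel_kb   # application memory left on 2mbytes with the second processor
print(app_2mb / app_1mb)     # 5.0 ... several times the application memory for 2x the processors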

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Educational Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Educational Support
Date: 26 Nov, 2023
Blog: Facebook
The education/univ discount seemed to disappear with the litigation in the late 60s (which also spawned the 23jun1969 unbundling announcement: starting to charge for software, SE services, maintenance, etc). It sort of reappeared in the early 80s (possibly after some litigation event) with ACIS giving out large grants to various places. A co-worker at the science center was responsible for the technology used for the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s), which was then used in the 80s for the corporate-sponsored univ BITNET.
https://en.wikipedia.org/wiki/BITNET

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundling
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

Part of the issue was the rise of distributed & personal computing, which the communication group was fiercely fighting off, trying to preserve SNA and their dumb terminal paradigm.

communication group fighting off distributed & personal computer posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

Co-worker
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ....

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

The communication group was fighting the release of mainframe TCP/IP support, and possibly some influential customers got that changed. Then they changed their tactic and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped used nearly a whole 3090 processor getting an aggregate of 44kbytes/sec. I then do the changes for RFC1044 and in some tuning tests at Cray Research between a Cray and an IBM 4341, get sustained channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

trivia: I started doing T1 in 1980, which quickly morphs into HSDT, T1 and faster computer links, both terrestrial and satellite ... and was working with the NSF director, supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happened and it eventually results in an RFP (in part based on what we already had running). Preliminary announce (28Mar1986):
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (no CPD content, and being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

Then of course, by the late 80s IBM was rapidly heading down ... in 1992 it has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex who (mostly) reverses the breakup (although it wasn't long before places like the disk division are gone).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent posts mentioning IBM ACIS
https://www.garlic.com/~lynn/2023c.html#47 IBM ACIS
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2022.html#62 File Backup
https://www.garlic.com/~lynn/2021g.html#65 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2017j.html#94 IBM does what IBM does best: Raises the chopper again
https://www.garlic.com/~lynn/2017j.html#92 mainframe fortran, or A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017j.html#76 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017d.html#41 What are mainframes
https://www.garlic.com/~lynn/2017b.html#46 The ICL 2900
https://www.garlic.com/~lynn/2016d.html#100 Multithreaded output to stderr and stdout
https://www.garlic.com/~lynn/2016d.html#76 IBM plans for the future - an imaginary tale
https://www.garlic.com/~lynn/2016c.html#82 Fwd: Tech News 1964
https://www.garlic.com/~lynn/2015e.html#20 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2014f.html#75 Is end of mainframe near ?
https://www.garlic.com/~lynn/2013h.html#76 DataPower XML Appliance and RACF
https://www.garlic.com/~lynn/2013b.html#43 Article for the boss: COBOL will outlive us all

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage X.25

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage X.25
Date: 27 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#18 Vintage X.25

Early/mid-80s, I get asked to take a VTAM/NCP simulation done by a "baby bell" on Series/1 (which carried SNA traffic over real networking that included T1) and turn it into an IBM Type 1 product. Oct1986, I did a presentation at the SNA ARB meeting in Raleigh on how it was many times better than their products ... the young turks in the room really liked it, but afterwards the executive running the show wanted to know who authorized me to talk to them. Several IBMers familiar with Raleigh internal politics had set up numerous countermeasures ... but what the communication group did next to kill the effort can only be described as truth being stranger than fiction.

part of the presentation to the SNA ARB
https://www.garlic.com/~lynn/99.html#70

trivia: as an undergraduate in the 60s, I had code that could do dynamic terminal type identification for connections into any port on the mainframe telecommunication controller, dynamically switching the port scanner type. I then wanted to have a single dial-in number ("hunt group") for all terminal types (1052, 2741, tty/ascii). However, it didn't quite work because IBM had taken a short-cut: while the port scanner type could be dynamically changed, line speed had been hard-wired. That kicked off a univ project to build a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition that it could do dynamic line speed. Later it was upgraded to an Interdata/4 for the channel interface with a cluster of Interdata/3s for the port interfaces. Four of us got written up as responsible for (some part of) the clone controller business ... Interdata and then Perkin-Elmer sold it as a clone controller.
https://en.wikipedia.org/wiki/Interdata
Interdata, Inc., was a computer company, founded in 1966 by a former Electronic Associates engineer, Daniel Sinnott, and was based in Oceanport, New Jersey. The company produced a line of 16- and 32-bit minicomputers that were loosely based on the IBM 360 instruction set architecture but at a cheaper price.[2] In 1974, it produced one of the first 32-bit minicomputers,[3] the Interdata 7/32. The company then used the parallel processing approach, where multiple tasks were performed at the same time, making real-time computing a reality.[4]

... snip ...

Interdata bought by Perkin-Elmer
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
spun off in 1985 as concurrent computer corp.
https://en.wikipedia.org/wiki/Concurrent_Computer_Corporation

plug compatible (OEM) clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Cray

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Cray
Date: 27 Nov, 2023
Blog: Facebook
The communication group was fighting the release of mainframe TCP/IP when possibly some influential customers got release approval. Then the communication group changed their tactic and said that since they had corporate strategic ownership of everything that crossed datacenter walls, it had to be released through them. What got released used nearly a whole 3090 processor getting an aggregate of 44kbytes/sec. I then did the support for RFC1044, and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

Some years later (before IBM bought Sequent and shut them down), I did some consulting for Steve Chen when he was Sequent CTO (they had the SCI Numa-Q 256-processor multiprocessor; a decade earlier he had been at Cray and was the principal designer for the Cray X-MP & Y-MP).
https://en.wikipedia.org/wiki/Steve_Chen_(computer_engineer)
Chen left Cray Research in September 1987 after it dropped the MP line.[3]. With IBM's financial support, Chen founded Supercomputer Systems Incorporated (SSI) in January 1988

... snip ...

... trivia: rise of cluster supercomputing; Jan1979 I was con'ed into doing some benchmarks on an engineering 4341 (before they shipped to customers) for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).

Late 80s, IBM Nick Donofrio approved (our) HA/6000, originally for NYTimes to migrate their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
after we start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix). Early Jan1992, had a meeting with the Oracle CEO where AWD told them that we would have 16-way clusters by mid-92 and 128-way clusters by ye-92.

During Jan1992, I gave presentations to FSD on the work with the national labs. At the end of Jan, FSD notified the IBM Kingston supercomputer group (multiprocessor, and supporting Chen) that they were going with HA/CMP. Almost immediately, cluster scale-up is transferred for announce as the IBM supercomputer ("technical/scientific" *ONLY*; note the mainframe DB2 group had also been complaining about what we were doing) and we are told we can't work on anything with more than four processors (we leave IBM a few months later). Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
cluster supercomputer for technical/scientific only
https://www.garlic.com/~lynn/2001n.html#6000clusters1
more news 11may1992, IBM "caught" by surprise
https://www.garlic.com/~lynn/2001n.html#6000clusters2

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning 370/125 5-processor machine effort
https://www.garlic.com/~lynn/submain.html#bounce
smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

a few recent posts mentioning SCI and/or Sequent Numa-q
https://www.garlic.com/~lynn/2023g.html#16 370/125 VM/370
https://www.garlic.com/~lynn/2023e.html#78 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2022h.html#54 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022g.html#91 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#29 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022b.html#67 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022.html#118 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021i.html#16 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#45 OoO S/360 descendants
https://www.garlic.com/~lynn/2021b.html#64 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage 3081 and Water Cooling

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage 3081 and Water Cooling
Date: 27 Nov, 2023
Blog: Facebook
note: TCMs were for tightly packing the significantly larger number of circuits that went into the 3081 ... the following memo mentions the significant increase in circuits relative to the performance of past IBM machines, and especially compared to Amdahl.
http://www.jfsowa.com/computer/memo125.htm
... the TCM had liquid cooling with a heat exchanger and liquid flow on both sides. At one point, IBM had a flow sensor on the TCM side, but not on the outboard side of the heat exchanger ... and some installation lost outboard flow; by the time the thermal sensor tripped, it was too late and the TCMs had fried. IBM then put flow sensors on both sides of the heat exchanger.

Amdahl left IBM to form his own company after ACS/360 was killed (folklore is that it was canceled because executives were afraid it would advance the state of the art too fast and they would lose control of the market) ... the following also lists ACS/360 features that show up more than 20yrs later with ES/9000.
https://people.computing.clemson.edu/~mark/acs_end.html

The Amdahl air-cooled single processor was faster than the initially announced two-processor 3081D. IBM then doubled the cache size for the 3081K, claiming two-processor performance was now about the same as the Amdahl single processor (however, Amdahl single-processor MVS throughput was much better because of MVS multiprocessor overhead) ... and the Amdahl two-processor was much better than the 3084 four-processor.

Posts mentioning SMP, multiprocessor, tightly-coupled
https://www.garlic.com/~lynn/subtopic.html#smp

Posts mentioning Sowa's Memo125, ACS/360 end, and TCMs
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2019c.html#44 IBM 9020
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2012e.html#38 A bit of IBM System 360 nostalgia

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage ARPANET/Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage ARPANET/Internet
Date: 27 Nov, 2023
Blog: Facebook
co-worker at the science center was responsible for the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

also used for the corporate sponsored univ. BITNET
https://en.wikipedia.org/wiki/BITNET

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

Note: GML was invented at the science center in 1969 (name taken from the first letters of the three inventors' last names); after a decade it morphs into the ISO standard SGML (and after another decade, into HTML at CERN). SGML history
https://web.archive.org/web/20230402213042/http://www.sgmlsource.com/history/index.htm
https://web.archive.org/web/20230703135955/http://www.sgmlsource.com/history/index.htm
papers about early GML ... as GML was being invented.
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml

Birth of the ARPAnet: 1969 (mentions the 1st four nodes) ... the science center wide-area network was already operational
https://www.cybertelecom.org/notes/internet_history69.htm
"These sites were running a Sigma 7 with the SEX operating system, an SDS 940 with the Genie operating system, an IBM 360/75 with OS/MVT (or perhaps OS/MFT), and a DEC PDP-10 with the Tenex operating system. Options existed for additional nodes if the first experiments were successful.

... snip ...

... also mentions that IBM didn't participate (until NSFNET). The great cut-over from the IMP/HOST ARPANET to internetworking (TCP/IP) was 1jan1983.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

NOTE: early 80s I got the HSDT project, T1 and faster computer links (both satellite and terrestrial), and was supposed to get $20M from the NSF director to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and finally an RFP is released (in part based on what we already had running). From the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid; RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Cray

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Cray
Date: 28 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#22 Vintage Cray

Congress had passed some legislation encouraging national labs to spin off technology for commercial use ... as part of making the US more competitive. LANL spun off its hierarchical filesystem as Datatree, LLNL was spinning off its Cray UNICOS hierarchical filesystem LINCS as Unitree, and NCAR had formed "Mesa Archival" to commercialize their hierarchical filesystem ... and we were having regular meetings with all three.

DataTree and UniTree ref:
https://ieeexplore.ieee.org/document/113582
The capabilities and advantages of DataTree and UniTree (hierarchical file- and storage-management systems for networked, multivendor computing environments) are discussed. DataTree is an advanced centralized MVS-based system; UniTree is a Unix-based file-server product. DataTree is the commercial version of the common file system (CFS) developed at the Los Alamos National Laboratory. The DataTree server platform is built upon the MVS operating system and serves client computers running on a wide variety of operating systems. UniTree is the commercial version of the file server developed and currently in production at the Lawrence Livermore National Laboratory. Unitree's compatibility with DataTree will enable it to provide a long-term migration option for DataTree/CFS users. The UniTree system includes utility programs for the conversion of DataTree directories to UniTree format. UniTree will process DataTree tapes directly, i.e. without having to rewrite them.

... snip ...

besides working with LLNL on HA/CMP cluster scale-up for cluster supercomputing, we had also hired an LA company to port the LLNL Cray UNICOS filesystem LINCS (UNITREE) to HA/CMP (all of which cratered when cluster scale-up was transferred).

National Information Infrastructure
https://en.wikipedia.org/wiki/National_Information_Infrastructure
legislated as part of the High Performance Computing Act (9Dec1991)
https://en.wikipedia.org/wiki/High_Performance_Computing_Act_of_1991

and I was participating in the NII testbed meetings at LLNL .... my participation also cratered when cluster scale-up was transferred. Joke: the US wanted industry to participate in the NII testbed on their own nickel ... Singapore then invited all the NII testbed participants to duplicate the effort there ... fully funded.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Some posts mentioning NCAR/Mesa Archival, LANL/Datatree, and LLNL/Unitree
https://www.garlic.com/~lynn/2023e.html#106 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2021h.html#93 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2021g.html#2 IBM ESCON Experience
https://www.garlic.com/~lynn/2018d.html#41 The Rise and Fall of IBM
https://www.garlic.com/~lynn/2017b.html#67 Zero-copy write on modern motherboards
https://www.garlic.com/~lynn/2015c.html#68 30 yr old email
https://www.garlic.com/~lynn/2012p.html#9 3270s & other stuff
https://www.garlic.com/~lynn/2012k.html#46 Slackware
https://www.garlic.com/~lynn/2012i.html#47 IBM, Lawrence Livermore aim to meld supercomputing, industries
https://www.garlic.com/~lynn/2011n.html#34 Last Word on Dennis Ritchie
https://www.garlic.com/~lynn/2010d.html#71 LPARs: More or Less?
https://www.garlic.com/~lynn/2009s.html#42 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2008p.html#51 Barbless
https://www.garlic.com/~lynn/2007j.html#47 IBM Unionization
https://www.garlic.com/~lynn/2006n.html#29 CRAM, DataCell, and 3850
https://www.garlic.com/~lynn/2005e.html#16 Device and channel
https://www.garlic.com/~lynn/2003b.html#31 360/370 disk drives
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
https://www.garlic.com/~lynn/2001f.html#66 commodity storage servers

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage 370/158

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage 370/158
Date: 28 Nov, 2023
Blog: Facebook
The 370/158 (integrated microcode) channel was one of the slowest, as was the 303x channel director used for all 3031/3032/3033, which was a 158 engine with just the integrated channel microcode (i.e. a 3031 was two 158 engines ... one with just the 370 microcode and the other with just the integrated channel microcode).

As an undergraduate I had modified CP67 to do chained paging I/O optimized for maximum transfers per revolution (originally it did FIFO, single transfer per I/O). The standard CCW sequence was read/write 4k, tic, search, read/write 4k ... channel commands all being serially fetched from processor memory and processed while the disk was rotating. That was OK if consecutive records were on the same track; however, if they were on different tracks (but the same cylinder), it became read/write 4k, tic, seek head, search, read/write 4k. To allow for the extra time to fetch/process the seek head, the actual format had a short dummy block between 4k blocks. The problem with the 3330 track was that three 4k blocks could be formatted with short dummy blocks ... but the dummy blocks weren't long enough to cover the slow 158 integrated-channel processing ... the next 4k block would rotate past the disk head while the seek-head CCW was still being handled.

I wrote a VM370/CMS program to format a test 3330 cylinder with the maximum possible dummy block size (between 4k data blocks) and then run a channel program trying to transfer consecutive data blocks from different tracks in a single revolution. It would then repeat, reformatting with smaller and smaller dummy block sizes (to find the smallest dummy block that could be used).

I got it run on a number of IBM and customer configurations: different IBM 370s and various customer IBM & non-IBM 370s with IBM (3830) and non-IBM controllers and disks. The official 3330 spec called for a 110-byte dummy block to handle the seek head and read/write the next consecutive data block in the same rotation ... however, the 3330 track size only allowed for 101-byte dummy blocks with three 4k data blocks. 145, 4341, 168 external channels, etc., would perform the switch 100% of the time (with 101-byte dummy blocks). The 158 and all 303x could only do it 20%-30% of the time (70%-80% of the time they would miss and take a full extra rotation; for similar reasons, 3081 channels also didn't do it 100% of the time). Some customers reported back that non-IBM 370s with non-IBM controllers & disks would do it 100% of the time with a 50-byte dummy block.
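A back-of-envelope model of the experiment, using the published 3330 figures (3600rpm, ~806kbytes/sec data rate); the per-channel CCW-processing times in the example are illustrative assumptions, not measurements:

  # Back-of-envelope model of the dummy-block experiment. 3330 media figures
  # (3600rpm, ~806,000 bytes/sec transfer rate) are published specs; the
  # per-channel CCW-processing times below are illustrative assumptions.

  TRANSFER_RATE = 806_000          # bytes/sec, 3330 data rate
  REV_TIME_MS = 60_000 / 3600      # ~16.7 ms per revolution at 3600rpm

  def gap_time_us(dummy_bytes):
      # time (microseconds) for the dummy block to pass under the head
      return dummy_bytes / TRANSFER_RATE * 1_000_000

  def head_switch_ok(dummy_bytes, ccw_processing_us):
      # True if the channel finishes the seek-head/search CCWs within the gap
      return ccw_processing_us <= gap_time_us(dummy_bytes)

  for dummy in (110, 101, 50):
      print(f"{dummy:3d}-byte dummy block = {gap_time_us(dummy):5.1f} usec gap")

  # e.g. a channel needing ~130 usec (assumed) makes the official 110-byte gap
  # (~136 usec) but misses the 101-byte gap (~125 usec) that actually fits on
  # a 3330 track -- and a miss costs a full revolution (~16.7 ms).
  print(head_switch_ok(110, ccw_processing_us=130))   # True
  print(head_switch_ok(101, ccw_processing_us=130))   # False
  print(f"miss penalty ~{REV_TIME_MS:.1f} ms")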

Something similar, but different: things got worse with the 3880 disk controller, which had a special hardware path to handle 3mbyte/sec data transfer but a significantly slower processor than the 3830 for everything else. 3090 had configured its number of channels for target throughput assuming the 3880 was the same as the 3830 but with 3mbyte/sec transfer. However, everything else took significantly longer, driving up channel busy (and cutting throughput). They realized that they would have to significantly increase the number of channels to achieve the target throughput (compensating for the large increase in 3880 channel busy); the increase in the number of channels required an additional TCM, and the 3090 group semi-facetiously claimed they would bill the 3880 controller group for the increased 3090 manufacturing cost. Marketing eventually respun the large increase in the number of channels as the 3090 being a wonderful I/O machine (when it was really just offsetting the 3880 increase in channel busy).
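A toy sketch of the channel-count arithmetic (all figures are made-up placeholders, just to show the shape of the trade-off): if each I/O holds the channel busy longer, more channels are needed to sustain the same aggregate I/O rate at a given channel-busy target.

  import math

  def channels_needed(io_per_sec, busy_ms_per_io, max_busy=0.30):
      # total channel-seconds of busy generated per second of wall clock
      busy_per_sec = io_per_sec * busy_ms_per_io / 1000.0
      # round() guards against float noise before taking the ceiling
      return math.ceil(round(busy_per_sec / max_busy, 6))

  target_io_rate = 600   # aggregate I/Os per second (assumed)
  print("3830-like overhead:", channels_needed(target_io_rate, busy_ms_per_io=3.0))   # -> 6
  print("3880-like overhead:", channels_needed(target_io_rate, busy_ms_per_io=4.5))   # -> 9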

posts mentioning getting to play disk engineer in bldgs14&15:
https://www.garlic.com/~lynn/subtopic.html#disk

some posts mentioning page I/O and dummy blocks
https://www.garlic.com/~lynn/2023e.html#3 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#114 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2010m.html#15 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2010.html#49 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2006r.html#40 REAL memory column in SDSF

--
virtualization experience starting Jan1968, online at home since Mar1970

Another IBM Downturn

From: Lynn Wheeler <lynn@garlic.com>
Subject: Another IBM Downturn
Date: 28 Nov, 2023
Blog: Facebook
cp/m trivia, before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on IBM CP67/CMS at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

a couple recent posts mentioning ms/dos & cp/m
https://www.garlic.com/~lynn/2023.html#99 Online Computer Conferencing
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022b.html#111 The Rise of DOS: How Microsoft Got the IBM PC OS Contract
https://www.garlic.com/~lynn/2021k.html#22 MS/DOS for IBM/PC

... and the person who had done DEC VMS was hired by m'soft to do NT.

a couple posts mentioning DEC, VMS, and person hired to do NT
https://www.garlic.com/~lynn/2022b.html#12 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2019e.html#137 Half an operating system: The triumph and tragedy of OS/2
https://www.garlic.com/~lynn/2010.html#0 Problem with XP scheduler?

trivia: when the decision was made to add virtual memory to all 370s, a few people split off from the science center on the 4th flr, moved to the 3rd flr and took over the IBM Boston Programming Center (which among other things had done CPS) to become the VM/370 development group; when they outgrew the 3rd flr, they moved out to the former (empty) IBM SBC bldg at Burlington Mall on rt128.

After the FS implosion and the mad rush to get stuff back into the 370 product pipelines, the head of POK managed to convince corporate to kill the vm370 product, shutdown the development group and transfer all the people to POK for MVS/XA (supposedly otherwise MVS/XA wouldn't ship on time). They weren't planning to tell the group about the shutdown/move until shortly before (to minimize the numbers that might escape). It managed to leak early and several people escaped into the area (the joke was that the head of POK was a major contributor to the infant DEC VMS effort).

There then was a hunt for the source of the leak; fortunately for me, nobody gave up the source. NOTE: Endicott managed to salvage the VM370 product mission (for the mid-range), but had to reconstitute a development group from scratch. Some internal datacenters were getting bullied by POK trying to force them into moving to MVS.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

a few posts mentioning head of POK getting VM/370 killed and shutting down development group
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2021c.html#64 CMS Support
https://www.garlic.com/~lynn/2019b.html#22 Online Computer Conferencing
https://www.garlic.com/~lynn/2018d.html#5 DOS & OS2
https://www.garlic.com/~lynn/2013i.html#29 By Any Other Name
https://www.garlic.com/~lynn/2011p.html#82 Migration off mainframe
https://www.garlic.com/~lynn/2011k.html#9 Was there ever a DOS JCL reference like the Brown book?
https://www.garlic.com/~lynn/2007s.html#33 Age of IBM VM
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2002m.html#9 DOS history question

Turn of the century, mainframe hardware was a few percent of IBM revenue and dropping.

2012 articles reported that mainframe hardware was 4% of revenue (and still dropping), but the mainframe group was 25% of revenue (mostly software and services) and 40% of profit
https://www.businessinsider.com/ibms-million-dollar-computers-are-the-profit-makers-that-just-wont-die-2012-9
https://www.bloomberg.com/news/articles/2012-08-28/ibm-bets-1-billion-on-new-mainframe-in-shrinking-market
https://www.datacenterdynamics.com/en/news/ibms-hardware-revenue-nosedive-continues/

since then, IBM has mostly given change from previous financials, so it is a little harder to track

some recent posts reference mainframe hardware sales
https://www.garlic.com/~lynn/2022b.html#63 Mainframes
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022.html#54 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance

Late 80s, a senior disk engineer got a talk scheduled at the world-wide, annual, internal communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing to more distributed-computing-friendly platforms. The disk division had come up with solutions that were constantly being vetoed by the communication group (fiercely fighting off client/server and distributed computing, augmented with their corporate strategic responsibility for everything that crossed datacenter walls). The communication group stranglehold on datacenters wasn't just affecting disks, and a couple years later IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex who (mostly) reverses the breakup (although it wasn't long before the disk division is gone).

communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

tome starting with Learson trying (& failing) to block the bureaucrats, careerists, and MBAs destroying the Watson legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM FSD

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM FSD
Date: 29 Nov, 2023
Blog: Facebook
I took a two-credit-hr intro to computers/fortran and within a year was hired fulltime responsible for os/360 (a 360/67 had arrived for tss/360, replacing a 709/1401 ... however tss/360 never reached production, so it ran as a 360/65 with os/360). Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing in an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think the Renton datacenter was the largest in the world, a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (somebody recently commented that Boeing was getting 360/65s like other organizations got keypunches).

In the early 80s, I'm introduced to John Boyd and would sponsor his briefings at IBM ... some more detail:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
one of his stories was about being very vocal that the electronics across the trail wouldn't work ... possibly as punishment he is put in command of "spook base" (about the same time I'm at Boeing) ... reference (gone 404, but lives on at wayback machine):
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

Boyd's biography has "spook base" as a $2.5B "windfall" for IBM (ten times Renton). A story about IBM managing sales rep earnings (after the switch from straight commission to quota): When Big Blue Went to War (1965-1975)
https://www.amazon.com/When-Big-Blue-Went-War-ebook/dp/B07923TFH5/
loc192-99:
We four marketing reps, Mike, Dave, Jeff and me, in Honolulu (1240 Ala Moana Boulevard) qualified for IBM's prestigious 100 Percent Club during this period but our attainment was carefully engineered by mainland management so that we did not achieve much more than the required 100% of assigned sales quota and did not receive much in sales commissions. At the 1968 100 Percent Club recognition event at the Fontainebleau Hotel in Miami Beach, the four of us Hawaiian Reps sat in the audience and irritably watched as eight other "best of the best" IBM commercial marketing representatives from all over the United States receive recognition awards and big bonus money on stage. The combined sales achievement of the eight winners was considerably less than what we four had worked hard to achieve in the one small Honolulu branch office. Clearly, IBM was not interested in hearing accusations of war profiteering and they maintained that posture throughout the years of the company's wartime involvement.

... snip ...

recent item mentioning FSD and national labs (in Cray thread)
https://www.garlic.com/~lynn/2023g.html#22 Vintage Cray
https://www.garlic.com/~lynn/2023g.html#25 Vintage Cray
item about IBM and FAA ATC
https://www.garlic.com/~lynn/2023f.html#84 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2023f.html#86 FAA ATC, The Brawl in IBM 1964

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
Boyd posts & WEB URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Another IBM Downturn

From: Lynn Wheeler <lynn@garlic.com>
Subject: Another IBM Downturn
Date: 29 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#27 Another IBM Downturn

After leaving IBM, I was brought in as a consultant for a client/server startup that wanted to do payment transactions. Two of the former Oracle people that we had worked with on HA/CMP commercial/RDBMS cluster scale-up (and who had been in the Ellison meeting when AWD told them we would have 16-way by mid-92 and 128-way by ye-92) were there, responsible for something called the commerce server. The startup had invented something they called "SSL" that they wanted to use for payment transactions; I had responsibility for everything between the webservers and the financial payment networks; the result is now frequently called "electronic commerce".

Later, at the end of the century, a financial transaction outsourcing company asked us to go to Seattle for a year to work with various companies on WEB electronic commerce efforts. I also would drop in on Sequent (who happened to mention that they did nearly all the NT work for multiprocessor scale-up past two processors). One of the Seattle-area companies did commercial Kerberos work and had a contract to do Kerberos (active directory) for m'soft ... the CEO of that company we had known at IBM (at one time head of POK mainframe and then Boca PS2/OS2). We also had a booth and demos at the Miami BAI world retail banking show ... archived post with press release
https://www.garlic.com/~lynn/99.html#224

recent comment about doing some consulting for (Sequent CTO) Steve Chen end of the century
https://www.garlic.com/~lynn/2023g.html#22 Vintage Cray

There was a project with m'soft to do outsourced online banking for the financial industry, but the initial forecast for banks and account sign-ups meant that it would have to use SUN servers (instead of NT), and everybody elected me to tell the m'soft CEO. A couple days before my meeting with the m'soft CEO, some m'soft executives changed the business plan ... they would only sign up customers that NT could handle (and increase only as NT performance improved).

posts regarding all the help that POK needed with the MVS/XA delivery date: MVS CSA started out as the "common segment area" in every application's 16mbyte virtual address space ... then grew into the "common system area" and was on the verge of exploding to 8mbytes (its size somewhat proportional to the size of the system, the number of concurrent applications, and the number of subsystems). With an 8mbyte image of the MVS kernel in every application's 16mbyte virtual address space, having CSA explode to 8mbytes as well would leave nothing for the applications.
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
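A minimal sketch of the address-space arithmetic behind the CSA problem (sizes in mbytes, purely illustrative):

  # The MVS 16mbyte virtual address space squeeze: an 8mbyte kernel image
  # plus a growing common (segment -> system) area leaves less and less
  # for the application itself.

  ADDRESS_SPACE = 16
  KERNEL_IMAGE = 8

  for csa in (1, 2, 4, 6, 8):
      left_for_app = ADDRESS_SPACE - KERNEL_IMAGE - csa
      print(f"CSA {csa}mbyte -> {left_for_app}mbyte left for the application")
  # at CSA = 8mbytes, nothing is left -- part of the pressure behind MVS/XA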

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
web gateway for financial payment network
https://www.garlic.com/~lynn/subnetwork.html#gateway
x9.59 posts
https://www.garlic.com/~lynn/subpubkey.html#x959
smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM OS/VU

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM OS/VU
Date: 29 Nov, 2023
Blog: Facebook
IBM Virtual Universe Operating System - OS/VU
http://lbdsoftware.com/ibm_virtual_universe_operating_s.html

After I joined IBM Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
https://en.wikipedia.org/wiki/CP-67
https://en.wikipedia.org/wiki/CP/CMS
https://en.wikipedia.org/wiki/Conversational_Monitor_System
https://en.wikipedia.org/wiki/IBM_CP-40
https://www.garlic.com/~lynn/cp40seas1982.txt
https://www.leeandmelindavarian.com/Melinda#VMHist

one of my hobbies was enhanced production operating systems for internal datacenters. After the decision to add virtual memory to all 370s (modulo the 195), a group formed to morph CP67 into VM370 ... which simplified and/or dropped features. In 1974, I started migrating stuff to VM370, initially for the VM370 release2-based CSC/VM ... which included a kernel re-org for multiprocessor support ... but not actual multiprocessor support itself (that came later with the VM370 release3-based CSC/VM). For some reason, AT&T Long Lines obtained the pre-multiprocessor CSC/VM and over the years brought it along to the latest 370 systems.

Early 1980s, the IBM senior AT&T marketing rep tracks me down about helping AT&T Long Lines ... which had propagated my CSC/VM around a lot of installations inside AT&T. There is lots of stuff about the IBM 3081 being a quick&dirty effort after Future System implodes ... and it was going to be multiprocessor only. IBM initially was afraid that the whole ACP/TPF market was going to move to the latest single-processor Amdahl machine (which had about the same processing power as the aggregate of the two-processor 3081K) because ACP/TPF didn't have multiprocessor support ... however, AT&T Long Lines was in a similar circumstance ... my early (VM370 release2-based) CSC/VM didn't have multiprocessor support until later (originally for US consolidated HONE, to add a 2nd processor to each of the systems in their single-system-image, loosely-coupled complex; trivia: when facebook originally moves into Silicon Valley, it is into a new bldg built next door to the former consolidated US HONE datacenter).

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

some past posts mentioning AT&T Longlines
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices
https://www.garlic.com/~lynn/2022.html#101 Online Computer Conferencing
https://www.garlic.com/~lynn/2021k.html#63 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2019d.html#121 IBM Acronyms
https://www.garlic.com/~lynn/2017k.html#33 Bad History
https://www.garlic.com/~lynn/2017d.html#80 Mainframe operating systems?
https://www.garlic.com/~lynn/2017d.html#48 360 announce day
https://www.garlic.com/~lynn/2016g.html#68 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2015c.html#27 30 yr old email
https://www.garlic.com/~lynn/2015.html#85 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2013b.html#37 AT&T Holmdel Computer Center films, 1973 Unix
https://www.garlic.com/~lynn/2012f.html#59 Hard Disk Drive Construction
https://www.garlic.com/~lynn/2011g.html#7 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2008l.html#82 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008i.html#14 DASD or TAPE attached via TCP/IP
https://www.garlic.com/~lynn/2008.html#41 IT managers stymied by limits of x86 virtualization
https://www.garlic.com/~lynn/2008.html#30 hacked TOPS-10 monitors
https://www.garlic.com/~lynn/2008.html#29 Need Help filtering out sporge in comp.arch
https://www.garlic.com/~lynn/2007v.html#15 folklore indeed
https://www.garlic.com/~lynn/2007u.html#6 Open z/Architecture or Not
https://www.garlic.com/~lynn/2007g.html#54 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006b.html#21 IBM 3090/VM Humor
https://www.garlic.com/~lynn/2005p.html#31 z/VM performance
https://www.garlic.com/~lynn/2004m.html#58 Shipwrecks
https://www.garlic.com/~lynn/2004e.html#32 The attack of the killer mainframes
https://www.garlic.com/~lynn/2003d.html#46 unix
https://www.garlic.com/~lynn/2003.html#17 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2002p.html#23 Cost of computing in 1958?
https://www.garlic.com/~lynn/2002i.html#32 IBM was: CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002c.html#11 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow
https://www.garlic.com/~lynn/2001f.html#3 Oldest program you've written, and still in use?
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/96.html#35 Mainframes & Unix (and TPF)
https://www.garlic.com/~lynn/95.html#14 characters

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Datacenter

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Datacenter
Date: 30 Nov, 2023
Blog: Facebook
A large cloud operator will have a dozen or more megadatacenters around the world, each with a half million or more server blades, each server blade with more processing power than a max-configured mainframe ... enormous optimization and automation, a megadatacenter having a total staff of 70-80 people. For decades they have claimed they assemble their own blades at 1/3rd the cost of brand-name blades. IBM sold off its server business shortly after press reports that server chip makers were shipping at least half their product directly to megadatacenters. The megadatacenter business is so large that they started demanding server chips specifically designed to their requirements.

As an undergraduate in the 60s, I was hired into a small group in the Boeing CFO office to help with the formation of BCS, consolidating all dataprocessing into an independent business unit to better monetize the investment. I thought Renton was the largest datacenter, a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (recent comment that Boeing was getting 360/65s like other companies got keypunches). The disaster plan had Renton being duplicated at the new 747 plant at Paine Field up in Everett (if Mt. Rainier heats up, the resulting mud slide would take out Renton). Badges had a color-coded bar that gave level (which included the parking lots you could use) and color-coded engraved lettering that gave security clearance.

When I graduate, I join the IBM Science Center; there was lots of work that had been going on for 7x24 online services ... two CP67 (virtual machine) commercial spin-offs of the science center were moving up the value stream, specializing in online (highly secure) services for finance & wallstreet ... and Cambridge got online business planner users in Armonk (loading the most valuable corporate assets on the cambridge system). We had to demonstrate strong security since profs, staff, and students from local cambridge/boston institutions were also using the cambridge system.

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

a few posts (this year) mentioning Boeing CFO/BCS:
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#115 IBM RAS
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023f.html#20 I've Been Moved
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#99 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#101 Operating System/360
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023c.html#86 IBM Commission and Quota
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#67 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#66 Economic Mess and IBM Science Center
https://www.garlic.com/~lynn/2023c.html#15 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#57 Almost IBM class student
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#5 1403 printer

--
virtualization experience starting Jan1968, online at home since Mar1970

Storage Management

From: Lynn Wheeler <lynn@garlic.com>
Subject: Storage Management
Date: 30 Nov, 2023
Blog: Facebook
Late 70s, SHARE did the LSRAD white paper ... dated DEC79 ... I scanned it for putting up on bitsavers ... but since it was published shortly after congress extended the copyright period ... I had a devil of a time finding a SHARE official that would approve putting it up on bitsavers.

I had been writing memos that CKD (especially multi-track search) was a 60s trade-off between plentiful I/O resources and limited real memory for caching indexes ... but by the mid-70s the trade-off was starting to invert (no real CKD disks have been made for decades now, but CKD is still simulated on industry-standard fixed-block disks). Early 80s, I distributed a memo showing that between the 360 announce and the early 80s, disk relative system throughput had declined by an order of magnitude (systems got 40-50 times faster but disks only got 3-5 times faster). A GPD/disk executive took exception and assigned the division performance group to refute the claim ... after a few weeks they came back and effectively said I had slightly understated the situation. Their analysis was respun into a SHARE presentation on configuring DASD for improved throughput (16Aug1984, SHARE 63, B874).
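The order-of-magnitude claim falls straight out of the numbers in the memo; a minimal sketch:

  # Relative-throughput arithmetic from the memo: systems got 40-50x faster
  # while disks only got 3-5x faster, so disk throughput relative to the
  # system declined to roughly a tenth of the 360-announce ratio.

  system_speedup = (40, 50)
  disk_speedup = (3, 5)

  worst = disk_speedup[0] / system_speedup[1]   # 3/50 = 0.06
  best = disk_speedup[1] / system_speedup[0]    # 5/40 = 0.125
  print(f"relative disk throughput: {worst:.2f} .. {best:.2f} of the 1964 ratio")
  # i.e. roughly an order-of-magnitude decline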

Later I did some work on storage management with national labs for large supercomputing centers: LANL Datatree, LLNL Unitree, and NCAR Mesa Archival. This was about the time that CMSBACK (which I had originally done for internal datacenters in the late 70s) was being upgraded with PC and workstation clients for release to customers as WDSF, then picked up and renamed ADSM, then TSM, now "Storage Protect".

posts mentioning cmsback, backup/archive, storage management
https://www.garlic.com/~lynn/submain.html#backup

some recent posts specifically mentioning datatree, unitree, and/or mesa archival
https://www.garlic.com/~lynn/2023g.html#25 Vintage Cray
https://www.garlic.com/~lynn/2023e.html#106 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023c.html#19 IBM Downfall
https://www.garlic.com/~lynn/2023.html#21 IBM Change
https://www.garlic.com/~lynn/2022h.html#84 CDC, Cray, Supercomputers
https://www.garlic.com/~lynn/2022f.html#26 IBM "nine-net"
https://www.garlic.com/~lynn/2022b.html#67 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#16 Channel I/O
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2021h.html#93 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2021g.html#2 IBM ESCON Experience
https://www.garlic.com/~lynn/2021c.html#63 Distributed Computing
https://www.garlic.com/~lynn/2021c.html#52 IBM CEO

some posts mentioning LSRAD
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report
https://www.garlic.com/~lynn/2015f.html#82 Miniskirts and mainframes
https://www.garlic.com/~lynn/2014j.html#53 Amdahl UTS manual
https://www.garlic.com/~lynn/2013h.html#85 Before the PC: IBM invents virtualisation
https://www.garlic.com/~lynn/2013h.html#82 Vintage IBM Manuals
https://www.garlic.com/~lynn/2013e.html#52 32760?
https://www.garlic.com/~lynn/2012p.html#58 What is holding back cloud adoption?
https://www.garlic.com/~lynn/2012o.html#36 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing
https://www.garlic.com/~lynn/2012i.html#40 GNOSIS & KeyKOS
https://www.garlic.com/~lynn/2012i.html#39 Just a quick link to a video by the National Research Council of Canada made in 1971 on computer technology for filmmaking
https://www.garlic.com/~lynn/2012f.html#58 Making the Mainframe more Accessible - What is Your Vision?
https://www.garlic.com/~lynn/2011p.html#146 IBM Manuals
https://www.garlic.com/~lynn/2011p.html#11 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#10 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011n.html#70 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011n.html#62 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011.html#89 Make the mainframe work environment fun and intuitive
https://www.garlic.com/~lynn/2011.html#88 digitize old hardcopy manuals
https://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book
https://www.garlic.com/~lynn/2010q.html#33 IBM S/360 Green Card high quality scan
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index
https://www.garlic.com/~lynn/2009n.html#0 Wanted: SHARE Volume I proceedings
https://www.garlic.com/~lynn/2009.html#70 A New Role for Old Geeks
https://www.garlic.com/~lynn/2009.html#47 repeat after me: RAID != backup
https://www.garlic.com/~lynn/2007d.html#40 old tapes
https://www.garlic.com/~lynn/2006d.html#38 Fw: Tax chooses dead language - Austalia
https://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual

--
virtualization experience starting Jan1968, online at home since Mar1970

Interest Payments on the Ballooning Federal Debt vs. Tax Receipts & GDP: Not as Bad as in 1982-1997, but Getting There

From: Lynn Wheeler <lynn@garlic.com>
Subject: Interest Payments on the Ballooning Federal Debt vs. Tax Receipts & GDP: Not as Bad as in 1982-1997, but Getting There
Date: 01 Dec, 2023
Blog: Facebook
Interest Payments on the Ballooning Federal Debt vs. Tax Receipts & GDP: Not as Bad as in 1982-1997, but Getting There
https://wolfstreet.com/2023/11/29/us-government-interest-payments-on-the-ballooning-debt-vs-tax-receipts-gdp-not-as-bad-as-in-1982-1997-but-getting-there/
The magnificently ballooning US government debt is rapidly approaching $34 trillion (now at $33.84 trillion), up from $33 trillion in mid-September, and up from $32 trillion in mid-June, amid a tsunami of issuance of Treasury securities to fund the stunning government deficits.

... snip ...

2002, the republican house lets the fiscal responsibility act lapse (spending couldn't exceed tax revenue, on its way to eliminating all federal debt). A CBO 2010 report found that 2003-2009, tax revenue was cut $6T and spending increased $6T, for a $12T gap (compared to a fiscally responsible budget); it was the first time taxes were cut rather than raised to pay for (two) wars. Sort of a confluence: the Federal Reserve and Too-Big-To-Fail wanted huge federal debt, special interests wanted huge tax cuts, and the military-industrial complex wanted a huge spending increase. 2005, the US Comptroller General started including in speeches that nobody in congress was capable of middle-school arithmetic (for how badly they were savaging the budget). The following administration managed to lower some annual deficits (mostly through some reduction in spending), but tax revenue had yet to be restored.

2009, IRS press said that it was going after $400B owed by 52,000 wealthy Americans on trillions illegally stashed overseas. Then spring 2011, the new speaker of the house said the house was cutting the budget for the IRS department responsible for recovering the $400B (and fines) from the 52,000 wealthy Americans. After that there was some press about a couple of the overseas banks (that facilitated the tax evasion) being fined a few billion ... but nothing about recovering the $400B (and fines).

2018, the administration had more huge tax cuts for large corporations, claiming the money would go for employee bonuses and hiring. The website for the "poster child" corporation for worker bonuses claimed that workers would receive up to a $1000 bonus. NOTE: even if every worker actually received the full $1000 bonus, it would be less than 2% of the tens of billions from its corporate tax cut (the rest going for stock buybacks and executive compensation).

there are jokes about US congress being the most corrupt institution on earth, in large part from the way members of certain house committees are able to collect "donations" from special interests.

fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
comptroller general posts
https://www.garlic.com/~lynn/submisc.html#comptroller.general
tax fraud, tax evasion, tax loopholes, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback

--
virtualization experience starting Jan1968, online at home since Mar1970

Storage Management

From: Lynn Wheeler <lynn@garlic.com>
Subject: Storage Management
Date: 01 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#32 Storage Management

late 80s, a senior disk engineer got a talk scheduled at the world-wide, internal, annual communication group conference ... supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales ... with data fleeing customer datacenters to more distributed-computing-friendly platforms. The disk division had come up with a number of solutions that were constantly being vetoed by the communication group. The communication group had a stranglehold on datacenters with their corporate strategic responsibility for everything that crossed datacenter walls, and were fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm.

As a partial countermeasure, the GPD/ADstar VP of software was investing in distributed computing startups that would use IBM disks ... and would periodically ask us to drop by his investments to see if we could lend a hand. He had responsibility for ADSM, and one of his investments was Mesa Archival (the NCAR spin-off startup). However, the communication group datacenter stranglehold wasn't just affecting disks, and in the early 90s IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex who (mostly) reverses the breakup (although it wasn't long before the disk division is gone).

Communication group stranglehold and dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage TSS/360

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage TSS/360
Date: 23 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#66 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#1 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360

The Science Center thought it would get the time-sharing & virtual memory mission ... but it went to a new group formed for TSS/360. CSC tried to get a 360/50 to do hardware modifications for virtual memory, but the extra 360/50s were all going to the FAA ATC effort ... and so they had to settle for a 360/40 ... more detail:
https://www.garlic.com/~lynn/cp40seas1982.txt
when the 360/67 became available (standard with virtual memory), CP40 morphs into CP67 ... lots more history
https://www.leeandmelindavarian.com/Melinda#VMHist
other IBM mainframe history
https://en.wikipedia.org/wiki/History_of_IBM_mainframe_operating_systems

Late 60s, there were many more machines running CP67 than TSS/360 ... CSC had 12 people in the CP67/CMS group while there were 1200 people in the TSS/360 org. There were also two online CP/67 commercial service bureau spin-offs from CSC.

The original 360/67 announcement was for up to four processors ... and you can still see it in the control register formats in the 360/67 functional characteristics document on bitsavers.
http://bitsavers.org/pdf/ibm/360/functional_characteristics/A27-2719-0_360-67_funcChar.pdf

All but one of the 360/67 multiprocessors were "duplex" (two processor); the exception was a three-processor machine for MOL at Lockheed in Sunnyvale. The standard multiprocessor control registers showed the channel controller configuration switch settings ... the MOL triplex could also change the configuration settings by changing control register settings (the IBM SE on the Lockheed account later transferred to the science center ... and worked on CP/67 multiprocessor support).

cp/m&msdos trivia, before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on IBM CP67/CMS at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SMP, multiprocessor, tightly-coupled pots
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

Timeslice, Scheduling, Interdata

From: Lynn Wheeler <lynn@garlic.com>
Subject: Timeslice, Scheduling, Interdata
Date: 02 Dec, 2023
Blog: Facebook
I was undergraduate at univ in 60s and had taken a 2credit hr intro to fortran/computers; at the end of the class, I was hired to rewrite 1401 MPIO for the 360/30 (univ. shutdown the datacenter on weekends and I would have the whole place dedicated, although 48hrs w/o sleep made Monday classes a little hard; I was given a bunch of hardware & software manuals and got to design & develop my own monitor, interrupt handlers, device drivers, storage management, etc). Univ. had been sold a 360/67 for tss/360 to replace 709/1401 and got a 360/30 replacing the 1401 temporarily pending the 360/67. Within a year, the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition). Later, CSC came out to install CP/67 (3rd installation after CSC itself and MIT Lincoln Labs), which I mostly would play with during my weekend dedicated time. I spent a few months rewriting a lot of pathlengths trying to improve OS/360 running in virtual machines. I then rewrote a lot of DASD I/O (from straight FIFO to ordered seek and multiple page transfers per I/O), and then did new scheduling (dynamic adaptive resource management and scheduling) and page replacement algorithms (CSC picked up a lot and shipped it in distributed CP67).

CP67 had time-slicing but its scheduling was extremely heavy-weight, and overhead increased non-linearly with the number of users. I had to significantly increase the quality of the algorithm while at the same time making overhead nearly linear in the number of dispatches (independent of the number of users).
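
As an aside, a rough sketch (Python, purely my own illustration, not the actual CP67 code) of the difference between rescanning every logged-on user on each dispatch versus keeping runnable users in an ordered queue so per-dispatch work stays small:

import heapq

def dispatch_rescan(users):
    # original-style: examine every logged-on user on every dispatch,
    # so cost grows with the number of users
    runnable = [u for u in users if u["runnable"]]
    return min(runnable, key=lambda u: u["priority"]) if runnable else None

class RunQueue:
    # rewrite-style: runnable users kept in a priority heap, so each
    # dispatch touches only the queue, not every logged-on user
    def __init__(self):
        self.heap = []
    def make_runnable(self, user):
        heapq.heappush(self.heap, (user["priority"], user["name"]))
    def dispatch(self):
        return heapq.heappop(self.heap)[1] if self.heap else None

users = [{"name": "u%03d" % i, "priority": i % 7, "runnable": i % 3 == 0}
         for i in range(1000)]
print(dispatch_rescan(users)["name"])
rq = RunQueue()
for u in users:
    if u["runnable"]:
        rq.make_runnable(u)
print(rq.dispatch())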

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement posts
https://www.garlic.com/~lynn/subtopic.html#clock

archived post with piece of 60s SHARE presentation on OS/360 and CP/67 work
https://www.garlic.com/~lynn/94.html#18

Many years later, the infant OS/2 group sent email to the Endicott vm/370 group (mid-range 370s) asking how to do dispatching/scheduling; Endicott forwarded the email to IBM Kingston (high-end 370 mainframes), who forwarded it to me.
Date: 11/24/87 17:35:50
To: wheeler
FROM: xxxxxx Dept xxx, Bldg xxx Phone: xxx, TieLine xxx
SUBJECT: VM priority boost

got your name thru yyy yyy who works with me on OS/2. I'm looking for information on the (highly recommended) VM technique of boosting priority based on the amount of interaction a given user is bringing to the system. I'm being told that our OS/2 algorithm is inferior to VM's. Can you help me find out what it is, or refer me to someone else who may know?? Thanks for your help.

Regards, xxxxxx (xxxxxx at BCRVMPC1)


... snip ... top of post, old email index
Date: Fri, 4 Dec 87 15:58:10 est
From: wheeler
Subject: os2 dispatching

fyi ... somebody in boca sent a message to endicott asking about how to do dispatch/scheduling (i.e. how does vm handle it) because os2 has several deficiencies that need fixing. VM Endicott forwarded it to VM Kingston and VM IBM Kingston forwarded it to me. I still haven't seen a description of OS2 yet so don't yet know about how to go about solving any problems.


... snip ... top of post, old email index
Date: Fri, 4 Dec 87 15:53:29 est
From: wheeler
To: somebody at bcrvmpc1 (i.e. internal vm network node in boca)
Subject: os2 dispatching

I've sent you a couple things that I wrote recently that relate to the subject of scheduling, dispatching, system management, etc. If you are interested in more detailed description of the VM stuff, I can send you some descriptions of things that I've done to enhance/fix what went into the base VM system ... i.e. what is there now, what its limitations are, and what further additions should be added.


... snip ... top of post, old email index

... at the univ. I wanted a single dial-in phone number for all terminals ("hunt group"); while I could change the port scanner type for 2741, 1052, and TTY/ASCII, IBM had taken a short cut and hardwired the line speed for each port. This kicked off a univ. program to build a clone controller: we built a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition that it could do dynamic line speed. Later it was upgraded with an Interdata/4 for the channel interface and a cluster of Interdata/3s for handling ports. Interdata (and later Perkin-Elmer) sold it as a clone IBM controller.
https://en.wikipedia.org/wiki/Interdata
Interdata, Inc., was a computer company, founded in 1966 by a former Electronic Associates engineer, Daniel Sinnott, and was based in Oceanport, New Jersey. The company produced a line of 16- and 32-bit minicomputers that were loosely based on the IBM 360 instruction set architecture but at a cheaper price.[2] In 1974, it produced one of the first 32-bit minicomputers,[3] the Interdata 7/32. The company then used the parallel processing approach, where multiple tasks were performed at the same time, making real-time computing a reality.[4]

... snip ...

Interdata bought by Perkin-Elmer
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
spun off in 1985 as concurrent computer corp.
https://en.wikipedia.org/wiki/Concurrent_Computer_Corporation

ibm mainframe clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

AL Gore Invented The Internet

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: AL Gore Invented The Internet
Date: 02 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022h.html#3 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2022h.html#91 AL Gore Invented The Internet

these year-old posts/thread got some new comments today ... so a little addenda

After NSF, Erich went over to be senior member of "Council On Competitiveness" ... and we would periodically drop in to see him ... I referred to it as one of those K-street lobbying groups ... but it was more like H-street.

I was on Greg's XTP TAB and there was some military participation, and they needed standards. We took it to (ISO-chartered) ANSI X3S3.3 (standards for OSI levels 3&4) as HSP (high speed protocol) and were told they had an ISO requirement to only standardize things that conform to OSI; XTP/HSP didn't because 1) it supported an internetworking layer (non-existent in OSI), 2) skipped the OSI layer4/layer3 interface and 3) went directly to the LAN/MAC interface (non-existent in OSI, sitting somewhere in the middle of layer 3). We then had the joke that ISO could standardize stuff that couldn't even be implemented, while IETF required two interoperable implementations to proceed in the standards process. I had a machine/demo in a (non-IBM) booth at Interop'88 ... before the show opened, all the floor nets were crashing ... it didn't get diagnosed until long into the night ... a provision about it shows up in RFC1122. Case was in the SUN booth immediately to the right (corner booths) and I conned him into installing SNMP on my machine.

After leaving IBM, was brought in as consultant to a small client/server startup; two of the former Oracle people (that we had worked with on HA/CMP cluster scale-up) were responsible for something called "commerce server" and wanted to do payment transactions on the server. The startup had also invented something they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for everything between webservers and the financial industry payment networks. Afterwards I put together a talk, "Why Internet Isn't Business Critical Dataprocessing", about all the procedures, software, countermeasures, etc, that I had to do. I had been helping Postel with some of the RFC processes and he sponsored my talk.

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
Interop '88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
"electronic commerce" payment network gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning "Why Internet Isn't Business Critical Dataprocessing"
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#105 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021j.html#10 System Availability
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017e.html#11 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017d.html#92 Old hardware
https://www.garlic.com/~lynn/2016h.html#4 OODA in IT Security
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable

--
virtualization experience starting Jan1968, online at home since Mar1970

Computer "DUMPS"

From: Lynn Wheeler <lynn@garlic.com>
Subject: Computer "DUMPS"
Date: 02 Dec, 2023
Blog: Facebook
In the early 80s, I wanted to demonstrate REX(X) was not just another pretty scripting language (before it was renamed REXX and released to customers). I decided on redoing a large assembler application (dump reader & fault analysis) in REX with ten times the function and ten times the performance (lots of hacks done to make interpreted REX run faster than assembler), working half time over three months elapsed. I finished early, so started writing an automated script that searched for the most common failure signatures. It included a pseudo dis-assembler ... converting storage areas into instruction sequences and, given a pointer to a DSECT MACLIB member, formatting storage according to the DSECT. I had thought that it would be released to customers but for whatever reason it wasn't, even though it was in use by nearly every internal datacenter and customer PSR ... I finally got permission to give talks on the implementation at user group meetings ... and within a few months similar implementations started showing up at customer shops.
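
A minimal sketch (Python rather than the original REX, with a made-up field layout) of the DSECT-driven formatting idea: given a table of field names, offsets and lengths, walk a raw storage area and label the bytes:

# hypothetical control-block layout: (name, offset, length)
LAYOUT = [
    ("FLAGS",   0, 1),
    ("COUNT",   1, 1),
    ("NEXTPTR", 4, 4),
    ("NAME",    8, 8),
]

def format_block(storage, layout):
    lines = []
    for name, offset, length in layout:
        field = storage[offset:offset + length]
        lines.append("%-8s +%03X  %s" % (name, offset, field.hex().upper()))
    return "\n".join(lines)

blk = bytes([0x80, 0x03, 0x00, 0x00, 0x00, 0x01, 0x2A, 0xB0]) + b"TESTBLK "
print(format_block(blk, LAYOUT))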

Later I got email from the 3090 service processor (3092) group who were planning on shipping it as part of the 3092.

DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx

some recent posts mentioning 3092 & dumprx
https://www.garlic.com/~lynn/2023f.html#45 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#28 IBM Reference Cards
https://www.garlic.com/~lynn/2023e.html#32 3081 TCMs
https://www.garlic.com/~lynn/2023d.html#74 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#29 IBM 3278
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#59 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#41 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#26 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#101 PSR, IOS3270, 3092, & DUMPRX
https://www.garlic.com/~lynn/2022h.html#34 Mainframe Development Language
https://www.garlic.com/~lynn/2022g.html#7 3880 DASD Controller
https://www.garlic.com/~lynn/2022f.html#69 360/67 & DUMPRX
https://www.garlic.com/~lynn/2022e.html#2 IBM Games
https://www.garlic.com/~lynn/2022d.html#108 System Dumps & 7x24 operation
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021j.html#84 Happy 50th Birthday, EMAIL!
https://www.garlic.com/~lynn/2021j.html#24 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021h.html#55 even an old mainframer can do it
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021d.html#2 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#58 MAINFRAME (4341) History

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 03 Dec, 2023
Blog: Facebook
IBM Cambridge Science Center had modified a 360/40 for virtual memory and implemented CP/40
https://www.garlic.com/~lynn/cp40seas1982.txt
... later when the 360/67, standard with virtual memory, became available, CP/40 morphs into CP/67; then CSC came out and installed CP67 at the univ where I was (3rd installation after CSC itself and MIT Lincoln Labs) ... I mostly played with it during my 48hr weekend dedicated time. Initially CP67 source was hosted on OS/360 and assembled there; the resulting TXT decks were placed in a card tray with the BPS loader and IPL'ed, the core image was written to disk, and the disk IPL'ed for CP67. Individual modules had a slash and the module name written across the top of the TXT deck, so when a specific module was being changed, it could be replaced in the card tray for IPL. Later CP67 source was moved to CMS and a virtual load deck could be created and IPL'ed.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

I had taken a two credit hr intro to fortran/computers and at the end of the semester got a student job redoing 1401 MPIO for the 360/30 (univ. datacenter shutdown on weekends and I got the whole place to myself; was given lots of hardware&software documents and I designed&implemented my own monitor, interrupt handlers, device drivers, error recovery, storage management, etc, and within a few weeks had a 2000 card assembler program). Univ. was running 709/1401 for all academic (mostly fortran) and administrative (cobol) work and had been sold a 360/67 for tss/360 replacing the 709/1401. They got a 360/30 temporarily replacing the 1401, pending 360/67 availability. Less than a year later, the 360/67 arrived and I was hired fulltime responsible for OS/360 (360/67 running as 360/65, tss/360 never came to fruition).

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thought the Renton datacenter was the largest in the world: a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. The disaster plan was to duplicate Renton up at the new 747 plant at Paine field in Everett (Mt. Rainier heats up and the resulting mud slide takes out the Renton datacenter). Somebody recently commented that Boeing was getting 360/65s like other companies bought keypunches. Lots of politics between the Renton director and the Boeing CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other things).

... lots of univs were sold (virtual memory) 360/67s for tss/360 ... tss/360 never really came to production fruition ... and so many places just used them as 360/65s for os/360. Stanford (ORVYL) and Univ of Mich. (MTS) each did a virtual memory system for the 360/67 (later Stanford ported the WYLBUR editor to MVS).

ORVYL and WYLBUR
https://en.wikipedia.org/wiki/ORVYL_and_WYLBUR
more documents
https://web.stanford.edu/dept/its/support/wylorv/
ORVYL for 370
https://www.slac.stanford.edu/spires/explain/manuals/ORVMAN.HTML

Univ of Michigan MTS
https://en.wikipedia.org/wiki/Michigan_Terminal_System

CP67
https://en.wikipedia.org/wiki/CP-67
CP/CMS
https://en.wikipedia.org/wiki/CP/CMS
Melinda's virtual machine history page/documents
https://www.leeandmelindavarian.com/Melinda#VMHist
other IBM mainframe history
https://en.wikipedia.org/wiki/History_of_IBM_mainframe_operating_systems

trivia: MTS was originally scaffolded off of MIT Lincoln Labs LLMPS
https://web.archive.org/web/20110111202017/http://archive.michigan-terminal-system.org/myths
Germ of Truth. Early versions of what became UMMPS were based in part on LLMPS from MIT's Lincoln Laboratories. Early versions of what would become MTS were known as LTS. The initial "L" in LTS and in LLMPS stood for "Lincoln".

... snip ...

MIT Urban lab in tech sq (across the quad from 545, with both Multics and the science center) had a 360/67 running cp67 ... story about making a mod to CP67 that crashed it 27 times in a single day (automatic reboot/start taking a couple minutes)
https://www.multicians.org/thvv/360-67.html

Multics was also still crashing, but salvaging the filesystem was taking an hour or more ... the CP67 example prompted the new storage system
https://www.multicians.org/nss.html

note folklore that some of the Multics Bell people did simplified Multics as Unix ... with some features adopted from Multics (including filesystem salvage).

The Urban lab crashing was partly my fault. CP67 as installed at the univ. had 1052 & 2741 support, but the univ. had TTY/ASCII terminals, so I added TTY support (which played some games with one-byte fields for line lengths ... aka TTY was less than 80); that was picked up and distributed by the science center. Somebody down at Harvard using the Urban lab got an ASCII terminal with 1200 line length. The Urban lab change overlooked the games with the one-byte field for line lengths, which resulted in invalid line-length calculations.
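
A tiny sketch (Python, just to illustrate the failure mode, not the original code): a 1200-character line length can't be represented in a one-byte field, so it wraps modulo 256 and downstream length calculations go wrong:

def store_one_byte(value):
    # what actually fits in the one-byte line-length field
    return value & 0xFF

for requested in (80, 132, 1200):
    stored = store_one_byte(requested)
    note = "" if stored == requested else "  <-- invalid"
    print("requested %4d -> stored %3d%s" % (requested, stored, note))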

Note: some of the MIT CTSS/7094
https://www.multicians.org/thvv/7094.html
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr to do Multics, and others went to the IBM science center on the 4th flr to do virtual machines, internal network (later tech used for the corporate sponsored univ BITNET), lots of online apps, invented GML in 1969 (decade later morphs into ISO standard SGML, and after another decade morphs into HTML at CERN).

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml

Boeing Huntsville had been sold a two-processor 360/67 (for tss/360; also got several 2250 large graphic displays for CAD/CAM) ... but had it configured as two systems running OS/360. Long-running applications (like CAD/CAM) had similar problems with storage management, so they modified MVT13 to build virtual memory tables (but didn't do any actual paging, just used them for managing storage addresses). This is similar to the later justification for adding virtual memory to all 370s. Summer of 1969 the system was moved to Seattle.

When I graduate, I join science center, which had been doing lots of work on CP/67 (along with the two commercial CP/67 science center spin-offs that were quickly moving up the value stream specializing in financial and wallstreet customers) for 7x24, highly secure, online operation (sort of precursor to modern cloud). Science Center got online business planner users from Armonk corporate hdqtrs that loaded the most valuable corporate data and we had to demonstrate super security, in part because there were also online staff, profs, and student users from Boston/Cambridge area univ (there were some gov. agency CP67 customers that also required very high security).

This was when IBM rented/leased machines and charged based on "system meter" time, which ran whenever any CPUs and/or channels were busy. There was also lots of work on "dark room", lightly loaded, off-shift operation with nobody present, allowing the system meter to stop ... but with the system instantly available whenever there were arriving characters (see current cloud megadatacenters doing similar optimization for minimizing power/cooling during light load). The whole system had to be idle for at least 400ms for the system meter to stop. Note, years after IBM had switched from rent to sales, MVS still had a 400ms timer task that guaranteed that the system meter would never stop.
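
A simplified simulation (Python, my own sketch of the rule as described, not IBM code) of the system-meter behavior: the meter runs whenever CPU/channels are busy and only stops after at least 400ms of continuous system-wide idle, so short idle gaps are still charged:

IDLE_THRESHOLD = 0.400   # seconds of continuous idle before the meter stops

def metered_time(busy_intervals):
    # busy_intervals: sorted (start, end) times, in seconds, of any
    # CPU/channel activity
    total, last_end = 0.0, None
    for start, end in busy_intervals:
        if last_end is not None and start - last_end < IDLE_THRESHOLD:
            total += start - last_end      # gap too short: meter kept running
        total += end - start
        last_end = end
    return total

# three bursts of activity: the 0.2s gap is charged, the 2.0s gap is not
print(metered_time([(0.0, 1.0), (1.2, 2.0), (4.0, 5.0)]))   # 3.0 seconds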

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

Decade ago, I was asked to track down the decision to add virtual memory to all 370s and found a staff member to the executive making the decision; basically MVT storage management was so bad that regions had to be specified four times larger than used, and as a result a typical customer 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the system busy and justify its cost. Building a virtual memory table (like Boeing Huntsville) along with paging support allowed the number of concurrently running regions to increase by a factor of four (with little or no paging) ... resulting in VS2/SVS ... old archived post with some of the email exchange
https://www.garlic.com/~lynn/2011d.html#73
for 370 virtual memory, CP67 becomes VM370, DOS/360 becomes DOS/VS, MVT becomes VS2/SVS then VS2/MVS, MFT becomes VS1.

trivia, before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on IBM CP67/CMS at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 03 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe

Mid-80s, the IBM communication group was fiercely fighting off client/server and distributed computing. Late 80s, a senior disk engineer got a talk scheduled at an annual, world-wide, internal communication group conference, supposedly on 3174 performance, but opened his talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing data fleeing datacenters to more distributed-computing friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions, but they were constantly vetoed by the communication group with their corporate strategic ownership of everything that crossed the datacenter walls. The communication group stranglehold on datacenters wasn't just disks, and a couple years later IBM had one of the largest losses in the history of US companies ... and IBM was being reorganized into the 13 "baby blues" in preparation for breaking up the company:
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking us to help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (mostly) reverses the breakup (although it wasn't long before the disk division is gone).

communication group fighting for its dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Around the turn of the century, IBM financials had mainframe hardware representing a few percent of revenue and falling (in much of the 80s, mainframe hardware represented the majority of revenue). In the z12 time-frame, mainframe hardware represented a couple percent of revenue and still falling, although the mainframe group represented 25% of revenue (nearly all software and services) and 40% of profit.

In 1980, I got con'ed by STL (now SVL) to do channel-extender support (installing 3270 channel-attach controllers in an off-site bldg where they were moving 300 people from the IMS group, with service back to the STL datacenter). Then in 1988, an IBM branch office asks if I can help standardize some fibre stuff that LLNL was playing with, which quickly becomes the fibre channel standard (FCS, including some stuff I had done in 1980; initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec). Then POK gets their stuff released with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy-weight protocol that radically reduces the native throughput, which is eventually released as FICON. The most recent public benchmark I can find is the z196 "Peak I/O" benchmark that used 104 FICON to get 2M IOPS (although IBM documentation says to keep SAPs, the system assist processors that actually do the I/O, below 70% CPU ... which would have capped I/O at 1.5M IOPS). About the same time an FCS was announced for E5-2600 blades claiming over a million IOPS (two such FCS have higher throughput than 104 FICON running over 104 FCS). Note also that no mainframe CKD DASD have been made for decades, all being simulated on industry standard fixed-block disks.
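
The arithmetic behind that comparison, using the numbers quoted above:

ficon_channels = 104
z196_peak_iops = 2_000_000     # published z196 "Peak I/O" benchmark
sap_capped_iops = 1_500_000    # keeping SAPs below ~70% busy
fcs_iops = 1_000_000           # "over a million" IOPS claimed per E5-2600-era FCS

print("per FICON (peak):    %7.0f IOPS" % (z196_peak_iops / ficon_channels))
print("per FICON (SAP cap): %7.0f IOPS" % (sap_capped_iops / ficon_channels))
print("two FCS: %d IOPS vs 104 FICON (SAP-capped): %d" % (2 * fcs_iops, sap_capped_iops))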

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

Max configured z196 went for $30M and (industry standard benchmark, number of iterations compared to 370/158 assumed to be 1MIPS) benchmarked at 50BIPS ($600k/BIPS). By comparison, z196-era E5-2600 blades (with the same industry standard benchmark) benchmarked at 500BIPS and IBM had a base list price of $1815 ($3.63/BIPS). For a couple decades, large cloud megadatacenters (each with half a million or more blades) have claimed that they assemble their own systems for 1/3rd the price of brand name blades ($1.21/BIPS, compared to mainframe $600k/BIPS). Shortly after the industry press had articles that server chip makers were shipping at least half their products directly to cloud megadatacenters, IBM sells off its server business. Since then, cloud industry-standard system blades have increased the performance spread over mainframes ... also a large cloud operator will have a dozen or more megadatacenters around the world (enormous optimization and automation with something like 70-80 total staff/megadatacenter, and each megadatacenter having the processing equivalent of a few million max-configured mainframes).
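
The $/BIPS figures worked out explicitly (all numbers from the paragraph above):

z196_price, z196_bips = 30_000_000, 50
blade_price, blade_bips = 1815, 500

print(z196_price / z196_bips)         # 600000.0   ($600k/BIPS)
print(blade_price / blade_bips)       # 3.63       ($3.63/BIPS)
print(blade_price / blade_bips / 3)   # 1.21       (cloud self-assembled, 1/3rd)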

NOTE: instead of actual mainframe benchmarks, more recent numbers are derived from published change since previous systems:
z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
z12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
• pubs say z15 is 1.25 times z14 (1.25*150BIPS or 190BIPS)
z16, 200 processors, 222BIPS* (1111MIPS/proc), Sep2022
• pubs say z16 is 1.17 times z15 (1.17*190BIPS or 222BIPS)

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 03 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe

acp/pars trivia: my wife did a short stint as chief architect for Amadeus (the EU airline res system built off the old Eastern "System/One"); she wasn't there long, she sided with the EU decision on using X.25 and the IBM communication group got her replaced; however the EU went with X.25 anyway, and got their replacement, replaced.

other acp/pars: after leaving IBM, was brought into the largest airline res system to look at the ten things that they can't do. They initially focused on ROUTES ... which represented about 25% of mainframe processing. I'm given lots of detail and leave with a complete softcopy of OAG (all commercial scheduled flts in the world). I claim much of the existing implementation was still 60s tech trade-offs ... starting from scratch with brand new tech trade-offs, after about a month I had an implementation that was 100 times faster and then after another month had all ten impossible things they couldn't do (and about 10 times faster, although some transactions subsumed multiple previous human interactions into a single operation). Sized for RS/6000 990: ten could handle all (ROUTE) transactions for all airlines in the world; benchmark iterations compared to reference 1MIPS machine:
(1993) 990 claim: 126MIPS (ten 990s: 1.26BIPS)
(1993) eight processor ES/9000-982 claim: 408MIPS (51MIPS/processor)
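
The raw arithmetic for that sizing (numbers from above):

rs6000_990_mips = 126          # single RS/6000 990
es9000_982_mips = 408          # eight-processor ES/9000-982 (51 MIPS/processor)

ten_990s = 10 * rs6000_990_mips
print(ten_990s, "MIPS for ten 990s (1.26 BIPS)")
print(round(ten_990s / es9000_982_mips, 1), "times the eight-processor ES/9000-982")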

then the hand-wringing began: ROUTES was a few hundred people (in part because of the 60s tech trade-offs) and the new implementation only needed a few tens of people.

some posts mentioning AMADEUS and ROUTES:
https://www.garlic.com/~lynn/2023d.html#80 Airline Reservation System
https://www.garlic.com/~lynn/2023c.html#8 IBM Downfall
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022h.html#10 Google Cloud Launches Service to Simplify Mainframe Modernization
https://www.garlic.com/~lynn/2022c.html#76 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF
https://www.garlic.com/~lynn/2012h.html#52 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2011d.html#43 Sabre; The First Online Reservation System
https://www.garlic.com/~lynn/2008p.html#41 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technologies?
https://www.garlic.com/~lynn/2008i.html#19 American Airlines
https://www.garlic.com/~lynn/2007p.html#45 64 gig memory
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Koolaid

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Koolaid
Date: 03 Dec, 2023
Blog: Facebook
as undergraduate in the 60s, the univ. hired me fulltime responsible for os/360 (360/67 running as 360/65) and then I was hired fulltime into a small group in the Boeing CFO office to help with Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thought the Renton datacenter was possibly the largest in the world with a couple hundred million in 360s. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the machine room for a 360/67 for me to play with when I'm not doing other stuff). At the univ and Boeing I was a regular SHARE attendee.

When I graduate, I join IBM science center (instead of staying at Boeing) and drank the koolaid, getting 3piece suits for interaction with customers ... including the director of one of the largest financial datacenters on the east coast, who liked me to come by and talk technology. At one point the branch manager horribly offended the customer and in retaliation they ordered an Amdahl machine (up until then Amdahl had been selling into the technical/scientific/univ market and this would be the 1st true blue commercial customer). I was then asked to go onsite at the customer for a year (to help obfuscate why the customer was ordering the Amdahl machine). I talk it over with the customer and then decline the offer. I was then told that the branch manager was a good sailing buddy of the IBM CEO and if I didn't do it, I could forget career, promotions, and raises (just one of the many times I was given the message). I gave up professional/business attire and customers would comment it was nice to see something different than the IBM empty suits.

old post starting with Learson trying (& failing) to block the bureaucrats, careerists and MBAs from destroying the Watson legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

science center post
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning branch manager offending customer
https://www.garlic.com/~lynn/2023c.html#56 IBM Empty Suits
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2022g.html#66 IBM Dress Code
https://www.garlic.com/~lynn/2022e.html#60 IBM CEO: Only 60% of office workers will ever return full-time
https://www.garlic.com/~lynn/2022e.html#14 IBM "Fast-Track" Bureaucrats
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021k.html#105 IBM Future System
https://www.garlic.com/~lynn/2021j.html#93 IBM 3278
https://www.garlic.com/~lynn/2021j.html#4 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#81 IBM Downturn
https://www.garlic.com/~lynn/2021e.html#66 Amdahl
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
https://www.garlic.com/~lynn/2021.html#39 IBM Tech
https://www.garlic.com/~lynn/2021.html#8 IBM CEOs
https://www.garlic.com/~lynn/2018e.html#27 Wearing a tie cuts circulation to your brain
https://www.garlic.com/~lynn/2018d.html#6 Workplace Advice I Wish I Had Known
https://www.garlic.com/~lynn/2018.html#55 Now Hear This--Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2016e.html#95 IBM History
https://www.garlic.com/~lynn/2016.html#41 1976 vs. 2016?
https://www.garlic.com/~lynn/2013l.html#22 Teletypewriter Model 33

--
virtualization experience starting Jan1968, online at home since Mar1970

Wheeler Scheduler

From: Lynn Wheeler <lynn@garlic.com>
Subject: Wheeler Scheduler
Date: 03 Dec, 2023
Blog: Facebook
not me: I did dynamic adaptive resource management for cp67 as undergraduate in the 60s ... ibm picked up and included in distributed CP67. TSS/360 did state machine implementation and some of that work may have also been applied to OS/360. After I graduate, I join the IBM science center and one of my hobbies was enhanced production operating systems for internal datacenters. In the decision to add virtual memory to all 370s, decision was to do vm370 ... in the morph of CP67->VM370 lots of features were simplified and/or dropped. In 1974, I started moving bunch of stuff from CP67 to VM370 for internal datacenters.

With the implosion of Future System
http://www.jfsowa.com/computer/memo125.htm
there was mad rush to get stuff back into the 370 product pipeline and a few things from my internal CSC/VM was picked up for VM370 Release3.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

Note the 23Jun1969 unbundling announcement started to charge for (application) software (but managed to make the case that kernel software should be free). Then during Future System, internal politics was shutting down 370 work and the lack of new IBM 370 is credited with giving the clone 370 makers their market foothold. With FS implosion, it was also decided to transition to charging for all kernel software starting with new incremental add-ons and my scheduler (and bunch of other stuff) was selected to be guinea pig.

23jun1969 unbundling announce
https://www.garlic.com/~lynn/submain.html#unbundle

I had done an automated benchmarking facility that could specify a large variety of configurable benchmarks. One of the science center co-workers had done an APL-based sophisticated analytical model of system performance and had also collected years of activity data from internal CP67 and VM370 systems (would be precursor to capacity planning). It was deployed on world-wide online sales&marketing support HONE as the Performance Predictor ... branch people could enter customer configuration and workload data and ask "what-if" questions regarding changes to configuration and/or workload.

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

For my charged-for "resource manager", we worked out a set of 1000 synthetic benchmarks that uniformly covered the domain of configurations and workloads from the repository of observed internal datacenter data. A modified version of the performance predictor would predict the results of each benchmark and then, after it ran, would check the prediction against the actual results (validating the resource manager as well as the performance predictor). Then another modification of the performance predictor would decide on benchmark parameters ... looking for possible anomalous situations ... for another 1000 benchmarks. It took three months elapsed time to run all 2000 benchmarks.
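
A very loose sketch (Python, my own pseudo-implementation; the real model was a sophisticated APL analytical model) of the predict/run/compare loop: predict each synthetic benchmark, run it, and flag the ones where measurement diverges from prediction for further examination:

import random

def predict(cfg):
    # stand-in for the analytical model: throughput scales with CPU share
    # and degrades when the working set exceeds real memory
    return cfg["cpu_share"] * min(1.0, cfg["real_mem"] / cfg["working_set"])

def run_benchmark(cfg):
    # stand-in for actually running the synthetic workload
    return predict(cfg) * random.uniform(0.8, 1.2)

def validate(configs, tolerance=0.15):
    return [cfg for cfg in configs
            if abs(run_benchmark(cfg) - predict(cfg)) > tolerance * predict(cfg)]

configs = [{"cpu_share": random.uniform(0.1, 1.0),
            "real_mem": random.choice([256, 512, 1024]),
            "working_set": random.uniform(100, 2000)}
           for _ in range(1000)]
print(len(validate(configs)), "anomalous benchmarks to re-examine")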

benchmark posts
https://www.garlic.com/~lynn/submain.html#benchmark

It was supposed to be initially released with VM370 Release3 PLC4, but first there was a review by an expert from corporate who was apparently heavily infused with MVS. At the time, MVS had a large matrix of manual system tuning parameters, and there were presentations made at SHARE regarding the results of various tuning parameter settings. He said he wouldn't sign off on my dynamic adaptive resource management because I didn't have any manual tuning parameters (and everybody knows that manual tuning parameters are the state-of-the-art). I tried to explain dynamic adaptive to no avail. So (as a joke) I put in some manual tuning knobs, with documented formulas and source code. The joke, from Operations Research, is "degrees of freedom" ... the dynamic adaptive code could compensate for all possible manual tuning knob settings.
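
To illustrate the joke (a toy example, nothing like the real code): the manual knob is just one more term in the priority calculation, and the dynamic feedback term has at least as many degrees of freedom, so it can be driven to cancel whatever the knob is set to:

def priority(base, knob_bias, dynamic_correction):
    return base + knob_bias + dynamic_correction

def adapt(target, base, knob_bias):
    # the dynamic code chooses its correction from observed behavior;
    # here it simply has enough freedom to cancel the knob and hit the target
    return target - base - knob_bias

for knob in (-5, 0, 7):
    corr = adapt(target=10, base=3, knob_bias=knob)
    print("knob", knob, "correction", corr, "priority", priority(3, knob, corr))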

Nearly 20yrs later, I was on a marketing tour of the far east (for our HA/CMP product), going up in an elevator in a large HK bank building. A young IBMer in the back asked me if I was Wheeler of the "Wheeler Scheduler" ... he said it was studied at the Univ of Waterloo. I asked him if it included the joke.

dynamic adaptive resource management (wheeler scheduler)
https://www.garlic.com/~lynn/subtopic.html#fairshare
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

Amdahl CPUs

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Amdahl CPUs
Date: 03 Dec, 2023
Blog: Facebook
claim Amdahl actually made better IBM CPUs. ACS/360: folklore is that executives shut down ACS/360 because they were afraid it would advance the state of the art too fast and IBM would lose control of the market; shortly later Amdahl leaves IBM and starts his own company
https://people.computing.clemson.edu/~mark/acs_end.html

IBM was then distracted by the Future System effort and internal politics was shutting down 370 efforts (the lack of new IBM 370s is credited with giving the clone 370 makers their market foothold; folklore is that IBM sales&marketing had to fall back to lots of FUD). Then when FS implodes there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel
http://www.jfsowa.com/computer/memo125.htm
requiring an enormous amount of additional FUD from IBM sales&marketing. Recent comment about trying to blame me for the first true blue commercial Amdahl order (up until then Amdahl had been selling into the tech/science/univ market)
https://www.garlic.com/~lynn/2023g.html#42
and older
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

a few posts mentioning IBM sales&marketing FUD
https://www.garlic.com/~lynn/2023c.html#77 IBM Big Blue, True Blue, Bleed Blue
https://www.garlic.com/~lynn/2023c.html#23 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2022h.html#93 IBM 360
https://www.garlic.com/~lynn/2022g.html#22 3081 TCMs
https://www.garlic.com/~lynn/2022f.html#109 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#93 IBM 3278
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021g.html#5 IBM's 18-month company-wide email system migration has been a disaster, sources say
https://www.garlic.com/~lynn/2021.html#39 IBM Tech
https://www.garlic.com/~lynn/2018d.html#22 The Rise and Fall of IBM
https://www.garlic.com/~lynn/2017d.html#5 IBM's core business
https://www.garlic.com/~lynn/2014m.html#170 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014f.html#73 Is end of mainframe near ?
https://www.garlic.com/~lynn/2013h.html#40 The Mainframe is "Alive and Kicking"
https://www.garlic.com/~lynn/2011e.html#32 SNA/VTAM Misinformation

--
virtualization experience starting Jan1968, online at home since Mar1970

Wheeler Scheduler

From: Lynn Wheeler <lynn@garlic.com>
Subject: Wheeler Scheduler
Date: 04 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#43 Wheeler Scheduler

trivia: charging for kernel software ... during the transition to charging for all kernel software, new (optional?) add-ons would be charged for, but new direct hardware support would still be free (and couldn't require prerequisite charged-for software). I included a lot of stuff in the (vm370R3-based, charged-for) dynamic adaptive resource manager (besides the scheduler), including the kernel reorg necessary for multiprocessor support (but not the actual multiprocessor support). Come vm370 release4, they wanted to ship (free) multiprocessor support, but it required the kernel reorg that was part of the (charged-for) resource manager. The eventual solution was to move something like 90% of the code (from the charged-for resource manager) into the free release4 base ... w/o changing the price of the resource manager for release4. For release5, the resource manager was merged into other code for the (charged-for) SEPP.

dynamic adaptive resource management (wheeler scheduler)
https://www.garlic.com/~lynn/subtopic.html#fairshare
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
23jun1969 unbundling announce
https://www.garlic.com/~lynn/submain.html#unbundle

--
virtualization experience starting Jan1968, online at home since Mar1970

Amdahl CPUs

From: Lynn Wheeler <lynn@garlic.com>
Subject: Amdahl CPUs
Date: 04 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#44 Amdahl CPUs

... claims they had to do TCM liquid cooling for the 3081 in order to pack the enormous number of circuits into reasonable space ... the Sowa article notes the number of 3081 circuits was enough to build 16 370/168s (whereas Amdahl's CPUs were more in line with his CPU designs at IBM)
http://www.jfsowa.com/computer/memo125.htm

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

some recent posts mentioning TCM liquid cooling:
https://www.garlic.com/~lynn/2023g.html#26 Vintage 370/158
https://www.garlic.com/~lynn/2023g.html#23 Vintage 3081 and Water Cooling
https://www.garlic.com/~lynn/2023b.html#98 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2022c.html#107 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022.html#20 Service Processor
https://www.garlic.com/~lynn/2021i.html#66 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021c.html#66 ACP/TPF 3083
https://www.garlic.com/~lynn/2021c.html#58 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Printer

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Printer
Date: 04 Dec, 2023
Blog: Facebook
After leaving IBM, I spent time as rep to a financial industry standards organization, including crypto committees which had a couple members from gov. agencies ... and we sometimes had meetings at the agencies ... where your name & details had to be on the visitor list. One time, checking in at the visitor center ... I noticed that the visitor list was on computer fanfold paper and the cover was a VM separator page (the agency had been a virtual machine customer for 30 some years, back to the 60s).

This particular agency had been active in SHARE ... and on VMSHARE ... 3-letter SHARE code wasn't quite the agency's 3-letters ... VMSHARE archive (TYMSHARE provided their CMS-based online computer conferencing system "free" to SHARE starting in Aug1976)
http://vm.marist.edu/~vmshare

vmshare trivia: I had cut a deal with TYMSHARE for monthly tape dump of all VMSHARE files for putting up on the internal network and systems inside IBM ... biggest problem I had was with the lawyers that were concerned about internal employees being exposed to unfiltered customer opinions.

recent posts mentioning gov. agency and vmshare:
https://www.garlic.com/~lynn/2023f.html#87 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2023b.html#14 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#43 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2022g.html#43 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022g.html#16 Early Internet
https://www.garlic.com/~lynn/2022f.html#37 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022b.html#126 Google Cloud
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#75 IBM ITPS
https://www.garlic.com/~lynn/2021e.html#55 SHARE (& GUIDE)

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 04 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#41 Vintage Mainframe

periodic CSA tome reposted; decision to add virtual memory to all 370s, initial mapping MVT to VS2/SVS (little different than running MVT in a 16mbyte virtual machine), then redone for VS2/MVS; os/360 was heavily pointer-passing, so an 8mbyte kernel image was mapped into every 16mbyte application space; then, because a way was needed to pass stuff back&forth between applications & subsystems (in different address spaces), the 1mbyte common segment area (CSA) was created with an image in every 16mbyte address space ... leaving 7mbytes; however CSA space requirements were somewhat proportional to the number of subsystems and concurrent applications, by 3033 it had grown to 5-6mbytes (leaving 2-3mbytes) and had been renamed the common system area ... and was threatening to become 8mbytes (leaving 0mbytes for the application) ... arithmetic spelled out after the links below.
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
couple others
https://www.garlic.com/~lynn/2023g.html#29 Another IBM Downturn
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
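
The address-space arithmetic in the tome, spelled out (numbers from the text):

ADDR_SPACE = 16   # MB, 24-bit addressing
KERNEL = 8        # MB of MVS kernel image mapped into every address space

for csa in (1, 5, 6, 8):    # common segment/system area growth over time
    print("CSA %dMB -> %dMB left for the application" % (csa, ADDR_SPACE - KERNEL - csa))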

Possibly contributing to the head of POK convincing corporate to kill vm370, shut down the development group and move everybody to POK for MVS/XA (otherwise, supposedly, MVS/XA wouldn't ship on time ... the bloated MVS kernel image and CSA were on the verge of not leaving any space in 16mbytes for an application). In the 80s, customers weren't converting to MVS/XA like they were supposed to ... something like the original MVS:
http://www.mxg.com/thebuttonman/boney.asp

Amdahl was starting to make some headway being able to run both MVS & MVS/XA concurrently. Early 80s, I got permission to give presentations on how the 138/148 ECPS was done to user group meetings, including monthly BAYBUNCH meetings hosted by Stanford SLAC. After SLAC meetings, Amdahl people would corner me for more information. They described how they created MACROCODE (370-like instruction set that ran in microcode mode), initially to respond to the series of minor 3033 microcode changes that were required for MVS to run. It was then being used to implement HYPERVISOR ... virtual machine subset all done in microcode allowing them to run different concurrent operating systems.

POK had done the VMTOOL in support of MVS/XA development, but was never intended to ship to customers ... however with Amdahl success, they decided to make VMTOOL available as VM/MA and then VM/SF. However, it wasn't until nearly the end of 80s, that IBM was able to respond to Amdahl HYPERVISOR with PR/SM & LPAR on 3090.

other recent posts mentioning Amdahl, MACROCODE, HYPERVISOR
https://www.garlic.com/~lynn/2023f.html#114 Copyright Software
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)

--
virtualization experience starting Jan1968, online at home since Mar1970

REXX (DUMRX, 3092, VMSG, Parasite/Story)

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: REXX (DUMRX, 3092, VMSG, Parasite/Story)
Date: 05 Dec, 2023
Blog: Facebook
early on, before it was renamed REXX and released to customers, I wanted to show REX was not just a pretty scripting language; I decided to re-implement a large assembler dump/problem application, the objective being, working half time over 3 months, to have ten times the function and ten times the performance (some sleight of hand for interpreted REX); I finished early and started an automated library that looked for common failure signatures. Eventually I thought it would be released to customers (replacing the assembler version) but for whatever reason it wasn't, even though it was in use by nearly every internal datacenter and PSR. Eventually I was able to get permission to give talks at customer user group meetings on how I did the implementation ... and after a few months, similar implementations started to appear.

Later, I was contacted by the 3090 service processor group (the "3092" started out as a 4331&3370FBA running a highly modified version of VM370R6, all service screens done in IOS3270; it morphs into a pair of 4361s), which wanted to ship it as part of the 3092 ... old email
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

other trivia: the author of VMSG (picked up by PROFS for the email client) also did Parasite/Story ... tiny CMS applications that used the VM370 pseudo device for a simulated 3270 and a HLLAPI-like language (well before the IBM/PC) ... could emulate login on the local machine or "dial passthru" to connect anywhere on the internal network ... old archived posts with overview and examples
https://www.garlic.com/~lynn/2001k.html#35
story to retrieve RETAIN info
https://www.garlic.com/~lynn/2001k.html#36

other posts mentioning vmsg, profs, parasite, story
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2019d.html#108 IBM HONE
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018.html#20 IBM Profs
https://www.garlic.com/~lynn/2017k.html#27 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2017g.html#67 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1
https://www.garlic.com/~lynn/2014k.html#39 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013d.html#66 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2012d.html#17 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2011o.html#30 Any candidates for best acronyms?
https://www.garlic.com/~lynn/2011m.html#44 CMS load module format
https://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 Wasn't As Bad As All That

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 05 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe

Turn of the century financials had mainframe hardware at a few percent of revenue and dropping. In the z12 time frame, mainframe hardware was a couple percent of revenue and still dropping ... but the mainframe group was 25% of revenue (nearly all software and services) and 40% of profit. Back in the 80s, mainframe hardware was the majority of revenue ... then comes the communication group with its stranglehold on mainframe datacenters, and in 1992 IBM has one of the largest losses in the history of US companies ... and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex who (mostly) reverses the breakup (although it wasn't long before the disk division is gone).

Note also in 1992 AMEX spun off lots of its mainframe datacenters and financial transaction outsourcing in one of the largest IPOs up until that time (many of the people had formerly reported to the new IBM CEO). Around the turn of the century, I was brought into one of their largest datacenters to look at performance issues (some 40+ max-configured IBM mainframes all running the same 450K Cobol statement application, none older than 18 months, constant rolling upgrades); that datacenter represented a significant percentage of IBM mainframe hardware sales.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning 450k cobol statement app
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023c.html#99 Account Transaction Update
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#54 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#68 How Gerstner Rebuilt IBM
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021c.html#49 IBM CEO
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 05 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#50 Vintage Mainframe

... well, we had the HSDT project starting in the early 80s, T1 and faster computer links (both satellite and terrestrial), and were taking lots of flak from the communication group (limited to 56kbits/sec links). We had been working with the NSF director and were supposed to get the $20M to interconnect the NSF supercomputing centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). 28Mar1986 Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (no CPD content, and being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

We were visiting customer execs ... just before one stop-off at the communication group in Raleigh, we had visited GM/EDS, who said they had made a strategic decision to move off SNA. We passed that on to Raleigh; they initially strongly argued ... then they left the room, came back and said, well, GM/EDS has, but it doesn't make any difference since GM/EDS had already spent that year's allocation for 37x5 SNA boxes.

The communication group was fiercely fighting off client/server and distributed computing and trying to block the release of mainframe TCP/IP ... when that failed (possibly some influential customers got it passed), the communication group changed their game plan and said that since they had corporate responsibility for everything that crossed the datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then did the support for RFC1044 and, in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

Late 80s, a senior disk engineer got a talk scheduled at the world-wide, annual, internal communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing datacenters to more distributed-computing-friendly platforms. The disk division had come up with solutions that were constantly being vetoed by the communication group (with their corporate responsibility for everything that crossed datacenter walls). As a partial countermeasure, the GPD/Adstar VP of software was investing in distributed computing startups that would use IBM disks ... and he would periodically ask us to stop by some of his investments.

The communication group stranglehold on datacenters wasn't just affecting disks, and a couple years later IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company. Then the board brought in the former AMEX president as CEO, who mostly reverses the breakup (although it wasn't too long before the disk division was gone).

communication group trying to preserve their dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
AMEX, Private Equity, IBM related Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

Early 90s, the communication group hired a silicon valley contractor to implement TCP/IP support directly in VTAM. What he initially demoed was significantly faster than LU6.2. They told him that everybody knows a "proper" TCP/IP implementation is much slower than LU6.2, and they would only be paying for a "proper" implementation.

some posts mentioning proper TCP/IP is slower than LU6.2
https://www.garlic.com/~lynn/2023f.html#82 Vintage Mainframe OSI
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023b.html#56 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#95 IBM San Jose
https://www.garlic.com/~lynn/2022h.html#115 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#86 Mainframe TCP/IP
https://www.garlic.com/~lynn/2021e.html#55 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021c.html#97 What's Fortran?!?!
https://www.garlic.com/~lynn/2019c.html#35 Transition to cloud computing
https://www.garlic.com/~lynn/2017j.html#33 How DARPA, The Secretive Agency That Invented The Internet, Is Working To Reinvent It
https://www.garlic.com/~lynn/2017i.html#35 IBM Shareholders Need Employee Enthusiasm, Engagemant And Passions
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives

--
virtualization experience starting Jan1968, online at home since Mar1970

Why True Democratic Systems Are Incompatible with Class-Based Orders.....Like Capitalism

From: Lynn Wheeler <lynn@garlic.com>
Subject: Why True Democratic Systems Are Incompatible with Class-Based Orders.....Like Capitalism
Date: 06 Dec, 2023
Blog: Facebook
Why True Democratic Systems Are Incompatible with Class-Based Orders.....Like Capitalism
https://www.nakedcapitalism.com/2023/12/why-true-democratic-systems-are-incompatible-with-class-based-orders-like-capitalism.html
Democracy is incompatible with class-divided economic systems. Masters rule in slavery, lords in feudalism, and employers in capitalism. Whatever forms of government (including representative-electoral) coexist with class-divided economic systems, the hard reality is that one class rules the other. The revolutionaries who overthrew other systems to establish capitalism sometimes meant and intended to install a real democracy, but that did not happen. Real democracy--one person, one vote, full participation, and majority rule--would have enabled larger employee classes to rule smaller capitalist classes. Instead, capitalist employers used their economic positions (hiring/firing employees, selling outputs, receiving/distributing profits) to preclude real democracy. What democracy did survive was merely formal. In place of real democracy, capitalists used their wealth and power to secure capitalist class rule. They did that first and foremost inside capitalist enterprises where employers functioned as autocrats unaccountable to the mass of their employees. From that base, employers as a class purchased or otherwise dominated politics via electoral or other systems.

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage 2321, Data Cell

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage 2321, Data Cell
Date: 06 Dec, 2023
Blog: Facebook
I took a 2-credit-hr intro to fortran/computers and at the end of the course got a job redoing 1401 MPIO for the 360/30. The univ was sold a 360/67 (for TSS/360) to replace the 709/1401 and got a 360/30 temporarily replacing the 1401 pending availability of the 360/67. The univ. shutdown the datacenter on weekends and I would have the place dedicated (although 48hrs w/o sleep made monday classes hard). I was given a bunch of hardware/software manuals and got to design & implement monitor, device drivers, interrupt handlers, storage management etc ... and within a few weeks had a 2000-card assembler program. Within a year the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition).

The univ library got an ONR grant to do an online catalog and used part of the money to get a 2321 (hitting the IPL button would get the 2321 "kerchunk, kerchunk, ..." as the volser of each bin was read). Much later, after leaving IBM in the 90s, I did some work with one of the San Jose engineers who developed the 2321 (he was part of the large departure with Shugart in 1969 for Memorex).

IBM 2321 Data Cell
https://en.wikipedia.org/wiki/IBM_2321_Data_Cell

some recent posts mentioning library 2321
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#24 Video terminals
https://www.garlic.com/~lynn/2023d.html#7 Ingenious librarians
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023.html#108 IBM CICS
https://www.garlic.com/~lynn/2022h.html#110 CICS sysprogs
https://www.garlic.com/~lynn/2022h.html#54 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022g.html#87 CICS (and other history)
https://www.garlic.com/~lynn/2022f.html#8 CICS 53 Years
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#78 US Takes Supercomputer Top Spot With First True Exascale Machine
https://www.garlic.com/~lynn/2022d.html#8 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#39 After IBM
https://www.garlic.com/~lynn/2022c.html#3 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022b.html#59 CICS, BDAM, DBMS, RDBMS
https://www.garlic.com/~lynn/2022.html#38 IBM CICS
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2021j.html#65 IBM DASD
https://www.garlic.com/~lynn/2021h.html#71 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021g.html#70 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2021g.html#44 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021d.html#36 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021.html#80 CICS

--
virtualization experience starting Jan1968, online at home since Mar1970

REX, REXX, and DUMPRX

From: Lynn Wheeler <lynn@garlic.com>
Subject: REX, REXX, and DUMPRX
Date: 06 Dec, 2023
Blog: Facebook
similar recent posts in other threads
https://www.garlic.com/~lynn/2023g.html#1 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#38 Computer "DUMPS"
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)

At univ. as undergraduate, I was responsible for os/360 ... it was a 360/67 running as a 360/65 ... and the univ. shutdown the datacenter on weekends and I would have it dedicated, although 48hrs w/o sleep made monday morning classes hard. Then Cambridge came out to install CP67/CMS (3rd installation after Cambridge itself and MIT Lincoln Labs) and I mostly got to play with it on weekends. I first did a lot of rewriting pathlengths for running OS/360 in a virtual machine. My benchmark ran 322secs "stand-alone", 856secs in a virtual machine ... CP/67 CPU 534secs; after a few months I had CP67 CPU down to 113secs. Archived post with part of SHARE presentation on the CP67 work
https://www.garlic.com/~lynn/94.html#18

also mentioning some of the OS/360 work I had been doing: student Fortran jobs ran in under a second on the 709; initially moved to OS/360 (3-step FORTGCLG) they ran in over a minute. I installed HASP and cut that in half, then started doing custom stage2 SYSGENs: 1) run in the production jobstream, 2) carefully order statements for dataset and PDS member placement for optimized arm seek and multi-track search (cutting another 2/3rds to 12.9secs; student fortran never got better than the 709 until I installed Univ. of Waterloo WATFOR).

Then I started work on CMS performance, most of which was in filesystem I/O (primarily CCW translation). I defined a single CCW that did the whole filesystem operation and didn't return until it was finished, presenting SIO with CSW stored, CC=1. I demonstrated the performance improvement to Cambridge and got chastised for violating the 360 architecture. To conform to 360 architecture, I had to use the DIAGNOSE instruction, which is defined in the 360 architecture as "model dependent" (and so was used for the fabrication of a 360 virtual-machine "model").

I also modified HASP, implementing 2741&TTY terminal support and an editor (from scratch with CMS edit syntax, totally different implementation because the environment was so different) for CRJE.

Early 80s, I wanted to demo that REX (before it was renamed REXX and released to customers) wasn't just another pretty scripting language. I chose the large assembler-based dump & problem-analysis application. The objective was, working half-time over three months, to reimplement it with ten times the function and ten times the performance (some sleight of hand making interpreted REX faster than the assembler). I finished early, so created a library of automagic scripts that searched for common failure signatures (researching failures, a common assembler problem identified was not keeping track of & managing register contents).
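
Just to illustrate the "automagic signature" idea (not the actual implementation, which was interpreted REX working against the internal dump/problem-analysis environment), a minimal sketch in Python over a formatted dump treated as plain text; the patterns and diagnoses are invented for illustration:

# Sketch of a "library of failure signatures" scanner. The real scripts were
# REX; patterns and messages here are hypothetical placeholders.

import re

SIGNATURES = [
    # (pattern over the formatted dump text, diagnosis to report)
    (re.compile(r"ABEND\s+0C4"),            "protection exception -- check register usage before the failing store"),
    (re.compile(r"FREE\s+STORAGE\s+CHAIN"), "free-storage chain damage -- suspect double release"),
    (re.compile(r"LOCKWORD\s+NOT\s+ZERO"),  "lock held across failure -- possible deadlock path"),
]

def scan_dump(text):
    """Return the diagnoses whose signature appears anywhere in the dump text."""
    return [diag for pat, diag in SIGNATURES if pat.search(text)]

if __name__ == "__main__":
    sample = "... ABEND 0C4 AT 0123456 ... FREE STORAGE CHAIN POINTER INVALID ..."
    for finding in scan_dump(sample):
        print("signature hit:", finding)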

I had expected it to be released to customers replacing the existing application, but for whatever reason it wasn't (even though it was in use by nearly every internal datacenter and PSR). Eventually I got permission to give talks at user group meetings on how I did the implementation, and shortly similar implementations started appearing. Later the 3090 service processor (3092, started out as 4331/3370FBA with heavily modified VM370/CMS Rel6, but morphed into a pair of redundant 4361s) group contacted me about releasing it with the 3092.

dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

trivia: a person at the Palo Alto Science Center was responsible for the HX technology, and then when STL (now SVL) was outsourcing PLI to a vendor there was some anger about the decision to provide them with the HX optimization technology.

note: PL/S was one of the casualties of the FS effort in the 70s (internal politics killing off 370 activity) ... then when FS imploded and 370 was being resurrected, PL/S was slow to get going. This contributed to the difficulty getting a relational implementation on MVS ... that, and EAGLE was the official grand strategic database for MVS, so there wasn't a lot of interest in relational (aka DB2) on MVS until after EAGLE had failed ... aka the original SQL/relational was System/R on VM370, then technology transfer (under the radar while the company was preoccupied with EAGLE) to Endicott for SQL/DS ... after EAGLE imploded was when there was a request for how fast System/R could be gotten onto MVS.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

AUSMINIUM

From: Lynn Wheeler <lynn@garlic.com>
Subject: AUSMINIUM
Date: 07 Dec, 2023
Blog: Facebook
.... the previous decade (during FS) I had written a tome about a certain lab being in danger of reaching black hole status ... but couldn't fit into the analogy how, if they were unable to ship products, they could go out of business ... then there was an article about how black holes could evaporate

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
Date: Tue Nov 14 11:50:00 1989
From: lynn
Subject: Heavy Red-Tape

AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY

Administratium experts from around the company, while searching piles of red-tape in and around Austin, recently uncovered great quantities of Heavy Red-Tape. While there have been prior findings of Heavy Red-Tape at other red-tape sites, it only occurred in minute quantities. The quantities of Heavy Red-Tape in and around Austin have allowed Administratium experts to isolate what they believe to be a new element that they are tentatively calling AUSMINIUM.

At this time, plant officials are preparing an official press release declaring that there is no cause for alarm and absolutely NO truth to the rumors that, because of the great concentration of Heavy Red-Tape in the area, there is imminent danger of achieving critical mass and the whole area collapsing into a black hole. Plant officials are stating that there is no evidence that large quantities of Heavy Red-Tape can lead to the spontaneous formation of a black hole. They point to the lack of any scientific studies unequivocally showing that there are any existing black holes composed of Heavy Red-Tape.

The exact properties of Heavy Red-Tape and ausminium are still under study.

.... attachment/ref:

SCIENTIST DISCOVERS NEW ELEMENT - ADMINISTRATIUM

The heaviest element known to science was recently discovered by University physicists. The element, tentatively named Administratium (AD), has no protons or electrons, which means that its atomic number is 0. However, it does have 1 neutron, 125 assistants to the neutron, 75 vice-neutrons and 111 assistants to the vice-neutrons. This gives it an atomic mass number of 312. The 312 particles are held together in the nucleus by a force that involves the continuous exchange of meson-like particles called memos.

Since it has no electrons, Administratium is inert. However, it can be detected chemically because it seems to impede every reaction in which it is present. According to one of the discoverers of the element, a very small amount of Administratium made one reaction that normally takes less than a second take over four days.

Administratium has a half-life of approximately 3 years, at which time it does not actually decay. Instead, it undergoes a reorganization in which assistants to the neutron, vice-neutrons, and assistants to the vice-neutrons exchange place. Some studies have indicated that the atomic mass number actually increases after each reorganization.

Administratium was discovered by accident when a researcher angrily resigned from the chairmanship of the physics department and dumped all of his papers in the intake hatch of the University's particle accelerator. "Apparently, the interaction of all of those reports, grant forms, etc. with the particles in the accelerator created the new element." an unnamed source explained.

Research at other laboratories seems to indicate that Administratium might occur naturally in the atmosphere. According to one scientist, Administratium is most likely to be found on college and university campuses, and in large corporation and government centers, near the best-appointed and best-maintained building.


... snip ... top of post, old email index

posts mentioning ausminium
https://www.garlic.com/~lynn/2023.html#0 AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY
https://www.garlic.com/~lynn/2021.html#64 SCIENTIST DISCOVERS NEW ELEMENT - ADMINISTRATIUM
https://www.garlic.com/~lynn/2004b.html#29 The SOB that helped IT jobs move to India is dead!

AWD (workstation) supposedly was an IBU (independent business unit) free from standard IBM red-tape ... however, every time it ran into a bureaucrat ... they would (effectively) say that while AWD may be free from "other" IBM red-tape ... AWD wasn't free of their red-tape. Reference to Learson trying to counter the bureaucrats and careerists destroying the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Future System, 115/125, 138/148, ECPS

From: Lynn Wheeler <lynn@garlic.com>
Subject: Future System, 115/125, 138/148, ECPS
Date: 07 Dec, 2023
Blog: Facebook
Early 70s, after joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters. One of my 1st overseas trips was paid for by IBM Boeblingen, to give briefings on dataprocessing. Then early 1975 (FS imploding) I get con'ed into doing a five-processor 370/125 (and at the same time Endicott cons me into doing ECPS for the 138/148). Boeblingen had been disciplined for doing the 115/125 design: a nine-position memory bus for microprocessors. For the 115, all the microprocessors were the same, with different microcode loads for 370 instructions (avg ten native instructions per 370 instruction) and for I/O controllers. The 125 was the same, except the microprocessor running 370 was 50% faster (120kips 370 instead of 80kips 370). Since there were no configurations that used all nine positions, they wanted to do a five-processor multiprocessor (with four positions left for I/O controller microprocessors).

Endicott escalated the Boeblingen work, claiming it would overlap 148 throughput (I had to argue both sides), but the five-370-processor 125 was terminated.

In the 80s, an Amdahl engineer was visiting a 370 clone maker in Germany ... and they showed him an IBM Boeblingen confidential document on the "ROMAN" chip set (3 chips implementing 370 with the performance of a 168-3). He told them that was illegal, confiscated it, and mailed it to me in San Jose, so I could mail it back to IBM in Germany.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
125 5-processor posts
https://www.garlic.com/~lynn/submain.html#bounce
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

initial analysis for 138/148 ECPS: was told that the 148 had 6kbytes of microcode space and would translate (kernel) 370 instructions approx. byte-for-byte; needed to identify the highest-executed 6kbytes of kernel pathlengths for re-implementing in microcode ... which turned out to be approx. 80% of kernel CPU execution ... old archived post with the analysis
https://www.garlic.com/~lynn/94.html#21
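
To give a flavor of that exercise (the real routine names and figures are in the archived post above; the ones below are invented for illustration): treat it as a tiny knapsack problem -- profile kernel CPU by routine, then greedily take the best CPU-per-byte candidates until the 6kbyte microcode budget is spent. A rough sketch:

# Sketch of the ECPS candidate-selection idea: pick kernel routines by measured
# CPU share per byte of code until the 148's 6kbyte microcode space is used up.
# Routine names, sizes, and CPU fractions here are hypothetical.

BUDGET = 6 * 1024  # 148 microcode space, per the text

routines = [  # (name, bytes of 370 code, fraction of kernel CPU time)
    ("dispatch",       900, 0.22),
    ("ccw_translate", 1800, 0.20),
    ("page_fault",    1200, 0.15),
    ("free_storage",   600, 0.12),
    ("vio_interrupt", 1100, 0.08),
    ("spool",         2400, 0.05),
]

# best CPU-per-byte candidates first
routines.sort(key=lambda r: r[2] / r[1], reverse=True)

used, covered, chosen = 0, 0.0, []
for name, size, cpu in routines:
    if used + size <= BUDGET:
        used += size
        covered += cpu
        chosen.append(name)

print("chosen:", chosen)
print("microcode bytes used: %d of %d" % (used, BUDGET))
print("kernel CPU covered: %.0f%%" % (covered * 100))  # the real analysis came out ~80%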

--
virtualization experience starting Jan1968, online at home since Mar1970

Future System, 115/125, 138/148, ECPS

From: Lynn Wheeler <lynn@garlic.com>
Subject: Future System, 115/125, 138/148, ECPS
Date: 07 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#56 Future System, 115/125, 138/148, ECPS

trivia: after decision to add virtual memory to all 370s, also decided to morph CP67 into VM370; some of the people split off from the IBM Science Center on the 4th flr and took over the IBM Boston Programming Center on the 3rd flr (they had been responsible for os/360 "CPS" and associated 360/50 microcode) for the VM370 development group; when they outgrew the 3rd flr, they moved out to the empty IBM SBC bldg in burlington mall (off rt128).

During IBM Future System period, internal politics was killing off 370 efforts (lack of new 370 is credited with giving clone 370 makers their market foothold) and then when FS implodes there was mad rush to get stuff back into 370 product pipelines, including kicking off quick&dirty 3033&3081 efforts.
http://www.jfsowa.com/computer/memo125.htm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

Also the head of POK manages to convince corporate to kill the VM370 product, shutdown the development group and move all the people to POK for MVS/XA (or supposedly MVS/XA wouldn't ship on time). They weren't planning on telling the people until the very last moment to minimize the numbers that might escape; however, the shutdown/move managed to leak and several managed to escape into the Boston area (including to the infant VMS group; joke was the head of POK was a major contributor to VMS). Endicott managed to save the vm370 product mission but had to reconstitute a development group from scratch (and then POK was bullying internal datacenters that vm370 would no longer run on future generations of high-end IBM mainframes).

IBM 4341/4331 sold in the same mid-range market as VAX and in about the same numbers for small unit orders. A big difference was large corporations with orders of hundreds of vm/4300s at a time for placing out in departmental areas (sort of the leading edge of the distributed computing tsunami); inside IBM, departmental conference rooms became scarce with so many converted to vm/4300 rooms. Decade of VAX sales, sliced&diced by year, model, US/non-US in this archived post:
https://www.garlic.com/~lynn/2002f.html#0

more trivia II: I had transferred to San Jose Research in the late 70s and got to wander around IBM & non-IBM datacenters in silicon valley ... including bldg 14&15 (disk engineering & product test) across the street. They were running 7x24, prescheduled stand-alone testing and mentioned that they had recently tried MVS, but it had 15min mean-time-between failure (requiring manual re-ipl) in that environment. I volunteered to rewrite I/O supervisor to allow any amount of on-demand, concurrent testing (greatly improving productivity).

Then bldg15 got the 1st engineering 3033 outside POK processor development, and since I/O testing only took a couple percent of 3033 CPU, we scrounge up a string of 3330s and a 3830 to put up our own private online service. Downside was the engineering kneejerk to call me when there was a problem, and I had to increasingly play disk engineer diagnosing hardware problems. Then bldg15 gets the first engineering 4341 outside Endicott, and a branch office hears of it and cons me into doing a benchmark in Jan1979 (before customer ship) for a national lab that was looking at getting 70 for a computer farm (sort of leading edge of the coming cluster supercomputing tsunami).

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

Note: for the quick&dirty 303x, they took the 158 integrated-channel microcode for the 303x channel director. A 3031 was two 158 engines, one with just the 370 microcode and the other with just the channel microcode; a 3032 was a 168 configured to use the 303x channel director, and the 3033 initially was 168 logic remapped to 20% faster chips. The 158 integrated-channel microcode was very slow ... especially compared to the 4341 ... tweaking the 4341 integrated-channel microcode, they could use it for 3380/3880 3mbyte/sec testing (even though that support was never released).

Note 3090 configured the number of channels assuming the 3880 was similar to 3830 but with 3mbyte/sec transfer. However 3880 was much slower for everything (except data transfer) which greatly increased channel busy ... and they realized they would have to greatly increase the number of channels (to offset 3880 channel busy and achieve the target throughput) ... which required an additional TCM (the 3090 group semi-facetiously claimed they would bill the 3880 group for increase in 3090 manufacturing cost). Marketing eventually respun the huge increase in channels as 3090 being great I/O machine (rather than to offset the huge increase in 3880 channel busy).

more trivia III: in 1980, STL (since renamed SVL) was bursting at the seams and moving 300 people from the IMS group to an offsite bldg with service back to STL. I get con'ed into doing channel-extender support so they can place channel-attached 3270 controllers at the offsite bldg (with no perceptible difference between offsite and inside STL). The hardware vendor tries to get IBM to release my support, but there is a group in POK playing with some serial stuff that gets it vetoed (afraid that if it was in the market, it would be harder to get their stuff released). In 1988, an IBM branch office asks if I can help LLNL get some serial stuff they are working with standardized; it quickly becomes the fibre channel standard (FCS, including some stuff I had done in 1980; initially 1gbit, full duplex, 200mbyte/sec aggregate).

Then the POK group gets their stuff released with ES/9000 in 1990 as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy-weight protocol that significantly reduces the throughput, eventually released as FICON. The latest public benchmark I've found is the z196 "Peak I/O" benchmark using 104 FICON getting 2M IOPS. About the same time, a FCS was announced for E5-2600 blades claiming over a million IOPS (two such FCS having higher aggregate throughput than 104 FICON). Also, IBM was advertising keeping SAPs (system assist processors that handle actual I/O) to 70% CPU, cutting throughput to 1.5M-1.7M IOPS.
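
The arithmetic behind that comparison, using just the figures quoted above (a back-of-the-envelope sketch, not a benchmark):

# Back-of-the-envelope comparison using only the figures quoted in the text.

ficon_links = 104
ficon_total_iops = 2_000_000   # z196 "Peak I/O" benchmark across 104 FICON
fcs_iops = 1_000_000           # "over a million IOPS" claimed per FCS on an E5-2600 blade

print("IOPS per FICON link: %.0f" % (ficon_total_iops / ficon_links))   # ~19,200
print("two FCS: %d IOPS vs 104 FICON: %d IOPS" % (2 * fcs_iops, ficon_total_iops))
# i.e. a pair of the claimed FCS match or exceed the whole 104-FICON benchmark
# figure -- before even applying the 70% SAP cap mentioned above.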

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Multiprocessor

From: Lynn Wheeler <lynn@garlic.com>
Subject: Multiprocessor
Date: 07 Dec, 2023
Blog: Facebook
POK would say MVS 2-processor had 1.2-1.5 times the throughput of a single processor. First off, for the cross-cache protocol the processor cycle was slowed by 10% ... so 2-processor hardware starts out at only 1.8 times a single processor ... and then you had all the MVS software multiprocessor overhead on top of that.

Then come to 308x, was only going to be multiprocessor machines
http://www.jfsowa.com/computer/memo125.htm
... initially announced 3081D, but it had less aggregate MIPS than a single-processor Amdahl machine. IBM quickly doubles the cache size for the 3081K to claim aggregate MIPS more than the single-processor Amdahl machine ... but both running MVS, the 3081K still had the MVS multiprocessor overhead (while the Amdahl single processor didn't).

One of my hobbies after joining the IBM science center was enhanced production operating systems for internal datacenters (and the world-wide, online sales&marketing support HONE systems were a long-time customer). In the morph of CP67->VM370, lots of features were dropped and/or simplified. In 1974, I started adding a bunch of stuff back into VM370 ... including re-org of the kernel for multiprocessor operation (but not multiprocessor support itself). US HONE datacenters were then consolidated in Palo Alto and the 168 HONE systems were enhanced for single-system-image, loosely-coupled operation (large shared disk farm) with load-balancing and fall-over (HONE1-HONE8 plus the 158 HONEDEV). Then for HONE, I added tightly-coupled support so a 2nd processor could be added to each system. For HONE I played some tricks with cache-affinity dispatching ... getting twice the throughput of a single processor (the improved cache hit ratio offsetting the reduced processor cycle time, plus some really super-short multiprocessor pathlengths).
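
Reducing the throughput arithmetic above to a couple of lines (the numbers are only the ones quoted in this post):

# The 2-processor throughput arithmetic described above, in a few lines.

single = 1.0

# cross-cache protocol slows each processor's cycle by 10%,
# so two processors start from a hardware ceiling of:
hw_ceiling = 2 * (single * 0.90)          # 1.8x a single processor

# MVS software multiprocessor overhead then pulled delivered throughput
# down to the quoted range:
mvs_low, mvs_high = 1.2, 1.5

# the HONE VM370 work got ~2x via cache-affinity dispatching (better cache
# hit ratio offsetting the slower cycle) plus short multiprocessor pathlengths:
hone = 2.0

print("hardware ceiling: %.1fx, MVS delivered: %.1f-%.1fx, HONE VM370: ~%.1fx"
      % (hw_ceiling, mvs_low, mvs_high, hone))
print("cache-affinity gain over the naive ceiling: %.0f%%" % ((hone / hw_ceiling - 1) * 100))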

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 08 Dec, 2023
Blog: Facebook
modulo communication group was fiercely fighting off client/server and distributed computing.

AWD (workstation IBU) had done their own (PC/AT bus) 4mbit token-ring card for the PC/RT. Then for the RS/6000 with microchannel, they were told that they couldn't do their own cards, but had to use the (communication group performance kneecapped) PS2 cards (joke was RS/6000 limited to PS2 cards would have better throughput than PS2/486 for many things). Example was the PS2 microchannel 16mbit token-ring card had lower throughput than the PC/RT 4mbit token-ring card.

Late 80s, a senior disk engineer got a talk scheduled at the world-wide, annual, internal communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing datacenters to more distributed-computing-friendly platforms. The disk division had come up with solutions that were constantly being vetoed by the communication group (with their corporate responsibility for everything that crossed datacenter walls). The communication group stranglehold on datacenters wasn't just affecting disks, and a couple years later IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex who (mostly) reverses the breakup (although it wasn't long before the disk division is gone).

At IBM, the former head of POK was now head of Boca and hired Dataquest (since acquired by Gartner) to do a detailed study of the PC business and its future ... including a video-taped round-table discussion of silicon valley experts. I had known the person running the Dataquest study for years and was asked to be one of the silicon valley experts (they promised to obfuscate my vitals so Boca wouldn't recognize me as an IBM employee ... and I cleared it with my immediate management). I had also been posting SJMN sunday adverts of quantity-one PC prices to IBM forums for a number of years (trying to show how out of touch with reality Boca forecasts were).

communication group fighting off client/server and distributed computing
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
ibm online computer conferencing (and in late 70s & early 80s being blamed)
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

PDS Directory Multi-track Search

From: Lynn Wheeler <lynn@garlic.com>
Subject: PDS Directory Multi-track Search
Date: 08 Dec, 2023
Blog: Facebook
1979, the datacenter for a large national grocery chain (hundreds of stores) was having performance problems with their large multi-system loosely-coupled batch operation ... the major trouble was that interactions with store controllers had crawled to nearly a halt ... all the usual IBM performance experts had been through before they finally got around to calling me. I was brought into a classroom with large piles of system activity data printouts covering the tables. After about 20 minutes I noticed that the aggregate activity (across systems) for a specific disk was flat-lining around seven I/Os per second. I asked what the disk was and they said it was the system-wide shared 3330 containing the store controller application library; a little more information: it was a PDS dataset with a three-cylinder PDS directory. Voila ... I had worked on this in the 60s when I was undergraduate and fulltime datacenter employee responsible for OS/360.

In this case, a store controller application load did a PDS directory lookup averaging a full-cylinder multi-track search plus a half-cylinder multi-track search, then a seek and read of the application member; the first full-cylinder multi-track search is 317msecs and the second half-cylinder multi-track search is 158msecs, i.e. .475secs plus the application read, or around half a second per store controller application load (and during the multi-track searches, the disk, controller, and channel were solid busy) ... accounting for the aggregate peak avg of seven I/Os per second, or two store controller application loads per second, across all systems for all stores in the country. The dataset was then split into multiple datasets, and then replicated into multiple sets, one per system on dedicated, non-shared 3330s.
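
A quick sketch of that arithmetic (the 317ms/158ms search times above fall straight out of 3330 geometry: 60 revolutions/sec times 19 tracks/cylinder; the member seek+read time below is an illustrative assumption):

# Rough arithmetic behind the "two application loads per second" bottleneck.
# Search times are the ones quoted in the text for a 3330 with a three-cylinder
# PDS directory; the member read time is an assumed placeholder.

full_cyl_search = 19 / 60.0        # ~0.317 sec: multi-track search over a full cylinder
half_cyl_search = 9.5 / 60.0       # ~0.158 sec: average search over half a cylinder
member_read     = 0.025            # sec: seek + read of the member itself (assumption)

per_load = full_cyl_search + half_cyl_search + member_read
print("time per store-controller application load: %.3f sec" % per_load)
print("aggregate loads/sec (disk, controller, channel solid busy): %.1f" % (1 / per_load))
# => roughly half a second per load, i.e. about two loads per second across every
#    system and every store -- matching the flat-lined ~7 I/Os/sec on that 3330.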

The univ. had been sold a 360/67 for tss/360 replacing a 709/1401 ... student fortran jobs ran in under a second on the 709. The 360/67 replacement ran as a 360/65 with os/360 (tss/360 never came to production fruition). Initially student fortran (FORTGCLG) ran in over a minute. I install HASP, cutting the time in half. I then redo stage2 sysgen: 1) run in the production jobstream, 2) reorder statements to place datasets and PDS members to optimize disk arm seeks and multi-track searches, cutting another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install Univ. of Waterloo WATFOR.

recent comment mentioning SHARE presentation
https://www.garlic.com/~lynn/2023g.html#32 Storage Management

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

note: Data-in-virtual
https://www.ibm.com/docs/en/zos/3.1.0?topic=concepts-data-in-virtual
Data-in-virtual is most useful for applications, such as graphics, that require large amounts of data but normally reference only small portions of that data at any given time. It requires that the source of the object be a VSAM linear data set on DASD (a permanent object) or a hiperspace (a temporary object)

... snip ...

a sort of limited subset of the FS single-level-store ... which had come from TSS/360 & Multics ... during FS I did a page-mapped filesystem for CP67/CMS that had 3-4 times the throughput of the standard filesystem at moderate activity and scaled up much better as load increased (I would claim I learned what not to do from TSS/360). some posts
https://www.garlic.com/~lynn/submain.html#mmap
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

PDS Directory Multi-track Search

From: Lynn Wheeler <lynn@garlic.com>
Subject: PDS Directory Multi-track Search
Date: 09 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#60 PDS Directory Multi-track Search

transferred to sjr ... got to play disk engineer across street
https://www.garlic.com/~lynn/subtopic.html#disk

but was also asked to do some of System/R, the original SQL/relational. IMS down in STL was criticizing System/R as requiring twice as much disk space and 4-5 times the I/Os (for indexes & processing); we criticized IMS as requiring significantly higher skill levels and people-support hrs. Early 80s, significant cuts in disk costs and increases in main memory sizes (for caching indexes, cutting I/Os), while skilled people were becoming scarce and more expensive, were flipping the comparison. A System/R BofA joint study was getting 60 vm/4341s for distributed System/R. Also, with corporate preoccupied with the next great new DBMS, EAGLE ... was able to do tech transfer ("under the radar") to Endicott for SQL/DS. Later when EAGLE implodes, there was a request for how fast System/R could be ported to MVS ... which is eventually released as DB2 (originally for decision/support *only*).
https://www.garlic.com/~lynn/submain.html#systemr

Other large corporations were ordering hundreds of vm/4341s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). MVS saw the distributed VM/4341 market, but the mid-range, non-datacenter DASD was FBA (which MVS didn't support) ... eventually the 3375 came out, CKD simulated on 3370 FBA ... however it didn't do MVS much good: the distributed VM/4341 market was looking at scores of systems per support person, while MVS was still scores of support people per MVS system. Also saw an explosion of non-IBM RDBMS on servers, workstations, PCs.

DASD, CKD, FBA, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd

Playing disk engineer including with early 1st engineering 3033 and engineering 4341 (outside processor development labs) for disk testing in bldg15 ... but also branch asked me to do national lab benchmark on vm/4341 looking at getting 70 for compute farm (sort of leading edge of the coming cluster supercomputing tsunami).

--
virtualization experience starting Jan1968, online at home since Mar1970

Silicon Valley Mainframes

From: Lynn Wheeler <lynn@garlic.com>
Subject: Silicon Valley Mainframes
Date: 09 Dec, 2023
Blog: Facebook
A former co-worker at SJR left IBM and was doing lots of consulting work in Silicon Valley, including at a large chip shop working for the Senior VP of engineering. He had significantly redone the C compiler for IBM mainframe and ported most of the UCB/unix chip tools to the mainframe. One day an IBM marketing rep stopped by and asked him what he was doing, and he replied mainframe ethernet support so they could use SGI graphics workstations as front-ends to back-end IBM mainframes. The marketing rep told him he should be doing token-ring instead, or otherwise they might find their mainframe service not as timely as in the past. I almost immediately got an hour-long call full of 4-letter words, and the next morning the SVP had a press conference announcing that they were moving everything off IBM mainframes to SUN servers. There were some number of IBM task forces looking at why silicon valley wasn't using IBM mainframes (but they weren't allowed to look at the marketing rep problem).

past posts mentioning silicon valley moving off mainframes
https://www.garlic.com/~lynn/2023b.html#50 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022h.html#40 Mainframe Development Language
https://www.garlic.com/~lynn/2021j.html#36 Programming Languages in IBM
https://www.garlic.com/~lynn/2021h.html#69 IBM Graphical Workstation
https://www.garlic.com/~lynn/2021d.html#42 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021.html#77 IBM Tokenring
https://www.garlic.com/~lynn/2018f.html#49 PC Personal Computing Market
https://www.garlic.com/~lynn/2017g.html#12 Mainframe Networking problems
https://www.garlic.com/~lynn/2016g.html#68 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!

--
virtualization experience starting Jan1968, online at home since Mar1970

CP67 support for 370 virtual memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP67 support for 370 virtual memory
Date: 09 Dec, 2023
Blog: Facebook
The Science Center had added the original 370 instruction support to CP67 as a 370 virtual machine option. Then, after the decision to add virtual memory to 370, there was a joint project with Cambridge and Endicott to expand the CP67 370 virtual machine support to the full 370 virtual memory architecture, which was "CP67-H"; then there was a modification of CP67 to run on the 370 virtual memory architecture, which was "CP67-I". Because Cambridge also had profs, staff, and students from Boston/Cambridge institutions, CP67-L ran on the real 360/67, CP67-H ran in a CP67-L 360/67 virtual machine, and CP67-I ran in a CP67-H 370 virtual machine (countermeasure to leaking unannounced 370 virtual memory). This was in regular operation a year before the first engineering machine (370/145) with virtual memory was operational, and CMS ran in a CP67-I virtual machine (CP67-I was also used to verify the engineering 370/145 virtual memory) ... aka
CMS running in CP67-I virtual machine
CP67-I running in CP67-H 370 virtual machine
CP67-H running in a CP67-L 360/67 virtual machine
CP67-L running on real 360/67

Later three engineers came out to Cambridge and added 2305 & 3330 device support to CP67-I ... for CP67-SJ ... which was in wide use on (internal) 370 virtual memory machines. As part of all this, original multi-level source update support had been added to CMS.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Mid-80s, Melinda asked me if I had copy of the original multi-level update implementation ... I had several archive/backup tapes (triple replicated) of stuff from 60s&70s in the Almaden Research tape library and was able to pull off the original multi-level source. It was really fortunate because within a few weeks Almaden had an operational problem where random tapes were being mounted as scratch ... and all copies of my archive/backup tapes were overwritten. Melinda's history site
https://www.leeandmelindavarian.com/Melinda#VMHist

archived post with copy of email exchange
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2021k.html#email850908
https://www.garlic.com/~lynn/2021k.html#email850909
other Melinda email about HSDT and NSFnet
https://www.garlic.com/~lynn/2021k.html#email860404
https://www.garlic.com/~lynn/2021k.html#email860407
we had been working with the NSF director and were supposed to get the $20M to interconnect the NSF supercomputers; then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had operational). From 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Cobol, 3rd&4th Generation Languages

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Cobol, 3rd&4th Generation Languages
Date: 10 Dec, 2023
Blog: Facebook
even before SQL (& RDBMS), originally done on VM370/CMS (aka System/R on 370/145 at IBM SJR, later tech transfer to Endicott for SQL/DS and to STL for DB2), there were other "4th Generation Languages"; one of the original 4th generation languages, from Mathematica, was made available through NCSS (cp67/cms online commercial spinoff)
http://www.decosta.com/Nomad/tales/history.html
One could say PRINT ACROSS MONTH SUM SALES BY DIVISION and receive a report that would have taken many hundreds of lines of Cobol to produce. The product grew in capability and in revenue, both to NCSS and to Mathematica, who enjoyed increasing royalty payments from the sizable customer base. FOCUS from Information Builders, Inc (IBI), did even better, with revenue approaching a reported $150M per year. RAMIS moved among several owners, ending at Computer Associates in 1990, and has had little limelight since. NOMAD's owners, Thomson, continue to market the language from Aonix, Inc. While the three continue to deliver 10-to-1 coding improvements over the 3GL alternatives of Fortran, Cobol, or PL/1, the movements to object orientation and outsourcing have stagnated acceptance.

... snip ...

One of the CP67 commercial spinoffs of the IBM Cambridge Science Center in the 60s was NCSS ... which is later bought by Dun & Bradstreet
https://en.wikipedia.org/wiki/National_CSS
above mentions
https://en.wikipedia.org/wiki/Nomad_software

other history
https://en.wikipedia.org/wiki/Ramis_software
When Mathematica (also) makes Ramis available to TYMSHARE for their VM370-based commercial online service, NCSS does their own version
https://en.wikipedia.org/wiki/Nomad_software
and then follow-on FOCUS from IBI
https://en.wikipedia.org/wiki/FOCUS
Information Builders's FOCUS product began as an alternate product to Mathematica's RAMIS, the first Fourth-generation programming language (4GL). Key developers/programmers of RAMIS, some stayed with Mathematica others left to form the company that became Information Builders, known for its FOCUS product

... snip ...

4th gen programming language
https://en.wikipedia.org/wiki/Fourth-generation_programming_language

this mentions "first financial language" at IDC (another 60s commercial cp67/cms spinoff from the IBM cambridge science center)
https://www.computerhistory.org/collections/catalog/102658182
as an aside, a decade later, person doing FFL joins with another to form startup and does the original spreadsheet
https://en.wikipedia.org/wiki/VisiCalc

TYMSHARE topic drift ...
https://en.wikipedia.org/wiki/Tymshare
In Aug1976, Tymshare started offering its CMS-based online computer conferencing free to (user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
System/R (sql/relational) posts
https://www.garlic.com/~lynn/subtopic.html#systemr

some posts mentioning ramis&nomad
https://www.garlic.com/~lynn/2023.html#13 NCSS and Dun & Bradstreet
https://www.garlic.com/~lynn/2022f.html#116 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#49 4th generation language
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021g.html#23 report writer alternatives
https://www.garlic.com/~lynn/2021f.html#67 RDBMS, SQL, QBE
https://www.garlic.com/~lynn/2021c.html#29 System/R, QBE, IMS, EAGLE, IDEA, DB2
https://www.garlic.com/~lynn/2019d.html#16 The amount of software running on traditional servers is set to almost halve in the next 3 years amid the shift to the cloud, and it's great news for the data center business
https://www.garlic.com/~lynn/2019d.html#4 IBM Midrange today?
https://www.garlic.com/~lynn/2018e.html#45 DEC introduces PDP-6 [was Re: IBM introduces System/360]
https://www.garlic.com/~lynn/2018d.html#3 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2018c.html#85 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2018.html#24 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2017j.html#83 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017j.html#39 The complete history of the IBM PC, part two: The DOS empire strikes; The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS
https://www.garlic.com/~lynn/2017j.html#29 Db2! was: NODE.js for z/OS
https://www.garlic.com/~lynn/2017c.html#85 Great mainframe history(?)
https://www.garlic.com/~lynn/2017.html#28 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2016e.html#107 some computer and online history
https://www.garlic.com/~lynn/2015h.html#27 the legacy of Seymour Cray

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 10 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe

the big one
https://www.ibm.com/ibm/history/exhibits/3033/3033_intro.html

IBM was required to ship in the same sequence as ordered. Folklore was that the 1st 3033 order was for VM/370 ... which would have been a horrible loss of face for the POK MVS org. Solution: supposedly they left the loading dock in order, but the delivery van routes were fiddled so the MVS system arrived and was installed 1st.

note: only about a year before, the head of POK had managed to convince corporate to kill the VM/370 product, shutdown the development group, and move the people to POK for MVS/XA (Endicott managed to save the VM/370 product mission ... but had to recreate a development group from scratch)

a few other posts this year mentioning head of POK getting vm370 product killed
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#27 Another IBM Downturn
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023f.html#103 Microcode Development and Writing to Floppies
https://www.garlic.com/~lynn/2023f.html#87 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2023f.html#75 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#50 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#95 370/148 masthead/banner
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#104 IBM Term "DASD"
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#55 IBM VM/370
https://www.garlic.com/~lynn/2023c.html#44 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#64 Another 4341 thread
https://www.garlic.com/~lynn/2023b.html#11 Open Software Foundation
https://www.garlic.com/~lynn/2023.html#87 IBM San Jose
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2023.html#34 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia

--
virtualization experience starting Jan1968, online at home since Mar1970

2540 "Column Binary"

From: Lynn Wheeler <lynn@garlic.com>
Subject: 2540 "Column Binary"
Date: 12 Dec, 2023
Blog: Facebook
I took a two credit hr intro to fortran/computers. At the end of the semester I was hired to port 1401 MPIO to the 360/30. The univ had been sold a 360/67 (for tss/360) to replace the 709 (tape->tape) and 1401 (reader->7trk200bpi, 7trk200bpi->printer/punch, aka 6bit bytes plus parity), tapes moved manually back&forth between 709 drives and 1401 drives. Temporarily, pending the 360/67, the 1401 was replaced with a 360/30 ... which had 1401 emulation (and continued to run all the 1401 programs). I assume I was part of getting 360 experience. They gave me a bunch of hardware&software manuals, and I got to design&implement my own monitor, interrupt handlers, device drivers, error recovery, storage management, etc. They shut down the datacenter on weekends and I would have the whole place dedicated (although 48hrs w/o sleep made monday classes hard) ... after a few weeks I had a 2000 card 360 assembler program.

It was easy to read/punch the BCD EBCDIC subset, but I had to recognize 709 "binary" cards (2540 read error, then retry with column binary) ... two six-bit bytes per column, switching to column binary (in 360 memory as 160 8-bit bytes). reader->tape was 80byte & 160byte records; tape->printer/punch was 80byte, 133byte(?), 160byte ... read with the max length and SILI (suppress incorrect length indicator) set, and calculate the length actually read as the original max minus the residual count.
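
A hedged sketch (Python, obviously not the original 360/30 assembler) of the length calculation and mode switch just described; the 80/160-byte sizes and the SILI/residual arithmetic are from the text, everything else is invented for illustration:

# Hedged sketch: EBCDIC cards are 80 bytes, 709 "column binary" cards are
# 160 bytes (two 6-bit bytes per column, 80 columns).  Always read with the
# maximum length and SILI set, then compute the length actually transferred.

MAX_READ = 160

def bytes_read(max_len, residual):
    # With SILI (suppress incorrect length indication), the channel reports
    # a residual count; actual length read = requested max minus residual.
    return max_len - residual

def classify_card(length):
    if length == 80:
        return "ebcdic"            # normal BCD/EBCDIC-subset card image
    if length == 160:
        return "column-binary"     # two 6-bit bytes per column
    return "unexpected length %d" % length

# example: residual of 80 on a 160-byte read means an 80-byte EBCDIC card
length = bytes_read(MAX_READ, 80)
print(length, classify_card(length))   # 80 ebcdic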

Somebody had done the green card in CMS IOS3270 ... I did a q&d conversion to HTML; this shows the "Data mode" for the 3525 CCW, ebcdic & "card image" (aka column binary)
https://www.garlic.com/~lynn/gcard.html#23

trivia: I did the 360/30 MPIO in two versions (using an assembler option), "stand-alone" with the BPS loader and OS/360 with DCB macros; the stand-alone version took 30mins to assemble (360/30 os/360 PCP), but the OS/360 version took an hour to assemble (each DCB macro took 5-6mins).

posts mention "column binary":
https://www.garlic.com/~lynn/2022h.html#30 Byte
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#87 Punch Cards
https://www.garlic.com/~lynn/2021f.html#79 Where Would We Be Without the Paper Punch Card?
https://www.garlic.com/~lynn/2021e.html#44 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2017c.html#73 Movie Computers
https://www.garlic.com/~lynn/2017c.html#72 Movie Computers
https://www.garlic.com/~lynn/2017c.html#68 Movie Computers
https://www.garlic.com/~lynn/2017c.html#65 Movie Computers
https://www.garlic.com/~lynn/2014m.html#152 Is true that a real programmer would not stoop to wasting machine capacity to do the assembly?
https://www.garlic.com/~lynn/2014j.html#92 curly brace languages source code style quides
https://www.garlic.com/~lynn/2012l.html#84 72 column cards
https://www.garlic.com/~lynn/2011g.html#70 History of byte addressing
https://www.garlic.com/~lynn/2010h.html#72 1130, was System/3--IBM compilers (languages) available?
https://www.garlic.com/~lynn/2010h.html#36 IBM 029 service manual
https://www.garlic.com/~lynn/2008k.html#47 IBM 029 keypunch -- 0-8-2 overpunch -- what hex code results?
https://www.garlic.com/~lynn/2008b.html#77 Usefulness of bidirectional read/write?
https://www.garlic.com/~lynn/2006c.html#17 IBM 610 workstation computer
https://www.garlic.com/~lynn/2004f.html#49 can a program be run withour main memory?
https://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2002o.html#19 The Hitchhiker's Guide to the Mainframe
https://www.garlic.com/~lynn/2001h.html#72 ummmmm
https://www.garlic.com/~lynn/2001h.html#24 "Hollerith" card code to EBCDIC conversion
https://www.garlic.com/~lynn/2001b.html#20 HELP
https://www.garlic.com/~lynn/2000b.html#6 ascii to binary
https://www.garlic.com/~lynn/2000.html#79 Mainframe operating systems
https://www.garlic.com/~lynn/99.html#59 Living legends
https://www.garlic.com/~lynn/99.html#13 Old Computers
https://www.garlic.com/~lynn/98.html#9 ** Old Vintage Operating Systems **
https://www.garlic.com/~lynn/95.html#4 1401 overlap instructions

--
virtualization experience starting Jan1968, online at home since Mar1970

Waiting for the reference to Algores creation documents/where to find- what to ask for

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Waiting for the reference to Algores creation documents/where to find- what to ask for
Newsgroups: alt.folklore.computers
Date: Tue, 12 Dec 2023 14:45:45 -1000
I got the HSDT project (T1 & faster computer links) in the early 80s ... then in the fall of 1982 (just before the 1jan1983 cut-over to tcp/ip) the IBM SJR gateway to the (NSF funded) CSNET ...
https://en.wikipedia.org/wiki/CSNET

and was working with the NSF Director and was supposed to get $20M to interconnect the NSF supercomputer centers; then congress cut the budget, various other things happened ... finally an RFP was released (in part based on what we already had running). 28Mar1986 Preliminary Announcement:

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet.

NCSA came out of the 1986 OASC ... origins of MOSAIC
https://www.ncsa.illinois.edu/
https://research.illinois.edu/researchunit/national-center-supercomputing-applications

... I was told that the AUP (acceptable use policy) for NSFNET and the regional networks was, in part, because commercial entities had contributed tax deductible resources (in the case of NSFNET, 4-5 times the RFP), in which case the tax deductible resources couldn't be used for commercial purposes.

Mar1991, AUP altered allowing commercial traffic.
https://www.nsf.gov/od/lpa/nsf50/nsfoutreach/htm/n50_z2/pages_z3/28_pg.htm

Dec1991, legislation promoting gov. agencies commercializing gov. technology (as part of making the US more competitive). I'm participating in National Information Infrastructure meetings
https://en.wikipedia.org/wiki/National_Information_Infrastructure
& High Performance Computing meetings at LLNL
https://en.wikipedia.org/wiki/High_Performance_Computing_Act_of_1991

US Gov. wanted companies to participate in the NII on their own nickel .... then Singapore invited all the same participants to build a fully gov funded one there.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

NSFNET RFP awarded 24nov87 and RFP kickoff meeting 7Jan1988
https://www.garlic.com/~lynn/2000e.html#email880104 Kickoff Meeting 7Jan1988

News articles mentioning Gore in post about NSFNET RFP kickoff meeting 7Jan1988
John Markoff, NY Times, 29 December 1988, page D1
Paving way for data 'highway' Carl M Cannon, San Jose Mercury News, 17 Sep 89, pg 1E
https://www.garlic.com/~lynn/2000e.html#10 Is Al Gore The Father of the Internet?

some NII posts
https://www.garlic.com/~lynn/2023g.html#25 Vintage Cray
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2022h.html#12 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#3 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2021k.html#84 Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#83 Internet Old Farts
https://www.garlic.com/~lynn/2021h.html#25 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server

--
virtualization experience starting Jan1968, online at home since Mar1970

Assembler & non-Assembler For System Programming

From: Lynn Wheeler <lynn@garlic.com>
Subject: Assembler & non-Assembler For System Programming
Date: 13 Dec, 2023
Blog: Facebook
The IBM Los Gatos VLSI tools lab was using Metaware's compiler-compiler (TWS) for languages, and two engineers used it to implement a mainframe Pascal for VLSI tools (which later evolved into the VS/Pascal product).

Early 80s, with HSDT (T1 and faster computer links, both terrestrial and satellite; I had part of a wing out in Los Gatos with offices and labs), I had a problem with RSCS/VNET using the VM370 spool; I needed spool to run much faster ... i.e. 4k blocks using a synchronous diagnose (RSCS/VNET was non-runnable during the transfer) queued on spool volumes possibly in use by lots of other users ... effectively RSCS/VNET was likely getting only 5-8 4k block transfers/sec (32kbytes/sec ... while a single full-duplex T1 needed 300kbytes/sec from spool, and there might possibly be several such links).
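
Back-of-envelope arithmetic for that gap (the 4k block size, 5-8 transfers/sec and 300kbytes/sec full-duplex T1 requirement are from the text; the T1 line rate is the standard 1.544mbits/sec):

# Rough arithmetic for the spool bottleneck described above (illustration
# only; nothing here is a new measurement).
block = 4096                       # VM370 spool block size (bytes)
transfers_per_sec = 8              # optimistic synchronous-diagnose rate
spool_rate = block * transfers_per_sec            # ~32KB/sec from spool

t1_bits = 1_544_000                                # standard T1 line rate
t1_full_duplex_bytes = 2 * (t1_bits // 8)          # raw byte rate, both directions

print("spool feed: %dKB/sec" % (spool_rate // 1024))
print("one full-duplex T1: ~%dKB/sec raw, ~300KB/sec needed per the text -> spool is ~10x short"
      % (t1_full_duplex_bytes // 1024))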

I re-implemented the VM370 spool in Pascal running in a virtual address space, supporting asynchronous operation, contiguous allocation, multiple block transfers, read-ahead and write-behind ... along with a couple other features, like hash/indexed lookup of spool files (instead of a linear, sequential search) .... much higher sustained throughput with much lower CPU use. I also ran an advanced-technology symposium at San Jose Research 4-5Mar1982 that included using higher-level languages for migrating VM370 kernel functions to virtual address spaces. Post mentioning the adtech symposium
https://www.garlic.com/~lynn/96.html#4a
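
A minimal sketch of the indexed-lookup part of that spool rewrite (illustrative Python, not the Pascal implementation; the record layout is invented):

# Hedged sketch of replacing a linear, sequential search of spool file
# control blocks with a hash/indexed lookup.
class SpoolIndex:
    def __init__(self):
        self.by_id = {}                  # spoolid -> control block (dict as hash index)

    def add(self, spoolid, ctlblk):
        self.by_id[spoolid] = ctlblk

    def lookup(self, spoolid):
        # O(1) expected, versus O(n) for walking a linear chain of files
        return self.by_id.get(spoolid)

def linear_lookup(chain, spoolid):
    # the original scheme: sequential search down the chain
    for ctlblk in chain:
        if ctlblk["spoolid"] == spoolid:
            return ctlblk
    return None

idx = SpoolIndex()
idx.add(1234, {"spoolid": 1234, "owner": "RSCS"})
print(idx.lookup(1234))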

I was also scheduled to work with the corporate network committee to deploy it on backbone platforms ... but was preempted by the communication group, which got around to forcing the corporate network committee to convert to SNA (lobbying the corporate executive committee with all sorts of misinformation about the need to convert the internal network to SNA) ... the 1st meeting I was to present to had just been changed to non-technical attendees, management only.

The communication group was also fighting off the release of mainframe tcp/ip (also done in Pascal) ... when apparently some influential customers got the decision changed ... and the communication group changed strategy, claiming that since they had corporate responsibility for everything that crossed the datacenter walls, it had to be released through them; what shipped got aggregate sustained 44kbytes/sec using nearly a whole 3090 processor. I then did the changes for RFC1044 and, in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

trivia: one of the Los Gatos engineers left IBM and sometime later I find him as general manager of the SUN group that included JAVA; the other left and joined Metaware (IBM Palo Alto used a C compiler from Metaware for porting Berkeley BSD to the PC/RT).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
801/risc, Iliad, ROMP, RIOS, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

Metaware, AOS, PC/RT posts
https://www.garlic.com/~lynn/2023c.html#98 Fortran
https://www.garlic.com/~lynn/2022h.html#40 Mainframe Development Language
https://www.garlic.com/~lynn/2021i.html#45 not a 360 either, was Design a better 16 or 32 bit processor
https://www.garlic.com/~lynn/2021c.html#95 What's Fortran?!?!
https://www.garlic.com/~lynn/2018e.html#63 EBCDIC Bad History
https://www.garlic.com/~lynn/2018d.html#31 MMIX meltdown
https://www.garlic.com/~lynn/2017f.html#94 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#24 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2015g.html#52 [Poll] Computing favorities
https://www.garlic.com/~lynn/2011i.html#69 Making Z/OS easier - Effectively replacing JCL with Unix like commands
https://www.garlic.com/~lynn/2010n.html#54 PL/I vs. Pascal
https://www.garlic.com/~lynn/2010i.html#28 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2005s.html#33 Power5 and Cell, new issue of IBM Journal of R&D
https://www.garlic.com/~lynn/2004q.html#39 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#38 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004n.html#30 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2004f.html#42 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History

posts mentioning SFS (spool file system) & Pascal
https://www.garlic.com/~lynn/2017e.html#24 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2011e.html#29 Multiple Virtual Memory
https://www.garlic.com/~lynn/2010k.html#26 Was VM ever used as an exokernel?
https://www.garlic.com/~lynn/2009h.html#63 Operating Systems for Virtual Machines
https://www.garlic.com/~lynn/2008g.html#22 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2007c.html#21 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)

--
virtualization experience starting Jan1968, online at home since Mar1970

Assembler & non-Assembler For System Programming

From: Lynn Wheeler <lynn@garlic.com>
Subject: Assembler & non-Assembler For System Programming
Date: 13 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#68 Assembler & non-Assembler For System Programming

Within a year of taking the 2 credit hr intro to fortran/computers, I was hired fulltime responsible for os/360 (the univ. had just got the 360/67 replacing the 709/1401, originally for tss/360, but used as a 360/65 for OS/360). The univ shut down the datacenter on weekends and I had the place dedicated (although 48hrs w/o sleep made monday classes hard) and was doing lots of reworking of OS/360. For some forgotten reason, I had a couple nights 3rd shift at the IBM regional datacenter. During the day, I wandered around the bldg and found an MVT debugging class and asked to sit in. However, I kept suggesting better ways and within 20mins the instructor asked me to leave.

Much later, in the early days of REX (before it was renamed REXX and released to customers), I wanted to show it wasn't just another pretty scripting language. I chose the large assembler-based dump & problem analysis application. The objective was, working half-time over three months, to re-implement it with ten times the function and ten times the performance (some sleight of hand making interpreted REX faster than assembler). I finished early, so created a library of automagic scripts that searched for common failure signatures (researching failures, I identified that a common assembler problem was not keeping track of register contents). I had expected it to be released to customers replacing the existing application, but for whatever reason it wasn't (even though it was in use by nearly every internal datacenter and PSR). Eventually I got permission to give talks at user group meetings on how I did the implementation and shortly similar implementations started appearing. Later the 3090 service processor (3092, started out as a 4331/3370FBA with a heavily modified VM370/CMS Rel6, but morphed into a pair of redundant 4361s) group contacted me about releasing it with the 3092.
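
A toy illustration of the "automagic failure signature" idea (this is not DUMPRX or REX; the signatures and dump fields are invented):

# Toy sketch: run a library of signature predicates over a parsed dump and
# report which known failure patterns match.  Fields and signatures are
# invented for illustration.
SIGNATURES = [
    ("wild branch via untracked register",
     lambda d: d["psw_addr"] not in d["module_ranges"]),
    ("zero base register",
     lambda d: d["failing_reg"] == 0),
]

def analyze(dump):
    return [name for name, match in SIGNATURES if match(dump)]

dump = {"psw_addr": 0x7FFF00,
        "module_ranges": range(0x10000, 0x20000),
        "failing_reg": 0}
print(analyze(dump))    # both hypothetical signatures match this sample dump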

I used to sponsor friday's after work, mostly IBM'ers from south silicon valley ... but sometimes others ... NCSS 3200? 2-pi? ... signetics sister company? later sold to 4phase?

Knew some of the people in silicon valley ... would see some at friday's after work or after the monthly user group hosted by SLAC.

trivia: I was an undergraduate and univ full-time employee responsible for os/360 (the univ had been sold a 360/67 for tss/360 but ran it as a 360/65), when a couple people from CSC came out and installed CP/67. I mostly got to play with it in my dedicated weekend window ... rewrote a lot of CP/67 over the next 6months ... then CSC had a week class at the Beverly Hills Hilton ... I arrive Sunday and was asked to teach the CP/67 class; the people that were supposed to teach it had just given notice to form NCSS.

I had also been con'ed into doing some work with Jim Gray and Vera Watson on System/R (the original sql/relational) and BofA had signed a System/R "joint study(?)" and was getting 60 machines to run it.

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

a couple posts mentioning cp/67, Beverly Hills Hilton & ncss:
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class

a few posts mentioning taking mvt debugging class
https://www.garlic.com/~lynn/2023d.html#101 Operating System/360
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2021e.html#27 Learning EBCDIC
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021b.html#13 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#67 IBM Education Classes
https://www.garlic.com/~lynn/2006i.html#0 The Pankian Metaphor

--
virtualization experience starting Jan1968, online at home since Mar1970

MVS/TSO and VM370/CMS Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVS/TSO and VM370/CMS Interactive Response
Date: 13 Dec, 2023
Blog: Facebook
VM ids (CMS users) have their own address space ... a large portion of the general CMS code is in a shared segment, so is possibly already in storage ... so possibly only ten page faults ... and the page fault, page I/O, and I/O supervisor pathlengths are a very small fraction of those of MVS. The CP&CMS terminal I/O pathlength is then a few hundred instructions. Part of MVS terminal I/O (of any kind) is the enormous pathlength in VTAM ... and much of it wasn't "captured" ... MVS would have wait state and accounted-for (captured) CPU use; subtracting wait state and captured CPU from elapsed time, the uncaptured CPU could be 60%.
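
A minimal sketch of the captured/uncaptured CPU arithmetic being referred to (the sample numbers are hypothetical, chosen only to show the calculation):

# Hedged illustration of the uncaptured-CPU arithmetic referred to above.
# The sample numbers are hypothetical, not measurements.
elapsed  = 3600.0        # one hour of wall clock (seconds)
wait     = 1200.0        # accounted wait state
captured = 960.0         # CPU time charged to address spaces

busy       = elapsed - wait        # time the processor was actually busy
uncaptured = busy - captured       # CPU burned but not attributed (e.g. VTAM, interrupts)
print("uncaptured fraction of busy time: %.0f%%" % (100 * uncaptured / busy))   # 60%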

When I transferred to SJR in the 70s, I could wander around IBM and non-IBM datacenters in silicon valley, including disk engineering and product test across the street. They had been running 7x24, pre-scheduled, stand-alone testing and mentioned that they had recently tried MVS, but it had a 15min MTBF (requiring manual re-ipl) in that environment. I volunteered to rewrite the I/O supervisor so it was bullet proof and never fail, allowing any amount of on-demand, concurrent testing ... greatly improving productivity. I also significantly reduced the pathlengths and elapsed time between ending interrupt and channel redrive (after interrupt), which was already significantly shorter than MVS (later I would periodically joke that a major motivation for SAPs to do the I/O was the enormous kernel pathlengths that had been inherited from MVS).

The new 3274/3278 also showed up, compared to the 3272/3277. For the 3278 they moved lots of electronics back into the shared controller (reducing 3278 manufacturing cost), significantly driving up coax protocol chatter and elapsed time. This was in the days of studies showing quarter-second interactive response improved productivity. The 3272/3277 had .086sec hardware response ... so to get .25sec interactive response, system response had to be no more than .164sec (several of my enhanced systems were getting .11sec interactive system response). The 3274/3278 protocol chatter drove hardware response to .3-.5sec (somewhat dependent on the amount of data written), making a quarter second impossible. A complaint to the 3278 product administrator got a response that the 3278 wasn't for interactive computing but "data entry" (aka electronic keypunch). Later, IBM/PC 3277 emulation cards had 4-5 times the upload/download throughput of 3278 cards. Note MVS/TSO users never noticed since their system response was rarely even 1sec (so any change from 3272/3277 to 3274/3278 wouldn't have been noticed).
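
The quarter-second arithmetic spelled out (values from the paragraph above):

# Terminal response arithmetic from the text: the user sees system response
# plus terminal hardware response, and the target total is 0.25 sec.
target = 0.25

hw_3277 = 0.086                    # 3272/3277 hardware response
print("3277: system must respond in <= %.3f sec" % (target - hw_3277))   # 0.164

hw_3278 = (0.3, 0.5)               # 3274/3278, varies with amount of data written
print("3278: hardware alone is %.1f-%.1f sec -> 0.25 sec total impossible" % hw_3278)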

posts mentioning getting to play disk engineer in bldg 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

MVS/TSO and VM370/CMS Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVS/TSO and VM370/CMS Interactive Response
Date: 13 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response

On a 4341 with only 15 TSO users, private TSO address space pages could remain in real storage a large percentage of the time w/o having to be paged (eliminating the huge page I/O penalty and pathlength).

When I joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters, and the online sales&marketing support HONE systems were a long time customer. It started out with CP67/CMS, and the cambridge science center had also ported APL\360 to CMS for CMS\APL ... redoing storage management from 16kbyte swapped workspaces to large demand paged virtual memory and adding APIs for system services (like file I/O) enabling real-world applications ... and CMS\APL based sales&marketing applications came to dominate all HONE activity ... I was also asked to do some of the initial installs in other parts of the world. In the morph from CP67->VM370, lots of stuff was simplified and/or dropped (including multiprocessor support). I then migrated a bunch of stuff to VM370 (including kernel re-org for multiprocessor, but not actual multiprocessor support itself) and HONE moved to VM/370 (from CP67/CMS) and the APL\CMS that had been redone at the Palo Alto Science Center. Then all the US HONE datacenters were consolidated in Palo Alto and expanded to HONE1-HONE8 370/168s (in a single-system image with load balancing and fall-over) and HONEDEV, a 370/158. I then added multiprocessor support back in so US HONE could add a 2nd processor to each system.

HONE had also done Sequoia, a 600kbyte APL\CMS application that was sort of a super-PROFS user interface, tailored for the computer illiterate sales&marketing force ... that appeared in every workspace. Then there was an activity to rewrite some of the major sales&marketing applications partially in Fortran. Note the Palo Alto Science Center was just across the back parking lot from HONE (trivia: when FACEBOOK 1st moved into Silicon Valley, it was into a new bldg built next door to the former US HONE datacenter), & PASC had been doing all sorts of APL work (and helped HONE with APL optimization) ... but one of their people had also done the FORTRANQ compiler optimization that eventually shipped as FORTRANHX.

This was in the period after the implosion of the IBM Future System effort and the mad rush to get stuff back into the 370 product pipelines, including the quick&dirty 3033&3081 activity in parallel. The head of POK managed to convince corporate to kill the VM370 product, shutdown the development group and migrate all the people to POK for MVS/XA. Endicott eventually managed to salvage the product mission for the mid-range, but had to reconstitute a development group from scratch ... and various people in POK were periodically badgering HONE that VM370 would no longer be supported on high-end machines and they needed to convert to MVS.

At the time I believed that US HONE was the largest single-system image in the world and that world-wide HONE was the largest user of APL.

HONE and APL posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

MVS/TSO and VM370/CMS Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVS/TSO and VM370/CMS Interactive Response
Date: 14 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#71 MVS/TSO and VM370/CMS Interactive Response

... MVS had another problem with response ... not just pathlength but the extensive use of multi-track search ... locking up channel, controller, and disk (i.e. locking up the controller & channel blocked access to all associated disks). One installation had a 168MVS & a 158VM with interconnected controllers, but a hard and fast rule never to mount an MVS pack on a VM string. One morning, an operator incorrectly mounted an MVS pack on a VM string and within 5mins the datacenter was getting irate calls from CMS users all over the bldg about what happened to system response. It was eventually isolated to the MVS pack (and the related MVS pervasive use of multi-track search) on the VM DASD string, with a demand that the MVS pack be removed from the VM string. Operations said it was doing some production work and they would wait until 2nd shift. The VM group had a highly optimized one pack VS1 system for running under VM and mounted it on an MVS string ... and even tho it was running on a heavily loaded 158 VM system, it managed to bring the MVS 168 to its knees and alleviated the CMS response problem (and operations agreed to move the MVS pack off the VM string).
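
A rough illustration of why multi-track search is so punishing (a hedged sketch; the 3,600 RPM and 19 tracks/cylinder figures are the standard 3330 drive specs, not numbers from the post):

# Back-of-envelope: a multi-track search of a full cylinder takes roughly one
# disk revolution per track, and the channel, controller and device are all
# busy for the whole search.
rpm = 3600
tracks_per_cyl = 19

rev_time = 60.0 / rpm                          # ~16.7 ms per revolution
full_cyl_search = tracks_per_cyl * rev_time    # ~0.32 sec

# During that ~1/3 second, access to every other disk behind the same
# controller/channel is blocked.
print("one full-cylinder search: ~%.0f ms" % (full_cyl_search * 1000))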

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

I've mentioned several times that when the US HONE systems were consolidated in palo alto in the mid-70s, it was enhanced to an eight-168, single-system-image, loosely-coupled complex with a large disk farm and load-balancing and fall-over .... I considered it to be the largest such complex in the world. This was in the period when the head of POK had convinced corporate to kill VM370, shutdown the development group and transfer all the people to POK for MVS/XA ... so HONE was a deep embarrassment to the POK organization.

In 2009 when some of the support was allowed to be released to customers ... I would make snide remarks: "From the IBM annals of release no software before its time"

hone and apl posts
https://www.garlic.com/~lynn/subtopic.html#hone

a few recent posts mentioning head of POK convincing corporate to kill VM/370
https://www.garlic.com/~lynn/2023g.html#27 Another IBM Downturn
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023c.html#104 IBM Term "DASD"
https://www.garlic.com/~lynn/2023c.html#78 IBM TLA
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#44 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023b.html#64 Another 4341 thread
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#17 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#86 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#63 IBM Software Charging Rules
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#84 VMworkshop.og 2022
https://www.garlic.com/~lynn/2022c.html#45 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers

--
virtualization experience starting Jan1968, online at home since Mar1970

MVS/TSO and VM370/CMS Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVS/TSO and VM370/CMS Interactive Response
Date: 14 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#71 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#72 MVS/TSO and VM370/CMS Interactive Response

other trivia: with MVS shared DASD, reserve/release was a high overhead, long latency operation for loosely-coupled operation. ACP/TPF, to boost it, got logical locking implemented in the 3830 controller (sort of like the DEC VAX cluster implementation). The problem was that it only worked within a single controller, limiting it to four channel/system operation ... so it didn't work across string switch with a pair of 3830 controllers (eight channel/system operation). For HONE in eight channel/system operation, instead of reserve/release ... a channel program was used that implemented the processor compare-and-swap instruction semantics.
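
A minimal sketch of the compare-and-swap semantics being emulated (illustrative Python; the real thing was a channel program against a lock record on the shared DASD, not shown here):

# Hedged sketch of compare-and-swap style locking over a shared record,
# used instead of device reserve/release (illustration only, not the
# actual CCW sequence).
def compare_and_swap(record, expected, new_value):
    # atomically: if the lock record still holds what we last read,
    # replace it with our value and succeed; otherwise fail and retry.
    if record["lock"] == expected:
        record["lock"] = new_value
        return True
    return False

lock_record = {"lock": 0}                      # 0 = free
my_id = 7
if compare_and_swap(lock_record, 0, my_id):
    print("system %d holds the lock" % my_id)
    # ... perform the shared-DASD update ...
    compare_and_swap(lock_record, my_id, 0)    # release
else:
    print("lock busy, retry later")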

HONE and/or APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
original sql/relational implementation, system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

email mentioning loosely-coupled acp/tpf 3830 controller locking
https://www.garlic.com/~lynn/2008i.html#email800325

posts mentioning acp/tpf loosely-coupled, shared dasd 3830 controller logical/symbolic locking
https://www.garlic.com/~lynn/2022b.html#8 Porting APL to CP67/CMS
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#76 IBM ITPS
https://www.garlic.com/~lynn/2021g.html#71 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018e.html#94 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2017k.html#40 IBM etc I/O channels?
https://www.garlic.com/~lynn/2016.html#81 DEC and The Americans
https://www.garlic.com/~lynn/2016.html#63 Lineage of TPF
https://www.garlic.com/~lynn/2011p.html#76 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011l.html#33 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011k.html#84 'smttter IBMdroids
https://www.garlic.com/~lynn/2011i.html#77 program coding pads
https://www.garlic.com/~lynn/2011b.html#12 Testing hardware RESERVE
https://www.garlic.com/~lynn/2010k.html#54 Unix systems and Serialization mechanism
https://www.garlic.com/~lynn/2010b.html#41 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009s.html#30 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009l.html#66 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2008j.html#28 We're losing the battle
https://www.garlic.com/~lynn/2008j.html#17 We're losing the battle

--
virtualization experience starting Jan1968, online at home since Mar1970

MVS/TSO and VM370/CMS Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVS/TSO and VM370/CMS Interactive Response
Date: 14 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#71 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#72 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#73 MVS/TSO and VM370/CMS Interactive Response

note: one of the CICS tricks with OS/360 (and descendants) was to do as many OS/360 system services as possible at startup (acquire storage, open files, etc), and then perform as much as possible of its own system services itself.

posts mentioning CICS and/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

After leaving IBM, I was brought into the largest ACP/TPF airline res system to look at the ten impossible things they couldn't do. They started with ROUTES (find direct and/or connecting flts from origin to destination), which accounted for 25% of workload. I was given a copy of the full OAG (all commercial scheduled flt segments in the world). After about a month, starting from scratch, I had the existing functions running 100 times faster on UNIX (I claimed that the ACP/TPF implementation represented technology tradeoffs made in the 60s ... and I could make different trade-offs starting from scratch). After another month I had all ten impossible things, but it was only about ten times faster per transaction ... however several previous transactions had been consolidated into a single transaction ... so elapsed time was faster (could show that all ROUTES transactions in the world could be handled by ten RS/6000 990s). Then the hand-wringing started ... part of the 60s trade-offs was significant human processing (hundreds of people) massaging data kept on MVS IMS ... and then weekly ACP/TPF shutdowns to copy the data to ACP/TPF ... all that was eliminated (I could compress the full OAG for keeping in memory and process it directly).
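
A toy sketch of the kind of direct/connecting-flight lookup ROUTES performs over an in-memory schedule (Python, purely illustrative; the data layout and field names are invented, not the OAG format or the actual implementation):

# Toy ROUTES-style lookup: direct flights, then one-connection itineraries,
# over a small invented schedule kept in memory and indexed by origin.
from collections import defaultdict

schedule = [                      # (flight, origin, destination)
    ("AA10", "SFO", "ORD"),
    ("AA22", "ORD", "JFK"),
    ("UA5",  "SFO", "JFK"),
]

by_origin = defaultdict(list)     # index once, query many times
for flt, org, dst in schedule:
    by_origin[org].append((flt, dst))

def routes(origin, destination):
    direct = [[f] for f, d in by_origin[origin] if d == destination]
    connecting = [[f1, f2]
                  for f1, mid in by_origin[origin] if mid != destination
                  for f2, d2 in by_origin[mid] if d2 == destination]
    return direct + connecting

print(routes("SFO", "JFK"))       # [['UA5'], ['AA10', 'AA22']]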

industry standard benchmark, number of iterations compared to 370/158 (not actual count of instructions, but program iterations):
(1993) 990: 126MIPS (ten 990s: 1.26BIPS); (1993) eight processor ES/9000-982 claim: 408MIPS

posts mentioning ACP/TPF airline res ROUTES
https://www.garlic.com/~lynn/2023g.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#13 Vintage Future System
https://www.garlic.com/~lynn/2023c.html#8 IBM Downfall
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022h.html#58 Model Mainframe
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#76 IBM ITPS
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2015f.html#5 Can you have a robust IT system that needs experts to run it?
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF
https://www.garlic.com/~lynn/2013g.html#87 Old data storage or data base
https://www.garlic.com/~lynn/2011d.html#43 Sabre; The First Online Reservation System
https://www.garlic.com/~lynn/2011c.html#42 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2010j.html#53 Article says mainframe most cost-efficient platform
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)

--
virtualization experience starting Jan1968, online at home since Mar1970

The Rise and Fall of the 'IBM Way'. What the tech pioneer can, and can't, teach us

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Rise and Fall of the 'IBM Way'. What the tech pioneer can, and can't, teach us
Date: 15 Dec, 2023
Blog: Facebook
The Rise and Fall of the 'IBM Way'. What the tech pioneer can, and can't, teach us
https://www.theatlantic.com/magazine/archive/2024/01/ibm-greatest-capitalist-tom-watson/676147/

... previous (from 1995) The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

and on the PC subject: before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on IBM CP/67 & CMS (precursor to VM370/CMS) at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

early in ACORN (IBM/PC), Boca said that they weren't interested in software ... and an ad-hoc IBM group of some 20-30 people was formed in silicon valley to do software ... they would touch base with Boca every month to make sure nothing had changed. Then one month, Boca changed its mind and said if you are doing ACORN software, you have to move to Boca ... and the whole effort imploded.

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

and my tome from last year ... Learson trying to block the bureaucrats, careerists and MBAs from destroying the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

posts mentioning CP/M, Kildall, Opel
https://www.garlic.com/~lynn/2023g.html#35 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#27 Another IBM Downturn
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023.html#99 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#30 IBM Change
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#107 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#72 IBM/PC
https://www.garlic.com/~lynn/2022f.html#17 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022f.html#7 Vintage Computing
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022d.html#90 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#44 CMS Personal Computing Precursor
https://www.garlic.com/~lynn/2022b.html#111 The Rise of DOS: How Microsoft Got the IBM PC OS Contract
https://www.garlic.com/~lynn/2021k.html#22 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2019e.html#136 Half an operating system: The triumph and tragedy of OS/2
https://www.garlic.com/~lynn/2019d.html#71 Decline of IBM
https://www.garlic.com/~lynn/2018f.html#102 Netscape: The Fire That Filled Silicon Valley's First Bubble
https://www.garlic.com/~lynn/2012.html#100 The PC industry is heading for collapse

--
virtualization experience starting Jan1968, online at home since Mar1970

Another IBM Downturn

From: Lynn Wheeler <lynn@garlic.com>
Subject: Another IBM Downturn
Date: 15 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#27 Another IBM Downturn
https://www.garlic.com/~lynn/2023g.html#29 Another IBM Downturn

early os2 (dec87, somebody at bcrvmpc1) sent email to vm endicott asking how to do scheduling (had been told vm's was better). Endicott forwarded it to VM IBM Kingston, who then forwarded it to me. I had rewritten the CP67 (precursor to VM370) scheduler as an undergraduate in the 60s, which IBM picked up and shipped. After I graduate and join ibm science center, one of my hobbies was enhanced production operating systems for internal datacenters. In the morph of CP67->VM370, lots of stuff was simplified and/or dropped. I then start adding a bunch of stuff back into VM370 (for internal datacenters). During the FS era, internal politics was shutting down 370 stuff (which is credited with giving the clone 370 makers their market foothold) ... then when FS imploded, there was a mad rush to get stuff back into the 370 product pipelines. In the IBM 23Jun1969 unbundling, they start charging for software (but make the case that kernel software would still be free). With the rise of clone 370 makers, the implosion of FS, and the mad rush to get stuff back into the 370 product pipelines, there was a decision to transition to charging for kernel software (starting with "brand new" kernel addons) and a bunch of my (internal) stuff was selected as the initial guinea pig ... including my scheduler.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
23Jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management, scheduling, etc posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

note: AWD (the workstation independent business unit) did their own (16bit at-bus) 4mbit token-ring card for the PC/RT workstation. Then for the RS/6000 (w/microchannel), they were told they couldn't use their own cards but had to use PS2 cards (which had been severely performance kneecapped by the communication group attempting to preserve their dumb terminal paradigm, fiercely fighting off client/server and distributed computing). An example was the microchannel 16mbit token-ring card having lower card throughput than the PC/RT 4mbit T/R card (jokes that a PC/RT 4mbit T/R server would have higher throughput than an RS/6000 16mbit T/R server). AWD had done "SLA" (serial fiber with a protocol similar to ESCON, with a number of incompatible improvements) and we convinced a high-speed router company into adding a SLA interface (my wife had been asked to co-author the response to a gov. request for a large, super secure, campus-like, distributed environment where she included TCP/IP, ethernet, high-end routers, and 3-tier networking, and we were then out giving similar pitches to IBM customer executives) ... they could already handle things like up to 16 10mbit ethernet interfaces, FDDI, telco T1&T3, IBM & non-IBM mainframe channels, etc ... allowing the RS/6000 to operate with reasonable server throughput. The new Almaden research facility had been heavily provisioned with CAT4, but they had already discovered that 10mbit ethernet cards had significantly higher throughput than microchannel 16mbit token-ring cards (and with lower end-to-end LAN latency; a $69 10mbit ethernet card with the AMD lance chip easily outperformed an $800 16mbit token-ring microchannel card).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801
3-tier networking posts
https://www.garlic.com/~lynn/subnetwork.html#3tier

late 80s, a senior disk engineer got a talk scheduled at the world-wide, internal, annual communication group conference, supposedly on 3174 performance ... and opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing to more distributed-computing friendly platforms. They had come up with a number of solutions which were all vetoed by the communication group with their corporate strategic responsibility for everything that crosses the datacenter walls. The GPD/Adstar VP of software's partial countermeasure was investing in distributed computing startups that would use IBM disks (and he would periodically ask us to drop by his investments to see if we could provide any assistance).

communication group & disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal

Communication group datacenter stranglehold wasn't just disks ... and a couple years later, IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company. We had already left IBM, but get a call from the bowels of Armonk asking if we could help with the company breakup; however before we get started, the board hires former AMEX president as CEO who reverses (some of) the breakup (but it isn't long before the disk division was gone).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
AMEX, Private Equity, IBM related Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

MVT, MVS, MVS/XA & Posix support

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVT, MVS, MVS/XA & Posix support
Date: 16 Dec, 2023
Blog: Facebook
decade ago I was asked if I could find the decision to add virtual memory to all 370s. I found the staff member to the executive making the decision; basically MVT storage management was so bad that regions had to be specified four times larger than used ... as a result a typical 1mbyte 370/165 would only run four concurrent regions at a time, insufficient to keep it busy and justified.
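
The arithmetic behind that justification, spelled out (a hedged sketch: the 1mbyte/165, "four times larger than used" and "four concurrent regions" figures are from the text; the per-region sizes are derived from those, not documented values):

# Worked arithmetic for the virtual-memory justification above.
real_storage = 1024          # KB, typical 370/165
regions_real = 4             # concurrent regions MVT could run in real storage
region_spec  = real_storage // regions_real        # ~256KB specified per region
region_used  = region_spec // 4                    # ~64KB actually touched (4x over-specified)

regions_virtual = regions_real * 4                 # going to 16MB virtual: 4x the regions
working_set     = regions_virtual * region_used    # real storage actually touched
print("%d regions touch ~%dKB -> fits 1MB real storage, little or no paging"
      % (regions_virtual, working_set))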

Mapping MVT to a 16mbyte virtual memory (very similar to running MVT in a CP/67 16mbyte virtual machine) allowed the number of concurrent regions to be increased by a factor of four with little or no paging ... MVT becomes VS2/SVS, MFT becomes VS1 (running in a 4mbyte virtual memory), DOS becomes DOS/VS. The biggest hit for VS2/SVS was similar to CP/67 ... passed channel programs had virtual addresses, so copies of the channel programs had to be made replacing virtual addresses with real. Ludlow, doing the initial VS2 implementation, crafts a copy of CP67 CCWTRANS into EXCP to perform the function.
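
A highly simplified sketch of what that channel-program translation involves (illustrative Python, not the actual CP67 CCWTRANS or VS2 EXCP code; it ignores CCWs whose data crosses page boundaries and assumes the pages are already fixed in real storage):

# Hedged sketch: build a shadow copy of the channel program with each
# virtual data address replaced by the corresponding real address.
PAGE = 4096

def translate_ccws(ccws, page_table):
    # ccws: list of (opcode, virtual_address, count)
    # page_table: virtual page number -> real page frame number
    translated = []
    for op, vaddr, count in ccws:
        vpage, offset = divmod(vaddr, PAGE)
        real = page_table[vpage] * PAGE + offset
        translated.append((op, real, count))     # shadow CCW with real address
    return translated

# toy example: one read CCW, virtual page 0x12 mapped to real frame 0x05
print(translate_ccws([(0x02, 0x12340, 80)], {0x12: 0x05}))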

For VS2/MVS, each application was given its own 16mbyte virtual address space ... however the OS/360 API heritage was heavily pointer-passing based ... in order to enable the MVS kernel to address application storage pointed to, an 8mbyte image of the MVS kernel was mapped into every application 16mbyte address space (leaving 8mbytes for applications). Then, because subsystems were mapped into their own 16mbyte virtual address spaces, in order for them to access application storage pointed to by API addresses ... the 1mbyte common segment (CSA) was created to allocate storage (for API use between applications and subsystems) that was mapped into every 16mbyte virtual address space (leaving 7mbytes for applications).

However, because the required CSA space was somewhat proportional to the number of subsystems and concurrent applications, CSA quickly grew, becoming the "common system area" ... and by the 3033 time-frame was frequently 5-6mbytes (leaving 2-3mbytes for applications), with the threat of becoming 8mbytes (as systems became larger doing more work, leaving zero for applications). This created intense pressure to ship 370/XA machines and MVS/XA as quickly as possible.
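
The address-space squeeze in numbers (figures from the paragraphs above):

# MVS address-space arithmetic: 16MB per application address space minus the
# 8MB kernel image minus the CSA leaves what the application can actually use.
address_space = 16       # MB per application virtual address space
kernel_image  = 8        # MB of MVS kernel mapped into every address space

for csa in (1, 5, 6, 8):
    left = address_space - kernel_image - csa
    print("CSA %dMB -> %dMB left for the application" % (csa, left))
# 1MB CSA leaves 7MB; by 3033 time 5-6MB leaves 2-3MB; 8MB would leave zero.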

Complicating all this was the 23Jun1969 unbundling ... starting to charge for software (however the case was made that kernel software would still be free). Then came the period where 360s&370s were all going to be replaced with Future System machines, totally different from 370 ... and internal politics was shutting down 370 projects (the lack of new 370s during the period is credited with giving 370 clone makers their market foothold). When FS implodes there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 efforts in parallel. more info
http://www.jfsowa.com/computer/memo125.htm

However, FS contributing to the rise of clone 370s and then imploding appeared to result in the decision to transition to charging for kernel software ... initially pricing new kernel add-ons ... but eventually all kernel software being charged for in the MVS/SP period (before MVS/XA).

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundling
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

Late 80s, a senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group will be responsible for the demise of the disk division. GPD/Adstar was seeing a drop in disk sales with data fleeing datacenters to more distributed computing platforms. They had come up with solutions that were vetoed by the communication group with their corporate strategic responsibility for everything that crossed datacenter walls (part of their fiercely fighting off client/server and distributed computing trying to preserve their dumb terminal paradigm). As a partial countermeasure, the GPD/Adstar software VP was investing in distributed computing startups that would use IBM disks (he would periodically ask us to stop by his investments to see about lending a helping hand). He also said he funded the project to implement MVS posix (running unix apps; it didn't directly involve product that crossed datacenter walls so the communication group couldn't veto it).
https://en.wikipedia.org/wiki/POSIX

communication group stranglehold
https://www.garlic.com/~lynn/subnetwork.html#terminal

A couple years later, IBM has one of the largest losses in history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with breakup of the company. Before we get started, the board brings in the former president of Amex that (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

trivia: after FS implodes, the head of POK also convinces corporate to kill VM370, shutdown the development group and transfer all the people to POK for MVS/XA (supposedly necessary to ship MVS/XA on schedule, part of avoiding the MVS implosion with MVS kernel & CSA consuming all of the applications' 16mbyte virtual address spaces). Endicott manages to save the VM370 product mission, but had to reconstitute a development group from scratch ... meanwhile the MVS group was bullying internal datacenters that they needed to migrate from VM370 to MVS since VM370 would no longer be supported at the high-end ... only MVS/XA. POK had done VMTOOL ... minimum virtual machine support for MVS/XA development (and never intended to ship to customers). However, later customers weren't converting to MVS/XA as planned ... similar to when customers weren't converting to MVS as planned:
http://www.mxg.com/thebuttonman/boney.asp

Aggravating the situation, Amdahl was deploying its (hardware/microcode) HYPERVISOR, allowing MVS & MVS/XA to be run concurrently ... and seeing better success with moving customers to MVS/XA. Eventually the IBM decision was made to make VMTOOL available as VM/MA & VM/SF (migration aid, system facility). The Amdahl single processor machine also had about the same throughput as the two processor 3081K. IBM didn't respond to the Amdahl HYPERVISOR with PR/SM and LPAR on 3090 until almost the end of the decade.

some posts mentioning 370 virtual memory, vs2/svs, vs2/mvs, csa
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#93 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2018.html#92 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2017g.html#101 SEX
https://www.garlic.com/~lynn/2017b.html#8 BSAM vs QSAM
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2015h.html#116 Is there a source for detailed, instruction-level performance info?

some posts mentioning mvs, mvs/xa, vmtool, hypervisor, pr/sm, lpar
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021c.html#56 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2014j.html#10 R.I.P. PDP-10?
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2011p.html#114 Start Interpretive Execution
https://www.garlic.com/~lynn/2006h.html#30 The Pankian Metaphor

--
virtualization experience starting Jan1968, online at home since Mar1970

MVT, MVS, MVS/XA & Posix support

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVT, MVS, MVS/XA & Posix support
Date: 17 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support

Killing VM/370 meant not having the product in customer shops. The transfer of the people to POK for MVS/XA included doing VMTOOL, a virtual machine facility only intended for MVS/XA development and *never* intended for customer use. It was only later, when customers weren't migrating to MVS/XA as planned and Amdahl had come out with HYPERVISOR (VM/370 subset in microcode) and was having some success with MVS/XA migration by being able to run MVS & MVS/XA concurrently ... that VMTOOL was made available as VM/MA & then VM/SF.

Only later was there a proposal for a few-hundred-person group to upgrade VMTOOL to the feature, function, and performance of VM/370 for VM/XA. Endicott's counter was that a sysprog in Rochester had added full 370/XA support to VM/370 ... POK won.

trivia: they weren't planning on telling the people about the development group shutdown and the move to POK until the very last minute ... minimizing the numbers that might escape ... however there was a leak and several escaped to the new DEC VMS group (joke that the head of POK was a major contributor to VMS). Then there was a hunt for the source of the leak ... fortunately for me, nobody gave up the leaker.

trivia2: After FS imploded (and the Burlington Mall group was being moved to POK for MVS/XA), Endicott cons me into helping them with ECPS microcode for the 138/148. Then I get con'ed into presenting ECPS to business planners in the US, EMEA, WT, etc ... with several overseas business trips. While Endicott managed to save the VM/370 product mission (and needed to recreate a development group from scratch) ... they then wanted to pre-install VM/370 on every 138/148 shipped (sort of a software HYPERVISOR) ... POK got that squashed. In the early 80s, I got permission to give presentations on how ECPS was done at user group meetings, including the monthly BAYBUNCH hosted at Stanford SLAC. Afterwards the Amdahl people would grill me for additional details. They said that they had developed MACROCODE (370-like instruction set running in microcode mode) during the 3033 days to quickly respond to a series of 3033 microcode changes that were being required for MVS to run ... and they were currently in the process of using it to develop HYPERVISOR (basically the PR/SM & LPAR that IBM was able to ship in the late 80s w/3090) ... which then motivated POK to respond by shipping VMTOOL (as VM/MA & VM/SF) ... and only many years later PR/SM & LPAR with 3090.

trivia3: Also after FS imploded, I got con'ed into 16-way tightly-coupled project and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before the POK favorite son operating system (MVS) had (effective) 16-way support (POK doesn't ship a 16-way tightly-coupled until after the turn of the century ... some 25yrs later) and some of us were invited to never visit POK again (and the 3033 processor engineers instructed to not be distracted). Once the 3033 was out the door, the processor engineers start on 3090/trout ... and we would keep in touch (even periodically sneak back into POK). A couple emails about how the 3081 SIE instruction (needed by VMTOOL to run virtual machines) was never intended for production use in part because the limited 3081 microcode space required that the SIE microcode had to be constantly "paged" in and out (when Amdahl did SIE it wasn't paged in/out) ... but they were planning on making SIE a production level performance facility for 3090 (but still weren't able to respond to Amdahl's HYPERVISOR with PR/SM & LPAR for 3090 until nearly the end of the decade).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

old email from one of the 3033 processor engineers working on trout/3090 (including that the paged 3081 SIE microcode and VMTOOL/SIE were never intended for anything production, other than MVS development)
https://www.garlic.com/~lynn/2006j.html#email810630
in this usenet post
https://www.garlic.com/~lynn/2006j.html#27
to comp.arch, alt.folklore.computers, bit.listserv.vmesa-l

note reference in the email to the "memos" ... in the late 70s and early 80s, I was blamed for online computer conferencing on the internal network, it really took off spring of 1981 when I distributed a trip report of visit to Jim Gray at Tandem ... only about 300 directly participated but claims upwards of 25,000 were reading ... folklore is when corporate executive committee was told about it, 5of6 wanted to fire me

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

a few past posts
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit

--
virtualization experience starting Jan1968, online at home since Mar1970

MVT, MVS, MVS/XA & Posix support

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: MVT, MVS, MVS/XA & Posix support
Date: 17 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support

GPD/Adstar started, as part of countermeasures to the communication group's fighting off of client/server and distributed computing, having the GPD/Adstar software VP fund distributed computing startups that would use IBM disks; the VP also funded MVS POSIX support.
https://en.wikipedia.org/wiki/POSIX

trivia: a big part of the explosion in linux was the need for unencumbered, freely available source as part of adapting to the emerging cluster system paradigm; both cloud and supercomputers were growing to hundreds & thousands, then hundreds of thousands & millions of systems ...

they also started assembling their own server systems ... claiming 1/3rd the cost of brand name systems. Shortly after press reports that server chip vendors were delivering at least half their product directly to operations that assemble their own systems, IBM sells off its server business. The larger cloud operations (with a dozen or more megadatacenters around the world, each with half million or more systems) are now also getting the system component and chip vendors to do customized implementations.

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

MVT, MVS, MVS/XA & Posix support

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: MVT, MVS, MVS/XA & Posix support
Date: 17 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#79 MVT, MVS, MVS/XA & Posix support

I took a two credit hr intro to fortran/computers and at the end of the semester was hired to port 1401 MPIO to 360/30. The univ had a 709 running IBSYS tape->tape with a 1401 front-end for tape->printer/punch and reader->tape. Student fortran ran under a second on the 709. The univ. had been sold a 360/67 for tss/360 and got a 360/30 (running os/360 PCP) temporarily replacing the 1401 pending arrival of the 360/67 (for getting 360 experience). The univ. shut down the datacenter on weekends and I had the place dedicated to myself, although 48hrs w/o sleep made monday classes hard (given a bunch of manuals, I got to design/implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc and within a few weeks had a 2000 card assembler program). Within a year the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production, so it ran as a 360/65).

My first sysgen was MFT9.5. Initially student fortran (FORTGCLG) ran over a minute; installing HASP cut the time in half. Then I started doing customized sysgens (MFT11, MFT14, MVT15/16, MVT18) to 1) run stage2 in the standard jobstream and 2) carefully order the placement of datasets and PDS members to optimize arm seek and multi-track search ... cutting student fortran another 2/3rds to 12.9secs; student fortran never got better than the 709 until I installed Univ. of Waterloo WATFOR.

Before I graduate, I'm hired fulltime into small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all data processing into independent business unit). I think Renton datacenter possibly largest in the world, couple hundred million in IBM gear (former large airplane assembly bldg), 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around machine room. Lots of politics between Renton director and Boeing CFO, who only had 360/30 up at Boeing field for payroll (although they enlarge the machine room to install 360/67 to play with when I'm not doing other stuff).

I don't know if POK did it for me or not, but with MVT15/16, I could format a pack to place the VTOC somewhere other than cyl0 ... allowing the VTOC in the middle of the disk (to help with arm seek optimization).
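
A back-of-envelope sketch of why mid-pack VTOC placement helps (my own illustration, assuming a uniformly spread reference pattern and an arbitrary 200-cylinder pack, not any specific device):

    # average one-way arm travel from a uniformly random data cylinder
    # to the VTOC, comparing VTOC at cyl0 vs the middle of the pack
    CYLS = 200                      # illustrative cylinder count only

    def avg_travel(vtoc_cyl, cyls=CYLS):
        return sum(abs(c - vtoc_cyl) for c in range(cyls)) / cyls

    print(avg_travel(0))            # VTOC at cyl0   -> ~cyls/2 (99.5)
    print(avg_travel(CYLS // 2))    # VTOC mid-pack  -> ~cyls/4 (50.0)

i.e. roughly halving the average seek whenever the arm has to bounce between the VTOC (or PDS directory) and data.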

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

a couple recent posts mentioning fortgclg, watfor, boeing cfo, renton
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles

--
virtualization experience starting Jan1968, online at home since Mar1970

MVT, MVS, MVS/XA & Posix support

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: MVT, MVS, MVS/XA & Posix support
Date: 17 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#79 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support

note: Boeing Huntsville had been sold 360/67 "duplex" (two processor) for tss/360 with several 2250
https://en.wikipedia.org/wiki/IBM_2250

but, as at many other places, configured it as two 360/65s running MVT. Turns out it was an early case of the horrible MVT storage management (motivating the later decision to add virtual memory to all 370s) ... they modified MVT13 to build virtual memory tables (like VS2/SVS) but with no actual paging ... just storage reorganization as a countermeasure to MVT storage management. OS/360's extreme overhead and problems in system services can also be considered motivation for CICS ... which attempts to perform lots of system services at startup (open all files, obtain large blocks of storage, etc) and then provides its own lightweight system services where possible.

A decade ago, I was asked to track down the decision to add virtual memory to all 370s. MVT storage management was so bad that regions had to be specified four times larger than actually used; as a result a typical 1mbyte 370/165 only ran four concurrent regions, insufficient throughput to keep the system busy and justified. Running MVT in a 16mbyte virtual address space (similar to running MVT in a CP67 16mbyte virtual machine) enables increasing the number of concurrent regions by a factor of four with little or no paging (see the arithmetic sketch after the link). Old archive post with pieces of the email exchange (not just 370 virtual memory but also discussion of some things like JES history)
https://www.garlic.com/~lynn/2011d.html#73
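
Back-of-envelope arithmetic for the region numbers above (the specific sizes are just assumptions to make the factor-of-four concrete, not figures from the email exchange):

    # regions specified ~4x larger than the storage actually touched
    real_mem    = 1_000_000          # ~1mbyte 370/165
    region_spec =   250_000          # specified region size (assumed)
    region_used = region_spec // 4   # storage actually touched

    regions_real = real_mem // region_spec     # ~4 concurrent regions in real storage

    # in a 16mbyte virtual address space, real memory only has to back
    # the pages actually touched, not the full specified regions
    regions_virtual = real_mem // region_used  # ~16 regions with little/no paging

    print(regions_real, regions_virtual, regions_virtual // regions_real)   # 4 16 4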

cics/bdam posts
https://www.garlic.com/~lynn/submain.html#cics
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning Boeing Huntsville 360/67 duplex
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#52 IBM Vintage 1130
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2010k.html#11 TSO region size
https://www.garlic.com/~lynn/2010c.html#4 Processes' memory
https://www.garlic.com/~lynn/2010b.html#61 Source code for s/360 [PUBLIC]
https://www.garlic.com/~lynn/2007m.html#60 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
https://www.garlic.com/~lynn/2002j.html#22 Computer Terminal Design Over the Years
https://www.garlic.com/~lynn/2001m.html#55 TSS/360

--
virtualization experience starting Jan1968, online at home since Mar1970

Cloud and Megadatacenter

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cloud and Megadatacenter
Date: 17 Dec, 2023
Blog: Facebook
Mid-60s, the science center thought it would get the virtual memory charter .... but didn't ... so they came up with their own design ... they were hoping to get a 360/50 to modify with virtual memory ... but the spare 360/50s were all going to the FAA ATC project ... so they had to settle for a 360/40 to do CP/40 & CMS. Les's presentation at SEAS
https://www.garlic.com/~lynn/cp40seas1982.txt

when the 360/67, standard with virtual memory, became available, CP/40 morphs into CP/67 (precursor to VM/370; note the initial morph of CP/67->VM/370 dropped a lot of feature/function ... which had to be added back in later). More history by Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist
other IBM mainframe history
https://en.wikipedia.org/wiki/History_of_IBM_mainframe_operating_systems

Lots of places had been sold 360/67s for TSS/360 ... but TSS/360 never really became production ... and most places ran them as 360/65s with OS/360 ... although two (Stanford and Univ. of Michigan) wrote their own virtual memory operating systems for the 360/67 (and Boeing Huntsville modified MVT13 to run with virtual memory on 360/67). I was an undergraduate and fulltime univ. employee responsible for OS/360 for one of these 360/67s (the univ. shut down the datacenter on weekends and I had the whole place dedicated to myself). Then the Science Center came out and installed CP/67 and CMS (3rd installation after CSC itself and MIT Lincoln Labs). I mostly played with it on weekends, rewriting a lot of the pathlengths to improve OS/360 running in a virtual machine.

A stand-alone test ran 322secs and initially in a virtual machine ran 856secs (CP/67 CPU 534secs); after a few months I had CP/67 CPU down to 113secs. Archived post with some of the SHARE presentation on the CP/67 (and some OS/360) work.
https://www.garlic.com/~lynn/94.html#18

I then started work on improving CMS activity, rewriting a lot of the interactive filesystem and CP/67 paging I/O: ordered seek queuing, multiple block transfers in a single I/O ordered for maximum transfers per rotation (improving the 2301 paging drum from a max of about 70 pages/sec to 270 pages/sec), page replacement algorithm, dynamic adaptive resource management (scheduling), etc.
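
A minimal sketch of the rotational-ordering idea (my own illustration, not the actual CP/67 channel programs; the sector count is arbitrary): queued page requests get sorted by how soon their drum sector comes around, so one chained I/O can transfer several pages per revolution instead of roughly one.

    SECTORS = 8     # illustrative rotational positions, not real 2301 geometry

    def chain_for_one_revolution(pending, current_sector):
        # order requests by angular distance from the current rotational
        # position so a single chained channel program picks them all up
        return sorted(pending,
                      key=lambda req: (req["sector"] - current_sector) % SECTORS)

    pending = [{"page": 12, "sector": 5}, {"page": 3, "sector": 1},
               {"page": 44, "sector": 6}, {"page": 9, "sector": 2}]
    print(chain_for_one_revolution(pending, current_sector=4))
    # -> sectors 5, 6, 1, 2: all four pages in roughly one revolution,
    #    vs FIFO order costing up to a revolution per page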

Still in the 60s, there were two commercial online CP/67 spin-offs from the science center, and lots of work by the science center and the commercial spin-offs on 7x24, dark-room, no-human-present operation. Also, this was when IBM rented/leased systems and charged based on the "system meter" that ran whenever any CPU or channel was operational ... so there were special terminal channel programs that allowed the system meter to stop when the system was idle ... but be instantly on when any characters were arriving. Note all CPUs & channels had to be idle for 400ms before the system meter would stop ... long after IBM converted from rent/lease to sales, OS360/MVS still had a 400ms timer event that guaranteed the system meter would never stop.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

In the 80s, large companies were ordering hundreds of VM/4341s to place out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). MVS was looking at the large volumes, but the only mid-range DASD for non-datacenter use was the 3370 FBA ... eventually the 3375 CKD was emulated on it, but it didn't do MVS much good. Operations were looking at scores of VM/4341s per support person, while MVS was still dozens of staff per system (and the communication group was fiercely fighting off client/server and distributed computing, trying to preserve their SNA dumb terminal paradigm).

DASD, CKD, FBA, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd

Similarly for cloud: their megadatacenters were evolving from hundreds to thousands (and later hundreds of thousands to millions) of systems. Turn of the century they were claiming they assembled their own systems at 1/3rd the cost of brand name systems (shortly after server chip vendor press said they were shipping at least half their product directly to cloud megadatacenters, IBM sells off its server business). They had so drastically reduced their system expense that their costs were increasingly power&cooling (industry standard $/transaction metrics ... system and human ... added things like watt/transaction).

Cloud megadatacenters so enormously reduced system costs that they could significantly over-provision for huge, on-demand, peak use ... large numbers of systems would be at nearly zero power when not needed and then instantly on when needed (analogous to CP67 letting the system meter stop when idle, but instantly on when needed). IBM shipped CP67 with full source and a source update&maintenance process ... there were claims that there were more customer source code lines in the SHARE program library than in the base system. Similarly, a big part of the explosion in Linux was that they needed freely available, unencumbered source to adapt to the megadatacenter model with enormous automation, where 70-80 staff operate a half million or more server systems (thousands of systems per staff, each server system benchmarking at ten times a max configured mainframe).

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

MVT, MVS, MVS/XA & Posix support

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVT, MVS, MVS/XA & Posix support
Date: 18 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#79 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#81 MVT, MVS, MVS/XA & Posix support

(wikipedia) list of ibm systems
https://en.wikipedia.org/wiki/History_of_IBM_mainframe_operating_systems

Lots of BPS programs (ipl'ing txt decks) ... including the BPS loader. Originally CP/67 source was on os/360 for assembly; the assembler output txt decks were organized in a card tray (with a diagonal stripe across the top carrying the module name, to make it easy to identify and replace individual modules). With the BPS loader slapped on the front, the txt decks would load to memory ... and CPINIT would write the memory image to disk ... then when the disk was IPL'ed, CPINIT would reverse the process. Later CP/67 source was moved to CMS ... and new systems could be created by punching to a virtual punch transferred to a virtual reader.

When I was at Boeing, I modified lots of CP/67 to be pageable (reducing fixed memory requirements) ... which had to be broken into 4k pageable executables ... which required a lot more ESD entries which broke the BPS loader 255 limit (and I was constantly coming up with hacks to stay within the 255 limit). After graduation and joining science center ... I found a source copy of the BPS loader in card cabinet up in the 545tech attic ... and modified it to take more than 255 ESD entries. Pageable CP/67 didn't ship to other customers (just internal distributions) ... but it was used for VM/370.

... debugging the original BPS ESD problem, I discovered that the BPS loader passed the address of the start of its ESD table (and a count of entries) ... so I moved a copy of the ESD table to the end of the CP67 pageable area (and added it to the kernel image written to disk) .... and then could use it for some debugging.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
some more history
https://en.wikipedia.org/wiki/IBM_CP-40
https://en.wikipedia.org/wiki/CP-67
https://en.wikipedia.org/wiki/History_of_CP/CMS
https://en.wikipedia.org/wiki/VM_(operating_system)

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage DASD

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage DASD
Date: 19 Dec, 2023
Blog: Facebook
Internally, IBM had the (intel) "1655" that could emulate a CKD 2305 with 1.5mbyte/sec transfer or be configured as fixed-block with 3mbyte/sec transfer (vm could use it, but MVS never got around to doing fixed-block support).

Earlier I had a project to implement 2305-like "multi-exposure" for the 3350FH feature ... but there was a group in POK that had the VULCAN CKD electronic paging device ... and they were afraid I might impact their forecast ... and got it canceled. Then VULCAN got canceled (they were told that IBM was selling every memory chip it could make for processor memory ... at a higher markup) ... but it was too late to resurrect multi-exposure for 3350FH.

The original 3380 had 20 track spacing between data tracks; then the spacing was cut in half for double the cylinders (& capacity), then cut again for triple the cylinders ... then there was a "fast" 3380 that was the same capacity as the original 3380, but the arm only had to travel 1/3rd as far.

Then the father of risc/801 cons me into helping him with a "wide disk head" ... capable of parallel data transfer from/to 16 closely spaced data tracks (plus two servo-tracks, one on each side of the 16 data tracks). Problem was that 16 tracks in parallel (at the 3380's ~3mbytes/sec each) would be around 50mbytes/sec, while IBM channels only supported 3mbytes/sec and even ESCON was only going to be 17mbytes/sec. A couple years later the IBM branch asked me to help LLNL (national lab) standardize some stuff they were playing with, which quickly becomes the fibre channel standard ("FCS", including some stuff that I had done in 1980), initially 1gbit/sec, full-duplex, 200mbytes/sec aggregate (about the same time as ESCON was announced, when it was already obsolete).

posts mentioning getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

some past 3350fh, vulcan, 1655 posts
https://www.garlic.com/~lynn/2023f.html#49 IBM 3350FH, Vulcan, 1655
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2021j.html#65 IBM DASD
https://www.garlic.com/~lynn/2021f.html#75 Mainframe disks
https://www.garlic.com/~lynn/2017k.html#44 Can anyone remember "drum" storage?
https://www.garlic.com/~lynn/2017e.html#36 National Telephone Day
https://www.garlic.com/~lynn/2017d.html#65 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2004d.html#73 DASD Architecture of the future

some posts mentioning "wide disk head"
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#67 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#25 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose
https://www.garlic.com/~lynn/2022f.html#61 200TB SSDs could come soon thanks to Micron's new chip
https://www.garlic.com/~lynn/2022b.html#70 IBM 3380 disks
https://www.garlic.com/~lynn/2021f.html#44 IBM Mainframe
https://www.garlic.com/~lynn/2021.html#56 IBM Quota
https://www.garlic.com/~lynn/2019b.html#75 IBM downturn
https://www.garlic.com/~lynn/2019b.html#52 S/360
https://www.garlic.com/~lynn/2019.html#58 Bureaucracy and Agile
https://www.garlic.com/~lynn/2018f.html#33 IBM Disks
https://www.garlic.com/~lynn/2018d.html#17 3390 teardown
https://www.garlic.com/~lynn/2018d.html#12 3390 teardown
https://www.garlic.com/~lynn/2018b.html#111 Didn't we have this some time ago on some SLED disks? Multi-actuator
https://www.garlic.com/~lynn/2017g.html#95 Hard Drives Started Out as Massive Machines That Were Rented by the Month
https://www.garlic.com/~lynn/2017d.html#60 Optimizing the Hard Disk Directly
https://www.garlic.com/~lynn/2017d.html#54 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2014l.html#78 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2012e.html#103 Hard Disk Drive Construction
https://www.garlic.com/~lynn/2011.html#60 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2009k.html#75 Disksize history question

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage DASD

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage DASD
Date: 20 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#84 Vintage DASD

there started to be articles that cache miss/memory latency, when measured in count of processor cycles, was becoming comparable to 60s disk latency when measured in count of 60s processor cycles ... and hardware started seeing features allowing concurrent operations, like out-of-order instruction execution, analogous to 60s multitasking.

In the z196 time frame there were IBM articles that at least half the Z196 per-processor improvement over Z10 was the introduction of memory latency compensation features that have been in other platforms for decades. At that time the industry standard MIPS benchmark (number of iterations compared to a 370/158 assumed to be one MIPS) for a max configured Z196 was 50BIPS. By comparison a cloud megadatacenter blade server could benchmark at 500BIPS (same benchmark), and a cloud operator would have a dozen or more megadatacenters around the world ... each megadatacenter with half million or more such server blades (each 10 times a max configured mainframe) ... with enormous automation ... claiming megadatacenter staffs of only 70-80 people.

recent numbers derived from pubs statements about change since the previous generation:

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
• pubs say z15 1.25 times z14 (1.25*150BIPS or 190BIPS)
z16, 200 processors, 222BIPS* (1111MIPS/proc), Sep2022
• pubs say z16 1.17 times z15 (1.17*190BIPS or 222BIPS)

Z196/jul2010, 50BIPS, 625MIPS/processor
Z16/sep2022, 222BIPS, 1111MIPS/processor

12yrs, Z196->Z16, 222/50=4.4times


In 2010, server blades were 10 times a max configured z196; by 2022 the blades have improved more like ten times (vs 4.4 times for the mainframe) ... so they could be more like 20+ times a max configured z16 ... and a megadatacenter will have half million or more of these server blades (megadatacenters have the processing equivalent of a few million max configured mainframes).
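
The arithmetic behind the table and the blade comparison, using only the figures quoted above (the blade numbers are the round claims, not measurements of mine):

    z196 = {"bips": 50,  "procs": 80}
    z16  = {"bips": 222, "procs": 200}

    print(z196["bips"] * 1000 / z196["procs"])   # ~625 MIPS/processor
    print(z16["bips"]  * 1000 / z16["procs"])    # ~1110 MIPS/processor
    print(z16["bips"] / z196["bips"])            # ~4.4x aggregate over 12 yrs

    blade_2010 = 10 * z196["bips"]    # blade claimed ~10x a max configured z196
    blade_2022 = blade_2010 * 10      # if blades also improved ~10x since 2010
    print(blade_2022 / z16["bips"])   # ~22x a max configured z16 ("20+ times")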

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

1980, STL (now SVL) was busting at the seams and was moving 300 people from the IMS group to an offsite bldg. I get con'ed into doing channel extender support ... allowing placement of channel-attached 3270 controllers at the offsite bldg ... so they have the same human factors offsite as in STL.

The hardware vendor then tries to get IBM to release my support ... but there is a group in POK playing with some serial stuff who were concerned that if it was in the market, it would be harder to get their stuff released ... and they get it blocked.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

1988, IBM branch office asks me to help LLNL (national lab) get some stuff they are playing with standardized ... which quickly becomes Fibre Channel Standard ("FCS", including some stuff I had done in 1980), initially 1gbit, full duplex, 200mbyte/sec aggregate.

Then after a decade, POK gets their stuff released with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec).

Then some POK engineers become involved with FCS and define a heavyweight protocol that significantly reduces the native throughput, eventually released as FICON. The most recent public numbers are the z196 "Peak I/O" benchmark getting 2M IOPS using 104 FICON (running over 104 FCS). About the same time an FCS is announced for server blades claiming over a million IOPS (two such FCS having higher throughput than all 104 FICON).
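
Per-channel, the quoted numbers work out roughly as follows (simple arithmetic on the figures above, nothing more):

    ficon_total_iops = 2_000_000            # z196 "Peak I/O" benchmark
    ficon_channels   = 104
    print(ficon_total_iops / ficon_channels)    # ~19,000 IOPS per FICON

    native_fcs_iops = 1_000_000             # "over a million" claimed per native FCS
    print(2 * native_fcs_iops)              # two native FCS already exceed
                                            # the whole 104-FICON benchmark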

Aggravating things are 1) the requirement for CKD disks, which haven't been made for decades ... having to be simulated on industry standard fixed-block disks, and 2) recommendations keeping SAPs (system assist processors that do the actual I/O) to no more than 70% CPU (around 1.5M IOPS).

FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Shared Memory Feature

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Shared Memory Feature
Date: 21 Dec, 2023
Blog: Facebook
... work in the 70s was to have read/only executable libraries in shared memory ... the tricky part was to have multiple different versions running concurrently, with application programs built for a specific version being directed to the appropriate version. The original 370 virtual memory architecture had "segment protect" defined in each virtual address space's segment table entries (a bit in the STE that pointed to the shared segment's page table) .... it could even allow some address spaces to be r/o for a segment while other virtual address spaces were r/w for the same (shared) segment (concurrently; sort of state of the art on some platforms from the 60s) ... and that was how VM370 was being written and tested on early engineering 370s that implemented the full 370 virtual memory architecture.
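
A conceptual sketch of that arrangement (field names and layout are my own illustration, not the real 370 segment/page table formats): each address space has its own segment table entry pointing at a shared page table, and the protect bit lives in the per-space STE, so one space can be r/o and another r/w for the same shared segment.

    # one shared page table, referenced from several address spaces
    shared_page_table = {"pages": ["frame7", "frame9", "frame12"]}   # illustrative

    def segment_entry(page_table, read_only):
        # per-address-space STE: pointer to the (shared) page table plus
        # a per-space protect bit
        return {"page_table": page_table, "protect": read_only}

    addr_space_A = [segment_entry(shared_page_table, read_only=True)]    # r/o view
    addr_space_B = [segment_entry(shared_page_table, read_only=False)]   # r/w view

    # both spaces resolve to the same real frames; stores are only allowed
    # where the per-space protect bit is off
    assert addr_space_A[0]["page_table"] is addr_space_B[0]["page_table"]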

Then the 370/165 engineers started whining that if they had to implement the full 370 architecture ... it would slip the announce by six months .... the POK favorite son operating system (MVT/VS2) couldn't see any need for the full architecture ... so the decision was made to drop back to the 165 subset (which included dropping segment protect) ... and all the other models that had already implemented the full architecture had to drop the extra features, and any software already written for the dropped features had to be rewritten (VM370 had to resort to a real kludge to simulate r/o shared segments). Remember, the original decision to add virtual memory to all 370s was to offset some of the MVT storage management problems, and they were still wrapped around storage keys, which wasn't a fully shareable paradigm.

The original SQL/relational (System/R) in the 70s was implemented on vm/370 with the original 370 virtual memory architecture in mind ... the client virtual address spaces could have shared code & data that was read/only ... while among the server virtual address spaces, some could have the same shared code & data read/write while others might only have it read/only. Was able to do tech transfer (under the radar, while the company was preoccupied with the next great DBMS "EAGLE") to Endicott for SQL/DS. Later when "EAGLE" imploded there was a request for how fast "System/R" could be ported to MVS (eventually announced as DB2).

I had done a page-mapped filesystem for the original CP67/CMS with a lot of virtual memory sharing features ... which I then ported to VM370/CMS; some posts
https://www.garlic.com/~lynn/submain.html#mmap
related posts about allowing shared segments to appear at different addresses concurrently in different virtual address spaces
https://www.garlic.com/~lynn/submain.html#adcon
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

some posts mentioning the 165 needing to cut back to a subset of the full 370 virtual memory architecture (as well as the motivation for adding virtual memory to all 370s).
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2018e.html#95 The (broken) economics of OSS

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Performance Analysis

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Performance Analysis
Date: 22 Dec, 2023
Blog: Facebook
turn of the century, I was brought in to look at the performance of a financial outsourcer's large mainframe datacenter that handled all aspects of credit cards for half of all accounts in the US ... a 450K statement cobol application that ran on 40+ max. configured mainframe systems with accounts partitioned across the systems (the number of systems needed to finish settlement in the overnight batch windows). They had a large performance group that had been managing performance and throughput for decades ... but possibly got somewhat myopic with the technology they were using ... I used some totally different technology (that came from the science center in the late 60s and early 70s ... some of which led up to capacity planning) and found a 14% improvement. No system was older than 18 months ... constantly doing rolling upgrades (representing a significant percentage of ibm mainframe hardware business at the time).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some recent posts mentioning large mainframe datacenter
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023c.html#99 Account Transaction Update
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#54 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs

--
virtualization experience starting Jan1968, online at home since Mar1970

MVS/TSO and VM370/CMS Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVS/TSO and VM370/CMS Interactive Response
Date: 22 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#71 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#72 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#73 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#74 MVS/TSO and VM370/CMS Interactive Response

the communication group was fighting the release of the (vm370) mainframe tcp/ip product ... then possibly some influential customers got that changed ... then the communication group changed their strategy, and since the communication group had corporate strategic responsibility for everything that crosses datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. It gets worse with the MVS port, which is done by simulating some vm/370 diagnose instructions.

I then add RFC1044 support and, in some tuning tests at Cray Research between a Cray and an IBM 4341, get sustained channel throughput using only a modest amount of the 4341 processor (something like a 500 times increase in bytes moved per instruction executed). Later in the early 90s, the communication group hires a silicon valley contractor to implement TCP/IP support directly in VTAM. What he initially demos runs much faster than LU6.2. He is then told that everybody knows a proper TCP/IP implementation runs much slower than LU6.2, and they will only be paying for a proper implementation.

rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
communication group fighting off client/server and distributed
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

Shared Memory Feature

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Shared Memory Feature
Date: 23 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#86 Shared Memory Feature

My big problem doing most of that stuff (file caching, shareable from the filesystem image, etc) back in 1970 for the memory-mapped CP67/CMS filesystem was that CMS used OS/360 assembler & compilers, which implemented "relocatable" with RLD entries that were processed at load time, updating all adcons ... for CMS that meant relocation had to be done as part of loading, so the executable was modified as part of loading and therefore not directly r/o shareable ... or else the executable was a MODULE memory image, after all the addresses had been updated, and the shared image was no longer relocatable. I had to do some extreme hacks to make the same CMS page-mapped filesystem memory-image shared executable run at arbitrary different addresses concurrently in different address spaces (aka the VM/370 release 3 extreme subset of shared executable meant that each image had to have a global system-wide unique fixed virtual address ... and/or certain shared executable images were assigned the same virtual address and couldn't be loaded & run concurrently). One of the good things about TSS/360 was that its assembler and compilers generated executables that could be shared and loaded at arbitrary virtual addresses w/o requiring any modification at load time (unlike the OS/360 RLD address modification convention).
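
A toy illustration of the two conventions (Python standing in for the loaders; the "image" here is just a list of words, not real OS/360 or TSS/360 formats):

    # OS/360-style: adcons hold absolute addresses; RLD entries list which
    # words must be patched with the load address, so the loaded image
    # differs per load address and can't be one shared r/o page-mapped copy
    def os360_load(image, rld_offsets, load_addr):
        loaded = list(image)                        # image gets modified
        for off in rld_offsets:
            loaded[off] = image[off] + load_addr    # patch each adcon
        return loaded

    # TSS/360-style convention: the image keeps only offsets and resolves
    # addresses at run time relative to a base, so the image is identical
    # at every load address and one shared r/o copy works everywhere
    def resolve_at_runtime(image, off, load_addr):
        return load_addr + image[off]               # image untouched

    image = [0, 8, 16]                  # pretend offsets/adcons in the module
    print(os360_load(image, rld_offsets=[1, 2], load_addr=0x20000))
    print(resolve_at_runtime(image, off=1, load_addr=0x20000))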

OS/360 RLD adcon hacks for shared executable
https://www.garlic.com/~lynn/submain.html#adcon
CMS paged-mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

Has anybody worked on SABRE for American Airlines

From: Lynn Wheeler <lynn@garlic.com>
Subject: Has anybody worked on SABRE for American Airlines
Date: 25 Dec, 2023
Blog: Facebook
my wife did a short stint as chief architect for Amadeus (the EU system built off the old Eastern System One). She sided with the EU about x.25 (instead of SNA) and the communication group got her replaced. It didn't do them much good; the EU went with X.25 anyway ... and the EU replaced the communication group's replacement.

A couple years after leaving IBM (in 1994), I was brought into SABRE to look at the ten impossible things they couldn't do. Started with "ROUTES" (25% of processing CPU). They gave me a complete softcopy of OAG (all flt segments for all scheduled commercial flts in the world). After a month I had it reimplemented on RS6000 running 100 times faster. After another month I had all the impossible things ... but transactions only 10 times faster (however, multiple old transactions had been collapsed into a single new transaction, so elapsed time was much better than just 10 times faster). Could show it could handle every ROUTES request in the world for every airline on a cluster of ten RS6000/990s.

1993 industry standard benchmark ... number of program iterations compared to 370/158
RS6000/990 : 126MIPS (ten 990s: 1.26BIPS)
eight processor ES/9000-982 : 408MIPS


I claimed SABRE embodied 60s technology trade-offs and that, starting from scratch, I could make totally different trade-offs, including reducing the staff required by an order of magnitude. That raised some internal issues and I wasn't allowed to look at "FARES".

some past posts referring to acp/tpf/sabre, amadeus, oag, routes
https://www.garlic.com/~lynn/2023g.html#13 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2023d.html#80 Airline Reservation System
https://www.garlic.com/~lynn/2023c.html#8 IBM Downfall
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022h.html#10 Google Cloud Launches Service to Simplify Mainframe Modernization
https://www.garlic.com/~lynn/2022c.html#76 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Christmas Tree Exec, Email, Virus, and phone book

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Christmas Tree Exec, Email, Virus, and phone book
Date: 25 Dec, 2023
Blog: Facebook
2007 archived post, an attempt to reproduce the blinking colored lights (1981 xmas, 3279, rex>=2.50 and fsx, before rex was renamed and released to customers) ... in html
https://www.garlic.com/~lynn/2007v.html#54

social engineering: it required the receiver to execute the program. jan1996, MSDC at Moscone center had "Internet" banners everywhere ... but the constant refrain in every session was "protect your investment" ... aka visual basic and its automatic execution in data files (even email) ... which shortly gave rise to the explosion in viruses. The counter was searching incoming files for signatures of known virus patterns .... now grown to millions.

some christma exec posts
https://www.garlic.com/~lynn/2022h.html#72 The CHRISTMA EXEC network worm - 35 years and counting!
https://www.garlic.com/~lynn/2018.html#21 IBM Profs
https://www.garlic.com/~lynn/2017g.html#67 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017e.html#90 Ransomware on Mainframe application ?
https://www.garlic.com/~lynn/2017e.html#47 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2015c.html#81 On a lighter note, even the Holograms are demonstrating

Late 70s, we would have fridays after work and one of the things discussed was how to get employees to use computers ... one of the things we came up with was online telephone directories ... Jim Gray and I would each spend one person-week on it: Jim would write the lookup program (objective was a second or two to search tens/hundreds of thousands of names, faster than reaching for the paper book and doing a manual lookup) and I would do the procedures to collect machine readable, softcopy versions of various company directories from various locations, convert to phone book format, and distribute. Jim did a radix search of the sorted last names (using the distribution of the 1st two letters, so rather than binary search, it probes based on the calculated probability of where that name should be in the file).
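
A minimal sketch of that style of lookup (my own reconstruction of the idea, not Jim's code): estimate where the name should fall from the distribution of its leading letters and probe there, rather than always probing the middle as binary search does.

    def name_key(s, width=4):
        # map a name prefix to an integer so positions can be interpolated
        s = (s.lower() + "a" * width)[:width]
        k = 0
        for ch in s:
            k = k * 26 + (ord(ch) - ord("a") if "a" <= ch <= "z" else 0)
        return k

    def probability_search(names, target):
        # interpolation-style search over a sorted list of last names
        lo, hi = 0, len(names) - 1
        t = name_key(target)
        while lo <= hi and name_key(names[lo]) <= t <= name_key(names[hi]):
            klo, khi = name_key(names[lo]), name_key(names[hi])
            pos = lo if khi == klo else lo + (hi - lo) * (t - klo) // (khi - klo)
            if names[pos] == target:
                return pos
            if names[pos] < target:
                lo = pos + 1
            else:
                hi = pos - 1
        return -1

    names = sorted(["baker", "gray", "jones", "obrien", "smith", "wheeler"])
    print(probability_search(names, "gray"))      # found in one or two probes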

I then started collecting email addresses and merging them into the phone book (before locations started maintaining it themselves). At one point, the executive I reported to was departing the company and asked me to send his goodbye email ... my finger slipped and I accidentally used the wrong email copy list, and it went out to over 25,000 (some people around the world had never heard of the executive ... at the time he was president of the workstation division).

a couple online phone book posts
https://www.garlic.com/~lynn/2018c.html#41 S/360 announce 4/7/1964, 54yrs
https://www.garlic.com/~lynn/2013f.html#65 Linear search vs. Binary search
https://www.garlic.com/~lynn/2005t.html#44 FULIST

--
virtualization experience starting Jan1968, online at home since Mar1970

Shared Memory Feature

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Shared Memory Feature
Date: 26 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#86 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#89 Shared Memory Feature

The adcon convention was an os/360 convention from its pre-virtual-memory days; tss/360, designed for virtual memory, had a convention/design that addressed it. For assembler programs I could manually change the code to emulate the tss/360 convention ... at the same time I was "fixing" code to run in r/o shared segments.

They adopted a significant subset of my implementation for VM/370 release 3 DCSS w/o any of the CMS memory-mapped filesystem changes ... they got some cms code that was modified for shared segments and happened to also be fixed to be adcon-free ... but since they didn't include the cp changes, shared images still had to be defined at fixed, system-wide virtual addresses.

OS/360 RLD adcon hacks for shared executable
https://www.garlic.com/~lynn/submain.html#adcon
CMS paged-mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

Why Nations Fail

From: Lynn Wheeler <lynn@garlic.com>
Subject: Why Nations Fail
Date: 26 Dec, 2023
Blog: Facebook
"Why Nations Fail"
https://www.amazon.com/Why-Nations-Fail-Origins-Prosperity-ebook/dp/B0058Z4NR8/

original settlement, Jamestown ... the English planning on emulating the Spanish model, enslaving the local population to support the settlement. Unfortunately for them, the North American natives weren't as cooperative and the settlement nearly starved. Then they switched to sending over some of the other populations from the British Isles, essentially as slaves ... the English Crown charters had them as "leet-men" ... pg27:
The clauses of the Fundamental Constitutions laid out a rigid social structure. At the bottom were the "leet-men," with clause 23 noting, "All the children of leet-men shall be leet-men, and so to all generations."

... snip ...

My wife's father was presented with a set of 1880 history books for some distinction at West Point by the Daughters Of the 17th Century
http://www.colonialdaughters17th.org/

which refer to how, if it hadn't been for the influence of the Scottish settlers from the mid-atlantic states, the northern/english states would have prevailed and the US would look much more like England, with a monarch and strict class hierarchy. His Scottish ancestors came over after their clan was "broken". A Blackadder WW1 episode had "what does an Englishman do when he sees a man in a skirt? runs him through and nicks his land". Other history was that the Scots were so displaced that about the only thing left for the men was the military.

Also the English had been using Georgia as a penal colony ... but after US independence, they switched to Australia.

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

"Why Nations Fail" & "leet-men"
https://www.garlic.com/~lynn/2021f.html#98 No, the Vikings Did Not Discover America
https://www.garlic.com/~lynn/2019e.html#161 Fascists
https://www.garlic.com/~lynn/2019e.html#95 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#10 The 1619 Project
https://www.garlic.com/~lynn/2019.html#44 People are Happier in Social Democracies Because There's Less Capitalism
https://www.garlic.com/~lynn/2019.html#40 Indian Wars
https://www.garlic.com/~lynn/2018f.html#9 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018d.html#95 More Immigration
https://www.garlic.com/~lynn/2018c.html#52 We the Corporations: How American Businesses Won Their Civil Rights
https://www.garlic.com/~lynn/2018b.html#45 More Guns Do Not Stop More Crimes, Evidence Shows
https://www.garlic.com/~lynn/2017j.html#68 The true story behind Thanksgiving is a bloody struggle that decimated the population and ended with a head on a stick
https://www.garlic.com/~lynn/2017i.html#40 Equality: The Impossible Quest
https://www.garlic.com/~lynn/2017f.html#10 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017.html#55 Comanche Empire
https://www.garlic.com/~lynn/2017.html#32 Star Trek (was Re: TV show Mannix observations)
https://www.garlic.com/~lynn/2017.html#12 Separation church and state
https://www.garlic.com/~lynn/2016e.html#123 E.R. Burroughs
https://www.garlic.com/~lynn/2016c.html#38 Qbasic
https://www.garlic.com/~lynn/2015b.html#62 Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2015.html#29 the previous century, was channel islands, definitely not the location of LEO
https://www.garlic.com/~lynn/2014m.html#84 LEO
https://www.garlic.com/~lynn/2014e.html#61 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2012o.html#71 Is orientation always because what has been observed? What are your 'direct' experiences?
https://www.garlic.com/~lynn/2012l.html#17 Cultural attitudes towards failure
https://www.garlic.com/~lynn/2012k.html#7 Is there a connection between your strategic and tactical assertions?
https://www.garlic.com/~lynn/2012h.html#15 Imbecilic Constitution
https://www.garlic.com/~lynn/2012e.html#31 PC industry is heading for more change

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downturn and Downfall

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downturn and Downfall
Date: 27 Dec, 2023
Blog: Facebook
Early 70s, Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy. Two decades later IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.

A decade later, early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. I was also blamed for online computer conferencing on the internal network (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me) ... including long dissertations on what top executives were doing wrong (a decade later IBM has one of the largest losses in the history of US companies).

1989/1990 the commandant of the Marine Corps leverages Boyd for a make-over of the Corps (at a time when IBM was in desperate need of make-over)

1992, IBM has one of the largest losses in history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex, who (mostly) reverses the breakup (although it wasn't long before the disk division, and others, are gone).

Longer tome from mid-2022 (30yrs later, on the 70s&80s)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage 370 Clones, Amdahl, Fujitsu, Hitachi

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage 370 Clones, Amdahl, Fujitsu, Hitachi
Date: 28 Dec, 2023
Blog: Facebook
Amdahl was selling into the tech/univ market ... the following has the story about the 1st Amdahl order in the true-blue commercial market
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

recent posts mentioning Amdahl
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#11 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#23 Vintage 3081 and Water Cooling
https://www.garlic.com/~lynn/2023g.html#30 Vintage IBM OS/VU
https://www.garlic.com/~lynn/2023g.html#42 IBM Koolaid
https://www.garlic.com/~lynn/2023g.html#44 Amdahl CPUs
https://www.garlic.com/~lynn/2023g.html#46 Amdahl CPUs
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#56 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#58 Multiprocessor
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#95 Vintage 370 Clones, Amdahl, Fujitsu, Hitachi

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage 370 Clones, Amdahl, Fujitsu, Hitachi

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage 370 Clones, Amdahl, Fujitsu, Hitachi
Date: 28 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#95 Vintage 370 Clones, Amdahl, Fujitsu, Hitachi

1980, all ibm microcoded machines were going to transition from a large number of different CISC microprocessors to 801/risc ... in part to have pl.8 as a common programming language (low/midrange 370, s/38, controllers, etc). For various reasons they all floundered and returned to cisc business as usual (and some number of risc engineers left for other vendors).

... except the ROMP 801/risc, which was to be for the displaywriter follow-on ... when that was canceled ... they retargeted it for unix, the pc/rt ... then multi-chip RIOS for rs/6000. Then next was single chip aim/somerset with a 64bit follow-on ... and rochester got involved to move as/400 off CISC to RISC.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

posts mentioning s38/as400, 801/risc, somerset
https://www.garlic.com/~lynn/2021d.html#47 Cloud Computing
https://www.garlic.com/~lynn/2019c.html#2 S/38, AS/400
https://www.garlic.com/~lynn/2018d.html#69 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2014b.html#68 Salesmen--IBM and Coca Cola
https://www.garlic.com/~lynn/2013n.html#95 'Free Unix!': The world-changing proclamationmade30yearsagotoday
https://www.garlic.com/~lynn/2013f.html#29 Delay between idea and implementation
https://www.garlic.com/~lynn/2013b.html#3 New HD
https://www.garlic.com/~lynn/2012d.html#23 IBM cuts more than 1,000 U.S. Workers
https://www.garlic.com/~lynn/2012.html#90 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011p.html#75 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011p.html#35 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011l.html#15 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011h.html#35 Happy 100th Birthday, IBM!
https://www.garlic.com/~lynn/2010j.html#1 History: Mark-sense cards vs. plain keypunching?
https://www.garlic.com/~lynn/2010h.html#12 OS/400 and z/OS
https://www.garlic.com/~lynn/2009q.html#79 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2008r.html#43 another one biting the dust?
https://www.garlic.com/~lynn/2008h.html#40 3277 terminals and emulators
https://www.garlic.com/~lynn/2007q.html#48 IBM System/3 & 3277-1
https://www.garlic.com/~lynn/2007f.html#27 The Perfect Computer - 36 bits?

--
virtualization experience starting Jan1968, online at home since Mar1970

Shared Memory Feature

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Shared Memory Feature
Date: 28 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#86 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#89 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#92 Shared Memory Feature

The standard industry MIPS benchmark has been executing a specific program and comparing the number of iterations/sec to a 370/158 (assumed to be one MIPS) ... not an actual count of instructions. Numbers for z-mainframes had been available ... however more recently published numbers have tended to be percent change since the prior generation, i.e.


z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
• pubs say z15 1.25 times z14 (1.25*150BIPS or 190BIPS)
z16, 200 processors, 222BIPS* (1111MIPS/proc), Sep2022
• pubs say z16 1.17 times z15 (1.17*190BIPS or 222BIPS)

Z196/jul2010, 50BIPS, 625MIPS/processor
Z16/sep2022, 222BIPS, 1111MIPS/processor

12yrs, Z196->Z16, 222/50=4.4times


In 2010, server blades were 10 times a max configured z196; by 2022 the blades have improved more like ten times (vs 4.4 times for the mainframe) ... so they could be more like 20+ times a max configured z16 ... and a megadatacenter will have half million or more of these server blades (megadatacenters have the processing equivalent of a few million max configured mainframes).

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

1980, STL (now SVL) was busting at the seams and was moving 300 people from the IMS group to an offsite bldg. I get con'ed into doing channel extender support ... allowing placement of channel-attached 3270 controllers at the offsite bldg ... so they have the same human factors offsite as in STL.

The hardware vendor then tries to get IBM to release my support ... but there is a group in POK playing with some serial stuff who were concerned that if it was in the market, it would be harder to get their stuff released ... and they get it blocked.

1988, IBM branch office asks me to help LLNL (national lab) get some stuff they are playing with standardized ... which quickly becomes Fibre Channel Standard ("FCS", including some stuff I had done in 1980), initially 1gbit, full duplex, 200mbyte/sec aggregate.

Then after a decade, POK gets their stuff released with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec).

Then some POK engineers become involved with FCS and define a heavyweight protocol that significantly reduces the native throughput; eventually released as FICON (running over FCS). The most recent public numbers are the z196 "Peak I/O" benchmark getting 2M IOPS using 104 FICON (running over 104 FCS). About the same time an FCS is announced for server blades claiming over a million IOPS (two such FCS having higher throughput than all 104 FICON).

Aggravating things are 1) requirements for CKD disks, which haven't been made for decades and have to be simulated on industry standard fixed-block disks, and 2) recommendations keeping SAPs (system assist processors that do the actual I/O) to no more than 70% CPU (around 1.5M IOPS).
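
A rough illustration of those numbers (a sketch only; the IOPS figures are the published/claimed values quoted above and the 70% SAP figure is the recommendation, not a measurement):

# illustrative arithmetic on the figures above (not a benchmark)
ficon_links    = 104
peak_io_iops   = 2_000_000                    # z196 "Peak I/O" benchmark over 104 FICON
iops_per_ficon = peak_io_iops / ficon_links
print(iops_per_ficon)                         # ~19,230 IOPS per FICON (running over FCS)

native_fcs_iops = 1_000_000                   # FCS announced for server blades, claimed >1M IOPS
print(native_fcs_iops / iops_per_ficon)       # one native FCS ~ 52 FICON; two exceed all 104

# SAP recommendation: keep the system assist processors (which run the actual I/O)
# to no more than ~70% busy
print(0.70 * peak_io_iops)                    # ~1.4M IOPS (the text cites ~1.5M as the ceiling)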

trivia: the original 3380 (3mbyte/sec transfer) had 20 track spacings between each data track, then they cut spacing in half to double capacity and then cut spacing again for triple original 3380 capacity.

Around mid-80s, the father of risc/801 cons me into helping him with a "wide disk head" ... capable of parallel data transfer from/to 16 closely spaced data tracks (plus two servo-tracks, one on each side of the 16 data tracks). Problem was that it would be about 50mbytes/sec while IBM mainframe channels only supported 3mbytes/sec and even ESCON was only going to be 17mbytes/sec ... while HIPPI (standardized Cray 100mbyte/sec channel) and FCS would have handled it.
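
Simple arithmetic behind the mismatch, as a sketch (assuming roughly the 3380's 3mbyte/sec per data track; the text gives ~50mbytes/sec for the 16-track head):

# per-track rate assumed ~3 mbyte/sec (3380-class); text quotes ~50 mbytes/sec total
tracks, per_track_mb = 16, 3.0
wide_head_mb = tracks * per_track_mb
print(wide_head_mb)                           # ~48, i.e. roughly 50 mbytes/sec

for channel, mb in [("370 channel", 3), ("ESCON", 17),
                    ("HIPPI", 100), ("FCS 1gbit (each direction)", 100)]:
    print(channel, "ok" if mb >= wide_head_mb else "too slow")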

I also had the HSDT project (T1 and faster computer links), was working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers; then congress cuts the budget, some other things happen and then they release an RFP (in part based on what we already had running). From 28Mar1986 Preliminary Announcement:

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

Earlier in the 80s, had T1 satellite link between Los Gatos VLSI lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston that had a bunch of Floating Point Systems boxes (the latest ones had 40mbyte/sec disk arrays).
https://en.wikipedia.org/wiki/Floating_Point_Systems

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Shared Memory Feature

From: Lynn Wheeler <lynn@garlic.com>
Subject: Shared Memory Feature
Date: 28 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#86 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#89 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#92 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#97 Shared Memory Feature

.... it was rewriting quite a bit of (assembler) code for r/o shared segments with no RLDs, i.e. location-independent code so the same shared segments could appear concurrently in different virtual address spaces at different virtual addresses.

.... old archived post on the subject
https://www.garlic.com/~lynn/2001f.html#9
including email exchange with author of IOS3270, FULIST, and BROWSE about adapting to r/o shared segment with no adcons (address free)
https://www.garlic.com/~lynn/2001f.html#email781010
https://www.garlic.com/~lynn/2001f.html#email781011
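
Not the actual assembler changes, but a minimal sketch (hypothetical names, in Python) of the adcon-free idea: the shared segment carries only offsets, and every reference is computed at execution time from a base obtained at run time, so the identical read-only pages work no matter what virtual address each address space has the segment mapped at.

# minimal sketch (hypothetical, in Python) of adcon-free / location-independent code:
# the shared segment stores only offsets; each address space maps it at a different
# virtual address, and every reference is computed as base + offset at execution time
SHARED_SEGMENT = {           # identical read-only content shared by all users
    "entry_lookup": 0x0100,  # offset of a routine within the segment
    "msg_table":    0x0800,  # offset of a data area within the segment
}

def resolve(base, name):
    # done at execution time; nothing is ever stored back into the shared pages
    return base + SHARED_SEGMENT[name]

space_a_base = 0x00E00000    # where one virtual address space has the segment mapped
space_b_base = 0x00400000    # where another address space has the same segment mapped

print(hex(resolve(space_a_base, "entry_lookup")))   # 0xe00100
print(hex(resolve(space_b_base, "entry_lookup")))   # 0x400100
# with embedded adcons, an absolute address would have been relocated (via RLDs) into
# the shared read-only pages at load time -- right for one address space, wrong for the other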

no RLDs, no adcon posts
https://www.garlic.com/~lynn/submain.html#adcon
paged mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

VM Mascot

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM Mascot
Date: 28 Dec, 2023
Blog: Facebook
VM Mascot
https://en.wikipedia.org/wiki/VM_(operating_system)#VM_mascot

In the early 1980s, the VM group within SHARE (the IBM user group) sought a mascot or logo for the community to adopt. This was in part a response to IBM's MVS users selecting the turkey as a mascot (chosen, according to legend, by the MVS Performance Group in the early days of MVS, when its performance was a sore topic). In 1983, the teddy bear became VM's de facto mascot at SHARE 60, when teddy bear stickers were attached to the nametags of "cuddlier oldtimers" to flag them for newcomers as "friendly if approached". The bears were a hit and soon appeared widely.[32] Bears were awarded to inductees of the "Order of the Knights of VM", individuals who made "useful contributions" to the community.[33][34]

... snip ...

Knights
http://mvmua.org/knights.html
Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist

23jun1969 unbundling announce included starting to charge for software (but made the case that kernel software should still be free). Then in the early 70s during the FS period (totally different from 370 and going to completely replace it), internal politics was killing off 370 efforts (the lack of new 370 during the period is credited with giving clone 370 system makers their market foothold). When FS imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. With the rise of the clone 370 makers, there was a decision to transition to charging for kernel software, starting with incremental add-ons (when I joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters, some of which was chosen to be the guinea pig for charged-for incremental kernel add-ons). By the early 80s, the transition was complete and the full kernel was charged for ... and the start of the "OCO-wars" (customers complaining about the "object code only" decision).

TYMSHARE started offering their CMS-based online computer conferencing system (precursor to social media) "free" to SHARE as VMSHARE in Aug1976 ... "OCO-wars" references in archive
http://vm.marist.edu/~vmshare

23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundling
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

some posts mentioning had to include joke in the code
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2015e.html#5 Remember 3277?
https://www.garlic.com/~lynn/2006x.html#10 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2002k.html#66 OT (sort-of) - Does it take math skills to do data processing ?

--
virtualization experience starting Jan1968, online at home since Mar1970

VM Mascot

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM Mascot
Date: 29 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#99 VM Mascot
recent reference
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe

trivia: after FS implodes, the head of POK also convinces corporate to kill VM370, shut down the development group and transfer all the people to POK for MVS/XA (supposedly necessary to ship MVS/XA on schedule, part of avoiding the MVS implosion with the MVS kernel & CSA consuming all of applications' 16mbyte virtual address spaces). Endicott eventually manages to save the VM370 product mission, but had to reconstitute a development group from scratch ... but the MVS group was bullying internal datacenters that they needed to migrate from VM370 to MVS since VM370 would no longer be supported at the high-end ... only MVS/XA. POK had done VMTOOL ... minimum virtual machine support for MVS/XA development (and never intended to ship to customers). However, later customers weren't converting to MVS/XA as planned ... similar to when customers weren't converting to MVS as planned:
http://www.mxg.com/thebuttonman/boney.asp

Aggravating the situation, Amdahl was deploying a (hardware/microcode) HYPERVISOR in the early 80s, allowing MVS & MVS/XA to be run concurrently ... seeing better success with moving customers to MVS/XA. Eventually an IBM decision was made to make VMTOOL available as VM/MA & VM/SF (migration aid, system facility). An Amdahl single processor machine also had about the same throughput as the two processor 3081K. IBM didn't respond to the Amdahl HYPERVISOR with PR/SM and LPAR on 3090 until almost the end of the decade.

Amdahl had done MACROCODE during the 3033 period ... 370-like instructions running in microcode mode, greatly simplifying and speeding up making microcode changes (responding to the series of new 3033 microcode functions required by MVS to operate) ... which eased the implementation of HYPERVISOR.

FS posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

VM Mascot

From: Lynn Wheeler <lynn@garlic.com>
Subject: VM Mascot
Date: 29 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#99 VM Mascot
https://www.garlic.com/~lynn/2023g.html#100 VM Mascot

SHARE '67 ... not 1967. CSC had gotten 360/40 and added virtual memory hardware and did CP40/CMS
https://www.garlic.com/~lynn/cp40seas1982.txt

then when 360/67 (standard with virtual memory) became available, CP40 morphs into CP67 (precursor to vm370).

I was an undergraduate at one of the places (UNIV) that had gotten a 360/67 for tss/360 but ran it as a 360/65 with os/360; the univ had hired me fulltime responsible for os/360 (univ shut down the datacenter on weekends and I had the whole place dedicated to myself, although 48hrs w/o sleep made monday morning classes hard).

CSC came out Jan1968 to install CP67 (3rd installation after Cambridge itself and MIT Lincoln Labs) ... and I mostly played with it during my weekend windows. CP67 was publicly announced at the 1968 spring SHARE in Houston and I was asked to attend. June 1968, CSC had a class at the Beverly Hills Hilton. I arrive Sunday and am asked to teach the class (the CSC people that were to teach the class had given notice on Friday to join NCSS, a commercial online CP67 service bureau). During spring 68, I had rewritten lots of CP67 pathlengths to improve OS/360 running in a virtual machine (bare machine 322secs elapsed, original in virtual machine 856secs with CP67 CPU 534secs, by June had CP67 CPU down to 113secs). Part of fall68 SHARE presentation
https://www.garlic.com/~lynn/94.html#18
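
Just to make the arithmetic explicit (using only the figures quoted above): the CP67 CPU time is the virtualization overhead added on top of the bare-machine run, and the spring-68 pathlength rewrites cut that overhead by roughly 80%.

# figures quoted above for the OS/360 job stream
bare_elapsed    = 322        # secs, bare machine
virtual_elapsed = 856        # secs, originally under CP67
cp67_cpu_before = 534        # secs of CP67 CPU, i.e. the virtualization overhead
cp67_cpu_after  = 113        # secs after the spring-68 pathlength rewrites

print(virtual_elapsed - bare_elapsed)            # 534 -- matches the CP67 CPU time
print(1 - cp67_cpu_after / cp67_cpu_before)      # ~0.79 -> roughly 80% of overhead removed
# assumption: if elapsed time tracked the CPU saved, the run would drop to ~435 secs
print(bare_elapsed + cp67_cpu_after)             # 435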

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Melinda history
https://www.leeandmelindavarian.com/Melinda#VMHist

--
virtualization experience starting Jan1968, online at home since Mar1970

VM Mascot

From: Lynn Wheeler <lynn@garlic.com>
Subject: VM Mascot
Date: 30 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#99 VM Mascot
https://www.garlic.com/~lynn/2023g.html#100 VM Mascot
https://www.garlic.com/~lynn/2023g.html#101 VM Mascot

.... other trivia: I had a long argument with the POK performance group about some tweaking they were doing to the virtual memory page replacement algorithm (for vs2/svs) ... that it was making the wrong choices (replacing non-changed pages before changed pages since it required less work; however, it meant that high-use shared kernel linkpack pages would be replaced before low-use application data pages) .... they eventually responded that it wouldn't make any difference since SVS was never expected to do more than 4-5 page I/Os a second. However, with the transition from SVS to MVS, workload and paging rates were increasing, and in the later 70s somebody in POK got a large reward for fixing it (raising the question of whether POK would purposefully do the wrong thing so they could later get awards for fixing it).
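
A minimal sketch (hypothetical page mix, in Python, not the VS2/SVS code) of why the bias goes wrong: choosing a non-changed page saves a page-out write, but it ignores how heavily the page is used, so a hot shared kernel linkpack page (never changed) gets stolen while a cold, changed private data page stays resident.

# hypothetical resident pages: (name, recently_referenced, changed)
frames = [
    ("kernel linkpack page",  True,  False),   # high-use, shared, never changed
    ("application data page", False, True),    # low-use, private, changed
]

def pick_victim_reference_based(frames):
    # replace a not-recently-referenced page first (roughly what clock/LRU-approx does)
    unreferenced = [f for f in frames if not f[1]]
    return (unreferenced or frames)[0]

def pick_victim_changed_bias(frames):
    # the VS2/SVS tweak described above: prefer non-changed pages (no page-out write
    # needed), regardless of how heavily the page is being used
    clean = [f for f in frames if not f[2]]
    return (clean or frames)[0]

print(pick_victim_reference_based(frames)[0])   # application data page (low use goes first)
print(pick_victim_changed_bias(frames)[0])      # kernel linkpack page (hot shared page stolen)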

page replacement posts
https://www.garlic.com/~lynn/subtopic.html#clock

some specific posts mentioning the vs2/svs tweak
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2021c.html#38 Some CP67, Future System and other history
https://www.garlic.com/~lynn/2021b.html#59 370 Virtual Memory
https://www.garlic.com/~lynn/2017j.html#84 VS/Repack
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2014i.html#96 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2014c.html#71 assembler
https://www.garlic.com/~lynn/2012p.html#4 Query for IBM Systems Magazine website article on z/OS community
https://www.garlic.com/~lynn/2012c.html#40 Where are all the old tech workers?
https://www.garlic.com/~lynn/2012c.html#17 5 Byte Device Addresses?
https://www.garlic.com/~lynn/2007p.html#74 GETMAIN/FREEMAIN and virtual storage backing up
https://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2007c.html#56 SVCs
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!

--
virtualization experience starting Jan1968, online at home since Mar1970

More IBM Downfall

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: More IBM Downfall
Date: 30 Dec, 2023
Blog: Facebook
little topic drift: early 80s, a co-worker left IBM SJR and was doing contracting work in silicon valley, lots of it for a major VLSI company. He did a lot of work on the AT&T C compiler (bug fixes and code optimization) getting it running on CMS ... and then ported a lot of the BSD chip tools to CMS. One day the IBM rep came through and asked him what he was doing ... he said ethernet support for using SGI workstations as graphical frontends. The IBM rep told him that instead he should be doing token-ring support or otherwise the company might not find its mainframe support as timely as it had been in the past. I then get an hour-long phone call listening to four letter words. The next morning the senior VP of engineering calls a press conference to say the company is completely moving off all IBM mainframes to SUN servers. There were then IBM studies of why silicon valley wasn't using IBM mainframes ... but they weren't allowed to consider branch office marketing issues.

maybe more than you want to know: Amdahl was doing ACS/360 ... folklore is that IBM execs shut it down because it would advance the state of the art too fast and IBM would lose control of the market; Amdahl leaves shortly later (some ACS/360 features show up with ES/9000 in the 90s).
https://people.computing.clemson.edu/~mark/acs_end.html

early 70s, IBM has the "Future System" effort, completely different from 370 and going to replace it; internal politics was shutting down 370 projects (the claim is the lack of new 370 during the period gave the clone makers/Amdahl their market foothold; the joke was that IBM sales/marketing had to seriously practice FUD) ... some FS:
http://www.jfsowa.com/computer/memo125.htm
when FS implodes there is mad rush to get stuff back into 370 product pipelines, including quick&dirty 3033&3081 efforts in parallel.

this tome from year ago, mentions 1st true-blue, commercial Amdahl order (had previously been selling into univ&tech markets):
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

A single 3081D processor was slower than a 3033 and the two processor 3081D was slower than a single processor Amdahl machine. They doubled the processor cache size for the 3081K, claiming 1.4 times the 3081D ... so the two processor aggregate was about the same as an Amdahl single processor ... however with lower MVS throughput, since IBM was claiming MVS two-processor throughput (with multiprocessor overhead) was only about 1.2-1.5 times that of a single processor. Initially they were only planning on offering multiprocessor 308x models ... but then were concerned that the airline ACP/TPF didn't have multiprocessor support and that market could all go to Amdahl. Eventually they offered the (single processor) 3083 (basically a 3081 with one of the processors removed).
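
Working through those ratios, an illustrative relative-throughput sketch using only the figures in the paragraph above (no absolute units implied):

# relative-throughput sketch from the ratios above (illustrative only)
d_uni      = 1.0                   # one 3081D processor
k_uni      = 1.4 * d_uni           # 3081K processor with doubled cache, ~1.4 x 3081D
k_two_hw   = 2 * k_uni             # two-processor 3081K raw aggregate
amdahl_uni = k_two_hw              # text: Amdahl single processor ~ two-processor 3081K

# IBM claim: MVS two-processor throughput only 1.2-1.5 times a single processor
for mp_factor in (1.2, 1.5):
    mvs_3081k = mp_factor * k_uni
    print(mp_factor, round(mvs_3081k, 2), round(mvs_3081k / amdahl_uni, 2))  # ~0.6-0.75 of Amdahl UP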

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some past posts referencing silicon valley not using IBM mainframes
https://www.garlic.com/~lynn/2023g.html#62 Silicon Valley Mainframes
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022h.html#40 Mainframe Development Language
https://www.garlic.com/~lynn/2022e.html#24 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#7 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#125 Google Cloud
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021h.html#69 IBM Graphical Workstation
https://www.garlic.com/~lynn/2021d.html#42 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021.html#77 IBM Tokenring
https://www.garlic.com/~lynn/2018f.html#49 PC Personal Computing Market
https://www.garlic.com/~lynn/2017g.html#12 Mainframe Networking problems
https://www.garlic.com/~lynn/2017.html#60 The ICL 2900
https://www.garlic.com/~lynn/2016g.html#68 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016g.html#53 IBM Sales & Marketing
https://www.garlic.com/~lynn/2016.html#42 1976 vs. 2016?
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013g.html#45 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013b.html#31 Ethernet at 40: Its daddy reveals its turbulent youth
https://www.garlic.com/~lynn/2011p.html#119 Start Interpretive Execution

some posts mentioning killing vm370, mvs/xa, Amdahl MACROCODE and HYPERVISOR
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes

--
virtualization experience starting Jan1968, online at home since Mar1970

More IBM Downfall

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: More IBM Downfall
Date: 30 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#103 More IBM Downfall

Something like Learson trying (& failing) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

2016, one of "The Boeing Century" articles was about how the merger with MD had nearly taken down Boeing and may yet still (an infusion of military industrial complex culture into a commercial operation)
https://issuu.com/pnwmarketplace/docs/i20160708144953115

The Coming Boeing Bailout?
https://mattstoller.substack.com/p/the-coming-boeing-bailout
Unlike Boeing, McDonnell Douglas was run by financiers rather than engineers. And though Boeing was the buyer, McDonnell Douglas executives somehow took power in what analysts started calling a "reverse takeover." The joke in Seattle was, "McDonnell Douglas bought Boeing with Boeing's money."

... snip ...

Crash Course
https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution
Sorscher had spent the early aughts campaigning to preserve the company's estimable engineering legacy. He had mountains of evidence to support his position, mostly acquired via Boeing's 1997 acquisition of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft plant in Long Beach and a CEO who liked to use what he called the "Hollywood model" for dealing with engineers: Hire them for a few months when project deadlines are nigh, fire them when you need to make numbers. In 2000, Boeing's engineers staged a 40-day strike over the McDonnell deal's fallout; while they won major material concessions from management, they lost the culture war. They also inherited a notoriously dysfunctional product line from the corner-cutting market gurus at McDonnell.

... snip ...

Boeing's travails show what's wrong with modern capitalism. Deregulation means a company once run by engineers is now in the thrall of financiers and its stock remains high even as its planes fall from the sky
https://www.theguardian.com/commentisfree/2019/sep/11/boeing-capitalism-deregulation

disclaimer: before I graduate, I had been hired fulltime into a small group in the Boeing CFO office to help with forming Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment) ... then when I graduate, I join the science center (instead of staying at Boeing).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some past posts mentioning MD take-over of Boeing
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021k.html#69 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#40 Boeing Built an Unsafe Plane, and Blamed the Pilots When It Crashed

some recent posts mentioning Boeing Computer Services:
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#42 IBM Koolaid
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#28 IBM FSD
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023f.html#115 IBM RAS
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023f.html#20 I've Been Moved
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy

--
virtualization experience starting Jan1968, online at home since Mar1970

VM Mascot

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM Mascot
Date: 30 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#99 VM Mascot
https://www.garlic.com/~lynn/2023g.html#100 VM Mascot
https://www.garlic.com/~lynn/2023g.html#101 VM Mascot
https://www.garlic.com/~lynn/2023g.html#102 VM Mascot

As an undergraduate in the 60s ... I rewrote a lot of CP/67 ... including the page replacement algorithm, and did dynamic adaptive resource management (dispatching, scheduling, page thrashing control, ordered arm seek queuing, pathlengths, etc). In the morph of CP67->VM370, lots of stuff was dropped and/or greatly simplified. Now one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters ... the world-wide online sales&marketing support HONE systems were long-time customers. I spent some amount of time in 1974 moving lots of CP67 into VM370.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

Then with FS and the rise of the clone 370 makers, followed by the FS implosion and the decision to transition to charging for kernel software (the 23june1969 decision had been to charge for software, but that kernel software should still be free), some amount of my stuff (including dynamic adaptive resource management) was selected to be the guinea pig for charged-for kernel add-ons (on the way to charging for all kernel software).

23june1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundling
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

A corporate guru (steeped in MVS & SRM) did a review and said he wouldn't sign off because everybody knew that manual tuning knobs were the state of the art (and I didn't have any). MVS SRM had a huge array of random tuning knobs and they were making presentations at SHARE about the effects that different (random?) combinations of values had on different (effectively static) workloads. I tried to explain dynamic adaptive but it fell on deaf ears. So I put in a few manual tuning knobs (all accessed by a "SRM" command ... in part ridiculing MVS) that were part of an Operations Research joke ... because the dynamic adaptive code had greater degrees of freedom than the manual tuning knobs ... and could offset any manually set value.
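
A toy sketch of the degrees-of-freedom joke (entirely hypothetical names and formula, nothing like the actual VM/370 scheduler internals): because the dynamic term is recomputed every interval from measured consumption against the fair-share target, a constant value dialed into a manual knob ends up being cancelled out in the long run.

# toy feedback model (hypothetical): dynamic term recomputed each interval from
# measured consumption vs the fair-share target; the manual knob adds a constant,
# but long-run consumption is driven back to the target regardless of the knob
def run(knob, target=0.10, intervals=40):
    dynamic_term, measured = 0.0, 0.0
    for _ in range(intervals):
        dynamic_term += target - measured              # feedback from measured consumption
        priority = dynamic_term + knob                 # the manual knob just adds a constant
        measured = max(0.0, min(1.0, 0.5 * priority))  # toy model: consumption follows priority
    return round(measured, 3)

for knob in (0.0, +0.4, -0.4):
    print(knob, run(knob))   # long-run consumption ends up at the target either way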

dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

After transferring to SJR in the 2nd half of the 70s, I was allowed to wander around IBM and non-IBM datacenters, including disk engineering & product test (bldgs 14&15) across the street. They were doing stand-alone mainframe testing, prescheduled, 7x24; they said that they had tried MVS but it had a 15min MTBF (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor making it bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. I then wrote some "internal-only" research reports and happened to mention the MVS MTBF ... bringing the wrath of the MVS organization down on my head. Later when 3880/3380 was about to ship, FE had 57 simulated errors (that they believed were likely to occur) and MVS was failing (requiring manual re-ipl) in all 57 cases (and in 2/3rds of the cases with no indication of what caused the failure) ... I didn't feel badly.

posts getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
email mentioning 57 errors
https://www.garlic.com/~lynn/2007.html#email801015

Jim Gray was aware of my paging algorithm work as an undergraduate in the 60s (completely different from the academic and ACM literature being done at the time). At Dec81 ACM SIGOPS, he asked if I could help a Tandem co-worker get his Stanford PhD (something similar to what I had done in the 60s) because forces from the 60s academic/ACM work were lobbying against it (I had detailed numbers comparing my stuff from the 60s to their stuff). I went to send the data, but executives blocked sending it for nearly a year (even though it was work as an undergraduate before joining IBM). However, I had been blamed for online computer conferencing on the internal network in the late 70s and early 80s (folklore is when the corporate executive committee was told, 5of6 wanted to fire me), and I hoped it was a form of punishment for online computer conferencing and not that they were meddling in an academic dispute.

paging algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

old archive posts with copy of response that i was allowed to send
https://www.garlic.com/~lynn/2006w.html#email821019

--
virtualization experience starting Jan1968, online at home since Mar1970

Shared Memory Feature

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Shared Memory Feature
Date: 30 Dec, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023g.html#86 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#89 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#92 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#97 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#99 Shared Memory Feature

As Future System was imploding I got asked to work on a 5-CPU 370/125 and 138/148 ECPS. The 115&125 had a 9 position memory bus for microprocessors ... the 115 had all microprocessors the same with different microcode loads (controllers, 370). The 125 was the same except the microprocessor running 370 microcode was 50% faster. No configuration ran with more than a few microprocessors. They wanted to have five microprocessors running 370, leaving four positions for controllers. It was canceled when Endicott started complaining that the five processor 125 would overlap the throughput of the 148 (in escalation meetings, I had to argue both sides).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
5-way 125 posts
https://www.garlic.com/~lynn/submain.html#bounce

Then I got corralled into working on a 16-way shared memory multiprocessor and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK it could be decades before the POK favorite son operating system (MVS) had effective 16-way support. Then some of us were invited to never visit POK again and the 3033 processor engineers were instructed to not be distracted. POK doesn't ship a 16-way machine until after the turn of the century (z900).

We had monthly user group meetings hosted by Stanford SLAC. SLAC also hosted the first US webserver on VM370
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
SLAC/Gustavson was involved with SCI and I was asked to participate ...
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface

it defined a memory bus with 64 positions. Convex (bought by HP) did 64 two-processor (snake/risc) shared-cache boards for a 128-way shared memory multiprocessor. Data General and Sequent did 64 four-processor (initially i486) shared-cache boards for 256-way shared memory multiprocessors ... I did some consulting for Steve Chen when he was CTO at Sequent ... before it was bought and shut down by IBM (note IBM earlier had been funding Chen when he founded Chen Supercomputing)
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
IBM, Chen & non-cluster, traditional supercomputer
https://techmonitor.ai/technology/ibm_bounces_steve_chen_supercomputer_systems

SMP, tightly-coupled, shared memory, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

Note: The last product we did at IBM was HA/CMP,
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

which started out as HA/6000 for the NYTimes to migrate their newspaper system (ATEX) off DEC VAXCluster to RS/6000 (Nick Donofrio had stopped by, my wife presented him five hand drawn charts and he approved the project and funding). I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) ... 128-way with FCS and multiple 64-way non-blocking FCS switches to a shared disk farm. Early Jan1992, in a meeting with the Oracle CEO, AWD/Hester tells them that we will have 16-system clusters by mid92 and 128-system clusters by ye92. During Jan92, I'm giving pitches to FSD about the HA/CMP work with national labs and end of Jan, FSD tells the IBM Kingston Supercomputer group that they are going with HA/CMP. Almost immediately cluster scale-up is transferred for announce as IBM cluster supercomputer for technical/scientific *ONLY* and we are told we can't work on anything with more than four processors (we leave IBM a few months later). Note the RS/6000 RIOS chip set didn't support the cache consistency needed for tightly-coupled shared memory multiprocessor (it wasn't until later that we got the capability for clusters of shared memory multiprocessors). Mixed in with all of this, mainframe DB2 was complaining that if we were allowed to proceed, it would be at least 5yrs ahead of them. Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
cluster supercomputer for technical/scientific *ONLY*
https://www.garlic.com/~lynn/2001n.html#6000clusters1
more news 11may1992, IBM "caught" by surprise
https://www.garlic.com/~lynn/2001n.html#6000clusters2

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

trivia: my wife was in the JES group and one of the catchers for ASP/JES3 and then was con'ed into going to POK to be responsible for loosely-coupled architecture, where she did "peer-coupled shared data" architecture. She didn't remain long because of 1) periodic battles with the communication group trying to force her into using VTAM for loosely-coupled operation, and 2) little uptake (until much later with sysplex and parallel sysplex) except for IMS hot-standby. She has a story about asking Vern Watts who he would ask for permission to do the implementation; he replied "nobody, I'll just tell them when it is all done"

peer-coupled shared data posts
https://www.garlic.com/~lynn/submain.html#shareddata

--
virtualization experience starting Jan1968, online at home since Mar1970

Cluster and Distributed Computing

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Cluster and Distributed Computing
Date: 31 Dec, 2023
Blog: Facebook
When I transferred to SJR in the 2nd half of the 70s, I got to wander around IBM and non-IBM datacenters in silicon valley, including disk engineering & product test (bldgs 14&15) across the street. They were running stand-alone, 7x24, prescheduled mainframe testing (they mentioned that they had tried MVS, but it had a 15min MTBF in that environment, requiring re-ipl). I offered to rewrite the I/O supervisor, making it bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. Product test got a very early engineering 3033 and 4341. Testing only took a percent or two of the 3033 CPU, so we scrounge a 3830 controller and 3330 string and put up our own private online service. The branch hears about the engineering 4341 and cons me into doing a benchmark for a national lab looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).

Starting in the early '80s, 4300s sold into the mid-range market against DEC VAX machines and in about the same numbers for small unit orders. The big difference was companies with large orders of hundreds of VM/4341s (sort of the leading edge of the coming distributed computing tsunami); inside IBM, conference rooms became scarce, being converted into VM/4341 rooms. MVS was looking at the exploding distributed computing market ... however the only new CKD disk was the datacenter 3380 (the mid-range was 3370 FBAs that MVS didn't support). Eventually they came out with the CKD 3375 (CKD simulation on 3370). However, it didn't do MVS much good; distributed VM/4341s had scores of systems per support person while MVS was scores of support people per system. Now no new CKD disks have been made for decades, all being simulated on industry standard fixed-block disks.

I was also working with Jim Gray and Vera Watson, doing some of the work on the original sql/relational implementation, System/R. Bank of America was an early installation, getting 60 VM/4341s for distributed database operation. At the time, the corporation was pre-occupied with the next new DBMS, "EAGLE", and we were able to do tech transfer ("under the radar") to Endicott for SQL/DS. Then when "EAGLE" implodes, they want to know how fast System/R could be ported to MVS, eventually released as DB2 (originally for decision support only).

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

addenda

usenet would have URLs to posts for references w/o having to duplicate the reference ... facebook hasn't gotten to that level (aka I was blamed for online computer conferencing ... precursor to modern social media ... on the IBM internal network in the late 70s and early 80s, >40yrs ago ... folklore is when the corporate executive committee was told, 5of6 wanted to fire me; one of the outcomes was official supported software and sanctioned internal IBM forums)

Mar/Apr '05 eserver magazine article about the postings, gone 404, but lives on at the wayback machine (before Facebook; trivia: when facebook 1st moved into silicon valley, it was into a new bldg built next door to the former US online sales&marketing support HONE datacenter ... one of the first and long-time customers, after I joined IBM, for my internal enhanced production operating systems)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
Mainframe Hall of Fame
https://www.enterprisesystemsmedia.com/mainframehalloffame

online computer conferencing
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

