List of Archived Posts

2025 Newsgroup Postings (10/06 - )

Mainframe and Cloud
Mainframe skills
PS2 Microchannel
Switching On A VAX
Mainframe skills
Kuwait Email
VM370 Teddy Bear
Mainframe skills
IBM Somers
IBM Interactive Response
IBM Interactive Response
Interesting timing issues on 1970s-vintage IBM mainframes
Interesting timing issues on 1970s-vintage IBM mainframes
IBM CP67 Multi-level Update
IBM DASD, CKD and FBA
IBM DASD, CKD and FBA
CTSS, Multics, Unix, CSC
IBM DASD, CKD and FBA
IBM Mainframe TCP/IP and RFC1044
IBM Mainframe TCP/IP and RFC1044
IBM HASP & JES2 Networking
IBM Token-Ring
IBM Token-Ring
IBM Token-Ring
IBM Mainframe TCP/IP and RFC1044
Opel
Opel
Opel
IBM Germany
IBM Thin Film Disk Head
IBM Germany
IBM 3274/3278
What Is A Mainframe
What Is A Mainframe
Linux Clusters
Linux Clusters
Linux Clusters
Linux Clusters
Amazon Explains How Its AWS Outage Took Down the Web
Amazon Explains How Its AWS Outage Took Down the Web
IBM Boca and IBM/PCs
IBM 360/85
IBM 360/85
IBM 360/85
IBM SQL/Relational
IBM Think
IBM 360/85
IBM 360/85
The Weird OS Built Around a Database
IBM S/88
IBM Disks
IBM VTAM/NCP
IBM VTAM/NCP
IBM Downfall
IBM Workstations
IBM ACP/TPF
Tymshare
IBM 360/30 and other 360s
IBM 360/30 and other 360s
IBM 360/30 and other 360s
Doing both software and hardware
"Business Ethics" is Oxymoron
IBM Mainframe Projects
LSRAD Report
IBM Module Prefixes
Computing Power Consumption
IBM S/88
Mainframe to PC
Mainframe to PC
IBM CEO 1993
IBM CEO 1993
IBM 370 Virtual Memory
IBM 370 Virtual Memory
IBM 370 Virtual Memory
IBM 370 Virtual Memory
Interactive response
Boeing Computer Services
IBM I/O & DASD
IBM 360, Future System
IBM 8100, SNA, OSI, TCP/IP, Amadeus

Mainframe and Cloud

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe and Cloud
Date: 06 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#112 Mainframe and Cloud

... note online/cloud tends to have capacity much greater than avg use
... to meet peak on-demand use which could be an order of magnitude
greater. cloud operators had heavily optimized server blade systems
costs (including assembling their own systems for a fraction of brand
name servers) ... and power consumption was increasingly becoming a
major expense. There was then increasing pressure on makers of server
components to optimize power use as well as allowing power use to drop
to zero when idle ... but instant on to meet on-demand requirements.

large cloud operation can have a score (or more) of megadatacenters
around the world, each with half million or more server blades, and
each server blade with ten times the processing of a max configured
mainframe .... and enormous automation; a megadatacenter with 70-80
staff (upwards of 10,000 or more systems per staff). In the past there
were articles about being able to use a credit card to spin up
on-demand, for a couple of hrs, a cloud ("cluster") supercomputer (that
ranked in the top 40 in the world).
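
Back-of-envelope sketch (Python) of the scale numbers quoted above; the
specific values are just the ones mentioned in this post, not
measurements:

# rough arithmetic on the megadatacenter numbers quoted above
blades = 500_000            # "half million or more server blades"
staff = 75                  # "70-80 staff"
blade_vs_mainframe = 10     # each blade ~ten times a max configured mainframe

print(f"systems per staff: {blades / staff:,.0f}")   # ~6,700 at the low end
                                                     # ("upwards of 10,000" with more blades)
print(f"max-mainframe equivalents: {blades * blade_vs_mainframe:,.0f}")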

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

GML was invented at the IBM Cambridge Science Center in 1969 (about
the same time that the CICS product appeared) .... after a decade morphs
into ISO standard SGML and after another decade morphs into HTML at
CERN.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
CICS/BDAM posts
https://www.garlic.com/~lynn/submain.html#bdam

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe skills

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe skills
Date: 06 Oct, 2025
Blog: Facebook

In CP67->VM370 (after decision to add virtual memory to all 370s),
lots of stuff was simplified and/or dropped (including multiprocessor
support) ... adding virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73

Then in 1974 with a VM370R2, I start adding a bunch of stuff back in
for my internal CSC/VM (including kernel re-org for multiprocessor,
but not the actual SMP support). Then with a VM370R3, I add SMP back
in, originally for (online sales&marketing support) US HONE so
they could upgrade all their 168s to 2-CPU 168s (with a little sleight
of hand getting twice the throughput).

Then with the implosion of Future System
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
I get asked to help with a 16-CPU 370, and we con the 3033 processor
engineers into helping in their spare time (a lot more interesting
than remapping 168 logic to 20% faster chips). Everybody thought it
was great until somebody tells the head of POK that it could be
decades before POK favorite son operating system ("MVS") had
(effective) 16-CPU support (MVS docs at the time saying 2-CPU systems
had 1.2-1.5 times throughput of single CPU; POK doesn't ship 16-CPU
system until after turn of century). Then head of POK invites some of
us to never visit POK again and directs 3033 processor engineers heads
down and no distractions.

2nd half of 70s transferring to SJR on west coast, I worked with Jim
Gray and Vera Watson on original SQL/relational, System/R (done on
VM370); we were able to tech transfer ("under the radar" while
corporation was pre-occupied with "EAGLE") to Endicott for
SQL/DS. Then when "EAGLE" implodes, there was request for how fast
could System/R be ported to MVS ... which was eventually released as
DB2, originally for decision-support *only*.

I also got to wander around IBM (and non-IBM) datacenters in silicon
valley, including DISK bldg14 (engineering) and bldg15 (product test)
across the street. They were running pre-scheduled, 7x24, stand-alone
testing and had mentioned recently trying MVS, but it had 15min MTBF
(requiring manual re-ipl) in that environment. I offer to redo I/O
system to make it bullet proof and never fail, allowing any amount of
on-demand testing, greatly improving productivity. Bldg15 then gets
1st engineering 3033 outside POK processor engineering ... and since
testing only took percent or two of CPU, we scrounge up 3830
controller and 3330 string to setup our own private online
service. Then bldg15 also gets an engineering 4341 in 1978 and somehow
the branch office hears about it and in Jan1979 I'm con'ed into doing a 4341
benchmark for a national lab that was looking at getting 70 for
compute farm (leading edge of the coming cluster supercomputing
tsunami).

Decade later in 1988, got approval for HA/6000 originally for NYTimes
to port their newspaper system (ATEX) from DEC VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Ingres, Sybase, Informix that have DEC
VAXCluster support in same source base with UNIX). IBM S/88 Product
Administrator was also taking us around to their customers and also
had me write a section for corporate continuous availability strategy
document (it gets pulled when both Rochester/AS400 and POK/mainframe
complain).

Early Jan92 meeting with Oracle CEO, AWD executive Hester tells
Ellison that we would have 16-system clusters by mid92 and 128-system
clusters by ye92. Mid-jan92 convince FSD to bid HA/CMP for
gov. supercomputers. Late-jan92, cluster scale-up is transferred for
announce as IBM Supercomputer (for technical/scientific *only*) and we
were told we couldn't work on clusters with more than four systems (we
leave IBM a few months later).

Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to the
industry MIPS reference platform):

• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
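
A minimal sketch (Python) of the cluster arithmetic behind those 1993
numbers, using only the figures listed above:

# aggregate cluster throughput vs the 8-CPU mainframe, 1993 numbers above
es9000_982_mips = 408        # 8 CPUs at 51 MIPS/CPU
rs6000_990_mips = 126        # single RS6000/990

for systems in (16, 128):
    aggregate = systems * rs6000_990_mips
    print(f"{systems:3d} x RS6000/990 = {aggregate / 1000:.0f} BIPS "
          f"(~{aggregate / es9000_982_mips:.0f}x the ES/9000-982)")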

Former executive we had reported to, goes over to head up Somerset/AIM
(Apple, IBM, Motorola), single chip RISC with M88k bus/cache (enabling
clusters of shared memory multiprocessors)

i86 chip makers then do a hardware layer that translates i86
instructions into RISC micro-ops for actual execution (largely negating throughput
difference between RISC and i86); 1999 industry benchmark:

• IBM PowerPC 440: 1,000MIPS
• Pentium3: 2,054MIPS (twice PowerPC 440)

early numbers are actual industry benchmarks; later numbers use IBM
pubs giving percent change since the previous generation (MIPS/core
recomputed in the sketch after the list):

z900, 16 processors 2.5BIPS (156MIPS/core), Dec2000
z990, 32 cores, 9BIPS, (281MIPS/core), 2003
z9, 54 cores, 18BIPS (333MIPS/core), July2005
z10, 64 cores, 30BIPS (469MIPS/core), Feb2008
z196, 80 cores, 50BIPS (625MIPS/core), Jul2010
EC12, 101 cores, 75BIPS (743MIPS/core), Aug2012
z13, 140 cores, 100BIPS (710MIPS/core), Jan2015
z14, 170 cores, 150BIPS (862MIPS/core), Aug2017
z15, 190 cores, 190BIPS (1000MIPS/core), Sep2019
z16, 200 cores, 222BIPS (1111MIPS/core), Sep2022
z17, 208 cores, 260BIPS (1250MIPS/core), Jun2025
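
The MIPS/core figures in that list are just aggregate BIPS divided by
core count; a quick sketch (Python) reproducing them (the later entries
come from percent-change pubs, so a couple differ slightly from the
straight division):

# recompute MIPS/core from the aggregate BIPS and core counts listed above
z_systems = [
    ("z900", 16, 2.5), ("z990", 32, 9), ("z9", 54, 18), ("z10", 64, 30),
    ("z196", 80, 50), ("EC12", 101, 75), ("z13", 140, 100),
    ("z14", 170, 150), ("z15", 190, 190), ("z16", 200, 222), ("z17", 208, 260),
]
for name, cores, bips in z_systems:
    print(f"{name:5} {cores:3d} cores, {bips:5.1f} BIPS, "
          f"{bips * 1000 / cores:4.0f} MIPS/core")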

Also in 1988, the branch office asks if I could help LLNL (national
lab) standardize some serial stuff they were working with, which quickly
becomes the fibre-channel standard, "FCS" (not First Customer Ship),
initially 1gbit/sec transfer, full-duplex, aggregate 200mbyte/sec
(including some stuff I had done in 1980). Then POK gets some of their
serial stuff released with ES/9000 as ESCON (when it was already
obsolete, initially 10mbytes/sec, later increased to
17mbytes/sec). Then some POK engineers become involved with FCS and
define a heavy-weight protocol that drastically cuts throughput
(eventually released as FICON).

2010, a z196 "Peak I/O" benchmark released, getting 2M IOPS using 104
FICON (20K IOPS/FICON). About the same time an FCS is announced for
E5-2600 server blades claiming over a million IOPS (two such FCS having
higher throughput than 104 FICON). Also IBM pubs recommend that SAPs
(system assist processors that actually do I/O) be kept to 70% CPU (or
1.5M IOPS) and no new CKD DASD has been made for decades, all being
simulated on industry standard fixed-block devices. Note: 2010 E5-2600
server blade (16 cores, 31BIPS/core) benchmarked at 500BIPS (ten times
max configured Z196).
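
A small sketch (Python) of the I/O comparison above, using just the
quoted figures (2M IOPS over 104 FICON, and an FCS claiming over a
million IOPS):

# z196 "Peak I/O" vs a native FCS, numbers as quoted above
z196_peak_iops = 2_000_000
z196_ficon = 104
fcs_iops = 1_000_000        # claimed for one FCS on the E5-2600 server blade

iops_per_ficon = z196_peak_iops / z196_ficon
print(f"IOPS per FICON: {iops_per_ficon:,.0f}")                           # ~19,200 (quoted as 20K)
print(f"FICON needed to match one FCS: {fcs_iops / iops_per_ficon:.0f}")  # ~52
# ... so two such FCS out-perform all 104 FICON, as noted above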

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
original SQL/relational posts
https://www.garlic.com/~lynn/submain.html#systemr
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
Fibre Channel Standard ("FCS") &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
DASD, CKD, FBA, multi-track posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

PS2 Microchannel

From: Lynn Wheeler <lynn@garlic.com>
Subject: PS2 Microchannel
Date: 07 Oct, 2025
Blog: Facebook

IBM AWD had done their own cards for PC/RT workstation (16bit AT-bus),
including a token-ring 4mbit T/R card. For RS/6000 w/microchannel,
they were told they couldn't do their own cards, but had to use
(heavily performance kneecapped by the communication group) PS2
microchannel cards. It turned out that the PC/RT 4mbit token ring card
had higher card throughput than the PS2 16mbit token ring card (joke
that PC/RT 4mbit T/R server would have higher throughput than RS/6000
16mbit T/R server).

Almaden research had been heavily provisioned with IBM CAT wiring,
assuming 16mbit T/R use. However they found that 10mbit Ethernet LAN
(running over IBM CAT wiring) had lower latency and higher aggregate
throughput than 16mbit T/R LAN. Also $69 10mbit Ethernet cards had
much higher throughput than $800 PS2 microchannel 16mbit T/R cards
(joke that the communication group was trying to severely hobble
anything other than 3270 terminal emulation).

30yrs of PC market
https://arstechnica.com/features/2005/12/total-share/

Note above article makes reference to success of IBM PC clones
emulating IBM mainframe clones. Big part of the IBM mainframe clone
success was the IBM Future System effort in 1st half of 70s (going to
completely replace 370 mainframes, internal politics was killing off
370 efforts, and claims are that the lack of new 370s during the period
is what gave the 370 clone makers their market foothold)
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

PC/RT 4mbit T/R, PS2 16mbit T/R, 10mbit Ethernet
https://www.garlic.com/~lynn/2025d.html#81 Token-Ring
https://www.garlic.com/~lynn/2025d.html#8 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#2 Mainframe Networking and LANs
https://www.garlic.com/~lynn/2025c.html#114 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#88 IBM SNA
https://www.garlic.com/~lynn/2025c.html#56 IBM OS/2
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#41 SNA & TCP/IP
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring
https://www.garlic.com/~lynn/2024g.html#101 IBM Token-Ring versus Ethernet
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024e.html#64 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#76 Another IBM Downturn
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#83 IBM's Near Demise
https://www.garlic.com/~lynn/2023b.html#50 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#77 IBM/PC and Microchannel
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#18 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021d.html#15 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring

--
virtualization experience starting Jan1968, online at home since Mar1970

Switching On A VAX

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Switching On A VAX
Newsgroups: alt.folklore.computers
Date: Tue, 07 Oct 2025 16:49:38 -1000

Lars Poulsen <lars@beagle-ears.com> writes:

As someone who spent some time on async terminal drivers, for both
TTYs and IBM 2741-family terminals as well as in the communications
areas of OS/360 (minimally), Univac 1100 EXEC-8, RSX-11M, VMS and
embedded systems on PDP-11/10, Z80 and MC68000, I can maybe contribute
some perspective here.

In 60s, lots of science/technical places and univs were sold 360/67s (w/virtual
memory) for tss/360 ... but when tss/360 didn't come to production
... lots of places just used it as 360/65 with os/360 (Michigan and
Stanford wrote their own virtual memory systems for 360/67).

Some of the CTSS/7094 people went to the 5th flr to do Multics, others
went to the 4th flr to the IBM science center and did virtual
machines, internal network, lots of other stuff. CSC originally
wanted a 360/50 to do virtual memory hardware mods, but all the spare
50s were going to FAA/ATC and they had to settle for a 360/40 to modify
and did (virtual machine) CP40/CMS. Then when 360/67 standard with
virtual memory came available, CP40/CMS morphs into CP67/CMS.

Univ was getting 360/67 to replace 709/1401 and I had taken two credit
hr intro to fortran/computers; at end of semester was hired to rewrite
1401 MPIO for 360/30 (temporarily replacing 1401 pending 360/67). Within
a yr of taking intro class, the 360/67 arrives and I'm hired fulltime
responsible for OS/360 (Univ. shuts down the datacenter on weekends and
I got it dedicated, however 48hrs w/o sleep made monday classes hard).

Eventually CSC comes out to install CP67 (3rd after CSC itself and MIT
Lincoln Labs) and I mostly get to play with it during my 48hr weekend
dedicated time. I initially work on pathlengths for running OS/360 in
virtual machine. Test stream ran 322secs on real machine,
initially 856secs in virtual machine (CP67 CPU 534secs), after
a couple months I have reduced CP67 CPU from 534secs to 113secs. I
then start rewriting the dispatcher, (dynamic adaptive resource
manager/default fair share policy) scheduler, paging, adding ordered
seek queuing (from FIFO) and multi-page transfer channel programs
(from FIFO and optimized for transfers/revolution, getting 2301 paging
drum from 70-80 4k transfers/sec to channel transfer peak of 270). Six
months after univ initial install, CSC was giving one week class in
LA. I arrive on Sunday afternoon and am asked to teach the class; it
turns out that the people that were going to teach it had resigned the
Friday before to join one of the 60s CP67 commercial online spin-offs.
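
A toy sketch (Python) of why chained multi-page transfers per revolution
matter; the rotation rate and pages-per-revolution here are illustrative
assumptions (not 2301 specs), picked only to bracket the measured 70-80
vs ~270 transfers/sec mentioned above:

# toy model: paging-drum throughput, FIFO single transfers vs ordered,
# chained multi-page channel programs
def transfers_per_sec(revs_per_sec, pages_per_rev):
    # 4k page transfers/sec if each revolution moves pages_per_rev pages
    return revs_per_sec * pages_per_rev

revs_per_sec = 60        # assumed rotation rate, for illustration only
print("FIFO, ~1 page/rev      :", transfers_per_sec(revs_per_sec, 1.25), "pages/sec")  # ~75
print("chained, ~4.5 pages/rev:", transfers_per_sec(revs_per_sec, 4.5), "pages/sec")   # 270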

Original CP67 came with 1052 & 2741 terminal support with
automagic terminal identification, used SAD CCW to switch controller's
port terminal type scanner. Univ. had some number of TTY33&TTY35
terminals and I add TTY ASCII terminal support integrated with
automagic terminal type. I then wanted to have a single dial-in number
("hunt group") for all terminals. It didn't quite work, IBM had taken
short cut and had hard-wired line speed for each port. This kicks off
univ effort to do our own clone controller, built channel interface
board for Interdata/3 programmed to emulate IBM controller with the
addition it could do auto line-speed/(dynamic auto-baud). It was later
upgraded to Interdata/4 for channel interface with cluster of
Interdata/3s for port interfaces. Interdata (and later Perkin-Elmer)
was selling it as a clone controller and four of us get written up as
responsible for (some part of) the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

Univ. library gets an ONR grant to do online catalog and some of the
money is used for a 2321 datacell. IBM also selects it as betatest for
the original CICS product, and supporting CICS is added to my tasks.

Then before I graduate, I'm hired fulltime into a small group in the
Boeing CFO office to help with the formation of Boeing Computer
Services (consolidate all dataprocessing into independent business
unit). I think the Renton datacenter was the largest in the world (360/65s
arriving faster than they could be installed, boxes constantly staged
in hallways around machine room; joke that Boeing was getting 360/65s
like other companies got keypunch machines). Lots of politics between
Renton director and CFO, who only had a 360/30 up at Boeing field for
payroll (although they enlarge the machine room to install 360/67 for
me to play with when I wasn't doing other stuff). Renton did have a
(lonely) 360/75 (among all the 360/65s) that was used for classified
work (black rope around the area, heavy black felt draped over
console lights & 1403s with guards at perimeter when running
classified). After I graduate, I join IBM CSC in cambridge (rather
than staying with Boeing CFO).

One of my hobbies after joining IBM CSC was enhanced production
operating systems for internal datacenters. At one time had rivalry
with 5th flr over whether they had more total installations (internal,
development, commercial, gov, etc) running Multics or I had more
internal installations running my internal CSC/VM.

A decade later, I'm at SJR on the west coast and working with Jim Gray
and Vera Watson on the original SQL/relational implementation
System/R. I also had numerous internal datacenters running my internal
SJR/VM system ... getting .11sec trivial interactive system
response. This was at the time of several studies showing .25sec
response getting improved productivity.

The 3272/3277 controller/terminal had .089 hardware response (plus the
.11 system response resulted in .199 response, meeting the .25sec
criteria).  The 3277 still had a half-duplex problem: attempting to hit
a key at the same time as a screen write, the keyboard would lock and
would have to be stopped and reset. YKT was making a FIFO box available;
unplug the 3277 keyboard from the head, plug in the FIFO box and plug
the keyboard into the FIFO ... which avoided the half-duplex keyboard
lock.

Then IBM produced 3274/3278 controller/terminal where lots of
electronics were moved back into the controller, reducing the cost to
make the 3278, but significantly increasing coax protocol chatter
... driving hardware response up to .3-.5secs depending
on how much data was written to the screen. Letters complaining to the
3278 product administrator got back the response that the 3278 wasn't designed
for interactive computing ... but data entry.
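
The response arithmetic from the two paragraphs above, added up against
the .25sec studies (a sketch in Python, numbers as quoted in this post):

# total response seen by the user = system response + hardware response
system = 0.11                      # trivial interactive system response (this post)
hw_3277 = 0.089                    # 3272/3277 hardware response
hw_3278 = (0.3, 0.5)               # 3274/3278, depending on data written to screen
target = 0.25                      # threshold from the productivity studies

total_3277 = system + hw_3277
print(f"3272/3277: {total_3277:.3f}s "
      f"({'meets' if total_3277 <= target else 'misses'} {target}s)")
for hw in hw_3278:
    total = system + hw
    print(f"3274/3278: {total:.3f}s "
          f"({'meets' if total <= target else 'misses'} {target}s)")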

clone (pc ... "plug compatible") controller built w/interdata posts
https://www.garlic.com/~lynn/submain.html#360pcm
csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech
original sql/relational, "system/r" posts
https://www.garlic.com/~lynn/submain.html#systemr

some posts mentioning undergraduate work at univ & boeing
https://www.garlic.com/~lynn/2025d.html#112 Mainframe and Cloud
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025d.html#91 IBM VM370 And Pascal
https://www.garlic.com/~lynn/2025d.html#69 VM/CMS: Concepts and Facilities
https://www.garlic.com/~lynn/2025c.html#115 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#103 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#0 System Response

posts rementioning response
https://www.garlic.com/~lynn/2025d.html#102 Rapid Response
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2023d.html#27 IBM 3278
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015g.html#58 [Poll] Computing favorities
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers   Personal
https://www.garlic.com/~lynn/2013g.html#21 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012d.html#19 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2012.html#15 Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2012.html#13 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009q.html#72 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009q.html#53 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009e.html#19 Architectural Diversity
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2003k.html#22 What is timesharing, anyway?
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe skills

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe skills
Date: 08 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#1 Mainframe skills

re: bldg15 3033, during FS (that was going to completely replace 370),
internal politics was killing off 370 efforts and the lack of new 370s
during the period is credited with giving the 370 clone makers
(including Amdahl) their market foothold. When FS implodes there
was mad rush to get stuff back into the 370 product pipelines,
including kicking off quick&dirty 3033&3081 in parallel. For the 303x
channel director, they took a 158 engine with just the integrated
channel microcode (and no 370 microcode). A 3031 was two 158 engines,
one with just the channel microcode and a 2nd with just the 370
microcode. A 3032 was a 168 redone to use the 303x channel director. A
3033 started out as 168 logic remapped to 20% faster chips.

One of the bldg15 early engineering 3033 problems was that channel
director boxes would hang and somebody would have to manually re-impl
the hung channel director box. Discovered a variation on the missing
interrupt handler, where CLRCH done quickly for all six channel
addresses of the hung box ... would force the box to re-impl.

posts mentioning getting to play disk engineer in bldgs 14&15:
https://www.garlic.com/~lynn/subtopic.html#disk

posts mentioning 303x CLRCH force re-impl:
https://www.garlic.com/~lynn/2025d.html#58 IBM DASD, CKD, FBA
https://www.garlic.com/~lynn/2024b.html#48 Vintage 3033
https://www.garlic.com/~lynn/2024b.html#11 3033
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#91 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2021i.html#85 IBM 3033 channel I/O testing
https://www.garlic.com/~lynn/2021b.html#2 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2014m.html#74 SD?
https://www.garlic.com/~lynn/2011o.html#23 3270 archaeology
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2000c.html#69 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/97.html#20 Why Mainframes?

--
virtualization experience starting Jan1968, online at home since Mar1970

Kuwait Email

From: Lynn Wheeler <lynn@garlic.com>
Subject: Kuwait Email
Date: 09 Oct, 2025
Blog: Facebook

From long ago and far away; one of my hobbies after joining IBM was
enhanced production operating systems for internal datacenters and the
online sales&marketing HONE systems was (one of the 1st and) long time
customer; ... in the mid-70s, all the US HONE datacenters were
consolidated in Palo Alto (trivia: when FACEBOOK 1st moved into
silicon valley, it was into a new bldg built next door to the former
consolidated US HONE datacenter), then HONE systems started cropping
up all over the world.

Co-worker at the science center was responsible for the wide-area
network
https://en.wikipedia.org/wiki/Edson_Hendricks
reference by one of the CSC 1969 GML inventors
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

which morphs into the corporate internal network (larger than
arpanet/internet from just about the beginning until sometime mid/late
80s ... about the time it was forced to convert to SNA/VTAM);
technology also used for the corporate sponsored univ. BITNET
https://en.wikipedia.org/wiki/BITNET
also EARN in Europe
https://en.wikipedia.org/wiki/European_Academic_Research_Network#EARN

I also would visit various datacenters around silicon valley,
including TYMSHARE:
https://en.wikipedia.org/wiki/Tymshare
which started providing their CMS-based online computer conferencing
system in Aug1976, "free" to the mainframe user group SHARE as vmshare
... archives here
http://vm.marist.edu/~vmshare

I cut a deal with Tymshare to get monthly tape dump of all VMSHARE
files for putting up on internal network and systems (including
HONE). The following is email from IBM sales/marketing branch office
employee in Kuwait:


Date: 14 February 1983, 09:44:58 CET
From: Fouad xxxxxx
To:   Lynn Wheeler
Subject: VMSHARE registration

Hello , I dont know if you are the right contact , but I really dont
know whom to ask.

My customer wants to get into TYMNET and VMSHARE.

They have a teletype and are able to have a dial-up line to USA.
How can they get a connection to TYMNET and  a registration to VMSHARE.

The country is KUWAIT, which is near to SAUDI ARABIA
Can you help me

thanks

... snip ... top of post, old email index

TYMSHARE's TYMNET:
https://en.wikipedia.org/wiki/Tymnet

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

some recent posts mentioning VMSHARE:
https://www.garlic.com/~lynn/2025b.html#14 IBM Token-Ring
https://www.garlic.com/~lynn/2024d.html#74 Some Email History
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024b.html#90 IBM User Group Share
https://www.garlic.com/~lynn/2024b.html#87 Dialed in - a history of BBSing
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024.html#109 IBM User Group SHARE
https://www.garlic.com/~lynn/2022g.html#16 Early Internet
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#37 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022c.html#8 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#126 Google Cloud
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2021j.html#71 book review:  Broad Band:  The Untold Story of the Women Who Made the Internet

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370 Teddy Bear

From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370 Teddy Bear
Date: 09 Oct, 2025
Blog: Facebook

the SHARE MVS mascot was the "turkey" (because MVS/TSO was so bad) as
opposed to the VM370 mascot, the teddy bear.
https://www.jaymoseley.com/hercules/downloads/pdf/$OSTL33.pdf
pg8:

And let us not forget that the performance of the first releases of
MVS was so bad that the MVS people in the SHARE user group adopted the
Turkey as their symbol.

... vs the SHARE VM370 mascot:

The symbol of the VM group was the Teddy Bear since it was said to be
better, warmer, and more user-friendly than MVS.

trivia1: 1974, CERN did SHARE presentation comparing MVS/TSO and
VM370/CMS ... copies inside IBM were stamped "IBM Confidential -
Restricted", 2nd highest security classification, available on need to
know only.

trivia2: customers weren't migrating to MVS as planned (I was at the
initial SHARE when it was played).
http://www.mxg.com/thebuttonman/boney.asp

trivia3: after FS imploded:
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

the head of POK managed to convince corporate to kill the VM370
product, shut down the development group and transfer all the people to
POK for MVS/XA (Endicott eventually managed to save the VM370 product
mission for the mid-range, but had to recreate a development group
from scratch). They weren't planning on telling the people until the
very last minute to minimize the number that might escape. It managed
to leak early and several managed to escape (it was during infancy of
DEC VAX/VMS ... before it even first shipped and joke was that head of
POK was a major contributor to VMS). There was then a hunt for the
leak, fortunately for me, nobody gave up the leaker.

Melinda's history
https://www.leeandmelindavarian.com/Melinda#VMHist

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67l, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe skills

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe skills
Date: 10 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#1 Mainframe skills
https://www.garlic.com/~lynn/2025e.html#4 Mainframe skills

... I had coined the terms "disaster survivability" and "geographic
survivability" (as counter to disaster/recovery) when out marketing
HA/CMP.

trivia: as undergraduate, when 360/67 arrived (replacing 709/1401)
within a year of taking 2 credit hr intro to fortran/computers, I was
hired fulltime responsible for os/360 (360/67 as 360/65, tss/360 never
came to production). then before I graduate, I was hired fulltime into
small group in the Boeing CFO office to help with the formation of
Boeing Computer Services (consolidate all dataprocessing into an
independent business unit). I think Renton is the largest datacenter
(in the world?) ... 360/65s arriving faster than they could be
installed, boxes constantly staged in the hallways around the machine
room (joke that Boeing was getting 360/65s like other companies got
keypunches).

Disaster plan was to replicate Renton up at the new 747 plant at Paine
Field (in Everett) as countermeasure to Mt. Rainier heating up and the
resulting mud slide taking out Renton.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

recent posts mentioning Boeing disaster plan to replicate Renton up at
Paine Field:
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#31 Mainframe Datacenter
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#101 Operating System/360
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023.html#66 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022g.html#63 IBM DPD
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022.html#30 CP67 and BPS Loader
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2021k.html#55 System Availability
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#54 Learning PDP-11 in 2021
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021.html#48 IBM Quota
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Somers

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Somers
Date: 11 Oct, 2025
Blog: Facebook

Late 80s, a senior disk engineer gets talk scheduled at annual,
internal, world-wide communication group conference, supposedly on
3174 performance. However, the opening was that the communication
group was going to be responsible for the demise of the disk
division. The disk division was seeing drop in disk sales with data
fleeing mainframe datacenters to more distributed computing friendly
platforms. The disk division had come up with a number of solutions,
but they were constantly being vetoed by the communication group (with
their corporate ownership of everything that crossed the datacenter
walls) trying to protect their dumb terminal paradigm. Senior disk
software executive partial countermeasure was investing in distributed
computing startups that would use IBM disks (he would periodically ask
us to drop in on his investments to see if we could offer any
assistance).

Early 90s, head of SAA (in late 70s, worked with him on ECPS microcode
assist for 138/148) had a top flr, corner office in Somers and we would
periodically drop in to talk about some of his people. We were out
doing customer executive presentations on Ethernet, TCP/IP, 3-tier
networking, high-speed routers, etc and taking barbs in the back from
SNA&SAA members. We would periodically drop in on other Somers'
residents asking shouldn't they be doing something about the way the
company was heading.

The communication group stranglehold on mainframe datacenters wasn't
just disks; IBM has one of the largest losses in the history of US
corporations and was being reorganized into the 13 "baby blues" in
preparation for breaking up the company ("baby blues" take-off on the
"baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

20yrs earlier, Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

90s, we were doing work for large AMEX mainframe datacenters spin-off
and former AMEX CEO.

3-tier, ethernet, tcp/ip, etc posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension

recent posts mentioning IBM Somers:
https://www.garlic.com/~lynn/2025d.html#98 IBM Supercomputer
https://www.garlic.com/~lynn/2025d.html#86 Cray Supercomputer
https://www.garlic.com/~lynn/2025d.html#73 Boeing, IBM, CATIA
https://www.garlic.com/~lynn/2025d.html#68 VM/CMS: Concepts and Facilities
https://www.garlic.com/~lynn/2025d.html#51 Computing Clusters
https://www.garlic.com/~lynn/2025d.html#24 IBM Yorktown Research
https://www.garlic.com/~lynn/2025d.html#7 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#1 Chip Design (LSM & EVE)
https://www.garlic.com/~lynn/2025c.html#116 Internet
https://www.garlic.com/~lynn/2025c.html#110 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#104 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#98 5-CPU 370/125
https://www.garlic.com/~lynn/2025c.html#93 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#69 Tandem Computers
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000
https://www.garlic.com/~lynn/2025c.html#40 IBM & DEC DBMS
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025b.html#118 IBM 168 And Other History
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#100 IBM Future System, 801/RISC, S/38, HA/CMP
https://www.garlic.com/~lynn/2025b.html#92 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#78 IBM Downturn
https://www.garlic.com/~lynn/2025b.html#41 AIM, Apple, IBM, Motorola
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025b.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025.html#119 Consumer and Commercial Computers
https://www.garlic.com/~lynn/2025.html#96 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#86 Big Iron Throughput
https://www.garlic.com/~lynn/2025.html#74 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#37 IBM Mainframe
https://www.garlic.com/~lynn/2025.html#23 IBM NY Buildings
https://www.garlic.com/~lynn/2025.html#21 Virtual Machine History

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Interactive Response
Date: 11 Oct, 2025
Blog: Facebook

I took a 2 credit hr intro to fortran/computers and at end of semester
was hired to rewrite 1401 MPIO in assembler for 360/30 ... univ was
getting 360/67 for tss/360 to replace 709/1401 and got a 360/30
temporarily pending availability of a 360/67 (part of getting some 360
experience). Univ. shut down the datacenter on weekends and I would
have it dedicated (although 48hrs w/o sleep made monday classes hard). I was
given a pile of hardware & software manuals and got to design and
implement my own monitor, device drivers, interrupt handlers, storage
management, error recovery, etc ... and within a few weeks had a 2000
card program.

Then within a yr of taking intro class, 360/67 arrives and i'm hired
fulltime responsible for os/360 (tss/360 never came to production so
ran as 360/65).

CSC comes out to univ for CP67/CMS (precursor to VM370/CMS) install
(3rd after CSC itself and MIT Lincoln Labs) and I mostly get to play
with it during my 48hr weekend dedicated time. I initially work on
pathlengths for running OS/360 in virtual machine. Test stream
ran 322secs on real machine, initially 856secs in virtual
machine (CP67 CPU 534secs), after a couple months I have reduced
CP67 CPU from 534secs to 113secs. I then start rewriting the
dispatcher/scheduler (dynamic adaptive resource manager/default fair
share policy), paging, adding ordered seek queuing (from FIFO) and
multi-page transfer channel programs (from FIFO and optimized for
transfers/revolution, getting 2301 paging drum from 70-80 4k
transfers/sec to channel transfer peak of 270). Six months after univ
initial install, CSC was giving one week class in LA. I arrive on
Sunday afternoon and am asked to teach the class; it turns out that the
people that were going to teach it had resigned the Friday before to
join one of the 60s CP67 commercial online spin-offs.
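
The pathlength arithmetic in that paragraph, spelled out (Python sketch,
numbers exactly as quoted above):

# CP67 virtualization overhead for the OS/360 test stream
real_elapsed = 322           # secs on the bare machine
virt_elapsed_initial = 856   # secs in the CP67 virtual machine, initially
cp67_cpu_final = 113         # secs of CP67 CPU after the pathlength work

cp67_cpu_initial = virt_elapsed_initial - real_elapsed     # 534 secs, as quoted
saved = cp67_cpu_initial - cp67_cpu_final                  # 421 secs removed
print(f"initial CP67 overhead: {cp67_cpu_initial} secs")
print(f"final CP67 overhead:   {cp67_cpu_final} secs "
      f"({saved / cp67_cpu_initial:.0%} reduction)")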

Before I graduate, I'm hired fulltime into a small group in the Boeing
CFO office to help with the formation of Boeing Computer Services (consolidate all
dataprocessing into an independent business unit ... including
offering services to non-Boeing entities). When I graduate, I leave to
join CSC (instead of staying with CFO).

One of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters ... and in the morph of
CP67->VM370 a lot of stuff was dropped and/or simplified and every
few years I would be asked to redo stuff that had been dropped and/or
rewritten (... in part dynamic adaptive default policy calculated
dispatching order based on resource utilization over the last several
minutes compared to target resource utilization established by their
priority and number of users).

Late 80s, the OS2 team was told to contact VM370 group (because VM370
dispatching was much better than OS2) ... it was passed between the
various groups before being forwarded to me.

Example I didn't have much control over was late 70s, IBM San Jose
Research got an MVS 168 and a VM370 158 replacing the MVT 195. My internal
VM370s were getting 90th percentile .11sec interactive system response
(with 3272/3277 hardware response of .086sec resulted in .196sec seen
by users ... better than the .25sec requirement mentioned in various
studies). All the SJR 3830 controllers and 3330 strings were
dual-channel connected to both systems but with strong rules that no
MVS 3330s be mounted on VM370 strings. One morning operators mounted a
MVS 3330 on a VM370 string and within minutes they were getting irate
calls from all over the bldg complaining about response. The issue was
MVS has an OS/360 heritage of multi-track search for PDS directory
searches ... an MVS multi-cylinder PDS directory search can have
multiple full multi-track cylinder searches that lock out the (vm370)
controller for the duration (60revs/sec, 19tracks/search, .317secs
lockout per multi-track search I/O). Demand to move the pack was
answered with they would get around to it on 2nd shift.
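
A small sketch (Python) of where the .317sec figure comes from, and what
repeated full-cylinder searches do to everyone else sharing the
controller (numbers as quoted above; the 3-cylinder example is just an
illustration):

# 3330 full-cylinder multi-track search: the controller is locked out
# for the whole search I/O
revs_per_sec = 60            # 3330 rotation rate
tracks_per_cylinder = 19     # tracks searched per full-cylinder pass

lockout = tracks_per_cylinder / revs_per_sec
print(f"lockout per multi-track search I/O: {lockout:.3f} secs")   # .317

# a multi-cylinder PDS directory search issues one such I/O per cylinder,
# e.g. a 3-cylinder directory keeps the shared controller busy ~1 second
print(f"3-cylinder directory search: ~{3 * lockout:.2f} secs of lockout")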

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource dispatch/scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
commercial virtual machine service posts
https://www.garlic.com/~lynn/submain.html#timeshare
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

some posts mentioning .11sec system response and .086sec
3272/3277 hardware response for .196sec
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015g.html#58 [Poll] Computing favorities
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers   Personal
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012.html#15 Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2012.html#13 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009q.html#72 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009q.html#53 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009e.html#19 Architectural Diversity
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Interactive Response
Date: 12 Oct, 2025
Blog: Facebook

re: https://www.garlic.com/~lynn/2025e.html#9 IBM Interactive Response

... further detail ... during the .317sec multi-track search, the
vm-side could build up several queued I/O requests for other vm 3330s
on the busy controller (SM+BUSY) ... when it ends, the vm-side might
get in one of the queued requests ... before the MVS hits it with
another multi-track search ... and so the vm-side might see an
increasing accumulation of queued I/O requests waiting for nearly a
second (or more).

.. trivia: also after transfer to San Jose, I got to wander around IBM
(& non-IBM) datacenters in silicon valley, including disk bldg14
(engineering) and bldg15 (product test) across the street. They were
running pre-scheduled, 7x24, stand-alone mainframe testing and
mentioned that they had recently tried MVS, but it had 15min MTBF
(requiring manual re-ipl) in that environment. I offered to rewrite
I/O supervisor to make it bullet proof and never fail allowing any
amount of on-demand, concurrent testing ... greatly improving
productivity. I write an internal IBM paper on the I/O integrity work
and mention the MVS 15min MTBF ... bringing down the wrath of the MVS
organization on my head. Later, a few months before 3880/3380 FCS, FE
(field engineering) had a test of 57 simulated errors that were likely
to occur and MVS was failing in all 57 cases (requiring manual re-ipl)
and in 2/3rds of the cases no indication of what caused the failure.

posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

posts mentioning MVS failing in test of 57 simulated errors
https://www.garlic.com/~lynn/2025d.html#58 IBM DASD, CKD, FBA
https://www.garlic.com/~lynn/2025d.html#45 Some VM370 History
https://www.garlic.com/~lynn/2025d.html#35 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#34 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#19 370 Virtual Memory
https://www.garlic.com/~lynn/2025c.html#92 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025b.html#44 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2024e.html#35 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing
https://www.garlic.com/~lynn/2024c.html#75 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024.html#114 BAL
https://www.garlic.com/~lynn/2024.html#88 IBM 360
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#115 IBM RAS
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023d.html#97 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#72 Some Virtual Machine History
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#80 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#70 IBM 3380 disks
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019b.html#53 S/360
https://www.garlic.com/~lynn/2018d.html#86 3380 failures

--
virtualization experience starting Jan1968, online at home since Mar1970

Interesting timing issues on 1970s-vintage IBM mainframes

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Interesting timing issues on 1970s-vintage IBM mainframes
Newsgroups: alt.folklore.computers
Date: Sun, 12 Oct 2025 08:24:39 -1000

James Dow Allen <user4353@newsgrouper.org.invalid> writes:

Interest was minimal in the (startling?) fact that the main 145 oscillator
is immediately passed through an XOR gate.  But I'll persist and mention
interesting(?) facts about the clocks on the Big IBM Multiprocessors.

The 370/168 was, arguably, the Top Of The Line among IBM mainframes
in the mid-1970s.  Sure, there was a 370 Model 195 but it was almost just
a gimmick: Salesmen might say "You're complaining about the $3 Million
price-tag on our wonderful Model 168?
Just be thankful I'm not trying to sell you a Model 195!"

After joining IBM, the 195 group talked me into helping with
hyperthreading the 195. The 195 had out-of-order execution, but
conditional branches drained the pipeline ... so most codes only ran at
half the rated speed.  Hyperthreading, simulating a 2-CPU
multiprocessor, possibly would keep the hardware fully busy ... the
hyperthreading patent is mentioned in this about the death of ACS/360
(Amdahl had won the battle to make ACS 360-compatible, then ACS/360 was
killed; folklore was executives felt it would advance the state of the
art too fast and the company would lose control of the market).
https://people.computing.clemson.edu/~mark/acs_end.html

modulo MVT (VS2/SVS & VS2/MVS) heavy-weight multiprocessor overhead
... documentation had 2-CPU SMP throughput at only 1.2-1.5 times
single processor throughput.

early last decade, I was asked to track down the decision to add
virtual memory to all 370s (pieces, originally posted here and in
ibm-main NGs) ... adding virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73

Basically MVT storage management was so bad that region sizes had to
be specified four times larger than used ... as a result a typical
1mbyte 370/165 only ran four concurrent regions, insufficient to keep
the system busy and justified. Going to a single 16mbyte virtual
address space (i.e. VS2/SVS ... sort of like running MVT in a CP67
16mbyte virtual machine) allowed concurrent regions to be increased by
a factor of four (capped at 15 because of the 4-bit storage protect
keys) with little or no paging; see the sketch just below.
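
A back-of-the-envelope check of that arithmetic (a minimal Python
sketch using only the numbers already quoted above, plus the standard
convention that protect key 0 is reserved for the supervisor):

# minimal sketch of the region-count arithmetic above (illustrative only)
KEY_BITS = 4                        # 370 storage-protect key width
key_values = 2 ** KEY_BITS          # 16 possible key values
max_regions = key_values - 1        # key 0 reserved for the system -> 15 usable

regions_on_1mb_165 = 4              # typical 1mbyte 370/165 under MVT
regions_with_16mb_vas = regions_on_1mb_165 * 4   # the factor-of-four increase cited

print(max_regions)                                    # 15
print(min(regions_with_16mb_vas, max_regions))        # 16, capped back to 15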

It was deemed that it wasn't worth the effort to add virtual memory to
370/195 and all new work was killed.

Then there was the FS effort, going to completely replace 370;
internal politics was killing off 370 efforts (claims are that the
lack of new 370s during FS gave the clone 370 makers their market
foothold).
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

Note the 370/165 averaged 2.1 machine cycles per 370 instruction. For
the 168 they significantly increased main memory size & speed and the
microcode was optimized, resulting in an avg of 1.6 machine cycles per
instruction. Then for the 168-3, they doubled the size of the
processor cache, increasing rated MIPS from 2.5MIPS to 3.0MIPS.

With the implosion of FS there was mad rush to get stuff back into the
370 product pipelines, kicking off the quick&dirty 3033 and 3081
efforts. The 3033 started off remapping 168 logic to 20% faster chips
and then the microcode was optimized, getting it down to an avg of one
machine cycle per 370 instruction (rough arithmetic in the sketch below).
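
Rough arithmetic implied by the numbers quoted above (a sketch only;
it ignores cache and memory effects, so treat the result as a crude
upper bound rather than any official rating):

# crude implication of the figures above, not an official MIPS rating
mips_168   = 2.5        # base 168 rating cited above
cpi_168    = 1.6        # avg machine cycles per 370 instruction on the 168
cpi_3033   = 1.0        # after the 3033 microcode optimization
chip_speed = 1.2        # "20% faster chips"

implied_3033_mips = mips_168 * (cpi_168 / cpi_3033) * chip_speed
print(round(implied_3033_mips, 1))   # ~4.8, in the ballpark of the ~4.5MIPS
                                     # commonly quoted for the 3033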

I was also talked into helping with a 16-CPU SMP/multiprocessor effort
and we con the 3033 processor engineers into helping (a lot more
interesting than remapping 168 logic). Everybody thought it was great
until somebody reminds the head of POK that POK's favorite son
operating system ("VS2/MVS"), with its 2-CPU multiprocessor overhead,
was only getting 1.2-1.5 times the throughput of a single processor
(and the overhead increased significantly as #CPUs increased ... POK
doesn't ship a 16-CPU machine until after the turn of the century).
The head of POK then invites some of us to never visit POK again and
directs the 3033 processor engineers: heads down and no distractions.

trivia: when I graduate and join IBM Cambridge Science Center, one of
my hobbies was enhanced production operating systems and one of my
first (and long time) customers was the online sales&marketing
HONE systems.  With the decision to add virtual memory to all 370s,
there was also decision to form development group to do VM370. In the
morph of CP67->VM370, lots of stuff was simplified and/or dropped
(including multiprocessor support). 1974, I start adding stuff back
into a VM370R2-base for my internal CSC/VM (including kernel-reorg for
SMP, but not the actual SMP support). Then for VM370R3-base CSC/VM, I
add multiprocessor support back in, originally for HONE so they could
upgrade their 168s to 2-CPU systems (with some sleight-of-hand and
cache affinity, getting twice the throughput of a single processor).

other trivia: US HONE had consolidated all their datacenters in
silicon valley, when FACEBOOK first moved into silicon valley, it was
into a new bldg built next door to the former consolidated US HONE
datacenter.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Interesting timing issues on 1970s-vintage IBM mainframes

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Interesting timing issues on 1970s-vintage IBM mainframes
Newsgroups: alt.folklore.computers
Date: Sun, 12 Oct 2025 11:21:54 -1000

James Dow Allen <user4353@newsgrouper.org.invalid> writes:

About 1978, the 370/168 was superseded by the 3032 and 3033.  These were
a disappointment for anyone infatuated with blinking lights and fancy
switches and dials.  The console disappeared entirely except for an ordinary
CRT, keyboard, light-pen and a tiny number of lights and buttons (e.g. "IPL").
This trend began a few years earlier when the fancy front-panel of
the 370/155 was replaced with a boring CRT/light-pen for the 370/158.

re:
https://www.garlic.com/~lynn/2025e.html#11 Interesting timing issues on 1970s-vintage IBM mainframes

when FS imploded, they start on 3033 (remap 168 logic to 20% faster
chips). They take a 158 engine with just the integrated channel
microcode for the 303x channel director. A 3031 was two 158 engines,
one for the channel director (integrated channel microcode) and 2nd
with just the 370 microcode. The 3032 was 168-3 reworked to use the
303x channel director for external channels.

I had transferred out to the west coast and got to wander around IBM
(and non-IBM) datacenters in silicon valley, including disk bldg14
(engineering) and bldg15 (product test) across the street. They were
running pre-scheduled, 7x24, stand-alone mainframe testing and
mentioned that they had recently tried MVS, but it had 15min MTBF
(requiring manual re-IPL) in that environment. I offer to rewrite I/O
supervisor making it bullet-proof and never fail, allowing any amount
of on-demand, concurrent testing ... greatly improving productivity.
Then bldg15 gets the 1st engineering 3033 outside POK processor
engineering. Testing was only taking percent or two of CPU, so we
scrounge up a 3830 controller and string of 3330 drives and setup our
own private online service.

One of the things found was that the engineering channel directors
(158 engines) still had a habit of periodically hanging, requiring
manual re-impl. We discovered that if you hit all six channels of a
channel director quickly with CLRCH, it would force an automagic
re-impl ... so I modified the missing interrupt handler to deal with a
hung channel director (sketch below).
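
A minimal, hypothetical sketch (in Python, not actual CP67/VM370 code)
of that missing-interrupt-handler change; the class, function names
and timeout value are invented for illustration:

# hypothetical sketch: if a channel looks hung past a timeout, hit all six
# channels of its 303x channel director with CLRCH to force the re-impl
import time

HUNG_TIMEOUT = 15.0                    # seconds without the expected interrupt (assumed)

class ChannelDirector:
    def __init__(self, channels):
        self.channels = channels       # a 303x channel director drives six channels

    def clrch(self, channel):
        print(f"CLRCH issued to channel {channel}")

def missing_interrupt_handler(channel, director, io_start_time):
    if time.time() - io_start_time < HUNG_TIMEOUT:
        return False                   # not hung yet; let normal retry logic run
    for ch in director.channels:       # hitting all six quickly forces the re-impl
        director.clrch(ch)
    return True                        # caller then redrives the pending I/O

# usage sketch
director = ChannelDirector(channels=[0, 1, 2, 3, 4, 5])
missing_interrupt_handler(channel=2, director=director, io_start_time=time.time() - 60)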

getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

some recent posts mentioning MVS 15min MTBF
https://www.garlic.com/~lynn/2025e.html#10 IBM Interactive Response
https://www.garlic.com/~lynn/2025e.html#1 Mainframe skills
https://www.garlic.com/~lynn/2025d.html#107 Rapid Response
https://www.garlic.com/~lynn/2025d.html#98 IBM Supercomputer
https://www.garlic.com/~lynn/2025d.html#87 IBM 370/158 (& 4341) Channels
https://www.garlic.com/~lynn/2025d.html#78 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#71 OS/360 Console Output
https://www.garlic.com/~lynn/2025d.html#68 VM/CMS: Concepts and Facilities
https://www.garlic.com/~lynn/2025d.html#58 IBM DASD, CKD, FBA
https://www.garlic.com/~lynn/2025d.html#45 Some VM370 History
https://www.garlic.com/~lynn/2025d.html#35 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#34 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#19 370 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#11 IBM 4341
https://www.garlic.com/~lynn/2025d.html#1 Chip Design (LSM & EVE)
https://www.garlic.com/~lynn/2025c.html#107 IBM San Jose Disk
https://www.garlic.com/~lynn/2025c.html#101 More 4341
https://www.garlic.com/~lynn/2025c.html#92 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#78 IBM 4341
https://www.garlic.com/~lynn/2025c.html#62 IBM Future System And Follow-on Mainframes
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#47 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#42 SNA & TCP/IP
https://www.garlic.com/~lynn/2025c.html#29 360 Card Boot
https://www.garlic.com/~lynn/2025c.html#12 IBM 4341
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025b.html#112 System Throughput and Availability II
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#82 IBM 3081
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#44 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#20 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025.html#122 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#113 2301 Fixed-Head Drum
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#86 Big Iron Throughput
https://www.garlic.com/~lynn/2025.html#77 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#71 VM370/CMS, VMFPLC
https://www.garlic.com/~lynn/2025.html#59 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#29 IBM 3090

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CP67 Multi-level Update

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CP67 Multi-level Update
Date: 12 Oct, 2025
Blog: Facebook

Some of the MIT CTSS/7094 people went to the 5th flr to do
MULTICS. Others went to the IBM Science Center on the 4th flr and did
virtual machines (1st modified 360/40 w/virtual memory and did
CP40/CMS, morphs into CP67/CMS when 360/67 standard with virtual
memory becomes available), science center wide-area network (that
grows into corporate internal network, larger than arpanet/internet
from science-center beginning until sometime mid/late 80s; technology
also used for the corporate sponsored univ BITNET), invented GML 1969
(precursor to SGML and HTML), lots of performance tools, etc.

I took a 2 credit hr intro to fortran/computers and at end of semester
was hired to rewrite 1401 MPIO in assembler for 360/30 ... univ was
getting 360/67 for tss/360 to replace 709/1401 and got a 360/30
temporarily pending availability of a 360/67 (part of getting some 360
experience). Univ. shutdown datacenter on weekends and I would have it
dedicated (although 48hrs w/o sleep made monday classes hard), I was
given a pile of hardware & software manuals and got to design and
implement my own monitor, device drivers, interrupt handlers, storage
management, error recovery, etc ... and within a few weeks had a 2000
card program. 360/67 arrived within a year of taking intro class and I
was hired fulltime responsible for OS/360 (tss/360 never came to
production, so ran as 360/65). Student Fortran jobs ran under second
on 709. Initially MFTR9.5 ran well over minute. I install HASP cutting
time in half. I then start redoing MFTR11 STAGE2 SYSGEN to carefully
place datasets and PDS members to optimize arm seek and multi-track
seach cutting another 2/3rds to 12.9secs. Student Fortran never got
better than 709 until I install UofWaterloo WATFOR (on 360/65 ran at
20,000 cards/min or 333q cards/sec, student Fortran jobs typically
30-60 cards).

CSC came out to univ for CP67/CMS (precursor to VM370/CMS) install
(3rd after CSC itself and MIT Lincoln Labs) and I mostly get to play
with it during my 48hr weekend dedicated time. I initially work on
pathlengths for running OS/360 in virtual machine. Test stream ran
322secs on real machine, initially 856secs in virtual machine (CP67
CPU 534secs), after a couple months I have reduced that CP67 CPU from
534secs to 113secs. I then start rewriting the dispatcher, (dynamic
adaptive resource manager/default fair share policy) scheduler,
paging, adding ordered seek queuing (from FIFO) and multi-page
transfer channel programs (from FIFO and optimized for
transfers/revolution, getting the 2301 paging drum from 70-80 4k
transfers/sec to a channel transfer peak of 270; rough arithmetic in
the sketch below). Six months after the univ initial install, CSC was
giving a one week class in LA. I arrive on Sunday afternoon and am
asked to teach the class; it turns out that the people who were going
to teach it had resigned the Friday before to join one of the 60s CP67
commercial online spin-offs.
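
Rough arithmetic behind those 2301 numbers; the ~1.2 mbyte/sec data
rate and ~17.5 ms rotation period used here are my assumptions for
illustration, not figures from the post:

# rough 2301 paging-drum arithmetic (data rate and rotation period assumed)
PAGE = 4096
DATA_RATE = 1.2e6        # bytes/sec (assumed)
ROTATION = 0.0175        # seconds per revolution (assumed)

transfer_time = PAGE / DATA_RATE                      # ~3.4 ms per 4k page
one_at_a_time = 1 / (ROTATION / 2 + transfer_time)    # avg half-rev latency per single transfer
chained_peak  = DATA_RATE / PAGE                      # back-to-back pages, data-rate limited

print(round(one_at_a_time))   # ~82/sec  -- matches the 70-80 single-transfer figure
print(round(chained_peak))    # ~293/sec -- gaps/overhead bring the observed peak to ~270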

Initially CP67 source was delivered on OS/360, source modified,
assembled, txt decks collected, marked with stripe & name across top,
all fit in card tray, BPS loader placed in front and IPLed ... would
write memory image to disk for system IPL. A couple months later, new
release now resident on CMS ... modifications in CMS "UPDATE" files,
exec that applied update and generated temp file that was assembled. A
system generation exec, "punched" txt decks spooled to virtual reader
that was then IPLed.

After graduating and joining CSC, one of my hobbies was enhanced
production operating systems ("CP67L") for internal datacenters
(including the online sales&marketing support HONE systems, one of the
first and a long time customer). With the decision to add virtual
memory to all 370s, there was also decision to do CP67->VM370 and some
of the CSC people went to the 3rd flr, taking over the IBM Boston
Programming Center for the VM370 group. CSC developed set of CP67
updates that provided (simulated) VM370 virtual machines
("CP67H"). Then there were a set of CP67 updates that ran on 370
virtual memory architecture ("CP67I"). At CSC, because there were
profs, staff, and students from Boston area institutions using the CSC
systems, CSC would run "CP67H" in a 360/67 virtual machine (to
minimize unannounced 370 virtual memory leaking).

CP67L ran on real 360/67
... CP67H ran in a CP67L 360/67 virtual machine
...... CP67I ran in a CP67H 370 virtual machine

CP67I was in general use a year before the 1st engineering 370 (with
virtual memory) was operational ... in fact, IPL'ing CP67I on the real
machine was the test case.

As part of the CP67L, CP67H, CP67I effort, the CMS Update execs were
improved to support multi-level update operation (later, multi-level
update support was added to various editors); a toy illustration of
the idea is sketched below. Three engineers come out from San Jose and
add 2305 & 3330 support to CP67I, creating CP67SJ, which was widely
used on internal machines, even after VM370 was available.
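
A toy illustration of the multi-level update idea: apply a stack of
update levels, in order, to the base source to produce the file that
actually gets assembled. The directive format here is invented for
brevity and is not the real CMS UPDATE control-card syntax:

# toy multi-level source update: each level is a list of ('del', lineno)
# or ('ins', lineno, text) directives against the level below it
def apply_update(source_lines, update):
    out = list(source_lines)
    # apply from the highest line number down so earlier numbers stay valid
    for op in sorted(update, key=lambda u: u[1], reverse=True):
        if op[0] == 'del':
            del out[op[1] - 1]
        elif op[0] == 'ins':
            out.insert(op[1], op[2])      # insert after the given line
    return out

def apply_update_stack(source_lines, levels):
    for level in levels:                  # e.g. "L" updates, then "H", then "I" ...
        source_lines = apply_update(source_lines, level)
    return source_lines

base = ["LABEL1   L     R1,SAVE", "LABEL2   BR    R14"]
levels = [
    [('ins', 1, "NEWLBL   ST    R1,SAVE")],   # one update level
    [('del', 3)],                             # a later level stacked on top
]
print(apply_update_stack(base, levels))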

Mid-80s, Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist
asked if I could send her the original exec multi-level update
implementation. I had a large archive dating back to the 60s on triple
redundant tapes in the IBM Almaden Research tape library. It was
fortunate she asked when she did, since within a few weeks Almaden had
an operational problem mounting random tapes as scratch and I lost
nearly a dozen tapes, including the triple redundant tape archive.

In the morph of CP67->VM370, a lot of stuff was simplified and/or
dropped (including multiprocessor support). 1974, I start adding a lot
of stuff back into VM370R2-base for my internal CSC/VM (including
kernel re-org for SMP, but not the actual SMP support). Then with
VM370R3-base, I add multiprocessor support into CSC/VM, initially for
HONE so they could upgrade all their 168 systems to 2-CPU (getting
twice throughput of 1-CPU systems). HONE trivia: All the US HONE
datacenters had been consolidated in Palo Alto ... when FACEBOOK 1st
moved into silicon valley, it was into a new bldg built next door to
the former consolidated US HONE datacenter.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

some CP67L, CP67H, CP67I, CP67SJ, CSC/VM posts
https://www.garlic.com/~lynn/2025d.html#91 IBM VM370 And Pascal
https://www.garlic.com/~lynn/2025.html#122 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#121 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM DASD, CKD and FBA

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM DASD, CKD and FBA
Date: 13 Oct, 2025
Blog: Facebook

When I offered the MVS group fully tested FBA support ... they said I
needed $26M incremental new revenue (some $200M in sales) to cover
the cost of education and documentation. However, since IBM at the
time was selling every disk it made, FBA support would just move some
CKD sales to FBA ... also I wasn't allowed to use lifetime savings as
part of the business case.

All disk technology was actually moving to FBA ... as can be seen in
the 3380 formulas for records/track calculations, having to round
record sizes up to a multiple of a fixed cell size (illustrated in the
sketch below). Now no real CKD disks have been made for decades, all
being simulated on industry standard fixed-block devices. A big part
of FBA was error correcting technology performance ... part of recent
FBA technology moving from 512byte blocks to 4k blocks.
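
A hedged illustration of that fixed-cell rounding; the constants below
(32-byte cell, cells per track, per-record overhead) are placeholders
chosen to show the shape of the calculation, not the published 3380
formula values:

# illustration of cell-rounded CKD track-capacity arithmetic (constants assumed)
import math

CELL = 32                 # fixed cell size records get rounded up to
CELLS_PER_TRACK = 1500    # assumed usable cells on a track
OVERHEAD_CELLS = 15       # assumed per-record overhead (count field, gaps), in cells

def records_per_track(data_len, key_len=0):
    cells = OVERHEAD_CELLS
    if key_len:
        cells += math.ceil(key_len / CELL)
    cells += math.ceil(data_len / CELL)   # data rounded UP to a whole cell
    return CELLS_PER_TRACK // cells

# a 1-byte record and a 32-byte record cost exactly the same track space:
print(records_per_track(1), records_per_track(32))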

trivia: 80s, large corporations were ordering hundreds of vm/4341s at
a time for deploying out in departmental areas (sort of the leading
edge of the coming distributed departmental computing tsunami). Inside
IBM, conference rooms were becoming scarce, being converted into
departmental vm/4341 rooms. MVS looked at the big explosion in sales
and wanted a piece of the market. However the only new non-datacenter
disks were FBA/3370 ... which eventually gets CKD emulation as the
3375. However, it didn't do MVS much good; distributed departmental
dataprocessing was
looking at scores of systems per support person ... while MVS was
scores of support people per system.

Note: ECKD was originally the channel commands for Calypso, the 3880
speed matching buffer allowing 3mbyte/sec 3380s to be used with
existing 1.5mbyte/sec channels ... it went through significant
teething problems ... lots and lots of sev1s.

Other trivia: I had transferred from CSC out to SJR on west coast and
got to wander around IBM (and non-IBM) datacenters in silicon valley,
including disk bldg14 (engineering) and bldg15 (product test) across
the street. They were running pre-scheduled, 7x24, stand-alone
mainframe testing and mentioned that they had recently tried MVS, but
it had 15min MTBF (requiring manual re-ipl) in that environment. I
offered to rewrite I/O supervisor to make it bullet proof and never
fail allowing any amount of on-demand, concurrent testing ... greatly
improving productivity. I write an internal IBM paper on the I/O
integrity work and mention the MVS 15min MTBF ... bringing down the
wrath of the MVS organization on my head. Later, a few months before
3880/3380 FCS, FE (field engineering) had a test of 57 simulated
errors that were likely to occur and MVS was failing in all 57 cases
(requiring manual re-ipl), and in 2/3rds of the cases there was no
indication of what caused the failure.

DASD, CKD, FBA, and multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
getting to play disk engineer in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

a few posts mentioning calypso, eckd, mtbf
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2024g.html#3 IBM CKD DASD
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2015f.html#89 Formal definition of Speed Matching Buffer

a few posts mentioning FBA fixed-block 512 4k
https://www.garlic.com/~lynn/2021i.html#29 OoO S/360 descendants
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2017f.html#39 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2014g.html#84 real vs. emulated CKD
https://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2010d.html#9 PDS vs. PDSE

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM DASD, CKD and FBA

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM DASD, CKD and FBA
Date: 13 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#14 IBM DASD, CKD and FBA

semi-related, 1988 IBM branch asks if I could help LLNL (national lab)
standardize some serial stuff they were working with ... which quickly
becomes fibre-channel standard ("FCS", including some stuff I had done
in 1980), initially 1gbit/sec transfer, full-duplex, 200mbyte/sec.
Then POK finally gets their serial stuff released (when it is already
obsolete), initially 10mbyte/sec, later improved to 17mbyte/sec.

Some POK engineers then become involved with FCS and define a
heavy-weight protocol that radically reduces throughput, eventually
released as FICON.

The latest public benchmark I've seen is the 2010 z196 "Peak I/O",
getting 2M IOPS using 104 FICON (about 20K IOPS/FICON). About the same
time an FCS was announced for E5-2600 server blades claiming over a
million IOPS (two such FCS having higher throughput than all 104
FICON). Also IBM pubs recommend that SAPs (system assist processors
that actually do the I/O) be kept to 70% CPU (or 1.5M IOPS); arithmetic
in the sketch below.
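
The arithmetic behind that comparison, using only the figures quoted
above:

# arithmetic on the quoted benchmark figures
z196_peak_iops = 2_000_000
ficon_count    = 104
print(round(z196_peak_iops / ficon_count))   # ~19,231 IOPS per FICON -- the ~20K figure

fcs_claim = 1_000_000                        # "over a million IOPS" per E5-2600 FCS adapter
print(2 * fcs_claim)                         # 2,000,000 -- two such ("over a million") FCS
                                             # exceed the whole 104-FICON peak

sap_cap = 0.70                               # keep SAPs to ~70% busy
print(int(z196_peak_iops * sap_cap))         # 1,400,000 -- roughly the ~1.5M IOPS ceiling cited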

FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

CTSS, Multics, Unix, CSC

From: Lynn Wheeler <lynn@garlic.com>
Subject: CTSS, Multics, Unix, CSC
Date: 13 Oct, 2025
Blog: Facebook

Some of the MIT CTSS/7094 people went to the 5th flr for MULTICS
http://www.bitsavers.org/pdf/honeywell/large_systems/multics/

Note that the original UNIX had been done at AT&T ... somewhat after
they became disenchanted with MIT Multics ... UNIX is supposedly a
take-off on the name MULTICS and a simplification.
https://en.wikipedia.org/wiki/Multics#Unix

Others from MIT CTSS/7094 went to the IBM Cambridge Scientific Center
on the 4th flr and did virtual machines, science center wide-area
network that morphs into the internal network (larger than
arpanet/internet from just about the beginning until sometime mid/late
80s, about the time it was forced to convert to SNA/VTAM), invented
GML in 1969 (decade morphs into ISO standard SGML, after another
decade morphs into HTML at CERN), bunch of other stuff.

I was at univ that had gotten a 360/67 for tss/360. The 360/67
(replacing 709/1401) arrives within a year of my taking a 2 credit hr
intro to fortran/computers and I'm hired fulltime responsible for
OS/360 (tss/360 didn't come to production, so ran as 360/65). Later
CSC comes out to install CP67 (3rd install after CSC itself and MIT
Lincoln Labs). Nearly two decades later I'm dealing with some UNIX
source and notice some similarity between UNIX code and that early
CP67 (before I started rewriting a lot of the code) ... possibly
indicating some common heritage back to CTSS. Before I graduate, I'm
hired fulltime into a small group in the Boeing CFO office to help
with the formation of Boeing Computer Services (consolidating all
dataprocessing into an independent business unit, including offering
services to
non-Boeing entities). Then when I graduate, I join CSC, instead of
staying with the CFO.

At IBM, one of my hobbies was enhanced production operating systems
for internal datacenters. There was some friendly rivalry between 4th &
5th flrs ... it wasn't fair to compare total number of MULTICS
installations with total number of IBM customer virtual machine
installations or even number of internal IBM virtual machine
installations, but at one point I could show more of my internal
installations than all MULTICS that ever existed.

Amdahl wins the battle to make ACS 360-compatible. Then ACS/360 is killed
(folklore was concern that it would advance state-of-the-art too fast)
and Amdahl then leaves IBM (before Future System); end ACS/360:
https://people.computing.clemson.edu/~mark/acs_end.html

During Future System (1st half of the 70s), which was going to totally replace 370:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

Internal politics was killing off 370 efforts and lack of new 370 is
credited with giving the clone 370 makers (including Amdahl) their
market foothold. When FS implodes there is mad rush to get stuff back
into 370 product pipelines, including kicking off quick&dirty
3033&3081 efforts in parallel.

Besides hobby of doing enhanced production operating systems for
internal datacenters ... and wandering around internal datacenters
... I spent some amount of time at user group meetings (like SHARE)
and wandering around customers. Director of one of the largest
(customer) financial datacenters liked me to drop in and talk
technology. At one point, the branch manager horribly offended the
customer and in retaliation, they ordered an Amdahl machine (lonely
Amdahl clone 370 in a vast sea of "blue"). Up until then Amdahl had
been selling into univ. & tech/scientific markets, but clone 370s had
yet to break into the IBM true-blue commercial market ... and this
would be the first. I got asked to go spend 6m-12m on site at the
customer (to help obfuscate the reason for the Amdahl order?). I
talked it over with the customer, who said while he would like to have
me there, it would have no effect on the decision, so I declined the
offer. I was then told the branch manager was a good sailing buddy of
the IBM CEO and I could forget a career, promotions, raises.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

other posts mentioning I could forget career, promotions, raises
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025d.html#61 Amdahl Leaves IBM
https://www.garlic.com/~lynn/2025c.html#49 IBM And Amdahl Mainframe
https://www.garlic.com/~lynn/2025c.html#35 IBM Downfall
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s
https://www.garlic.com/~lynn/2025.html#64 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#19 60s Computers
https://www.garlic.com/~lynn/2024f.html#122 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024f.html#23 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024f.html#1 IBM (Empty) Suits
https://www.garlic.com/~lynn/2024e.html#65 Amdahl
https://www.garlic.com/~lynn/2023g.html#42 IBM Koolaid
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023c.html#56 IBM Empty Suits
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#45 IBM 3081 TCM
https://www.garlic.com/~lynn/2022g.html#66 IBM Dress Code
https://www.garlic.com/~lynn/2022g.html#59 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#60 IBM CEO: Only 60% of office workers will ever return full-time
https://www.garlic.com/~lynn/2022e.html#14 IBM "Fast-Track" Bureaucrats
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022d.html#21 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#88 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021k.html#105 IBM Future System
https://www.garlic.com/~lynn/2021j.html#93 IBM 3278
https://www.garlic.com/~lynn/2021j.html#4 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#81 IBM Downturn
https://www.garlic.com/~lynn/2021e.html#66 Amdahl
https://www.garlic.com/~lynn/2021e.html#63 IBM / How To Stuff A Wild Duck
https://www.garlic.com/~lynn/2021e.html#15 IBM Internal Network
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
https://www.garlic.com/~lynn/2021.html#39 IBM Tech
https://www.garlic.com/~lynn/2021.html#8 IBM CEOs
https://www.garlic.com/~lynn/2018e.html#27 Wearing a tie cuts circulation to your brain
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2017b.html#2 IBM 1970s
https://www.garlic.com/~lynn/2016h.html#86 Computer/IBM Career
https://www.garlic.com/~lynn/2016e.html#95 IBM History
https://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM DASD, CKD and FBA

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM DASD, CKD and FBA
Date: 13 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#14 IBM DASD, CKD and FBA
https://www.garlic.com/~lynn/2025e.html#15 IBM DASD, CKD and FBA

co-worker at CSC was responsible for the CP67-based wide-area network
for the science centers
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.

... snip ...

newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

... CP67-based wide-area network referenced by one of the inventors of
GML at the science center
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

... morphs into the IBM internal network, larger than arpanet/internet
from just about the beginning until sometime mid/late 80s when forced
to convert to SNA/VTAM. Technology also used for the corporate
sponsored univ BITNET (& EARN in Europe) ... and the VM/4341s
distributed departmental systems ... until the changeover to large PC and
workstation servers (again in part because of IBM pressure to move to
SNA/VTAM).

then late 80s, senior disk engineer gets a talk scheduled at annual,
internal, world-wide communication group conference, supposedly on
3174 performance. However, the opening was that the communication
group was going to be responsible for the demise of the disk
division. The disk division was seeing drop in disk sales with data
fleeing mainframe datacenters to more distributed computing friendly
platforms. The disk division had come up with a number of solutions,
but they were constantly being vetoed by the communication group (with
their corporate ownership of everything that crossed the datacenter
walls) trying to protect their dumb terminal paradigm. Senior disk
software executive partial countermeasure was investing in
distributed computing startups that would use IBM disks (he would
periodically ask us to drop in on his investments to see if we could
offer any assistance).

The communication group stranglehold on mainframe datacenters wasn't
just disk and a couple years later, IBM has one of the largest losses
in the history of US companies and was being reorganized into the 13
"baby blues" in preparation for breaking up the company ("baby blues"
take-off on the "baby bell" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

20yrs earlier, Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET/EARN posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe TCP/IP and RFC1044

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe TCP/IP and RFC1044
Date: 13 Oct, 2025
Blog: Facebook

Communication group was fighting to block release of mainframe TCP/IP
support .... when they lost, they changed their tactic and said that
since they had corporate ownership of everything that crossed the
datacenter walls, it had to be released through them; what shipped had
aggregate 44kbytes/sec throughput using nearly a whole 3090 processor.
It was
eventually ported to MVS by simulating some VM370 diagnose
instructions.

I then added support for RFC1044 and in some tuning tests at Cray
Research between Cray and 4341, got sustained 4341 channel throughput
using only modest amount of 4341 processor (something like 500 times
improvement in bytes moved per instruction executed). Part of the
difference was that the 8232 was configured as a bridge .... while
RFC1044 supported a mainframe channel-attached TCP/IP router (for
about the same price as the 8232, a channel-attached router could
support a dozen ethernet interfaces, T1&T3, FDDI, and others).

Also con'ed one of the high-speed router vendors into adding support
for RS6000 "SLA" (serial link adapter ... sort of enhanced ESCON,
faster, full-duplex, capable of aggregate 440mbits/sec, they had to
buy the "SLA" chips from IBM) and planning for fiber-channel standard
("FCS"). Part of the original issues was RS6000 SLAs would only talk
to other RS6000s.

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe TCP/IP and RFC1044

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe TCP/IP and RFC1044
Date: 14 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#18 IBM Mainframe TCP/IP and RFC1044

note early 90s, CSG hired silicon valley contractor to implement
TCP/IP directly in VTAM. What he initially demo'ed had TCP much faster
than LU6.2. He was then told that everybody knows that a "proper"
TCP/IP implementation is much slower than LU6.2 and they would only be
paying for a "proper" implementation.

Senior disk engineer gets talk scheduled at annual, internal,
world-wide communication group conference and opens with the statement
that communication group was going to be responsible for the demise of
the disk division ... datacenter stranglehold wasn't just disk
division and couple years later IBM has one of the largest losses in
the history of US companies ... and was being re-orged into the 13
"baby blues" in preperation for breaking up the company ... see lot
more at a post comment in "public" mainframe group:
https://www.garlic.com/~lynn/2025e.html#17 IBM DASD, CKD and FBA

... also OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open
Systems Interconnection standards to become the global protocol for
computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt

Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture(Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."

... snip ...

demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM HASP & JES2 Networking

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM HASP & JES2 Networking
Date: 14 Oct, 2025
Blog: Facebook

recent comment in another post (in this group)
https://www.garlic.com/~lynn/2025e.html#17 IBM DASD, CKD and FBA

co-worker at the science center was responsible for science center
wide-area network that morphs into the corporate internal network and
technology also used for the corporate sponsored univ BITNET (&
EARN in Europe).

When we went to try and announce VNET/RSCS for customers, it was
blocked by the head of POK; this was after the FS implosion
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

and the head of POK was in the process of convincing corporate to kill
the VM370 product, shut down the development group and transfer all
the people to POK for MVS/XA ... and so he was never going to agree to
announcing new VM370-related products (Endicott eventually manages to
acquire the VM370 product mission, but had to recreate a development
group from scratch). This was also after the 23jun1969 unbundling
announcement and charging for software (with the requirement that
software revenue had to cover original development, and ongoing
support and maint). JES2 network code (from HASP that originally
carried "TUCC" in cols 68-71) couldn't meet the revenue requirement
... the standard process was to forecast sales at low, medium, & high
($300/$600/$1200 per month) prices ... and there was no NJE price at
which revenue met the requirement. They then came up with the idea of
announcing JES2
Networking & RSCS/VNET as a combined product (merged expenses and
revenue) ... where the RSCS/VNET revenue (which had acceptable
forecast at $30/month) was able to cover JES2 networking.

RSCS/VNET was a clean layered design and an NJE emulation driver was
easy to do to connect JES2 into the RSCS/VNET network. However, JES2
still had to be restricted to boundary nodes: 1) the original HASP
used spare entries in the 255 pseudo device table, usually 160-180,
while the internal network was approaching 768 nodes (and NJE would
trash traffic where the origin or destination node weren't defined),
and 2) it also somewhat intermixed network & job control info in
header fields, so traffic between two JES2 systems at different
release levels had a habit of crashing the destination MVS; as a
result a body of RSCS/VNET emulated NJE drivers grew up that could
recognize header versions and if necessary reorganize the fields to
keep a destination MVS system from crashing (there is the infamous
case of a new San Jose JES2 system crashing a Hursley MVS system
... blamed on the Hursley RSCS/VNET because they hadn't updated the
Hursley RSCS/VNET NJE driver with the most recent updates to account
for the new San Jose JES2); a hypothetical sketch of that defensive
driver logic is below.
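
A hypothetical sketch of the defensive behavior those RSCS/VNET NJE
emulation drivers provided: drop traffic whose origin or destination
node isn't defined locally, and rewrite newer-format headers into the
layout an older destination release expects. The header layout and
field names here are invented; real NJE headers are different:

# hypothetical sketch of NJE-driver defensive behavior (invented header format)
known_nodes = {"SANJOSE", "HURSLEY", "CAMBRIDG"}   # this node's defined table
                                                   # (real tables were capped by the
                                                   # 255-entry pseudo-device table)

def normalize_header(hdr, dest_version):
    """Rewrite a newer-format header into what the destination release expects."""
    out = dict(hdr)
    if dest_version < hdr["version"]:
        # strip fields the older release doesn't know about, rather than letting
        # them land in the wrong offsets and crash the destination MVS
        out = {k: v for k, v in out.items() if k in ("version", "origin", "dest", "job")}
        out["version"] = dest_version
    return out

def forward(hdr, dest_version):
    if hdr["origin"] not in known_nodes or hdr["dest"] not in known_nodes:
        return None                  # NJE would trash such traffic; drop it instead
    return normalize_header(hdr, dest_version)

print(forward({"version": 4, "origin": "SANJOSE", "dest": "HURSLEY",
               "job": "PAYROLL", "newfield": 42}, dest_version=3))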

There is also the story of trying to set-up offshift use between San
Jose STL (west coast) and Hursley (England) with a double-hop
satellite circuit. Initially it was brought up between two RSCS/VNET
systems and everything worked fine. Then a STL executive (steeped in
MVS), insisted the circuit be brought up between two JES2 systems
... and nothing worked. They then dropped back to the two RSCS/VNET
systems and everything worked. The executive then commented that
RSCS/VNET must be too dumb to realize it wasn't really working
(despite valid traffic flowing fine in both directions).

other trivia: at the univ, I had taken two credit hr intro to
Fortran/Computers and at end of semester was hired to rewrite 1401
MPIO for 360/30. The univ was getting 360/67 for tss/360 and got
360/30 temporarily replacing 1401 until 360/67 was available. Within a
year of taking intro class, 360/67 arrives and I'm hired fulltime
responsible for OS/360 (tss/360 never came to production). Student
Fortran jobs had run in under a second on 709 and well over a minute
w/MFTr9.5. I install HASP, cutting the time in half. I then redo
STAGE2 SYSGEN MFTr11 to carefully place datasets and order PDS members
to optimize arm seek and multi-track search, cutting another 2/3rds to
12.9secs (student fortran never got better than 709 until I install
UofWaterloo WATFOR). Later for MVT18 HASP, I strip out 2780 support
(to reduce real storage requirements) and add-in 2741&TTY terminal
support and an editor that implemented the CMS EDITOR syntax (code
rewritten from scratch since the environments were so different) for
CRJE support.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HASP/JES2, ASP/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
23jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 15 Oct, 2025
Blog: Facebook

AWD had done their own cards for PC/RT (16bit PC/AT bus) ... including
4mbit Token-Ring card. For the RS/6000 (w/microchannel, 32bit bus),
AWD was told they couldn't do their own cards, but had to use the
(communication group heavily performance kneecapped) PS2 cards. It
turns out the PS2 microchannel 16mbit Token-Ring card had lower
throughput than the PC/RT 4mbit Token-Ring card (i.e. joke that PC/RT
server with 4mbit T/R would have higher throughput than RS/6000 server
with 16mbit T/R).

New Almaden Research bldg had been heavily provisioned with CAT wiring
assuming 16mbit token-ring, but found 10mbit ethernet (over CAT
wiring) LAN had lower latency and higher aggregate throughput than
16mbit token-ring LAN. They also found $69 10mbit ethernet cards had
higher throughput than the PS2 $800 16mbit token-ring cards. We were
out giving customer executive presentations on TCP/IP, 10mbit
ethernet, 3-tier architecture, high-performance routers, and
distributed computing (including comparisons with standard IBM
offerings) and taking misinformation barbs in the back from the SNA,
SAA, & token-ring forces. The Dallas E&S center published something
purported to be 16mbit T/R compared to Ethernet ... but the only
(remotely valid) explanation I could give was that they compared to
the early 3mbit ethernet prototype predating the
listen-before-transmit (CSMA/CD) part of the Ethernet protocol
standard.

About the same time, senior disk engineer gets talk scheduled at
annual, internal, world-wide communication group conference,
supposedly on 3174 performance. However, his opening was that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing drop in disk sales with
data fleeing mainframe datacenters to more distributed computing
friendly platforms. The disk division had come up with a number of
solutions, but they were constantly being vetoed by the communication
group (with their corporate ownership of everything that crossed the
datacenter walls) trying to protect their dumb terminal paradigm.

Senior disk software executive partial countermeasure was investing in
distributed computing startups that would use IBM disks (he would
periodically ask us to drop in on his investments to see if we could
offer any assistance). The communication group stranglehold on
mainframe datacenters wasn't just disk and a couple years later, IBM
has one of the largest losses in the history of US companies and was
being reorganized into the 13 "baby blues" in preparation for breaking
up the company ("baby blues" take-off on the "baby bell" breakup
decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 15 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#21 IBM Token-Ring

also OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open
Systems Interconnection standards to become the global protocol for
computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt

Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture(Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."

... snip ...

2nd half 80s, I served on (Chessin's) XTP TAB and there were several
gov agencies participating in XTP ... so took XTP (as "HSP") to (ISO
chartered) ANSI X3S3.3 standards group (transport and
network). Eventually we were told they were only allowed to do
standards for things that conform to the OSI model; XTP didn't because
it 1) supported internetworking (a non-existent layer between 3&4), 2)
bypassed the layer 3/4 interface, going directly to the LAN MAC, and
3) supported the LAN MAC interface, a non-existent layer somewhere in
the middle of layer 3. The joke at the time
was Internet/IETF required two interoperable implementations before
standards progression while ISO didn't even require a standard to be
implementable.

XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 15 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#21 IBM Token-Ring
https://www.garlic.com/~lynn/2025e.html#22 IBM Token-Ring

trivia: 1988, Nick Donofrio approved HA/6000, originally for NYTimes
to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I
rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had DEC
VAXCluster support in the same source base with unix). The S/88 Product
Administrator started taking us around to their customers and also had
me write a section for the corporate continuous availability
document (it gets pulled when both AS400/Rochester and mainframe/POK
complain they couldn't meet requirements). Also previously worked on
original SQL/relational, System/R with Jim Gray and Vera Watson.

Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid92 and 128-system
clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), and we were
told we couldn't work on anything with more than 4-system clusters; we
then leave IBM a few months later.

Had left IBM and was brought in as consultant to a small client/server
startup by two former Oracle employees (whom we had worked with on
RDBMS and who were in the Ellison/Hester meeting), who were there
responsible for something called "commerce server" and wanted to do
payment
transactions on the server. The startup had also invented this
technology called SSL they wanted to use, it is now sometimes called
"electronic commerce". I had responsibility for everything between
webservers and payment networks. Based on procedures, documentation
and software I had to do for electronic commerce, I did a talk on "Why
Internet Wasn't Business Critical Dataprocessing" that (Internet/IETF
standards editor) Postel sponsored at ISI/USC.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
Original SQL/relational, System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe TCP/IP and RFC1044

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe TCP/IP and RFC1044
Date: 15 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#18 IBM Mainframe TCP/IP and RFC1044
https://www.garlic.com/~lynn/2025e.html#19 IBM Mainframe TCP/IP and RFC1044

yes, NSC RFC1044. I got sucked into doing the NSC channel extender
support first for IBM STL. In 1980 ... STL was bursting at the seams
and they were moving 300 people from STL IMS group to offsite bldg
with dataprocessing back to STL. They had tried "remote 3270" and
found human factors totally unacceptable. I then implemented the
channel extender support for NSC and they found no perceptible
difference between offsite and inside STL. An unanticipated
side-effect was it improved system throughput by 10-15%. STL had
spread the 3270 controllers across same channels with 3830 disk
controllers ... the NSC channel extender significantly reduced the
channel busy for the same amount of 3270 activity ... improving disk
and overall system throughput. There was some consideration of then
configuring all STL systems with channel extender (even for 3270s
inside STL).

Then NSC tried to get IBM to release my support, but there were some
people in POK working on serial stuff and they got it vetoed (because
they were worried it might affect justifying the release of their own
serial stuff).

1988, the branch office asks if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes the fibre-channel standard, "FCS" (not First Customer Ship),
initially 1gbit/sec transfer, full-duplex, aggregate 200mbyte/sec
(including some
stuff I had done in 1980). Then POK finally gets their serial stuff
released with ES/9000 as ESCON (when it was already obsolete,
initially 10mbytes/sec, later increased to 17mbytes/sec). Then some
POK engineers become involved with FCS and define a heavy-weight
protocol that drastically cuts throughput (eventually released as
FICON). 2010, a z196 "Peak I/O" benchmark released, getting 2M IOPS
using 104 FICON (20K IOPS/FICON). About the same time a FCS is
announced for E5-2600 server blade claiming over million IOPS (two
such FCS having higher throughput than 104 FICON). Also IBM pubs
recommend that SAPs (system assist processors that actually do I/O) be
kept to 70% CPU (or 1.5M IOPS) and no new CKD DASD has been made for
decades, all being simulated on industry standard fixed-block devices.

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

Opel

From: Lynn Wheeler <lynn@garlic.com>
Subject: Opel
Date: 16 Oct, 2025
Blog: Facebook

related; 1972, Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Early/mid 70s, was IBM's Future System; FS was totally different from
370 and was going to completely replace it. During FS, internal
politics was killing off 370 projects and lack of new 370 is credited
with giving the clone 370 makers (including Amdahl), their market
foothold.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

20yrs after Learson's failure, IBM has one of the largest losses in
the history of US companies. IBM was being reorganized into the 13
"baby blues" in preparation for breaking up the company (a take-off on
the "baby bells" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist

some more from token-ring posts yesterday in (public) mainframe group:
https://www.garlic.com/~lynn/2025e.html#23 IBM Token-Ring
https://www.garlic.com/~lynn/2025e.html#22 IBM Token-Ring
https://www.garlic.com/~lynn/2025e.html#21 IBM Token-Ring

A senior disk engineer gets a talk scheduled at the annual, internal,
world-wide communication group conference, supposedly on 3174
performance. However, his opening was that the communication group was
going to be responsible for the demise of the disk division. The disk
division was seeing a drop in disk sales with data fleeing mainframe
datacenters to more distributed computing friendly platforms. The disk
division had come up with a number of solutions, but they were
constantly being vetoed by the communication group (with their
corporate ownership of everything that crossed the datacenter walls)
trying to protect their dumb terminal paradigm.

The senior disk software executive's partial countermeasure was
investing in distributed computing startups that would use IBM disks
(he would periodically ask us to drop in on his investments to see if
we could offer any assistance). The communication group stranglehold
on mainframe datacenters wasn't just disk, and a couple years later
IBM has one of the largest losses in the history of US companies.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension

--
virtualization experience starting Jan1968, online at home since Mar1970

Opel

From: Lynn Wheeler <lynn@garlic.com>
Subject: Opel
Date: 17 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#25 Opel

1972, Learson tried (and failed) to block bureaucrats, careerists, and
MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of
management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

20yrs after Learson's failure, IBM has one of the largest losses in
the history of US companies. IBM was being reorganized into the 13
"baby blues" in preparation for breaking up the company (a take-off on
the "baby bells" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

trivia: AMEX & KKR were in competition for the private equity LBO
take-over of RJR.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
and KKR wins. KKR runs into some troubles and hires away the AMEX
president to help. Then the IBM board hires away the former AMEX
president to try and save IBM from the breakup.

Other trivia: the year that IBM has one of the largest losses in the
history of US companies, AMEX spins off its mainframe datacenters and
financial transaction outsourcing business in the largest IPO up until
that time (as First Data). Then 15yrs later, KKR does a private equity
LBO (in the largest LBO up until that time) of FDC (before selling it
off to Fiserv)

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity

--
virtualization experience starting Jan1968, online at home since Mar1970

Opel

From: Lynn Wheeler <lynn@garlic.com>
Subject: Opel
Date: 17 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#25 Opel
https://www.garlic.com/~lynn/2025e.html#26 Opel

disclaimer: turn of the century I'm FDC Chief Scientist. One of the
FDC datacenters is handling credit card outsourcing for half of all
credit card accounts in the US. It was 40+ max configured mainframes
(@$30M, aggregate >$1.2B), none older than 18 months, constant rolling
upgrades. That number of mainframes was necessary to handle account
settlement in the overnight batch window. They had hired an (EU)
performance consultant to look at the 450K-statement COBOL (credit
card processing) application that runs on all the systems.

In the 70s, one of the CSC co-workers had developed an APL-based
analytical system model ... which was made available on (the online
sales&marketing) HONE (when I joined IBM, one of my hobbies was
enhanced production operating systems for internal datacenters, and
HONE was one of my 1st and long time customers) as the Performance
Predictor (branch people enter customer configuration and workload
information and then can ask what-if questions about the effect of
changing configuration and/or workloads). The consolidated US HONE
single-system-image also used a modified version of the Performance
Predictor to make system (logon) load balancing decisions. The
consultant had acquired a descendant of the Performance Predictor
(during IBM's troubles in the early 90s when lots of stuff was being
spun off). He managed to identify a 7% improvement (of the >$1.2B). In
parallel, I'm using some other CSC performance technology from the 70s
and identify a 14% improvement (of the >$1.2B), for an aggregate 21%
improvement.
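
For illustration only (the actual APL-based Performance Predictor was
far more elaborate, and none of the names below are from it), a
"what-if" model of this sort can be as simple as an analytic queueing
approximation: feed in configuration (CPU capacity) and workload
(arrival rate, service demand) and ask how response time changes when
either is varied:

  # minimal what-if sketch (illustration only, not the Performance Predictor)
  def response_time(arrival_rate, service_secs, cpu_speedup=1.0):
      # M/M/1-style estimate: faster CPU shrinks service time,
      # response time blows up as utilization approaches 1
      service = service_secs / cpu_speedup
      util = arrival_rate * service
      return float("inf") if util >= 1.0 else service / (1.0 - util)

  base = response_time(arrival_rate=8.0, service_secs=0.10)   # 0.5 sec
  what_if_cpu = response_time(8.0, 0.10, cpu_speedup=1.5)     # 0.2 sec
  what_if_load = response_time(10.0, 0.10)                    # saturated
  print(base, what_if_cpu, what_if_load)

The same kind of model, fed with current system load, is also the
natural basis for the logon load-balancing use mentioned above.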

Mar/Apr '05 eserver magazine article (gone 404, at wayback machine),
some info somewhat garbled
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

posts mentioning Performance Predictor and FDC 450k cobol statement app
https://www.garlic.com/~lynn/2025c.html#19 APL and HONE
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2014b.html#83 CPU time
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Germany

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Germany
Date: 18 Oct, 2025
Blog: Facebook

When I graduated and joined the IBM Cambridge Science Center, one of
my hobbies was enhanced production operating systems for internal
datacenters, and the (online branch office sales&marketing support) US
HONE systems were one of the first (and long time) customers. Then I
got one of my overseas business trips, to both Paris (for the 1st
non-US HONE install) and to Boeblingen (which put me up in a small
business traveler's hotel in a residential district; hotels charged
four times the telco tariff, like $60).

During FS in the early 70s, internal politics were killing 370 efforts
(and the lack of new 370s is credited with giving the clone 370 makers
their market foothold). After Future System implodes, there is a mad
rush to get stuff back into the 370 product pipelines (including
quick&dirty 3033&3081 efforts)
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

About the same time, Endicott cons me into helping with the 138/148
ECPS microcode assist and I was also con'ed into helping with a 125-II
five-CPU implementation (the 115 has a nine-position microprocessor
memory bus, all the microprocessors the same, including the one
running 370, just different microcode loads; the 125 is identical
except the microprocessor running 370 microcode is 50% faster).
Endicott then complains that the 5-CPU 125 would overlap the
throughput of 148/ECPS ... and at the escalation meeting I had to
argue both sides of the table (but the 125 5-CPU gets shut down).

Later in 70s I had transferred to SJR on the west coast and get to
wander around IBM (and non-IBM) datacenters in silicon valley,
including disk bldg14/engineering and bldg15/product test across the
street. They were running 7x24, pre-scheduled, stand-alone testing and
mentioned that they had recently tried MVS, but it had 15min MTBF (in
that environment) requiring manual re-ipl. I offer to rewrite I/O
system to make it bullet proof and never fail, allowing any amount of
on-demand testing, greatly improving productivity. Bldg15 then got 1st
engineering 3033 (1st outside POK processor engineering) and since I/O
testing only used a percent or two of CPU, we scrounge up a 3830 and a
3330 string for our own, private online service. At the time, air
bearing simulation (for thin-film head design) was only getting a
couple turn arounds a month on the SJR 370/195. We set it up on the
bldg15 3033 (slightly less than half the 195 MIPS) and they could get
several turn arounds a day. Thin-film heads were used first in 3370FBA
and then in 3380CKD
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

trivia: the 3380 was already transitioning to fixed-block, which can
be seen in the records/track formulas where record size has to be
rounded up to a multiple of a fixed "cell" size.
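
A rough sketch of what that rounding means for capacity (the cell
size, per-record overhead, and cells-per-track below are made-up
placeholders, not actual 3380 values; the point is just that record
size is charged in whole cells):

  # fixed-"cell" rounding in a records/track calculation (placeholder numbers)
  import math

  CELL = 32            # hypothetical cell size in bytes
  TRACK_CELLS = 1500   # hypothetical usable cells per track
  OVERHEAD = 15        # hypothetical per-record overhead, in cells

  def records_per_track(keylen, datalen):
      # round each record UP to a whole number of cells, then fit into the track
      cells = OVERHEAD + math.ceil((keylen + datalen) / CELL)
      return TRACK_CELLS // cells

  print(records_per_track(0, 4096))   # e.g. keyless 4K records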

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
cp/67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
125 5-cpu project
https://www.garlic.com/~lynn/submain.html#bounce
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Thin Film Disk Head

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Thin Film Disk Head
Date: 19 Oct, 2025
Blog: Facebook

2nd half of the 70s, after transferring to SJR on the west coast, I
worked with Jim Gray and Vera Watson on the original SQL/relational,
System/R (done on VM370); we were able to do tech transfer ("under the
radar" while the corporation was pre-occupied with "EAGLE") to
Endicott for SQL/DS. Then when "EAGLE" implodes, there was a request
for how fast System/R could be ported to MVS ... which was eventually
released as DB2, originally for decision-support *only*.

I also got to wander around IBM (and non-IBM) datacenters in silicon
valley, including DISK bldg14 (engineering) and bldg15 (product test)
across the street. They were running pre-scheduled, 7x24, stand-alone
testing and had mentioned recently trying MVS, but it had 15min MTBF
(requiring manual re-ipl) in that environment. I offer to redo I/O
subsystem to make it bullet proof and never fail, allowing any amount
of on-demand testing, greatly improving productivity. Bldg15 then gets
1st engineering 3033 outside POK processor engineering ... and since
testing only took percent or two of CPU, we scrounge up 3830
controller and 3330 string to setup our own private online service.
Air bearing simulation (for thin film heads) was getting a couple turn
arounds a month on the SJR MVT 370/195. We set it up on the bldg15
3033 and it was able to get as many turn arounds a day as they wanted.

Then bldg15 also gets an engineering 4341 in 1978 and somehow a branch
office hears about it, and in Jan1979 I'm con'ed into doing a 4341
benchmark for a national lab that was looking at getting 70 for a
compute farm (leading edge of the coming cluster supercomputing
tsunami).

first thin film head was 3370
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

then used for 3380; the original 3380 had 20 track spacings between
each data track, then the spacing was cut in half for double the
capacity, then cut again for triple the capacity (3380K). The "father
of RISC" then talks me into helping with a "wide" disk head design,
reading/writing 16 closely spaced data tracks in parallel (plus two
servo tracks, one on each side of the 16-data-track grouping). The
problem was that the data rate would have been 50mbytes/sec at a time
when mainframe channels were still 3mbytes/sec. However, 40mbyte/sec
disk arrays were becoming common and the Cray channel had been
standardized as HIPPI (100mbyte/sec)
https://en.wikipedia.org/wiki/HIPPI
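
The arithmetic behind that mismatch, using the figures above:

  # why the "wide" head couldn't hang off existing mainframe channels
  parallel_tracks = 16
  per_track_mbytes = 3.0      # roughly the single-track streaming rate of the era
  wide_head_rate = parallel_tracks * per_track_mbytes   # ~48-50 mbytes/sec
  channel_rate = 3.0          # mainframe channels still 3mbytes/sec
  print(wide_head_rate / channel_rate)   # ~16x what one channel could carry
  print(wide_head_rate <= 100)           # True: HIPPI (100mbytes/sec) could handle it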

1988, IBM branch asks if I could help LLNL (national lab) standardize
some serial stuff they were working with, which quickly becomes fibre
channel standard ("FCS", initially 1gbit, full-duplex, got RS/6000
cards capable of 200mbytes/sec aggregate for use with 64-port FCS
switch). In 1990s, some serial stuff that POK had been working with
for at least the previous decade is released as ESCON (when it is
already obsolete, 10mbytes/sec, upgraded to 17mbytes/sec). Then some
POK engineers become involved with FCS and define heavy weight
protocol that significantly reduces ("native") throughput, which ships
as "FICON".

The latest public benchmark I've seen was z196 "Peak I/O" getting 2M
IOPS using 104 FICON (20K IOPS/FICON). About the same time an FCS is
announced for E5-2600 blades claiming over a million IOPS (two such
FCS having higher throughput than 104 FICON). Also IBM pubs
recommended that SAPs (system assist processors that do actual I/O) be
held to 70% CPU (or around 1.5M IOPS), and no CKD DASD has been made
for decades, all being simulated on industry standard fixed-block
devices.
https://en.wikipedia.org/wiki/Fibre_Channel

posts mentioning working with Jim Gray and Vera Watson on original SQL/relational, System/R
https://www.garlic.com/~lynn/submain.html#systemr
posts mentioning getting to work in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Germany

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Germany
Date: 20 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#28 IBM Germany

After the 125 5-CPU effort got shut down, I was asked to help with a
16-CPU 370 and we con the 3033 processor engineers into helping with
it in their spare time (a lot more interesting than remapping 168
logic to 20% faster chips). Everybody thought it was great until
somebody tells the head of POK that it could be decades until POK's
favorite son operating system ("MVS") had (effective) 16-CPU support
(POK doesn't ship a 16-CPU system until after the turn of the
century). At the time, MVS documents had 2-CPU support only getting
1.2-1.5 times the throughput of 1-CPU (because of the heavy weight
multiprocessor support). The head of POK then invites some of us to
never visit POK again and instructs the 3033 processor engineers to
keep heads down and no distractions.

Contributing was that the head of POK was in the process of convincing
corporate to kill the VM370 product, shut down the development group,
and transfer all the people to POK for MVS/XA. Endicott eventually
manages to save the VM370 product mission (for the mid-range), but has
to recreate a development group from scratch.

SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3274/3278

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3274/3278
Date: 20 Oct, 2025
Blog: Facebook

The 3278 moved a lot of electronics back into the (shared) 3274
controller (reducing 3278 manufacturing cost), but driving up protocol
chatter and latency. 3272/3277 had .086sec hardware response.
3274/3278 had .3-.5sec hardware response (depending on the amount of
data transferred). Later, the 3277 IBM/PC emulation board had 4-5
times the throughput of the 3278 emulation board. Letters to the 3278
product administrator got replies that the 3278 wasn't meant for
interactive computing, but data entry (i.e. electronic keypunch).

Back when the 3278 was introduced, there were published studies
showing that .25sec response improved productivity. I had several
systems (after joining IBM one of my hobbies was enhanced production
operating systems for internal datacenters) that had .11sec
interactive system response ... with the 3277's .086sec, users saw
.196sec (meeting the .25sec requirement). It wasn't possible with the
3278.
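
The arithmetic: what the user sees is system response plus terminal
hardware response:

  # user-perceived response = system response + terminal hardware response
  system_resp = 0.11
  print(system_resp + 0.086)   # 0.196 sec with 3272/3277, under the .25sec target
  print(system_resp + 0.3)     # 0.41 sec with 3274/3278 at its *best*, already over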

trivia: One of my 1st (and long time) internal customers was the
branch office online sales&marketing support HONE systems, 1st CP67L
... then CSC/VM and SJR/VM. Got one of the early 3274s in bldg15 ...
and it was frequently hanging up, requiring re-impl. I had modified
the missing interrupt handler to deal with an early engineering 3033
channel director that would hang and required re-impl ... discovering
that if I quickly executed CLRCH for all six channel addresses, it
would automagically re-impl. Then discovered something similar for the
3274: doing HDV/CLRIO in a tight loop for all the 3274 subchannel
addresses, it would (also) re-impl.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
getting to play disk engineer in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

posts mentioning using CLRCH and/or HDV/CLRIO to force re-impl
https://www.garlic.com/~lynn/2025d.html#58 IBM DASD, CKD, FBA
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2011o.html#23 3270 archaeology
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/97.html#20 Why Mainframes?

posts mentioning comparing 3272/3277 & 3274/3278 response
https://www.garlic.com/~lynn/2025d.html#102 Rapid Response
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015g.html#58 [Poll] Computing favorities
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers   Personal
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012d.html#19 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2012.html#13 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009q.html#72 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009q.html#53 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009e.html#19 Architectural Diversity
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

--
virtualization experience starting Jan1968, online at home since Mar1970

What Is A Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: What Is A Mainframe
Date: 21 Oct, 2025
Blog: Facebook

Early numbers are actual industry benchmarks (number of program
iterations compared to the industry standard MIPS/BIPS reference
platform); later numbers are derived from IBM pubs giving percent
change since the previous generation:

z900, 16 processors 2.5BIPS (156MIPS/core), Dec2000
z990, 32 cores, 9BIPS, (281MIPS/core), 2003
z9, 54 cores, 18BIPS (333MIPS/core), July2005
z10, 64 cores, 30BIPS (469MIPS/core), Feb2008
z196, 80 cores, 50BIPS (625MIPS/core), Jul2010
EC12, 101 cores, 75BIPS (743MIPS/core), Aug2012
z13, 140 cores, 100BIPS (710MIPS/core), Jan2015
z14, 170 cores, 150BIPS (862MIPS/core), Aug2017
z15, 190 cores, 190BIPS (1000MIPS/core), Sep2019
z16, 200 cores, 222BIPS (1111MIPS/core), Sep2022
z17, 208 cores, 260BIPS (1250MIPS/core), Jun2025
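
The per-core figures are just the aggregate BIPS divided by the core
count (rounded), e.g.:

  # MIPS/core = aggregate BIPS * 1000 / cores
  print(50 * 1000 / 80)      # 625 MIPS/core  (z196)
  print(260 * 1000 / 208)    # 1250 MIPS/core (z17)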

In 1988, the IBM branch office asks if I could help LLNL (national
lab) standardize some serial stuff they were working with, which
quickly becomes fibre-channel standard, "FCS" (not First Customer
Ship), initially 1gbit/sec transfer, full-duplex, aggregate
200mbyte/sec (including some stuff I had done in 1980).

Then POK gets some of their serial stuff released with ES/9000 as
ESCON (when it was already obsolete, initially 10mbytes/sec, later
increased to 17mbytes/sec). Then some POK engineers become involved
with FCS and define a heavy-weight protocol that drastically cuts
throughput (eventually released as FICON). 2010, a z196 "Peak I/O"
benchmark released, getting 2M IOPS using 104 FICON (20K
IOPS/FICON). About the same time an FCS is announced for E5-2600
server blades claiming over a million IOPS (two such FCS having higher
throughput than 104 FICON). Also IBM pubs recommend that SAPs (system
assist processors that actually do I/O) be kept to 70% CPU (or 1.5M
IOPS), and no new CKD DASD has been made for decades, all being
simulated on industry standard fixed-block devices.

Note: 2010 E5-2600 server blade (16 cores, 31BIPS/core) benchmarked at
500BIPS (ten times max configured Z196). At the time, commonly used in
cloud megadatacenters (each having half million or more server blades)

trivia: in the wake of the Future System implosion in the mid-70s,
there is a mad rush to get stuff back into the 370 product pipelines
and I get asked to help with a 16-CPU 370 effort; we con the 3033
processor engineers (3033 started out remapping 168 logic to 20%
faster chips) into working on it in their spare time (a lot more
interesting than the 168 logic remapping). Everybody thought it was
great until somebody tells the head of POK (IBM high-end 370) that it
could be decades before POK's favorite son operating system ("MVS")
had ("effective") 16-CPU support (MVS docs had 2-CPU throughput only
1.2-1.5 times the throughput of a single CPU because of its
high-overhead multiprocessor support; POK doesn't ship a 16-CPU system
until after the turn of the century). The head of POK then invites
some of us to never visit POK again and directs the 3033 processor
engineers to keep heads down and no distractions.

Also 1988, Nick Donofrio approves HA/6000, originally for the NYTimes
to move their newspaper system (ATEX) off VAXCluster to RS/6000. I
rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Ingres, Sybase, Informix, which had DEC
VAXCluster support in the same source base with UNIX). The IBM S/88
Product Administrator was also taking us around to their customers and
also had me write a section for the corporate continuous availability
strategy document (it gets pulled when both Rochester/AS400 and
POK/mainframe complain).

Early Jan92, in a meeting with the Oracle CEO, AWD executive Hester
tells Ellison that we would have 16-system clusters by mid92 and
128-system clusters by ye92. Mid-Jan92, we convince FSD to bid HA/CMP
for gov. supercomputers. Late-Jan92, cluster scale-up is transferred
for announce as IBM Supercomputer (for technical/scientific *only*)
and we were told we couldn't work on clusters with more than four
systems (we leave IBM a few months later).

Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to the
industry MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

The former executive we had reported to goes over to head up
Somerset/AIM (Apple, IBM, Motorola), doing a single-chip RISC with
M88k bus/cache (enabling clusters of shared-memory multiprocessors).

i86 chip makers then do a hardware layer that translates i86
instructions into RISC micro-ops for actual execution (largely
negating the throughput difference between RISC and i86); 1999
industry benchmark:

IBM PowerPC 440: 1,000MIPS
Pentium3: 2,054MIPS (twice PowerPC 440)

Fibre Channel Standard ("FCS") &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance
DASD, CKD, FBA, multi-track posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

What Is A Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: What Is A Mainframe
Date: 21 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#32 What Is A Mainframe

During the FS period, internal politics was killing off 370 efforts
(claims are that the lack of new 370s during FS gave the 370 clone
makers their market foothold).
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

When FS implodes there was a mad rush to get stuff back into the 370
product pipelines, including kicking off the quick&dirty 3033&3081
efforts in parallel. The original 3081D (2-CPU) aggregate MIPS was
less than the Amdahl 1-CPU system. Then IBM doubles the processors'
cache size, making the 3081K 2-CPU about the same MIPS as the Amdahl
1-CPU (modulo 3081K 2-CPU MVS throughput being only .6-.75 the
throughput of the Amdahl 1-CPU, because of MVS's significant
multiprocessor overhead). Then, because ACP/TPF didn't have SMP,
tightly-coupled, multiprocessor support, there was concern that the
whole ACP/TPF market would move to Amdahl. The 3081's 2nd CPU was in
the middle of the box, and the concern was that just removing it (for
a 1-CPU 3083) would make it top heavy and prone to falling over (so
the box had to be rewired to move CPU0 to the middle).

trivia: before MS/DOS:
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle Computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle Computer, there was CP/M (name a take-off on IBM CP/67)
https://en.wikipedia.org/wiki/CP/M
and before developing CP/M, Kildall worked on IBM CP/67 at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

... more trivia:

I took a two-credit-hr intro to fortran/computers class and at the end
of the semester was hired to rewrite 1401 MPIO for the 360/30. The
univ. was getting a 360/67 for tss/360, replacing 709/1401, with a
360/30 temporarily replacing the 1401 pending availability of the
360/67 ... and within a few weeks I had a 2000-card 360 assembler
program (the univ. shut down the datacenter on weekends and I had it
dedicated, although 48hrs w/o sleep made Monday classes hard). Within
a year of taking the intro class I was hired fulltime, responsible for
OS/360 (the 360/67 running as a 360/65, tss/360 never came to
production).

Jan 1968, CSC came out to install CP/67 (precursor to VM370), the 3rd
installation after CSC itself and MIT Lincoln Lab. I mostly got to
play with it during my dedicated weekend time, initially working on
pathlengths for running OS/360 in a virtual machine. The OS/360
benchmark ran 322secs on the real machine, initially 856secs in a
virtual machine (CP67 CPU 534secs); after a couple months I had
reduced that CP67 CPU from 534secs to 113secs. I then start rewriting
the dispatcher, the (dynamic adaptive resource manager/default fair
share policy) scheduler, and paging, adding ordered seek queuing (from
FIFO) and multi-page transfer channel programs (from FIFO, optimized
for transfers/revolution, getting the 2301 paging drum from 70-80 4k
transfers/sec to a channel transfer peak of 270). Six months after the
univ's initial install, CSC was giving a one-week class in LA. I
arrive on Sunday afternoon and am asked to teach the class; it turns
out the people who were going to teach it had resigned the Friday
before to join one of the 60s CP67 commercial online spin-offs.
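
The arithmetic on that benchmark: elapsed time in the virtual machine
is (roughly) the guest's own time plus CP67 CPU overhead:

  # OS/360 benchmark under CP/67 (figures from the paragraph above)
  guest_secs = 322                      # same benchmark on the bare machine
  initial_vm_secs = 856                 # first run in a virtual machine
  print(initial_vm_secs - guest_secs)   # 534 secs of CP67 CPU overhead
  tuned_cp67_cpu = 113                  # after a couple months of pathlength work
  print(guest_secs + tuned_cp67_cpu)    # roughly 435 secs for the tuned run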

Originally CP67 was delivered with 1052&2741 terminal support and
automagic terminal type identification (switching the terminal type
port scanner with the SAD CCW). The univ. had some ASCII TTYs and I
add TTY support integrated with the automagic terminal type. I then
wanted to have a single terminal dial-in phone number ("hunt group"),
but it didn't work since IBM had taken a short cut and hard-wired the
line speed for each port. This kicks off a univ clone controller
effort: build a channel interface board for an Interdata/3 programmed
to emulate the IBM controller, with the addition of auto-baud terminal
line support. This is upgraded to an Interdata/4 for the channel
interface and clusters of Interdata/3s for the ports. Interdata (and
then Perkin-Elmer) sells it as a clone controller and four of us are
written up as responsible for (some part of) the clone controller
business
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
CP67l, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

Linux Clusters

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Linux Clusters
Date: 21 Oct, 2025
Blog: Facebook

Basically cluster supercomputers and cloud clusters are similar, with
large numbers of linux servers all tied together. A recent article
notes the top 500 supercomputers are all linux clusters. A large cloud
operation can have a score or more of megadatacenters around the
world, each megadatacenter with a half million or more linux server
blades and each server blade with ten times the processing of a max
configured mainframe (and enormous automation, a megadatacenter run
with 70-80 staff). A decade ago there were articles about being able
to use a credit card at a cloud operation to automagically spin up
blades for a 3hr supercomputer ranking in the top 40 in the world.

trivia: 1988, the IBM branch asks if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes fibre channel standard ("FCS", initially 1gbit, full-duplex,
got RS/6000 cards capable of 200mbytes/sec aggregate for use with a
64-port FCS switch). In the 1990s, some serial stuff that POK had been
working with for at least the previous decade is released as ESCON
(when it is already obsolete, 10mbytes/sec, upgraded to
17mbytes/sec). Then some POK engineers become involved with FCS and
define a heavy weight protocol that significantly reduces ("native")
throughput, which ships as "FICON". The latest public benchmark I've
seen was z196 "Peak I/O" getting 2M IOPS using 104 FICON (20K
IOPS/FICON). About the same time an FCS is announced for E5-2600
server blades claiming over a million IOPS (two such FCS having higher
throughput than 104 FICON). Also IBM pubs recommended that SAPs
(system assist processors that do actual I/O) be held to 70% CPU (or
around 1.5M IOPS), and no CKD DASD has been made for decades, all
being simulated on industry standard fixed-block devices.

The max configured z196 benchmarked at 50BIPS and went for $30M. An
E5-2600 server blade benchmarked at 500BIPS (ten times z196, same
industry benchmark, number of program iterations compared to the
industry MIPS/BIPS reference platform) and had an IBM base list price
of $1815 (shortly thereafter industry pubs had blade server component
makers shipping half their products directly to cloud operations that
assemble their own servers at 1/3rd the cost of brand name servers
... and IBM sells off its server business).
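
Price/performance on those two numbers:

  # $/BIPS: max configured z196 vs a single E5-2600 server blade
  print(30_000_000 / 50)   # $600,000 per BIPS for the z196
  print(1815 / 500)        # ~$3.63 per BIPS for the blade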

FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

Linux Clusters

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Linux Clusters
Date: 24 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#34 Linux Clusters

Also 1988, Nick Donofrio approved HA/6000, originally for the NYTimes
to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I
rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix, which had DEC
VAXCluster support in the same source base with unix). The S/88
Product Administrator started taking us around to their customers and
also had me write a section for the corporate continuous availability
document (it gets pulled when both AS400/Rochester and mainframe/POK
complain they couldn't meet the requirements). Also, I previously
worked on the original SQL/relational, System/R, with Jim Gray and
Vera Watson.

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive
Hester tells Ellison that we would have 16-system clusters by mid92
and 128-system clusters by ye92. Mid-Jan1992, we convinced FSD to bid
HA/CMP for gov. supercomputers. Late-Jan1992, HA/CMP is transferred
for announce as IBM Supercomputer (for technical/scientific *ONLY*),
and we were told we couldn't work on clusters with more than
4-systems; we then leave IBM a few months later.

Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to
MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

Having left IBM, I was brought in as a consultant to a small
client/server startup by two former Oracle employees (whom I had
worked with on RDBMS and who were in the Ellison/Hester meeting); they
were there responsible for something called "commerce server" and they
wanted to do payment transactions on the server. The startup had also
invented this technology called SSL they wanted to use; the result is
now sometimes called "electronic commerce". I had responsibility for
everything between webservers and payment networks. Based on the
procedures, documentation and software I had to do for electronic
commerce, I did a talk on "Why Internet Wasn't Business Critical
Dataprocessing" that (Internet/IETF standards editor) Postel sponsored
at ISI/USC.

The executive we reported to had gone over to head up Somerset/AIM
(Apple, IBM, Motorola) and does the single-chip Power/PC ... using the
Motorola 88K bus&cache ... enabling multiprocessor configurations
... and beefing up clusters with multiprocessor systems.

i86 chip makers then do a hardware layer that translates i86
instructions into RISC micro-ops for actual execution (largely
negating the throughput difference between RISC and i86); 1999
industry benchmark:

IBM PowerPC 440: 1,000MIPS
Pentium3: 2,054MIPS (twice PowerPC 440)

Early benchmark numbers are actual industry benchmarks; later numbers
are derived from IBM pubs giving percent change since the previous
generation:

z900, 16 processors 2.5BIPS (156MIPS/core), Dec2000
z990, 32 cores, 9BIPS, (281MIPS/core), 2003
z9, 54 cores, 18BIPS (333MIPS/core), July2005
z10, 64 cores, 30BIPS (469MIPS/core), Feb2008
z196, 80 cores, 50BIPS (625MIPS/core), Jul2010
EC12, 101 cores, 75BIPS (743MIPS/core), Aug2012
z13, 140 cores, 100BIPS (710MIPS/core), Jan2015
z14, 170 cores, 150BIPS (862MIPS/core), Aug2017
z15, 190 cores, 190BIPS (1000MIPS/core), Sep2019
z16, 200 cores, 222BIPS (1111MIPS/core), Sep2022
z17, 208 cores, 260BIPS (1250MIPS/core), Jun2025

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
original SQL/relational posts
https://www.garlic.com/~lynn/submain.html#systemr
electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
SMP. tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

posts mentioning being asked Jan1979 to do 4341 benchmark for national lab
looking at getting 70 for compute farm (sort of the leading edge of the coming
cluster supercomputing tsunami):
https://www.garlic.com/~lynn/2025b.html#67 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#48 VAX MIPS whatever they were, indirection in old architectures
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2021j.html#94 IBM 3278
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018d.html#42 Mainframes and Supercomputers, From the Beginning Till Today
https://www.garlic.com/~lynn/2017j.html#88 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017i.html#62 64 bit addressing into the future
https://www.garlic.com/~lynn/2017c.html#87 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#44 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016e.html#116 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#65 PL/I advertising
https://www.garlic.com/~lynn/2016d.html#64 PL/I advertising
https://www.garlic.com/~lynn/2015g.html#98 PROFS & GML
https://www.garlic.com/~lynn/2015f.html#35 Moving to the Cloud
https://www.garlic.com/~lynn/2015.html#78 Is there an Inventory of the Inalled Mainframe Systems Worldwide
https://www.garlic.com/~lynn/2014j.html#37 History--computer performance comparison chart
https://www.garlic.com/~lynn/2014g.html#83 Costs of core
https://www.garlic.com/~lynn/2013i.html#2 IBM commitment to academia
https://www.garlic.com/~lynn/2012j.html#2 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012d.html#41 Layer 8: NASA unplugs last mainframe

--
virtualization experience starting Jan1968, online at home since Mar1970

Linux Clusters

From: Lynn Wheeler <lynn@garlic.com>
Subject: Linux Clusters
Date: 24 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#34 Linux Clusters
https://www.garlic.com/~lynn/2025e.html#35 Linux Clusters

In the 70s, with the implosion of Future System (internal politics had
been killing off 370 efforts, and the lack of new 370s during FS is
credited with giving clone 370 makers their market foothold), there
was a mad rush to get stuff back into the 370 product pipelines,
including kicking off the quick&dirty 3033&3081 efforts in parallel.
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

I get asked to help with a 16-CPU 370, and we con the 3033 processor
engineers into helping in their spare time (a lot more interesting
than remapping 168 logic to 20% faster chips). Everybody thought it
was great until somebody tells the head of POK that it could be
decades before POK's favorite son operating system ("MVS") had
(effective) 16-CPU support (MVS docs at the time saying 2-CPU systems
only had 1.2-1.5 times the throughput of a single CPU because of heavy
SMP overhead, aka MVS on a 2-CPU 3081K, at the same aggregate MIPS as
an Amdahl single processor, only had .6-.75 the throughput); POK
doesn't ship a 16-CPU system until after the turn of the century. Then
the head of POK invites some of us to never visit POK again and
directs the 3033 processor engineers to keep heads down and no
distractions.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP. tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

Linux Clusters

From: Lynn Wheeler <lynn@garlic.com>
Subject: Linux Clusters
Date: 24 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#34 Linux Clusters
https://www.garlic.com/~lynn/2025e.html#35 Linux Clusters
https://www.garlic.com/~lynn/2025e.html#36 Linux Clusters

related; 1972, Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Early/mid 70s, was IBM's Future System; FS was totally different from
370 and was going to completely replace it. During FS, internal
politics was killing off 370 projects and the lack of new 370s is
credited with giving the clone 370 makers (including Amdahl) their
market foothold.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

Late 80s, a senior disk engineer gets a talk scheduled at the annual,
internal, world-wide communication group conference, supposedly on
3174 performance. However, his opening was that the communication
group was going to be responsible for the demise of the disk
division. The disk division was seeing a drop in disk sales with data
fleeing mainframe datacenters to more distributed computing friendly
platforms. The disk division had come up with a number of solutions,
but they were constantly being vetoed by the communication group (with
their corporate ownership of everything that crossed the datacenter
walls) trying to protect their dumb terminal paradigm. A senior disk
software executive's partial countermeasure was investing in
distributed computing startups that would use IBM disks (he would
periodically ask us to drop in on his investments to see if we could
offer any assistance).

The communication group stranglehold on mainframe datacenters wasn't
just disk, and a couple years later (20yrs after Learson's failure),
IBM has one of the largest losses in the history of US companies, and
was being reorganized into the 13 "baby blues" in preparation for
breaking up the company (a take-off on the "baby bells" breakup a
decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
pension posts
https://www.garlic.com/~lynn/submisc.html#pension

--
virtualization experience starting Jan1968, online at home since Mar1970

Amazon Explains How Its AWS Outage Took Down the Web

From: Lynn Wheeler <lynn@garlic.com>
Subject: Amazon Explains How Its AWS Outage Took Down the Web
Date: 25 Oct, 2025
Blog: Facebook

Amazon Explains How Its AWS Outage Took Down the Web
https://www.wired.com/story/amazon-explains-how-its-aws-outage-took-down-the-web/

1988, Nick Donofrio approved HA/6000, originally for the NYTimes to
move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I
rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix, which had DEC
VAXCluster support in the same source base with unix). The S/88
Product Administrator started taking us around to their customers and
also had me write a section for the corporate continuous availability
document (it gets pulled when both AS400/Rochester and mainframe/POK
complain they couldn't meet the requirements). Also, I previously
worked on the original SQL/relational, System/R, with Jim Gray and
Vera Watson.

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive
Hester tells Ellison that we would have 16-system clusters by mid92
and 128-system clusters by ye92. Mid-Jan1992, we convinced FSD to bid
HA/CMP for gov. supercomputers. Late-Jan1992, HA/CMP is transferred
for announce as IBM Supercomputer (for technical/scientific *ONLY*),
and we were told we couldn't work on clusters with more than
4-systems; we then leave IBM a few months later.

Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to
MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

Having left IBM, I was brought in as a consultant to a small
client/server startup by two former Oracle employees (whom I had
worked with on RDBMS and who were in the Ellison/Hester meeting); they
were there responsible for something called "commerce server" and they
wanted to do payment transactions on the server. The startup had also
invented this technology called SSL they wanted to use; the result is
now sometimes called "electronic commerce". I had responsibility for
everything between webservers and payment networks. Based on the
procedures, documentation and software I had to do for electronic
commerce, I did a talk on "Why Internet Wasn't Business Critical
Dataprocessing" (including that it took 3-10 times the original
application effort to turn something into a "service") that
(Internet/IETF standards editor) Postel sponsored at ISI/USC.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

posts mentioning talk "Why Internet Wasn't Business Critical
Dataprocessing":
https://www.garlic.com/~lynn/2025e.html#35 Linux Clusters
https://www.garlic.com/~lynn/2025e.html#23 IBM Token-Ring
https://www.garlic.com/~lynn/2025d.html#111 ARPANET, NSFNET, Internet
https://www.garlic.com/~lynn/2025d.html#42 IBM OS/2 & M'soft
https://www.garlic.com/~lynn/2025b.html#97 Open Networking with OSI
https://www.garlic.com/~lynn/2025b.html#41 AIM, Apple, IBM, Motorola
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024g.html#80 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#71 Netscape Ecommerce
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System

--
virtualization experience starting Jan1968, online at home since Mar1970

Amazon Explains How Its AWS Outage Took Down the Web

From: Lynn Wheeler <lynn@garlic.com>
Subject: Amazon Explains How Its AWS Outage Took Down the Web
Date: 25 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#38 Amazon Explains How Its AWS Outage Took Down the Web

Summary of the Amazon DynamoDB Service Disruption in the Northern Virginia (US-EAST-1) Region
https://aws.amazon.com/message/101925/

While doing electronic commerce, I was working with some contractors
that were also doing work for GOOGLE in its early infancy. They
initially were doing DNS updates for load balancing ... but that
resulted in all sorts of DNS issues. They then modified the Google
perimeter routers to support the load balancing function.
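
A minimal sketch of the difference (purely illustrative, none of this
is Google's actual implementation): DNS-based balancing hands out
rotated address lists that resolvers cache for the record TTL, so load
shifts lag until cached answers expire; balancing at the perimeter
router/front-end keeps one published address and assigns each new
connection immediately:

  # illustrative only -- not Google's implementation
  from itertools import cycle

  SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical backend addresses
  rotation = cycle(SERVERS)

  def dns_answer(ttl=300):
      # round-robin A record; resolvers may cache it for ttl seconds,
      # so adding/removing servers only takes effect as caches expire
      return {"A": next(rotation), "ttl": ttl}

  def front_end_pick(conn_counts):
      # router/front-end style: least-connections pick on every new connection
      return min(conn_counts, key=conn_counts.get)

  print(dns_answer())
  print(front_end_pick({"10.0.0.1": 12, "10.0.0.2": 7, "10.0.0.3": 9}))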

other trivia: In the early 80s, I got the HSDT project, T1 and faster
computer links (and arguments with the communication group; in the
60s, IBM had 2701s that supported T1 links, but in the 70s, with the
transition to SNA/VTAM and other issues, links were capped at 56kbits;
FSD did get S1 Zirpel T1 cards for gov. customers that were having
their 2701s failing). I was also working with the NSF director and was
supposed to get $20M to interconnect the NSF supercomputer
centers. Then congress cuts the budget, some other things happen, and
then an RFP was released (in part based on what we already had
running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning
bid). As regional networks connect in, NSFnet becomes the NSFNET
backbone, precursor to the modern internet.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

Posts mentioning Series/1 Zirpel T1 cards:
https://www.garlic.com/~lynn/2025d.html#47 IBM HSDT and SNA/VTAM
https://www.garlic.com/~lynn/2025c.html#70 Series/1 PU4/PU5 Support
https://www.garlic.com/~lynn/2025b.html#120 HSDT, SNA, VTAM, NCP
https://www.garlic.com/~lynn/2025b.html#114 ROLM, HSDT
https://www.garlic.com/~lynn/2025b.html#96 OSI/GOSIP and TCP/IP
https://www.garlic.com/~lynn/2025b.html#93 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#43 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#40 IBM APPN
https://www.garlic.com/~lynn/2025b.html#18 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#15 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#99 Terminals
https://www.garlic.com/~lynn/2024g.html#79 Early Email
https://www.garlic.com/~lynn/2024e.html#34 VMNETMAP
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023c.html#35 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2018b.html#9 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2016h.html#26 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016d.html#27 Old IBM Mainframe Systems
https://www.garlic.com/~lynn/2015e.html#83 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2014f.html#24 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013j.html#60 Mainframe vs Server - The Debate Continues
https://www.garlic.com/~lynn/2013j.html#43 8080 BASIC
https://www.garlic.com/~lynn/2013j.html#37 8080 BASIC
https://www.garlic.com/~lynn/2013g.html#71 DEC and the Bell System?
https://www.garlic.com/~lynn/2011g.html#75 We list every company in the world that has a mainframe computer
https://www.garlic.com/~lynn/2010e.html#83 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2009j.html#4 IBM's Revenge on Sun
https://www.garlic.com/~lynn/2008l.html#63 Early commercial Internet activities (Re: IBM-MAIN longevity)
https://www.garlic.com/~lynn/2008e.html#45 1975 movie "Three Days of the Condor" tech stuff
https://www.garlic.com/~lynn/2007f.html#80 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006n.html#25 sorting was: The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2004l.html#7 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004g.html#37 network history
https://www.garlic.com/~lynn/2003m.html#28 SR 15,15
https://www.garlic.com/~lynn/2003d.html#13 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Boca and IBM/PCs

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Boca and IBM/PCs
Date: 25 Oct, 2025
Blog: Facebook

30yrs of PC market ("IBM" PCs increasingly dominated by "clones")
https://arstechnica.com/features/2005/12/total-share/

I had been posting to the internal PC forums, quantity one prices for
clones advertised in Sunday SJMN ... totally different from Boca
projections. Head of POK mainframe is moved to Boca to head up
PCs. They contract with Dataquest (since bought by Gartner) for study
of the PC market future, including a video taped roundtable of Silicon
Valley experts. I'd known the Dataquest person for a decade, and was
asked to be a Silicon Valley "expert" (they promised to
garble my identity so Boca wouldn't recognize me as an IBM employee);
after clearing it with my IBM management, I agreed to
participate.

trivia: Late 70s and early 80s, I had been blamed for online computer
conferencing on the internal network (precursor to social media,
larger than arpanet/internet from just about the beginning until
sometime mid/late 80s --- about the time it was forced to convert to
SNA/VTAM). Only about 300 actually participated but claims 25,000 were
reading. When the corporate executive committee was told about it,
folklore is 5of6 wanted to fire me. Results included officially
sanctioned software & forums and a researcher paid to sit in back of
my office for nine months studying how I communicated: face-to-face
& telephone, all incoming & outgoing email, and instant message
logs (results were IBM reports, papers and conference talks, books,
and a Stanford PhD joint with language and computer AI).

AWD (workstation division) had done their own cards for PC/RT (16bit
PC/AT bus) ... including 4mbit Token-Ring card. For the RS/6000
(w/microchannel, 32bit bus), AWD was told they couldn't do their own
cards, but had to use the (communication group heavily performance
kneecapped) PS2 cards. It turns out the PS2 microchannel 16mbit
Token-Ring card had lower throughput than the PC/RT 4mbit Token-Ring
card (i.e. joke that PC/RT server with 4mbit T/R would have higher
throughput than RS/6000 server with 16mbit T/R).

New Almaden Research bldg had been heavily provisioned with CAT wiring
assuming 16mbit token-ring, but found 10mbit ethernet (over CAT
wiring) LAN had lower latency and higher aggregate throughput than
16mbit token-ring LAN. They also found $69 10mbit ethernet cards had
higher throughput than the PS2 microchannel $800 16mbit token-ring
cards. We were out giving customer executive presentations on TCP/IP,
10mbit ethernet, 3-tier architecture, high-performance routers,
distributed computing presentations (including comparisons with
standard IBM offerings) and taking misinformation barbs in the back by
the SNA, SAA, & token-ring forces. The Dallas E&S center
published something purporting to compare 16mbit T/R to Ethernet
... but the only (remotely valid) explanation I could give was that they
had compared to an early 3mbit ethernet prototype predating the
listen-before-transmit (CSMA/CD) part of the Ethernet protocol standard.

About the same time, senior disk engineer gets talk scheduled at
annual, internal, world-wide communication group conference,
supposedly on 3174 performance. However, his opening was that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing drop in disk sales with
data fleeing mainframe datacenters to more distributed computing
friendly platforms. The disk division had come up with a number of
solutions, but they were constantly being vetoed by the communication
group (with their corporate ownership of everything that crossed the
datacenter walls) trying to protect their dumb terminal paradigm.

The senior disk software executive's partial countermeasure was
investing in distributed computing startups that would use IBM disks
(he would periodically ask us to drop in on his investments to see if
we could offer any assistance). The communication group stranglehold
on mainframe datacenters wasn't just disk and a couple years later,
IBM has one of the largest losses in the history of US companies and
was being reorganized into the 13 "baby blues" in preparation for
breaking up the company ("baby blues" take-off on the "baby bell"
breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM communication group predicted to be responsible for demise of disk
division
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/85

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/85
Date: 26 Oct, 2025
Blog: Facebook

was blamed for online computer conferencing (precursor to social
media) in the late 70s and early 80s on the internal network (larger
than arpanet/internet from just about the beginning until sometime
mid/late 80s when it was forced to convert to SNA/VTAM). It really took
off the spring of 1981 when I distributed trip report to Jim Gray at
Tandem (only about 300 actually participated but claims 25,000 were
reading). When the corporate executive committee was told, there was
something of an uproar (folklore 5of6 wanted to fire me), with some task
forces that resulted in official online conferencing software and
officially sanctioned moderated forums ... also a researcher was paid
to study how I communicated, sat in back of my office for 9months,
taking notes on my conversations (also got copies of all
incoming/outgoing email and logs of all instant messages) resulting
in research reports, papers, conference talks, books and a Stanford PhD
joint with language and computer AI. One of the observations:


Date: 04/23/81 09:57:42
To: wheeler

your ramblings concerning the corp(se?) showed up in my reader
yesterday. like all good net people, i passed them along to 3 other
people. like rabbits interesting things seem to multiply on the
net. many of us here in pok experience the sort of feelings your mail
seems so burdened by: the company, from our point of view, is out of
control. i think the word will reach higher only when the almighty $$$
impact starts to hit. but maybe it never will. its hard to imagine one
stuffed company president saying to another (our) stuffed company
president i think i'll buy from those inovative freaks down the
street. '(i am not defending the mess that surrounds us, just trying
to understand why only some of us seem to see it).

bob tomasulo and dave anderson, the two poeple responsible for the
model 91 and the (incredible but killed) hawk project, just left pok
for the new stc computer company. management reaction: when dave told
them he was thinking of leaving they said 'ok. 'one word. 'ok. ' they
tried to keep bob by telling him he shouldn't go (the reward system in
pok could be a subject of long correspondence). when he left, the
management position was 'he wasn't doing anything anyway. '

in some sense true. but we haven't built an interesting high-speed
machine in 10 years. look at the 85/165/168/3033/trout. all the same
machine with treaks here and there. and the hordes continue to sweep
in with faster and faster machines. true, endicott plans to bring the
low/middle into the current high-end arena, but then where is the
high-end product development?

... snip ... top of post, old email index

trivia: One of my hobbies was enhanced production operating systems
for internal datacenters including disk bldg14/engineering and
bldg15/product test, across the street. Bldg15 got early engineering
processors for I/O testing and got an (Endicott) engineering 4341 in
1978. Branch office heard about it and in Jan1979 conned me into doing
benchmarks for a national lab looking at getting 70 for a compute farm
(sort of the leading edge of the coming cluster supercomputing
tsunami).

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal networking posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

IBM System/360 Model 85 Functional Characteristics ©1967 (2.6 MB)
https://bitsavers.org/pdf/ibm/360/functional_characteristics/A22-6916-0_System_360_Model_85_Functional_Characteristics_1967.pdf

other
https://en.wikipedia.org/wiki/IBM_System/360_Model_85
https://en.wikipedia.org/wiki/Solid_Logic_Technology
https://en.wikipedia.org/wiki/Cache_(computing)
https://en.wikipedia.org/wiki/Microcode
https://en.wikipedia.org/wiki/IBM_System/370_Model_165
https://en.wikipedia.org/wiki/Floating-point_arithmetic
https://en.wikipedia.org/wiki/IBM_System/360

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/85

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/85
Date: 26 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#41 IBM 360/85

Amdahl wins battle to make ACS 360-compatible. Then ACS/360 is killed
(folklore was concern that it would advance state-of-the-art too fast)
and Amdahl then leaves IBM (before Future System); end ACS/360:
https://people.computing.clemson.edu/~mark/acs_end.html

Besides hobby of doing enhanced production operating systems for
internal datacenters ... and wandering around internal datacenters
... I spent some amount of time at user group meetings (like SHARE)
and wandering around customers. Director of one of the largest
(customer) financial datacenters liked me to drop in and talk
technology. At one point, the branch manager horribly offended the
customer and in retaliation, they ordered an Amdahl machine (lonely
Amdahl clone 370 in a vast sea of "blue"). Up until then Amdahl had
been selling into univ. & tech/scientific markets, but clone 370s had
yet to break into the IBM true-blue commercial market ... and this
would be the first. I got asked to go spend 6m-12m on site at the
customer (to help obfuscate the reason for the Amdahl order?). I
talked it over with the customer, who said while he would like to have
me there it would have no effect on the decision, so I declined the
offer. I was then told the branch manager was good sailing buddy of
IBM CEO and I could forget a career, promotions, raises.

Early 70s, there was Future System project (and internal politics was
killing off 370 efforts, claim is that lack of new 370s during FS
contributed to giving clone 370 makers their market foothold)
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

When FS implodes there was mad rush to get stuff back into the 370
product pipelines, including kicking off quick&dirty 3033&3081 efforts
in parallel and I got asked to help with a 16-CPU 370, and we con the
3033 processor engineers into helping in their spare time (a lot more
interesting than remapping 168 logic to 20% faster chips). Everybody
thought it was great until somebody tells the head of POK that it
could be decades before POK favorite son operating system ("MVS") had
(effective) 16-CPU support (MVS docs at the time saying 2-CPU systems
only had 1.2-1.5 times throughput of single CPU because of heavy SMP
overhead). Then head of POK invites some of us to never visit POK
again and directs 3033 processor engineers heads down and no
distractions.

Original 3081D (2-CPU) aggregate MIPS was less than Amdahl 1-CPU
system. Then IBM doubles the processors' cache size, making 3081K
2-CPU about the same MIPS as Amdahl 1-CPU. MVS 2-CPU 3081K at same
aggregate MIPS as Amdahl single processor, only had .6-.75 the
throughput (because of MVS multiprocessor overhead). POK doesn't ship
16-CPU system until after turn of century.
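
side note on the arithmetic: the .6-.75 figure follows directly from
the 1.2-1.5 number quoted from the MVS docs (a minimal Python sketch,
using only the figures already given above):

  # MVS docs: a 2-CPU system delivered only 1.2-1.5 times the throughput
  # of a single CPU ... i.e. each CPU nets 0.6-0.75 of a uniprocessor.
  # So a 3081K (2-CPU) at the same *aggregate* MIPS as an Amdahl 1-CPU
  # system delivers only 0.6-0.75 of the Amdahl throughput under MVS.
  two_cpu_multiplier = (1.2, 1.5)          # MVS 2-CPU vs 1-CPU
  per_cpu_efficiency = [m / 2 for m in two_cpu_multiplier]
  print(per_cpu_efficiency)                # [0.6, 0.75]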

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some ACS/360 and Amdahl clone 370s posts
https://www.garlic.com/~lynn/2025e.html#16 CTSS, Multics, Unix, CSC
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025d.html#61 Amdahl Leaves IBM
https://www.garlic.com/~lynn/2025c.html#49 IBM And Amdahl Mainframe
https://www.garlic.com/~lynn/2024f.html#122 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#23 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2022g.html#59 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022b.html#88 Computer BUNCH
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#105 IBM Future System
https://www.garlic.com/~lynn/2021e.html#66 Amdahl
https://www.garlic.com/~lynn/2021.html#39 IBM Tech

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/85

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/85
Date: 26 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#41 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#42 IBM 360/85

1970, HAWK, 30MIPS processor


Date: 05/12/81 13:46:19
To: wheeler

RE: Competiton for resources in IBM

Before Bob Tomasulo left to go to work for STC, he told me many
stories about IBM.  Around 1970, there was a project called HAWK. It
was to be a 30 MIPS uniprocessor.  Bob was finishing up on the 360/91,
and wanted to go work on HAWK as his next project. He was told by
management that "there were already too many good people working over
there, and they really couldn't afford to let another good person work
on it"!  They forced him to work on another project that was more
"deserving" of his talents.  Bob never forgave them for that.

I guess IBM feels that resources are to be spread thinly, and that no
single project can have lots of talent on it.  Any project that has
lots of talent will be raided sooner or later.

... snip ... top of post, old email index

Amdahl had won battle to make ACS 360-compatible. Then ACS/360 is
killed (folklore was concern that it would advance state-of-the-art
too fast) and Amdahl then leaves IBM (before Future System); end
ACS/360:
https://people.computing.clemson.edu/~mark/acs_end.html

Shortly after joining IBM, I was asked to help with adding
multithreading (see patent refs in ACS web page) to 370/195. 195 had
out-of-order execution, but no branch prediction (or speculative
execution) and conditional branches drained pipeline so most code only
ran at half rated speed. Adding another i-stream, simulating a 2nd CPU,
each running at half speed, could keep all the execution units running
at full-speed (modulo MVT&MVS 2-CPU multiprocessor only got 1.2-1.5
times the throughput of a single CPU, because of the multiprocessor
implementation). Then the decision was made to add virtual memory to all
370s and it was decided that it wasn't practical to add virtual memory
to 370/195 (and the multithreading was killed).

Early 70s, there was Future System project (and internal politics was
killing off new 370 efforts, claim is that lack of new 370s during FS
contributed to giving clone 370 makers their market foothold)
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

HAWK may have been killed like Amdahl's ACS/360, or because of virtual
memory decision (like 370/195 multithreading), or because of Future
System (don't know).

Besides hobby of doing enhanced production operating systems for
internal datacenters ... and wandering around internal datacenters
... I spent some amount of time at user group meetings (like SHARE)
and wandering around customers. Director of one of the largest
(customer) financial datacenters liked me to drop in and talk
technology. At one point, the branch manager horribly offended the
customer and in retaliation, they ordered an Amdahl machine (lonely
Amdahl clone 370 in a vast sea of "blue"). Up until then Amdahl had
been selling into univ. & tech/scientific markets, but clone 370s
had yet to break into the IBM true-blue commercial market ... and this
would be the first. I got asked to go spend 6m-12m on site at the
customer (to help obfuscate the reason for the Amdahl order?). I
talked it over with the customer, who said while he would like to have
me there it would have no effect on the decision, so I declined the
offer. I was then told the branch manager was good sailing buddy of
IBM CEO and I could forget a career, promotions, raises.

When FS implodes there was mad rush to get stuff back into the 370
product pipelines, including kicking off quick&dirty 3033&3081
efforts in parallel. I got asked to help with a 16-CPU 370, and we con
the 3033 processor engineers into helping in their spare time (a lot
more interesting than remapping 168 logic to 20% faster
chips). Everybody thought it was great until somebody tells the head
of POK that it could be decades before POK favorite son operating
system ("MVS") had (effective) 16-CPU support (MVS docs at the time
saying 2-CPU systems only had 1.2-1.5 times throughput of single CPU
because of heavy SMP overhead). Then head of POK invites some of us to
never visit POK again and directs 3033 processor engineers heads down
and no distractions.

Original 3081D (2-CPU) aggregate MIPS was less than Amdahl 1-CPU
system. Then IBM doubles the processors' cache size, making 3081K
2-CPU about the same MIPS as Amdahl 1-CPU. MVS 2-CPU 3081K at same
aggregate MIPS as Amdahl single processor, only had .6-.75 the
throughput (because of MVS multiprocessor overhead). POK doesn't ship
16-CPU system until after turn of century.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some posts mentioning 370/195 multithreading and virtual memory
https://www.garlic.com/~lynn/2025c.html#112 IBM Virtual Memory (360/67 and 370)
https://www.garlic.com/~lynn/2025c.html#79 IBM System/360
https://www.garlic.com/~lynn/2025b.html#35 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2025.html#32 IBM 3090
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#24 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024e.html#115 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024d.html#101 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#66 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#20 IBM Millicode
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2023f.html#89 Vintage IBM 709
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#12 Computer Server Market
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#60 370/195
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195
https://www.garlic.com/~lynn/2019d.html#62 IBM 370/195
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2017g.html#39 360/95
https://www.garlic.com/~lynn/2017c.html#26 Multitasking, together with OS operations
https://www.garlic.com/~lynn/2017.html#90 The ICL 2900
https://www.garlic.com/~lynn/2017.html#3 Is multiprocessing better then multithreading?
https://www.garlic.com/~lynn/2016h.html#7 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015c.html#26 OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainframe - Forbes
https://www.garlic.com/~lynn/2012d.html#73 Execution Velocity

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM SQL/Relational

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM SQL/Relational
Date: 27 Oct, 2025
Blog: Facebook

2nd half of 70s, transferring to SJR on west coast, I worked with Jim
Gray and Vera Watson on original SQL/relational, System/R (done on
VM370; Backus' office was just down the hall and Codd's office was on
the flr above). Jim refs:
https://jimgray.azurewebsites.net/
https://en.wikipedia.org/wiki/Jim_Gray_(computer_scientist)
and Vera Watson
https://en.wikipedia.org/wiki/Vera_Watson
... System/R
https://en.wikipedia.org/wiki/IBM_System_R

We were able to tech transfer ("under the radar" while corporation was
pre-occupied with "EAGLE") to Endicott for SQL/DS. Then when "EAGLE"
implodes, there was request for how fast could System/R be ported to
MVS ... which was eventually released as DB2, originally for
decision-support *only*. I also got to wander around IBM (and non-IBM)
datacenters in silicon valley, including DISK bldg14 (engineering) and
bldg15 (product test) across the street. They were running
pre-scheduled, 7x24, stand-alone testing and had mentioned recently
trying MVS, but it had 15min MTBF (requiring manual re-ipl) in that
environment. I offer to redo I/O system to make it bullet proof and
never fail, allowing any amount of on-demand testing, greatly
improving productivity. Bldg15 then gets 1st engineering 3033 outside
POK processor engineering ... and since testing only took a percent or
two of CPU, we scrounge up a 3830 controller and 3330 string to set up
our own private online service. Then bldg15 also gets engineering 4341
in 1978 and somehow the branch hears about it and in Jan1979 I'm con'ed
into doing a 4341 benchmark for a national lab that was looking at
getting 70 for compute farm (leading edge of the coming cluster
supercomputing tsunami).

In 1988, Nick Donofrio approved HA/6000, originally for NYTimes to
transfer their newspaper system (ATEX) from DEC VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (oracle, sybase, ingres, informix) that had VAXCluster in same
source base with Unix. I do a distributed lock manager supporting
VAXCluster semantics (and especially Oracle and Ingres have a lot of
input on significantly improving scale-up performance). S/88 Product
Administrator started taking us around to their customers and also had
me write a section for the corporate continuous availability document
(it gets pulled when both AS400/Rochester and mainframe/POK complain
they couldn't meet requirements).

Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid92 and 128-system
clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), and we were
told we couldn't work on clusters with more than 4-systems. We then
leave IBM a few months later.

Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to MIPS
reference platform):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
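
(the cluster figures above are just the per-system number scaled by the
system count, assuming near-linear cluster scale-up; a minimal Python
sketch of that arithmetic, using only the numbers quoted above):

  # 1993 benchmark: number of program iterations relative to a MIPS
  # reference platform (not actual instruction counts)
  rs6000_990_mips = 126
  for systems in (16, 128):
      total_mips = rs6000_990_mips * systems    # assumes ~linear scale-up
      print(systems, "systems:", total_mips / 1000.0, "BIPS")
  # -> 16 systems ~2 BIPS, 128 systems ~16 BIPS
  # vs ES/9000-982: 8 CPUs * 51 MIPS/CPU = 408 MIPS aggregate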

The executive we reported to had gone over to head up Somerset/AIM
(Apple, IBM, Motorola) and does the single chip Power/PC ... using
Motorola 88K bus&cache ... enabling multiprocessor configurations
... further beefing up clusters with multiple processors/system.

other info in this recent thread
https://www.garlic.com/~lynn/2025e.html#41 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#42 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#43 IBM 360/85

In-between System/R (plus getting to play disk engineer in
bldgs14&15) and HA/CMP ... early 80s, I got HSDT project, T1 and
faster computer links (both terrestrial and satellite) and battles
with CSG (60s, IBM had 2701 supporting T1, 70s with SNA/VTAM and
issues, links were capped at 56kbit ... and I had to mostly resort to
non-IBM hardware). Internal IBM network required link
encryptors and I hated what I had to pay for T1 encryptors and
faster ones were hard to find. I became involved with an effort whose
objective was to handle at least 6mbytes/sec and cost less than $100
to build. The corporate crypto group first claimed it seriously
weakened the crypto standard and couldn't be used. It took me 3months to
figure out how to explain what was happening; rather than weaker, it
was much stronger ... it was a hollow victory. I was then told that only
one organization in the world was allowed to use such crypto: I could
make as many as I wanted, but they all had to be sent to them. That was
when I realized there were three kinds of crypto: 1) the kind they
don't care about, 2) the kind you can't do, 3) the kind you can only
do for them.

Also was working with NSF director and was supposed to get $20M to
interconnect the NSF Supercomputer centers. Then congress cuts the
budget, some other things happened and eventually there was an RFP
released (in part based on what we already had running). NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid).
As regional networks connect in, NSFnet becomes the NSFNET backbone,
precursor to modern internet.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning three kinds of crypto
https://www.garlic.com/~lynn/2024e.html#125 ARPANET, Internet, Internal Network and DES
https://www.garlic.com/~lynn/2024e.html#28 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#75 Joe Biden Kicked Off the Encryption Wars
https://www.garlic.com/~lynn/2024b.html#36 Internet
https://www.garlic.com/~lynn/2023f.html#79 Vintage Mainframe XT/370
https://www.garlic.com/~lynn/2023b.html#57 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2022b.html#109 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021e.html#75 WEB Security
https://www.garlic.com/~lynn/2021e.html#58 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#17 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#70 IBM/BMI/MIB
https://www.garlic.com/~lynn/2021b.html#57 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2019b.html#23 Online Computer Conferencing
https://www.garlic.com/~lynn/2017c.html#69 ComputerWorld Says: Cobol plays major role in U.S. government breaches
https://www.garlic.com/~lynn/2016c.html#57 Institutional Memory and Two-factor Authentication
https://www.garlic.com/~lynn/2015c.html#85 On a lighter note, even the Holograms are demonstrating
https://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Think

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Think
Date: 28 Oct, 2025
Blog: Facebook

"think" related; 1972, Learson tried (and failed) to block
bureaucrats, careerists, and MBAs from destroying Watson
culture/legacy, pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Early/mid 70s, was IBM's Future System; FS was totally different from
370 and was going to completely replace it. During FS, internal
politics was killing off 370 projects and lack of new 370 is credited
with giving the clone 370 makers (included Amdahl), their market
foothold.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and
*MAKE NO WAVES* under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat ... But because of the
heavy investment of face by the top management, F/S took years to
kill, although its wrong headedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous," recalls a former top executive

... snip ...

20yrs after Learson's failure, IBM has one of the largest losses in
the history of US companies. IBM was being reorganized into the 13
"baby blues" in preparation for breaking up the company (take off on
"baby bells" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

other contribution, late 80s, senior disk engineer gets talk scheduled
at annual, internal, world-wide communication group conference,
supposedly on 3174 performance. However, his opening was that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing drop in disk sales with
data fleeing mainframe datacenters to more distributed computing
friendly platforms. The disk division had come up with a number of
solutions, but they were constantly being vetoed by the communication
group (with their corporate ownership of everything that crossed the
datacenter walls) trying to protect their dumb terminal paradigm.

The senior disk software executive's partial countermeasure was
investing in distributed computing startups that would use IBM disks
(he would periodically ask us to drop in on his investments to see if
we could offer any assistance). The communication group stranglehold
on mainframe datacenters wasn't just disk and a couple years later,
IBM has one of the largest losses in the history of US companies and
was being reorganized into the 13 "baby blues" in preparation for
breaking up the company ("baby blues" take-off on the "baby bell"
breakup decade earlier).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/85

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/85
Date: 29 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#41 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#42 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#43 IBM 360/85

trivia: Early last decade, I was asked to track down the decision to add
virtual memory to all 370s; found a staff member to the executive making
the decision. Basically MVT storage management was so bad that region
sizes had to be specified four times larger than used; as a result a
1mbyte 370/165 could only run four regions concurrently
... insufficient to keep the system busy and justified. Going to a
16mbyte virtual address space would allow the number of regions to be
increased by a factor of four (modulo 4bit storage protect keys capping
it at 15) with little or no paging (sort of like running MVT in a CP67
16mbyte virtual machine) ... aka VS2/SVS. Ludlow was doing the initial
implementation on a 360/67 in POK and I would stop in and see him off
shift. Basically a little bit of code to build the virtual memory
tables and very simple paging code (since they figured that the paging
rate would never exceed 5). The biggest piece of code was EXCP/SVC0, same
problem as CP67 ... applications would invoke SVC0 with channel
programs with virtual addresses and the channel required real
addresses. He borrows CCWTRANS from CP67 to craft into EXCP/SVC0.

Then 360/165 engineers started complaining that if they had to
implement the full 370 virtual memory architecture, it would result in
having to slip virtual memory announce and ship by six months. The
decision was then made to reduce 370 virtual memory architecture to
the 165 "subset" (and other processors and software supporting full
architecture had to retrench to the 165 subset). Then as systems
increased in size/capacity, it was necessary to increase the number of
concurrent regions again, to more than 15 (the cap of the 4bit storage
protect key); with VS2/MVS, each application and subsystem was given its
own 16mbyte virtual address space (creating a huge number of additional
problems, effectively requiring 370/xa architecture).
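
(the region arithmetic above, restated as a minimal Python sketch; only
the numbers from the paragraph are used, and the 15-region cap is the
4-bit storage-protect-key limit with key 0 reserved for the system):

  # MVT on a 1mbyte 370/165: regions over-specified ~4x, so only about
  # four regions fit concurrently. VS2/SVS runs the same region
  # management in a 16mbyte virtual address space, buying roughly a
  # factor of four more regions ... until the protect-key cap bites.
  regions_mvt_1mb  = 4
  increase_factor  = 4                  # what 16mbyte virtual bought
  protect_key_cap  = 15                 # 4-bit keys, key 0 = system
  regions_svs = min(regions_mvt_1mb * increase_factor, protect_key_cap)
  print(regions_svs)                    # 15 ... hence VS2/MVS later gives
                                        # each region its own 16mbyte space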

POK had huge problem initially with customers not converting from SVS
to MVS as planned, and then again later with customers not moving from
MVS to MVS/XA.

contributing ... in the mid-70s, I started saying that systems were
getting faster much faster than disks were getting faster. In early 80s,
wrote a tome that between then and the original 360 announce, relative
disk system throughput had declined by an order of magnitude (disks
got 3-5 times faster while systems got 40-50 times faster). A disk
executive took exception and assigned the division performance group
to refute my claims. After a couple weeks, they basically came back
and said I had slightly understated the problem. They then respun the
analysis for a SHARE presentation about how to better configure disks
for improved system throughput (16Aug1984, SHARE 63, B874).
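
(the order-of-magnitude claim is just the ratio of the two growth
factors; a small Python sketch using only the 3-5x and 40-50x figures
above):

  # relative disk throughput decline between 360 announce and early 80s
  disk_speedup   = (3, 5)       # disks got 3-5 times faster
  system_speedup = (40, 50)     # systems got 40-50 times faster
  relative = [d / s for d, s in zip(disk_speedup, system_speedup)]
  print(relative)               # ~0.075-0.10, i.e. disk throughput
                                # relative to the system fell ~10x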

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning MVT->SVS->MVS->MVS/XA
https://www.garlic.com/~lynn/2025d.html#108 SASOS and virtually tagged caches
https://www.garlic.com/~lynn/2025d.html#22 370 Virtual Memory
https://www.garlic.com/~lynn/2025c.html#108 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#79 IBM System/360
https://www.garlic.com/~lynn/2025c.html#59 Why I've Dropped In
https://www.garlic.com/~lynn/2025b.html#33 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#83 Continuations
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#108 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#107 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#65 MVT/SVS/MVS/MVS.XA
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#93 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2019b.html#94 MVS Boney Fingers
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2018.html#92 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2017b.html#8 BSAM vs QSAM
https://www.garlic.com/~lynn/2015h.html#116 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2013.html#22 Is Microsoft becoming folklore?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/85

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/85
Date: 29 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#41 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#42 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#43 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#46 IBM 360/85

MVS trivia: because of the heavily inherited OS/360 pointer-passing
APIs, they map an 8mbyte image of the MVS kernel into every 16mbyte
address space (so kernel calls can take the calling address pointer and
access calling parameters directly, leaving 8mbytes for
applications). However, each subsystem was also moved to its own private
16mbyte address space, and to pass a pointer to a parameter list they
created a 1mbyte common segment area image in every virtual address
space (leaving 7mbytes for applications). Then, because common area
requirements were somewhat proportional to the number of subsystems and
concurrent applications, the common segment area explodes into the
common system area (still "CSA") and by 3033 was pushing 5-6 mbytes,
leaving 2-3 mbytes (but threatening to become 8mbytes, leaving zero for
programs ... aka MVS theoretically could have an unlimited number of
virtual address spaces, but as the number increased, the CSA requirement
would expand to take over everything besides the kernel image). 370/xa
access registers were to address this problem: a call to a subsystem
would switch the caller's address space to secondary and load the
subsystem address space as primary ... allowing the subsystem to address
the caller's virtual address space (in secondary, w/o using CSA). When
the subsystem returns, the caller's address space pointer in secondary
would be moved back to primary.

In the 3033 time-frame, with the threat of CSA expanding to take over
everything ... a subset of 370/xa access registers was retrofitted to
3033, as dual-address space mode.
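
(a small Python sketch of the squeeze described above; the CSA sizes
stepped through are just illustrative points up to the 5-6mbyte 3033
figure and the theoretical 8mbyte limit):

  # 16mbyte MVS virtual address space: 8mbyte kernel image mapped into
  # every address space, plus a growing common (segment->system) area
  ADDRESS_SPACE_MB = 16
  KERNEL_IMAGE_MB  = 8
  for csa_mb in (1, 3, 5, 6, 8):
      left = ADDRESS_SPACE_MB - KERNEL_IMAGE_MB - csa_mb
      print(f"CSA {csa_mb}mbyte -> {left}mbyte left for the application")
  # at 8mbytes of CSA nothing is left ... hence dual-address space on
  # 3033 and full access registers in 370/xa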

Another issue in MVS was its I/O supervisor ... its path length was
increasing ... pushing 10K instructions from interrupt to restarting
next queued I/O program. I've commented before that when I transferred
to SJR, I got to wander around IBM (and non-IBM) datacenters in
silicon valley, including disk bldg14 (engineering) and bldg15
(product test) across the street. At the time they were running
prescheduled, 7x24, stand-alone testing and mentioned that they had recently
tried MVS, but it had 15min mean-time-between-failures (in that
environment), requiring manual re-ipl. I offered to rewrite I/O
supervisor making it bullet proof and never fail, allowing any amount
of concurrent testing, greatly improving productivity. I then write a
(internal) research report on all the stuff done for I/O integrity and
happen to mention MVS 15min MTBF, bringing the wrath of the MVS
organization on my head. I also had shortened the channel redrive (aka
from interrupt to restarting the next queued I/O) to 500 instructions
(compared to MVS's nearly 10k). Turns out CP/67's I/O
interrupt paths had a "CHFREE" macro placed at the point where it was
safe to redrive the channel (rather than waiting to finish processing
the interrupting channel program), which had been dropped in the morph
of CP67->VM370.
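
(to make the redrive point concrete: the channel sits idle from the I/O
interrupt until the next queued channel program is started, so the
interrupt-to-redrive path length bounds how busy the channel can be
kept. A minimal Python sketch; the MIPS rate and average I/O service
time are purely illustrative assumptions, only the 500 and 10k path
lengths come from the text above):

  mips           = 4.0e6       # assumed ~4 MIPS processor (illustrative)
  io_service_sec = 0.025       # assumed ~25ms average disk I/O (illustrative)
  for path_instructions in (500, 10_000):   # CP/67-style redrive vs MVS EXCP
      redrive_gap   = path_instructions / mips
      idle_fraction = redrive_gap / (io_service_sec + redrive_gap)
      print(path_instructions, "instructions ->",
            f"{idle_fraction:.1%} of each I/O cycle with the channel idle")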

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
getting to play disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

some posts mentioning CP67 CHFREE and channel redrive
https://www.garlic.com/~lynn/2025c.html#78 IBM 4341
https://www.garlic.com/~lynn/2024b.html#14 Machine Room Access
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper

post mentioning MVS, common segment/system area, access registers,
dual address space mode:
https://www.garlic.com/~lynn/2025d.html#108 SASOS and virtually tagged caches
https://www.garlic.com/~lynn/2025d.html#22 370 Virtual Memory
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#83 Continuations
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#67 IBM Mainframe Addressing
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2019d.html#115 Assembler :- PC Instruction
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2018.html#92 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#40 Mainframe Family tree and chronology 2
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2015h.html#116 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2014k.html#36 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2013m.html#71 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2012n.html#21 8-bit bytes and byte-addressed machines
https://www.garlic.com/~lynn/2010c.html#41 Happy DEC-10 Day

--
virtualization experience starting Jan1968, online at home since Mar1970

The Weird OS Built Around a Database

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Weird OS Built Around a Database
Newsgroups: alt.folklore.computers
Date: Wed, 29 Oct 2025 14:01:09 -1000

rbowman <bowman@montana.com> writes:

I've never used that, only DB2. I have used Raima's db_Vista. It was a C
API and fast compared to big RDMS databases. For its day it was quite
sophisticated. It arguably would have been better than Esri's choice of
dBase for shapefiles.

when I transferred to SJR on west coast, I worked with Jim Gray and
Vera Watson on original SQL/relational, System/R ... and helped with
technology transfer to Endicott ("under the radar" while company was
involved with the next great DBMS "EAGLE" ... to replace IMS) for
SQL/DS. Then when "EAGLE" implodes there is request for how fast can
System/R be ported to MVS, which is eventually released as "DB2"
(originally for decision/support only) ... Jim refs:
https://jimgray.azurewebsites.net/
https://en.wikipedia.org/wiki/Jim_Gray_(computer_scientist)
and Vera Watson
https://en.wikipedia.org/wiki/Vera_Watson
... System/R
https://en.wikipedia.org/wiki/IBM_System_R

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM S/88

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM S/88
Date: 30 Oct, 2025
Blog: Facebook

Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr to do Multics,
https://en.wikipedia.org/wiki/Multics
https://en.wikipedia.org/wiki/Multics-like
which spawned both Unix
https://en.wikipedia.org/wiki/Unix
and Stratus (S/88)
https://en.wikipedia.org/wiki/Stratus_Technologies
https://en.wikipedia.org/wiki/Stratus_VOS

... and others went to the 4th flr to the IBM Cambridge Science Center
and did virtual machines (1st wanted 360/50 to modify with virtual
memory but all the extras were going to FAA/ATC, so had to settle for
360/40 and did CP/40 which morphs into CP/67 when 360/67 standard with
virtual memory becomes available), science center wide-area network
(that morphs into the corporate internal network, larger than
arpanet/internet from just about the beginning until sometime mid/late
80s about the time it was forced to convert to SNA/VTAM; also used for
the corporate sponsored univ BITNET), bunch of other stuff. Had some
friendly rivalry with the 5th flr.

When decision was made to add virtual memory to all 370s, some of the
CSC people move to the 3rd flr, taking over the IBM Boston Programming
Center for the VM/370 development group.

1988, Nick Donofrio approved HA/6000, originally for NYTimes to move
their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename
it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had DEC
VAXCluster support in the same source base with Unix.

S/88 Product Administrator started taking us around to their customers
and also had me write a section for the corporate continuous
availability document (it gets pulled when both AS400/Rochester
and mainframe/POK complain they couldn't meet requirements). I had
coined the terms disaster survivability and geographic survivability
(as counter to disaster/recovery) when out marketing HA/CMP. One of
the visits to 1-800 bellcore development showed that S/88 would use a
century of downtime in one software upgrade while HA/CMP had a couple
extra "nines" (compared to S/88) One of the visits to 1-800 bellcore
development showed that S/88 would use a century of downtime in one
software upgrade while HA/CMP had a couple extra "nines" (compared to
S/88)

Also previously worked on original SQL/relational, System/R with Jim
Gray and Vera Watson.

Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid92 and 128-system
clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), and we were
told we couldn't work on clusters with more than 4-systems, then leave
IBM a few months later.

Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to
MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

other trivia: Kildall worked on (virtual machine) IBM CP/67 at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
before developing CP/M (name take-off on CP/67).
https://en.wikipedia.org/wiki/CP/M
which spawns Seattle Computer Products
https://en.wikipedia.org/wiki/Seattle_Computer_Products
which spawns MS/DOS
https://en.wikipedia.org/wiki/MS-DOS

more trivia, original SQL/Relational
https://en.wikipedia.org/wiki/IBM_System_R
had been developed on VM/370. Technology transfer to Endicott for
SQL/DS ("under the radar" while company was preoccupied with the next,
great DBMS, "EAGLE"). When "EAGLE" implodes, request for how fast
could System/R be ported to MVS, which is eventually released as DB2,
originally for decision/support only.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67l, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Disks

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Disks
Date: 31 Oct, 2025
Blog: Facebook

1981, original 3380: 3mbyte/sec transfer and 3mbyte/sec datastreaming
channels (doubled channel data rate and increased max channel cable
length); each model 2520mb, two sealed disks of 1260mb each, each disk
with two disk arms that each had 630mb.
http://www.bitsavers.org/pdf/ibm/dasd/3380/
http://www.bitsavers.org/pdf/ibm/dasd/3380/GK10-6311-0_3380_Brochure.pdf
Original disk had 20 track spacings between each data track. They then
cut data track spacing in half for double capacity, and then cut track
spacing again for triple capacity. NOTE: IBM was already transitioning
CKD to fixed-block (no CKD disks have been made for decades, all are
simulated on industry standard fixed-block devices), which can be seen
in the 3380 records/track formulas where record length has to be rounded
up to a multiple of the fixed cell size.
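
(the point about the formulas can be shown with a generic Python sketch:
usable records per track come from the record length rounded UP to a
whole number of fixed cells, plus per-record overhead. The cell size,
overhead and track capacity below are placeholder values for
illustration, not the published 3380 numbers):

  import math

  def records_per_track(data_len, cell=32, overhead_cells=15,
                        track_cells=1500):
      # CKD record capacity when the underlying device is really
      # fixed-block: data length rounds up to whole cells, plus a fixed
      # per-record overhead, then divide into the track
      cells_per_record = overhead_cells + math.ceil(data_len / cell)
      return track_cells // cells_per_record

  print(records_per_track(1), records_per_track(32))  # 1-byte record costs
                                                       # the same as 32 bytes
  print(records_per_track(4096))                       # e.g. 4k records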

Next (1989) were 3390 disks
http://www.bitsavers.org/pdf/ibm/dasd/3390/
https://www.extremetech.com/extreme/142911-ibm-3390-the-worlds-largest-and-most-expensive-hard-drive-teardown

When I transferred out to SJR, I got to wander around to IBM (and
non-IBM) silicon valley datacenters, including DASD bldg14
(engineering) and bldg15 (product test) across the street. At the
time, they were running pre-scheduled, 7x24, stand alone testing and
had mentioned that they had recently tried MVS, but it had 15min
mean-time-between failure (in that environment), requiring manual
re-ipl. I offer to rewrite I/O supervisor to make it bullet-proof and
never fail, so they can do any amount of on-demand concurrent testing,
significantly improving productivity. Later, just before 3880/3380
first-customer-ship, FE had a test suite of 57 simulated errors they
thought likely to occur; MVS was failing in all 57 cases
(requiring re-ipl) and in 2/3rds of the cases, there was no indication
of what was causing the failure.

Bldg15 got first engineering 3033 (outside POK processor engineering)
for testing. Because testing only took a percent or two of processor,
we scrounge up 3830 controller and 3330 string and put up private
online service. At the time, air bearing simulation (for thin-film
head design) was only getting a couple turn arounds a month on SJR
370/195. We set it up on bldg15 3033 (slightly less than half 195
MIPS) and they could get several turn arounds a day. Thin-film heads
were used first in 3370 FBA, then in 3380 CKD:
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

Mid-1978, bldg15 got engineering 4341 and with little tweaking, could
do 3380 testing. Branch office also heard about it and in Jan1979,
conned me into doing a 4341 benchmark for a national lab that was
looking at getting 70 for a compute farm (sort of leading edge of the coming
cluster supercomputing tsunami).

Mid-80s, father of RISC cons me into helping with his idea for a "wide"
disk head, parallel read/write of 16 closely spaced data tracks (with
follow servo tracks on each side of the data track grouping). Problem
was 50mbyte/sec data transfer while high-end IBM mainframe channels were
still 3mbyte/sec. Then 1988, branch office asks if I can help LLNL
(national lab) standardize some serial stuff they were working with,
which quickly becomes the fibre channel standard ("FCS", including some
stuff I had done in 1980), initially 1gbit/sec transfer, full-duplex,
200mbyte/sec aggregate. Then POK mainframe gets around to shipping
ESCON, initially 10mbyte/sec (when it was already obsolete), later
upgraded to 17mbyte/sec.
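
(the throughput comparison in this paragraph as simple arithmetic; the
10-bits-per-byte figure reflects the usual 8b/10b serial-encoding
assumption for FCS):

  # serial FCS vs the parallel channels mentioned above
  fcs_line_rate_bits = 1.0e9
  per_direction = fcs_line_rate_bits / 10       # ~100 mbyte/sec (8b/10b)
  aggregate     = 2 * per_direction             # full duplex
  print(aggregate / 1e6, "mbyte/sec aggregate")  # ~200
  # vs 3mbyte/sec datastreaming bus&tag, 10-17mbyte/sec ESCON, and the
  # 50mbyte/sec the 16-track "wide" disk head would have needed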

posts mentioning getting to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning DASD, CKD, FBA, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd
posts mentioning FCS &/or FICON
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VTAM/NCP

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VTAM/NCP
Date: 31 Oct, 2025
Blog: Facebook

I used to report to the same executive as the person responsible for
AWP164 that evolves into APPN (for AS/400). I told him he would be much
better off coming over and working on real networking (TCP/IP), since
the SNA (not a system, not a network, not an architecture)
organization would never appreciate what he was doing. When it came
time to announce APPN, the SNA organization "non-concurred" ... there
was a month or two delay while the APPN announcement letter was
carefully rewritten to explicitly not imply any relationship between
APPN and SNA (later they tried to obfuscate things trying to show how
APPN somehow came under the SNA umbrella).

Early 1980s, I got HSDT; T1 and faster computer links (both
terrestrial and satellite) and battles with the SNA group (in the 60s,
IBM had the 2701 supporting T1; in the 70s, with SNA/VTAM and its
issues, links were capped at 56kbit ... and I had to mostly resort to
non-IBM hardware). Also was working with the NSF director and was
supposed to get $20M to interconnect the NSF Supercomputer
centers. Then congress cuts the budget, some other things happened and
eventually there was an RFP released (in part based on what we already
had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid).
As regional networks connect in, NSFnet becomes the NSFNET backbone,
precursor to the modern internet.

I also got dragged into doing a project that would take a
37x5/NCP/VTAM emulator that one of the baby bells did for Series/1
... and turn it out as a type-1 product. It was well known that the
SNA group would pull all sorts of internal political strings ... so I
went to the largest 37x5 customer and cut a deal for them to
completely fund the effort (with no strings) ... eliminating an SNA
group attack through funding. The customer justification analysis was
that they recovered their complete costs within nine months ... just
by having it as an IBM type-1 product.

I did a comparison of a sample of large customer configurations for
real 3725/NCP/VTAM against the "baby bell" Series/1 implementation and
presented the information at an SNA/ARB meeting in Raleigh (in part
showing SNA tunneled through real networking). The Raleigh executives
started out by attacking the 3725/NCP/VTAM data as invalid. However, I
showed that I took the information directly from the SNA group's own
HONE 3725 configurators. They then tried generating all sorts of FUD
to obfuscate the issues. Finally, what they did to totally shut down
the effort can only be described as truth being stranger than
fiction. Part of the SNA/ARB presentation:
https://www.garlic.com/~lynn/99.html#67
and from presentation that one of the baby bell people did at COMMON
user group meeting
https://www.garlic.com/~lynn/99.html#70

trivia: early on, the IBM science center had tried hard to convince
the SNA group to use the S/1 "peachtree" processor for the 3705 rather
than the UC processor (which was significantly less capable). Side
note: IBM Federal Systems Division eventually did a T1 Series/1
ZIRPEL card for gov. agencies that had (ancient) failing 2701s.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
IBM internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some posts mentioning AWP164
https://www.garlic.com/~lynn/2025d.html#78 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#67 SNA & TCP/IP
https://www.garlic.com/~lynn/2025d.html#4 Mainframe Networking and LANs
https://www.garlic.com/~lynn/2025c.html#71 IBM Networking and SNA 1974
https://www.garlic.com/~lynn/2025.html#54 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#0 IBM APPN
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024g.html#40 We all made IBM 'Great'
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024.html#84 SNA/VTAM
https://www.garlic.com/~lynn/2023g.html#18 Vintage X.25
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023b.html#54 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#34 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2017d.html#29 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2015h.html#99 Systems thinking--still in short supply
https://www.garlic.com/~lynn/2014e.html#15 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014.html#99 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#26 SNA vs TCP/IP
https://www.garlic.com/~lynn/2013j.html#66 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2013g.html#44 What Makes code storage management so cool?
https://www.garlic.com/~lynn/2012o.html#52 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012k.html#68 ESCON
https://www.garlic.com/~lynn/2012c.html#41 Where are all the old tech workers?
https://www.garlic.com/~lynn/2011l.html#26 computer bootlaces
https://www.garlic.com/~lynn/2010q.html#73 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010e.html#5 What is a Server?
https://www.garlic.com/~lynn/2010d.html#62 LPARs: More or Less?
https://www.garlic.com/~lynn/2009q.html#83 Small Server Mob Advantage
https://www.garlic.com/~lynn/2009l.html#3 VTAM security issue
https://www.garlic.com/~lynn/2009i.html#26 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009e.html#56 When did "client server" become part of the language?
https://www.garlic.com/~lynn/2008d.html#71 Interesting ibm about the myths of the Mainframe
https://www.garlic.com/~lynn/2008b.html#42 windows time service
https://www.garlic.com/~lynn/2007r.html#10 IBM System/3 & 3277-1
https://www.garlic.com/~lynn/2007q.html#46 Are there tasks that don't play by WLM's rules
https://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
https://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications
https://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
https://www.garlic.com/~lynn/2007b.html#49 6400 impact printer
https://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VTAM/NCP

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VTAM/NCP
Date: 01 Nov, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#51 IBM VTAM/NCP

trivia: mid-80s, the communication/SNA group was trying to block
release of mainframe tcp/ip support. When they lost, they changed
strategy and said that since they had corporate ownership of
everything that crossed datacenter walls, it had to ship through them
... what shipped got 44kbyte/sec aggregate throughput using nearly a
whole 3090 CPU. I then add RFC1044 support and in tuning tests at Cray
Research between a Cray and a 4341, got sustained 4341 channel
throughput using only a modest amount of 4341 processor (something
like a 500 times improvement in bytes moved per instruction executed).

Late 80s, a senior disk engineer gets a talk scheduled at the annual,
internal, world-wide communication group conference, supposedly on
3174 performance. However, his opening was that the communication
group was going to be responsible for the demise of the disk
division. The disk division was seeing a drop in disk sales with data
fleeing mainframe datacenters to more distributed-computing friendly
platforms. The disk division had come up with a number of solutions,
but they were constantly being vetoed by the communication group (with
their corporate ownership of everything that crossed the datacenter
walls) trying to protect their dumb terminal paradigm. The senior disk
division software executive's partial countermeasure was investing in
distributed computing startups that would use IBM disks (he would
periodically ask us to drop in on his investments to see if we could
offer any assistance).

The SNA group stranglehold on datacenters wasn't just disks, and a
couple of years later IBM has one of the largest losses in the history
of US companies and was being re-organized into the 13 "baby blues" in
preparation for breaking up the company ("baby blues" a take-off on
the "baby bells" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
3-tier, ethernet, tcp/ip, etc posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 01 Nov, 2025
Blog: Facebook

1972, Learson tried (and failed) to block bureaucrats, careerists, and
MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of
management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

First half of the 70s was IBM's Future System; FS was totally different
from 370 and was going to completely replace it. During FS, internal
politics was killing off 370 projects and the lack of new 370s is
credited with giving the clone 370 makers their market foothold.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive.

... snip ...

20yrs after Learson's failure, IBM has one of the largest losses in
the history of US companies. IBM was being reorganized into the 13
"baby blues" in preparation for breaking up the company (a take-off on
the "baby bells" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension posts
https://www.garlic.com/~lynn/submisc.html#pension

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Workstations

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Workstations
Date: 01 Nov, 2025
Blog: Facebook

Early 80s, some people from Stanford approached the IBM Palo Alto
Scientific Center to ask if IBM would be interested in picking up a
workstation that they had developed. PASC set up a review and invited
several operations to listen to the Stanford presentation. The net was
that some of the IBM operations were claiming they were doing
something much better than the Stanford workstation ... so the
Stanford people went off and formed SUN.

The AWD workstation division in Austin was formed to do the PC/RT. It
had started with the ROMP RISC chip for a Displaywriter follow-on;
when that was canceled, they pivoted to the unix workstation market
... getting the company that had done PC/IX for the IBM/PC to do AIX
for ROMP. The PC/RT had a 16-bit PC/AT bus ... and AWD developed their
own cards. Then came the RIOS chipset for the RS/6000, which had
32-bit microchannel, and AWD was told they couldn't do their own cards
but had to use the PS2 cards (which the communication group had
heavily performance-kneecapped). Example: the PC/RT 4mbit token-ring
card had higher throughput than the PS2 16mbit token-ring card (joke
that a PC/RT 4mbit token-ring server would have higher throughput than
an RS/6000 16mbit token-ring server).

1988, Nick Donofrio approved HA/6000, originally for NYTimes to move
their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename
it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had DEC
VAXCluster support in the same unix source base; I do a distributed
lock manager/DLM with VAXCluster API and lots of scale-up improvements).

The S/88 Product Administrator started taking us around to their
customers and also had me write a section for the corporate continuous
availability document (it gets pulled when both AS400/Rochester and
mainframe/POK complain they couldn't meet the requirements). Had
coined "disaster survivability" and "geographic survivability" (as a
counter to disaster/recovery) when out marketing HA/CMP. One of the
visits to 1-800 bellcore development showed that S/88 would use a
century of downtime in one software upgrade while HA/CMP had a couple
of extra "nines" (compared to S/88).

Also previously worked on the original SQL/relational implementation,
System/R, with Jim Gray and Vera Watson.

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive
Hester tells Ellison that we would have 16-system clusters mid92 and
128-system clusters ye92. Mid-Jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-Jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), and we were
told we couldn't work on clusters with more than 4 systems; we leave
IBM a few months later.

Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to
MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

The former executive we had reported to goes over to head up
Somerset/AIM (Apple, IBM, Motorola), doing a single-chip RISC with the
M88k bus/cache (enabling clusters of shared-memory multiprocessors).

Also 1988, an IBM branch office asks if I could help LLNL (national
lab) standardize some serial stuff they were working with, which
quickly becomes the fibre-channel standard, "FCS", including some
stuff I had done in 1980 (initially 1gbit/sec, full-duplex, aggregate
200mbytes/sec). I was planning on using it for commercial cluster
scaleup HA/CMP with a 64-port non-blocking switch (before HA/CMP got
kneecapped to four-system clusters).

trivia: after leaving IBM, was brought in as a consultant to a small
client/server startup. Two former Oracle employees (that had been in
the Hester/Ellison meeting) are there, responsible for something
called "commerce server", and they want to do payment transactions on
the server. The startup had also invented a technology they called
"SSL" that they wanted to use; the result is now sometimes called
"electronic commerce". I had responsibility for everything between
e-commerce web servers and the payment networks. I then do a "Why
Internet Isn't Business Critical Dataprocessor" presentation (based on
documents, procedures and software I did for e-commerce) that
(Internet RFC/standards editor) Postel sponsored at ISI/USC.

note, mid-90s, i86 chip makers then do a hardware layer that
translates i86 instructions into RISC micro-ops for actual execution
(largely negating the throughput difference between RISC and i86);
1999 industry benchmark:

IBM PowerPC 440: 1,000MIPS
Pentium3: 2,054MIPS (twice PowerPC 440)

Sometime after (mainframe) POK announces ESCON, they start some work
with FCS and define a heavy-weight protocol that significantly cuts
"FCS" throughput, which eventually ships as FICON. IBM publishes a
2010 z196 "Peak I/O" benchmark that gets 2M IOPS using 104 FICON (20k
IOPS/FICON). About the same time a (native) FCS is announced for
E5-2600 server blades claiming over a million IOPS (two such FCS
having higher throughput than 104 FICON). Also, a max-configured z196
goes for $30M and benchmarked at 50BIPS (625MIPS/core), compared to
the IBM base list price for an E5-2600 server blade at $1815,
benchmarked at 500BIPS (31BIPS/core). Also, IBM pubs recommend that
SAPs (system assist processors that actually do I/O) be kept to 70%
CPU (or 1.5M IOPS), and no new CKD DASD has been made for decades, all
being simulated on industry standard fixed-block devices.
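
The comparison is just arithmetic on the quoted figures; a minimal
sketch (core counts of 80 for a max-configured z196 and 16 for an
E5-2600 blade are assumptions backed out of the per-core numbers,
everything else is as quoted above):

z196_price, z196_bips, z196_cores = 30_000_000, 50, 80
e5_price, e5_bips, e5_cores = 1815, 500, 16

print("z196 MIPS/core:", z196_bips * 1000 / z196_cores)    # ~625
print("E5   BIPS/core:", e5_bips / e5_cores)               # ~31
print("z196 $/BIPS   :", z196_price / z196_bips)           # ~$600,000
print("E5   $/BIPS   :", round(e5_price / e5_bips, 2))     # ~$3.63

ficon_iops = 2_000_000 / 104     # z196 "Peak I/O": ~19,230 IOPS per FICON
fcs_iops = 1_000_000             # "over a million" IOPS per native FCS
print("native FCS vs one FICON: ~%dx" % (fcs_iops / ficon_iops))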

... email (about halfway between when HA/CMP got kneecapped and when
we bailed from IBM) to some people in England:

We recently were on a two week trip thru Europe making calls on
various account teams (and in some cases the account teams had us
present/talk to customers). We went London, Amsterdam, Athens, Nice,
Madrid and back to London. Most recently we were in London from last
weds thru Sunday 3/29 (yesterday). For a more detailed run down talk
to xxxxxx (in NS EMEA marketing) ... they have the names of the
account teams that were at the respective meetings with customers (I
don't remember who the ibm'er was at the northwest water meeting with
David).

... on one trip we did presentations at Oracle World; we also did
marketing swings through Asia.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
original SQL/relational, System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
e-commerce gateway to payment networks
https://www.garlic.com/~lynn/subnetwork.html#gateway
Fibre Channel Standard ("FCS") &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM ACP/TPF

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ACP/TPF
Date: 01 Nov, 2025
Blog: Facebook

Late 80s, for a short stint, my wife was chief architect for Amadeus
(the EU airline res system built off Eastern's "System One" ... 370/195
ACP/TPF); she got replaced by the communication group for siding with
the EU on using X.25. It didn't do them much good: the EU went with
X.25 and the communication group's replacement got replaced.

1988, Nick Donofrio approved HA/6000, originally for NYTimes to move
their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename
it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had DEC
VAXCluster support in the same unix source base; I do a distributed
lock manager/DLM with VAXCluster API and lots of scale-up improvements).

The S/88 Product Administrator started taking us around to their
customers and also had me write a section for the corporate continuous
availability document (it gets pulled when both AS400/Rochester and
mainframe/POK complain they couldn't meet the requirements). Had
coined "disaster survivability" and "geographic survivability" (as a
counter to disaster/recovery) when out marketing HA/CMP. One of the
visits to 1-800 bellcore development showed that S/88 would use a
century of downtime in one software upgrade while HA/CMP had a couple
of extra "nines" (compared to S/88).

Also previously worked on the original SQL/relational implementation,
System/R, with Jim Gray and Vera Watson.

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive
Hester tells Ellison that we would have 16-system clusters mid92 and
128-system clusters ye92. Mid-Jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-Jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), and we were
told we couldn't work on clusters with more than 4 systems; we leave
IBM a few months later.

Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to
MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

After leaving IBM, was brought into the largest ACP/TPF airline res
system to look at the ten impossible things they couldn't do; started
with ROUTES ... got a complete softcopy of OAG (all commercial
scheduled airline flts in the world) and redid the implementation on a
Unix workstation; it ran 100 times faster than on ACP/TPF. Then added
the impossible things and it was only ten times faster ... but after a
couple of months was able to demo on a unix workstation and show that
ten RS/6000-990s could handle all ROUTE requests for all airlines in
the world.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

a few posts mentioning Amadeus and routes
https://www.garlic.com/~lynn/2024e.html#92 IBM TPF
https://www.garlic.com/~lynn/2023g.html#90 Has anybody worked on SABRE for American Airlines
https://www.garlic.com/~lynn/2023c.html#8 IBM Downfall
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF

--
virtualization experience starting Jan1968, online at home since Mar1970

Tymshare

From: Lynn Wheeler <lynn@garlic.com>
Subject: Tymshare
Date: 02 Nov, 2025
Blog: Facebook

When I transferred out to SJR, I got to wander around IBM (and
non-IBM) datacenters in silicon valley, including Tymshare (and would
see them at the monthly BAYBUNCH meetings sponsored by SLAC). Tymshare
started offering their CMS-based online computer conferencing system,
VMSHARE, for free to the SHARE user group in Aug1976
https://en.wikipedia.org/wiki/SHARE_(computing)
https://www.share.org/
archives here
http://vm.marist.edu/~vmshare/

I cut a deal with Tymshare to get a tape dump of all VMSHARE files for
putting up on the internal network and systems ... including the
online world-wide sales&marketing support HONE systems (the biggest
problem I had was lawyers concerned that internal employees might be
contaminated by being exposed to unfiltered customer information).
When M/D bought Tymshare ... VMSHARE had to be moved to a different
platform. I was also brought in to evaluate GNOSIS for the spin-off
https://en.wikipedia.org/wiki/GNOSIS
http://cap-lore.com/CapTheory/upenn/Gnosis/Gnosis.html

also ... Ann Hardy
https://medium.com/chmcore/someone-elses-computer-the-prehistory-of-cloud-computing-bca25645f89

Ann Hardy is a crucial figure in the story of Tymshare and
time-sharing. She began programming in the 1950s, developing software
for the IBM Stretch supercomputer. Frustrated at the lack of
opportunity and pay inequality for women at IBM -- at one point she
discovered she was paid less than half of what the lowest-paid man
reporting to her was paid -- Hardy left to study at the University of
California, Berkeley, and then joined the Lawrence Livermore National
Laboratory in 1962. At the lab, one of her projects involved an early
and surprisingly successful time-sharing operating system.

... snip ...

Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167

Ann rose up to become Vice President of the Integrated Systems
Division at Tymshare, from 1976 to 1984, which did online airline
reservations, home banking, and other applications. When Tymshare was
acquired by McDonnell-Douglas in 1984, Ann's position as a female VP
became untenable, and was eased out of the company by being encouraged
to spin out Gnosis, a secure, capabilities-based operating system
developed at Tymshare. Ann founded Key Logic, with funding from Gene
Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl
mainframes. After closing Key Logic, Ann became a consultant, leading
to her cofounding Agorics with members of Ted Nelson's Xanadu project

... snip ...

On one Tymshare visit, they demo'ed a game that somebody had found on
a Stanford SAIL PDP10 and ported to VM370/CMS, Adventure
https://en.wikipedia.org/wiki/Adventure_game
I got a copy and started making it available inside IBM.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
(virtual machine based) online commercial systems
https://www.garlic.com/~lynn/submain.html#online
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
internal corporate network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
dialup-banking posts
https://www.garlic.com/~lynn/submisc.html#dialup-banking

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/30 and other 360s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/30 and other 360s
Date: 03 Nov, 2025
Blog: Facebook

I had taken a two credit-hr intro to fortran/computers and at the end
of the semester was hired to rewrite 1401 MPIO for the 360/30. The
univ was getting a 360/67 for TSS/360, replacing the 709/1401 (the
360/30 temporarily replaced the 1401, pending 360/67 availability, for
getting 360 experience). The univ shut down the datacenter on weekends
and I had the place dedicated (although 48hrs w/o sleep made monday
classes hard). I was given a pile of hardware and software manuals and
got to design and implement my own monitor, device drivers, interrupt
handlers, storage management, error recovery, etc ... and within a few
weeks had a 2000 card assembler program. Within a year of taking the
intro class, the 360/67 arrives and I was hired fulltime responsible
for OS/360 (TSS/360 never came to production). Student Fortran jobs
had run under a second on the 709, but ran over a minute on os/360
(360/67 run as 360/65). I install HASP and it cuts the time in half. I
then start redoing STAGE2 SYSGEN to carefully place datasets and PDS
members to optimize arm seek and multi-track search, cutting another
2/3rds to 12.9secs (Student Fortran never got better than 709 until I
install UoWaterloo WATFOR).

CSC came out to install CP/67 (CSC had wanted a 360/50 to add virtual
memory hardware, but all the extra 50s were going to FAA/ATC, and so
had to settle for a 360/40 and did CP40/CMS; CP40 morphs into CP/67
when the 360/67 became available standard with virtual memory) ... 3rd
installation after CSC itself and MIT Lincoln Labs ... and I mostly
got to play with it during my dedicated weekend time. I initially work
on pathlengths for running OS/360 in a virtual machine. The test
stream ran 322secs on the real machine, initially 856secs in the
virtual machine (534secs of CP67 CPU); after a couple of months I had
reduced the CP67 CPU from 534secs to 113secs. I then start rewriting
the dispatcher, (dynamic adaptive resource manager/default fair share
policy) scheduler, and paging, adding ordered seek queuing (from FIFO)
and multi-page transfer channel programs (from FIFO, and optimized for
transfers/revolution, getting the 2301 paging drum from 70-80 4k
transfers/sec to a channel transfer peak of 270). Six months after the
univ initial install, CSC was giving a one week class in LA. I arrive
on Sunday afternoon and am asked to teach the class; it turns out the
people who were going to teach it had resigned the Friday before to
join one of the 60s CP67 commercial online spin-offs.
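
Putting the 2301 paging drum numbers above in bandwidth terms (simple
arithmetic on the quoted rates, 4k page size from the text):

PAGE_BYTES = 4096
fifo_pages_per_sec = 75        # midpoint of the quoted 70-80 4k transfers/sec
chained_pages_per_sec = 270    # quoted peak with ordered multi-page channel programs

def mb_per_sec(pages):
    return pages * PAGE_BYTES / 1_000_000

print("FIFO single-page I/O: ~%.2f MB/s" % mb_per_sec(fifo_pages_per_sec))
print("chained/ordered I/O : ~%.2f MB/s" % mb_per_sec(chained_pages_per_sec))
print("improvement         : ~%.1fx" % (chained_pages_per_sec / fifo_pages_per_sec))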

Before I graduate, I'm hired fulltime into a small group in the Boeing
CFO office to help with the formation of Boeing Computer Services
(consolidate all dataprocessing into an independent business unit,
including offering services to non-Boeing entities). I think the
Renton datacenter was the largest in the world, 360/65s arriving
faster than they could be installed, boxes constantly staged in
hallways around the machine room (although they did have a single
360/75 that did various classified work). Lots of politics between the
Renton director and the CFO (who only had a 360/30 up at Boeing Field
for payroll, although they enlarge the room to install a 360/67 for me
to play with when I wasn't doing other stuff).

In the early 80s, I'm introduced to John Boyd and would sponsor his
briefings at IBM
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
He had lots of stories, including being very vocal that the
electronics across the trail wouldn't work ... possibly as punishment,
he is put in command of "spook base" (about the same time I'm at
Boeing). A Boyd biography has spook base as a $2.5B "windfall" for IBM
(ten times Renton?); some refs:
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
John Boyd posts and web urls
https://www.garlic.com/~lynn/subboyd.html

some univ & boeing posts
https://www.garlic.com/~lynn/2025e.html#3 Switching On A VAX
https://www.garlic.com/~lynn/2025d.html#112 Mainframe and Cloud
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025d.html#69 VM/CMS: Concepts and Facilities
https://www.garlic.com/~lynn/2025d.html#15 MVT/HASP
https://www.garlic.com/~lynn/2025c.html#115 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#38 IBM Computers in the 60s
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/30 and other 360s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/30 and other 360s
Date: 03 Nov, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#57 IBM 360/30 and other 360s

Boyd trivia: 89-90, the Marine Corps Commandant leverages Boyd for a
corps makeover (at a time when IBM was desperately in need of a
makeover). Two
yrs later, IBM has one of the largest losses in the history of US
companies and was being reorganized into the 13 "baby blues" in
preparation for breaking up the company (take-off on the "baby bell"
breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

We continued to have Boyd conferences at Quantico, Marine Corps Univ
(even after Boyd passed in 1997).

John Boyd posts and web urls
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/30 and other 360s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/30 and other 360s
Date: 03 Nov, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#57 IBM 360/30 and other 360s
https://www.garlic.com/~lynn/2025e.html#58 IBM 360/30 and other 360s

The univ library gets an ONR grant to do an online catalog, and some
of the money goes for a 2321 datacell. The library was also selected
by IBM for the original CICS product betatest, and supporting CICS was
added to my tasks. It wouldn't come up initially; turns out CICS had
an (undocumented) hard-coded BDAM option and the library had built
BDAM datasets with a different set of options. Some CICS history
... website gone 404, but lives on at the wayback machine
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

CICS &/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

--
virtualization experience starting Jan1968, online at home since Mar1970

Doing both software and hardware

From: Lynn Wheeler <lynn@garlic.com>
Subject: Doing both software and hardware
Date: 04 Nov, 2025
Blog: Facebook

Did lots of software as an undergraduate ... the univ hired me
responsible for os/360 ... but I also rewrote a lot of (virtual
machine) CP/67. In addition, four of us at the univ did a clone
controller: built a 360 channel interface board for an Interdata/3
programmed to emulate an IBM terminal/line controller, with the
addition that it could do auto speed/baud detection (I had wanted a
single dial-up number/hunt-group for all terminals and speeds). Later
upgraded to an Interdata/4 for the channel interface and a cluster of
Interdata/3s for line/ports (sold by Interdata and later Perkin-Elmer)
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

after graduating and joining IBM, was sucked into helping with
multithreading the 370/195 ... see the multithreading ref in this
webpage about the death of ACS/360
https://people.computing.clemson.edu/~mark/acs_end.html
The 195 had out-of-order execution, but no branch prediction (or
speculative execution), and conditional branches drained the pipeline
so most code only ran at half rated speed. Adding another i-stream,
simulating a 2nd CPU, each running at half speed, could keep all the
execution units running at full speed (modulo MVT&MVS 2-CPU
multiprocessor only got 1.2-1.5 times the throughput of a single CPU,
because of the multiprocessor implementation). Then the decision was
made to add virtual memory to all 370s and it was decided that it
wasn't practical to add virtual memory to the 370/195 (and any new 195
work, including multithreading, was killed).

IBM had the Future System project during the 1st half of the 70s, which was killing off 370 efforts.
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

When FS implodes, there is a mad rush to get stuff back into the 370
product pipelines, including kicking off the quick&dirty 3033&3081
efforts. I got sucked into helping with a tightly-coupled, shared
memory 16-CPU 370 system and we con the 3033 processor engineers into
working on it in their spare time (a lot more interesting than
remapping 168 logic to 20% faster chips). Everybody thought it was
great until somebody tells the head of POK (IBM high-end lab) that it
could be decades before POK's favorite son operating system ("MVS")
had (effective) tightly-coupled multiprocessor support (IBM docs said
that MVS 2-CPU systems only had 1.2-1.5 times the throughput of a
1-CPU system ... because of inefficient multiprocessor support ... POK
doesn't ship a 16-CPU system until after the start of the
century). Then the head of POK invites some of us to never visit POK
again and directs the 3033 processor engineers: heads down and no
distractions.

I then transfer out to SJR on the west coast and got to wander around
IBM (and non-IBM) datacenters in silicon valley ... including disk
bldg14 (engineering) and bldg15 (product test) across the street. They
were running prescheduled, 7x24, stand-alone testing. They mentioned
that they had recently tried MVS, but it had 15min MTBF (in that
environment), requiring manual re-IPL. I offer to rewrite the I/O
supervisor, making it bullet proof and never fail, allowing any amount
of on-demand, concurrent testing, greatly improving productivity.
Downside was that for any problems they would point the finger at me,
and I had to spend increasing time diagnosing hardware and design
issues.

IBM clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
getting to play disk engineer in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

"Business Ethics" is Oxymoron

From: Lynn Wheeler <lynn@garlic.com>
Subject: "Business Ethics" is Oxymoron
Date: 05 Nov, 2025
Blog: Facebook

After 10+ yrs with IBM, I submitted a speak-up about being
significantly underpaid, with all sorts of supporting information. I
got back a written response from the head of HR saying that, after
thoroughly reviewing my complete career, I was being paid exactly what
I was supposed to be.

I was also periodically being told I had no career, no promotions, no
raises ... so when a head hunter asked me to interview for assistant
to the president of a clone 370 maker (sort of a subsidiary of a
company on the other side of the pacific), I thought why not. It was
going along well until one of the staff broached the subject of 370/xa
documents (referred to as "811" for their Nov78 publication date; I
had a whole drawer full of the documents, registered ibm confidential,
kept under double lock&key and subject to surprise audits by local
security). In response I mentioned that I had recently submitted some
text to upgrade ethics in the Business Conduct Guidelines (had to be
read and signed once a year) ... that ended the interview (note the
BCG upgrade wasn't accepted). That wasn't the end of it; later had a
3hr interview with an FBI agent, the gov. was suing the foreign parent
company for industrial espionage (and I was on the building visitor
log). I told the agent I wondered if somebody in plant site security
might have leaked names of individuals who had registered ibm
confidential documents.

In reply to the written response from the head of HR about pay level,
I wrote back pointing out that I was being asked to interview college
students that were about to graduate, who would work in a new group
under my direction, and their starting offer was 1/3rd more than I was
currently making. No more written responses from the head of HR, but a
couple of months later I got a 1/3rd raise (putting me on level ground
with the offers being made to the students I was interviewing). People
would have to periodically remind me that "business ethics" is an
oxymoron.

some past posts mentioning "business ethics" and oxymoron
https://www.garlic.com/~lynn/2025d.html#61 Amdahl Leaves IBM
https://www.garlic.com/~lynn/2025c.html#35 IBM Downfall
https://www.garlic.com/~lynn/2025b.html#3 Clone 370 System Makers
https://www.garlic.com/~lynn/2023b.html#35 When Computer Coding Was a 'Woman's' Job
https://www.garlic.com/~lynn/2022g.html#59 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2018d.html#13 Workplace Advice I Wish I Had Known
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2009o.html#57 U.S. begins inquiry of IBM in mainframe market

some past posts mentioning interview
https://www.garlic.com/~lynn/2024f.html#77 IBM Registered Confidential and "811"
https://www.garlic.com/~lynn/2023.html#59 Classified Material and Security
https://www.garlic.com/~lynn/2022g.html#83 Anyone knew or used the Dialog service back in the 80's?
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022c.html#4 Industrial Espionage
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2021k.html#125 IBM Clone Controllers
https://www.garlic.com/~lynn/2021j.html#38 IBM Registered Confidential
https://www.garlic.com/~lynn/2021d.html#86 Bizarre Career Events
https://www.garlic.com/~lynn/2021b.html#12 IBM "811", 370/xa architecture
https://www.garlic.com/~lynn/2019e.html#29 IBM History
https://www.garlic.com/~lynn/2019.html#83 The Sublime: Is it the same for IBM and Special Ops?
https://www.garlic.com/~lynn/2017f.html#35 Hitachi to Deliver New Mainframe Based on IBM z Systems in Japan
https://www.garlic.com/~lynn/2014f.html#27 Complete 360 and 370 systems found
https://www.garlic.com/~lynn/2011g.html#12 Clone Processors
https://www.garlic.com/~lynn/2011g.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
https://www.garlic.com/~lynn/2011c.html#67 IBM Future System
https://www.garlic.com/~lynn/2010h.html#3 Far and near pointers on the 80286 and later

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe Projects

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe Projects
Date: 06 Nov, 2025
Blog: Facebook

Early 70s, there was the Future System effort, completely different
from 370 and planned to completely replace it. I continued to work on
360&370 all during FS, including periodically ridiculing what they
were doing (which wasn't exactly career enhancing). During FS, 370
efforts were being killed off (the lack of new 370s during FS is
claimed to have given the clone 370 makers their market
foothold). When FS
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

implodes, there is a mad rush to get stuff back into the 370 product
pipelines, including kicking off the quick&dirty 3033&3081 efforts in
parallel. The head of POK was also in the process of convincing
corporate to kill the VM370 product, shut down the development group
and move all the people to POK for MVS/XA. They weren't planning on
telling the people until the very last minute (to minimize the number
that might escape into the Boston/Cambridge area). The information
managed to leak early and several managed to escape (DEC VAX/VMS was
in its infancy and the joke was that the head of POK was a major
contributor to VMS) ... there was a hunt for the leaker, but
fortunately for me, nobody gave them up. Endicott eventually manages
to acquire the VM/370 product mission (for the mid-range), but had to
recreate a development group from scratch.

I get con'ed into helping with the 125 multiprocessor effort and ECPS
for 138/148 (also used for 4331/4341). The 115 had a memory bus with
positions for microprocessors ... all the same microprocessor but with
different microcoding, including the processor running the 370
microcode. The 125 was the same, except the microprocessor for 370 was
50% faster. The 125SMP could have up to five 370 processors (with four
positions left for controller microprogramming). Old archive post with
the ECPS initial analysis for moving VM370 code into microcode,
6kbytes of VM370 code representing 79.55% of VM370 CPU execution:
https://www.garlic.com/~lynn/94.html#21
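
The analysis behind that 6kbyte/79.55% figure is essentially a
cumulative profile: rank kernel paths by measured CPU time and take
the biggest consumers until the available microcode space is used
up. A rough sketch of that selection (the path names, sizes and
percentages are made-up placeholders, not the actual measurements in
the linked post):

budget_bytes = 6 * 1024     # available microcode space

# (kernel path, code size in bytes, fraction of total VM370 CPU time)
profile = [
    ("dispatch",         900, 0.22),
    ("virtual I/O CCWs", 1800, 0.20),
    ("page fault",       1200, 0.15),
    ("privop simulate",  1500, 0.14),
    ("free storage",      600, 0.09),
    ("spooling",         2000, 0.05),
]

selected, used, covered = [], 0, 0.0
for name, size, cpu in sorted(profile, key=lambda p: p[2], reverse=True):
    if used + size <= budget_bytes:      # highest CPU-time paths first
        selected.append(name)
        used += size
        covered += cpu

print("candidates for microcode:", selected)
print("bytes: %d of %d, CPU covered: %.0f%%" % (used, budget_bytes, covered * 100))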

All the work done for 138/148 ECPS, I was also planning on using for
the 125SMP ... and most of the 125SMP support would also be
microcoded. Then Endicott complains that the 125SMP would more than
overlap the 148 and manages to get the project killed.

Note: when I joined IBM, one of my hobbies was enhanced production
operating systems for internal datacenters (the online sales&marketing
support HONE systems were one of the 1st and long-time customers,
originally with CP67 before vm370; I also got my 1st overseas business
trips when asked to go along for some of the non-US HONE
installs). Then with the decision to add virtual memory to all 370s,
it was also decided to do VM370, and some of the CSC CP67 people go to
the 3rd flr, taking over the Boston Programming Center for VM370. In
the morph of CP67 to VM370 lots of features were simplified or dropped
(including SMP support). Then in 1974, I start adding a bunch of CP67
stuff into a VM370R2 base for my CSC/VM (including the kernel re-org
needed for SMP, but not SMP itself). Then with a VM370R3 base, I add
multiprocessor support back in, originally for HONE so they can
upgrade all their 1-CPU 168s to 2-CPU (US HONE had consolidated all
their datacenters in Palo Alto; trivia: when FACEBOOK 1st moves into
silicon valley, it is into a new bldg built next door to the former
consolidated US HONE datacenter. HONE clones were also starting to
sprout up all over the world). BTW, with some optimization the US HONE
CSC/VM SMP 2-CPU systems were getting twice the throughput of the
previous 1-CPU systems. The 2-CPU, 2-times throughput further
aggravated the head of POK, who was in the process of getting the
VM370 product killed (MVS was only showing 2-CPU systems with 1.2-1.5
times 1-CPU throughput by comparison).

With the 125SMP killed, I get asked to help with a high-end 16-CPU
system and we con the 3033 processor engineers into helping in their
spare time (a lot more interesting than remapping 168 chip logic to
20% faster chips). Everybody thought it was great until somebody tells
the head of POK that it could be decades before POK's favorite son
operating system ("MVS") had "effective" 16-CPU support (at the time
MVS docs claimed 2-CPU support only had 1.2-1.5 times the throughput
of a 1-CPU system, aka because of poor & high-overhead multiprocessing
support; POK doesn't ship a 16-CPU system until after the turn of the
century, approx 25yrs later). Then the head of POK directs some of us
to never visit POK again, and the 3033 processor engineers, "heads
down, and no distractions".

I transfer out to SJR on the west coast and get to wander around IBM
(and non-IBM) datacenters in silicon valley, including disk bldgs 14
(engineering) and 15 (product test) across the street. They were
running pre-scheduled, 7x24, stand-alone testing and mentioned that
they had recently tried MVS, but MVS had 15min MTBF (requiring manual
re-ipl) in that environment. I offer to rewrite the I/O supervisor,
making it bullet proof and never fail, allowing any amount of
on-demand, concurrent testing, greatly improving productivity.
Downside was they initially targeted me whenever they had a problem
and I had to increasingly play engineer diagnosing (frequently) design
problems. I then wrote an internal research report about the I/O
integrity work and happened to mention the MVS 15min MTBF, bringing
down the wrath of the MVS organization on my head.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67l, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
370/125 SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/submain.html#bounce
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

LSRAD Report

From: Lynn Wheeler <lynn@garlic.com>
Subject: LSRAD Report
Date: 07 Nov, 2025
Blog: Facebook

I had a copy of the SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
https://www.share.org/

LSRAD report
http://www.bitsavers.org/pdf/ibm/share/The_LSRAD_Report_Dec79.pdf

scanned it and went to contribute it to bitsavers. Problem was it was
published shortly after congress extended the copyright period and I
had a devil of a time finding somebody at SHARE who would authorize
the contribution.

Preface

This is a report of the SHARE Large Systems Requirements for
Application Development (LSRAD) task force. This report proposes an
evolutionary plan for MVS and VM/370 that will lead to simpler, more
efficient and more useable operating systems. The report is intended
to address two audiences: the users of IBM's large operating systems
and the developers of those systems.

... snip ...

LSRAD was published after the head of POK had convinced corporate to
kill the VM370 product and shut down the development group,
transferring all the people to POK for MVS/XA. Endicott did manage to
acquire the VM370 product mission (for the mid-range), but was still
in the process of recreating a development group from scratch.

Acknowledgements

The LSRAD task force would like to thank our respective employers for
the constant support they have given us in the form of resources and
encouragement. We further thank the individuals, both within and
outside SHARE Inc., who reviewed the various drafts of this report. We
would like to acknowledge the contribution of the technical editors,
Ruth Ashman, Jeanine Figur, and Ruth Oldfield, and also of the
clerical assistants, Jane Lovelette and Barbara Simpson

Two computer systems proved invaluable for producing this
report. Draft copies were edited on the Tymshare VM system. The final
report was produced on the IBM Yorktown Heights experimental printer
using the Yorktown Formatting Language under VM/CMS.

... snip ...

Past posts mentioning LSRAD report
https://www.garlic.com/~lynn/2024d.html#31 Future System and S/38
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2024b.html#90 IBM User Group Share
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report
https://www.garlic.com/~lynn/2014j.html#53 Amdahl UTS manual
https://www.garlic.com/~lynn/2011p.html#11 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#10 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011n.html#62 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book
https://www.garlic.com/~lynn/2010q.html#33 IBM S/360 Green Card high quality scan
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index
https://www.garlic.com/~lynn/2009n.html#0 Wanted: SHARE Volume I proceedings
https://www.garlic.com/~lynn/2009.html#70 A New Role for Old Geeks

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Module Prefixes

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Module Prefixes
Date: 07 Nov, 2025
Blog: Facebook

23Jun1969, IBM announced unbundling, which included charging for
(application) software (but managed to make the case that kernel
software was still free).

As an undergraduate, the univ hired me fulltime responsible for
os/360. Then before I graduate, I'm hired fulltime into a small group
in the Boeing CFO office to help with the formation of Boeing Computer
Services (consolidate all dataprocessing into an independent business
unit, including offering services to non-Boeing entities). I think the
Renton datacenter was the largest in the world (360/65s arriving
faster than they could be installed, boxes constantly staged in
hallways around the machine room; joke that Boeing got 360/65s like
other companies got keypunch machines). Lots of politics between the
Renton director and the CFO (who only had a 360/30 up at Boeing Field
for payroll, although they enlarge the room to install a 360/67 for me
to play with when I wasn't doing other stuff). Then when I graduate, I
join the IBM science center (instead of staying with the Boeing CFO).

One of my hobbies at IBM was enhanced production operating systems for
internal datacenters (the online sales&marketing support HONE systems
were one of the first and long-time customers, dating back to CP67
days). Then early 70s came "Future System", completely different from
and going to completely replace 370 (internal politics was killing off
370 efforts, and the claim is that the lack of new 370s during FS gave
the clone 370 makers their market foothold). I continued to work on
360/370 stuff all during FS and periodically ridiculed what they were
doing (which wasn't exactly career enhancing). Then with the FS
implosion there was a mad rush to get stuff back into the 370 product
pipelines, including the quick&dirty 3033&3081 efforts in parallel
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

With the rise of 370 clone makers and the FS implosion, there is
decision to now start charging for kernel software, starting with
incremental add-ons, eventually transitioning to all kernel software. A
bunch of my internal VM/370 software (including my dynamic adaptive
resource scheduling and management) was selected for the initial
guinea pig. Before release, a corporate specialist (steeped in MVS)
evaluated and found no manual tuning knobs. He said he wouldn't
sign-off because everybody knew that manual tuning knobs was the state
of the art. I created a set of manual tuning knobs packaged as
"DMKSRM" (to ridicule MVS) and the dynamic adaptive implementation was
packaged as "DMKSTP" (after TV commercial) ... "DMK" was the code
prefix for VM370 kernel.

Trivia: before initial release, did a collection of 2000 benchmarks
that took 3months elapsed time to complete ... showing it could
dynamically adapt to wide variety of different hardware configurations
and workloads. Originally for CP67 had done automated benchmarking
(including autolog command, execs specifying simulated users to logon
and benchmark script each simulated user would execute). The first
1000 benchmarks had been manually selected. The 2nd 1000 benchmarks
were specified by modified version of the Performance Predictor
(that was being fed each workload&configuration and the
results). The Performance Predictor was a sophisticated
APL-based analytical system model that had been made available on the
online sales&marketing HONE systems (customer support people would
enter customer workload&configuration information and ask
"what-if" questions about what happens if something is changed
... frequently focused on new/added hardware).
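
A minimal sketch of that benchmark-selection loop, with hypothetical
stand-ins for the simulated-user benchmark driver and the APL
analytical model (all names and numbers here are made up purely for
illustration; the real Performance Predictor was far more
sophisticated):

/* sketch: model proposes next workload/configuration, benchmark is
 * run, measured result is compared against the model's prediction;
 * a real version would also feed each result back into the model */
#include <stdio.h>

struct config { int users; int mbytes; };
struct result { double throughput; };

static struct result run_benchmark(struct config c) {    /* simulated users */
    struct result r = { c.users * 0.9 / (1.0 + c.users / (40.0 * c.mbytes)) };
    return r;
}
static struct result model_predict(struct config c) {    /* "what-if" model */
    struct result r = { c.users * 0.9 / (1.0 + c.users / (38.0 * c.mbytes)) };
    return r;
}
static struct config model_propose_next(int i) {         /* walk the space  */
    struct config c = { 10 + 10 * i, 1 + i % 4 };
    return c;
}

int main(void) {
    for (int i = 0; i < 10; i++) {
        struct config c = model_propose_next(i);
        struct result p = model_predict(c);
        struct result m = run_benchmark(c);
        printf("users=%3d mem=%dMB predicted=%6.2f measured=%6.2f err=%+5.1f%%\n",
               c.users, c.mbytes, p.throughput, m.throughput,
               100.0 * (p.throughput - m.throughput) / m.throughput);
    }
    return 0;
}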

IBM 23Jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
CP67l, CSC/VM, SJR/VM internal system posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
automated benchmark posts
https://www.garlic.com/~lynn/submain.html#bench

some recent posts mentioning Performance Predictor
https://www.garlic.com/~lynn/2025e.html#27 Opel
https://www.garlic.com/~lynn/2025c.html#19 APL and HONE
https://www.garlic.com/~lynn/2025b.html#68 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing
https://www.garlic.com/~lynn/2024c.html#6 Testing
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024b.html#18 IBM 5100
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023g.html#43 Wheeler Scheduler
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#33 Copyright Software
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#7 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history

--
virtualization experience starting Jan1968, online at home since Mar1970

Computing Power Consumption

From: Lynn Wheeler <lynn@garlic.com>
Subject: Computing Power Consumption
Date: 08 Nov, 2025
Blog: Facebook

New IBM Almaden Research bldg had a problem with heat from the PC/RT
(and then RS/6000) in nearly every office being power cycled ... the
air conditioning systems weren't sized to handle the spike in heat from
powering on every machine in the morning (after being off all night)
... it would have nearly stabilized by end of day, and then it started
all over again the next morning.

past refs:
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2000f.html#74 Metric System (was: case sensitivity in file names)

Towards the end of last century cloud and cluster supercomputers
started out sharing a lot of similar design/technology. Early on large
cloud could have score of large megadatacenters around the world, each
with half million or more blade servers with enormous automation,
70-80 staff/megadatacenter, (in 2010) each blade server possibly
500BIPS. The cloud operators so optimized their system costs
(assembling their own systems for fraction of the price of brand name
servers) ... that power & cooling were increasingly the major
cost. They then started putting heavy pressure on component makers to
1) have power drop to zero when idle, but instant on when needed
(having significantly over-provisioned to meet "on-demand" online use)
and 2) optimize server power consumption when active (introduction of
power consumption ratings in industry benchmarks; threatening to move
to less powerful chips optimized for battery/portable use, since the
less power-hungry chips more than offset requiring more servers, giving
lower aggregate power for the same cloud service).
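
Back-of-envelope version of that tradeoff (all numbers invented purely
for illustration):

/* lower per-server throughput but much lower power can still cut
 * aggregate fleet power for the same total work */
#include <stdio.h>

int main(void) {
    double work = 1000.0;                       /* arbitrary throughput units */
    double fast_bips = 500, fast_watts = 400;   /* "server class" blade       */
    double slow_bips = 300, slow_watts = 150;   /* "battery class" chips      */

    double fast_servers = work / fast_bips;     /* 2.0  servers               */
    double slow_servers = work / slow_bips;     /* 3.33 servers               */

    printf("fast: %.1f servers, %.0f watts aggregate\n",
           fast_servers, fast_servers * fast_watts);   /*  800 watts */
    printf("slow: %.1f servers, %.0f watts aggregate\n",
           slow_servers, slow_servers * slow_watts);   /* ~500 watts */
    return 0;
}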

Sometime last decade, there was industry press that server component
makers were shipping half their product directly to large cloud
operators (that assembled their own servers) and some brand name
server vendors were selling off that business. At one point there were
also articles about using a credit card with a large cloud operation to
spin up a supercomputer (that ranked in the top 40 in the world) for a
few hrs.

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

"on-demand" supercomputer
https://www.garlic.com/~lynn/2025.html#16 On-demand Supercomputer
https://www.garlic.com/~lynn/2024c.html#74 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#36 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024b.html#24 HA/CMP
https://www.garlic.com/~lynn/2023.html#100 IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia
https://www.garlic.com/~lynn/2021f.html#18 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2017c.html#6 How do BIG WEBSITES work?
https://www.garlic.com/~lynn/2015g.html#19 Linux Foundation Launches Open Mainframe Project
https://www.garlic.com/~lynn/2013f.html#74 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013f.html#35 Reports: IBM may sell x86 server business to Lenovo

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM S/88

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM S/88
Date: 08 Nov, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#49 IBM S/88

In early 80s, I had gotten HSDT project, T1 and faster computer links
... and battles with communication group (60s, IBM had 2701 that
supported T1, but 70s move to SNA/VTAM and associated issues capped
links at 56kbits). HSDT came with requirement to show some IBM
content. The only thing I could find was the ZIRPEL T1 card for
Series/1 that FSD did for gov. agencies with failing 2701s. I went to
order half dozen Series/1s and was told that ROLM had been recently
purchased and for them to show some IBM content they had ordered a
whole boatload of Series/1, creating a year's backlog. I knew the
director of ROLM datacenter (back before they left IBM for ROLM,
before ROLM purchase) and we cut a deal, I would help ROLM with some
problems in return for some of their Series/1s.

Later in 90s, after leaving IBM (and IBM had sold ROLM to Siemens), I
was doing a security chip and dealing with a Siemens guy that had an
office on the old ROLM campus. Then before the chip was ready to fab,
Siemens spun off their chip business as Infineon and the guy I was
dealing with became Infineon president (and rang the bell at NYSE)
... moving into a new bldg over on 1st street. I was getting the chip
done at a new (Infineon) security fab in Dresden (that had been
certified by German and US agencies) and they also asked me to do a
security audit.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

some posts mentioning HSDT, Series/1, ROLM
https://www.garlic.com/~lynn/2025d.html#47 IBM HSDT and SNA/VTAM
https://www.garlic.com/~lynn/2025b.html#114 ROLM, HSDT
https://www.garlic.com/~lynn/2025b.html#93 IBM AdStar
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#79 Early Email
https://www.garlic.com/~lynn/2024e.html#34 VMNETMAP
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023c.html#35 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2019d.html#81 Where do byte orders come from, Nova vs PDP-11
https://www.garlic.com/~lynn/2018b.html#9 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2016h.html#26 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016d.html#27 Old IBM Mainframe Systems
https://www.garlic.com/~lynn/2015e.html#83 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2014f.html#24 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013j.html#43 8080 BASIC
https://www.garlic.com/~lynn/2013j.html#37 8080 BASIC
https://www.garlic.com/~lynn/2013g.html#71 DEC and the Bell System?
https://www.garlic.com/~lynn/2009j.html#4 IBM's Revenge on Sun

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe to PC

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe to PC
Date: 09 Nov, 2025
Blog: Facebook

Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr to do Multics,
https://en.wikipedia.org/wiki/Multics
https://en.wikipedia.org/wiki/Multics-like
which spawned both Unix
https://en.wikipedia.org/wiki/Unix
and Stratus (S/88)
https://en.wikipedia.org/wiki/Stratus_Technologies
https://en.wikipedia.org/wiki/Stratus_VOS

Others went to the IBM Science Center on the 4th flr and did virtual
machines ... wanted 360/50 to modify with virtual memory, but all the
extra 50s were going to FAA/ATC, so had to settle for a 360/40 to add
virtual memory and did cp40/cms, which morphs into cp67/cms when
360/67 standard with virtual memory became available (precursor to
vm370), the science center wide-area network that evolves into the
internal network (larger than arpanet/internet from just about the
beginning until sometime mid/late 80s when forced to convert to
SNA/VTAM), technology also used for the corporate sponsored univ
BITNET, invented GML 1969 (decade later morphs into ISO standard SGML
and after another decade morphs into HTML at CERN), and bunch of other
stuff.

Kildall worked on (virtual machine) IBM CP/67 at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
before developing CP/M (name take-off on CP/67).
https://en.wikipedia.org/wiki/CP/M
which spawns Seattle Computer Products
https://en.wikipedia.org/wiki/Seattle_Computer_Products
which spawns MS/DOS
https://en.wikipedia.org/wiki/MS-DOS

Early 70s, IBM had Future System effort, totally different from 370
and going to replace 370 (internal politics killing off 370 efforts,
lack of new 370 during FS credited with giving clone 370 makers
their market foothold). I continued to work on 360/370 all during FS,
including periodically ridiculing what they were doing (which wasn't
exactly career enhancing). When FS finally implodes there is mad rush
to get stuff back into 370 product pipelines including kicking off
quick&dirty 3033&3081 in parallel. Last nail in the FS coffin was a
study by the IBM Houston Science Center that if 370/195 apps were redone
for FS machine made out of the fastest available hardware technology,
they would have the throughput of 370/145 (about 30 times slowdown).
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

I then got con'ed into helping Endicott with ECPS for 138/148 (redoing
VM370 kernel code into microcode with 10 times speedup). Old archived
post with initial analysis for ECPS:
https://www.garlic.com/~lynn/94.html#21
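
The flavor of that analysis, as a sketch (the path names, byte counts
and percentages here are made up for illustration; the archived post
has the actual kernel profile): profile the kernel, sort paths by CPU
use, and move the highest-use paths that fit in the available microcode
space:

#include <stdio.h>
#include <stdlib.h>

struct path { const char *name; int bytes; double cpu_pct; };

static int by_cpu(const void *a, const void *b) {
    double d = ((const struct path *)b)->cpu_pct - ((const struct path *)a)->cpu_pct;
    return (d > 0) - (d < 0);
}

int main(void) {
    struct path paths[] = {                   /* hypothetical profile data */
        {"dispatch",          900, 22.0},
        {"free/fret",         700, 18.5},
        {"page fault",       1200, 15.0},
        {"privop simulate",  1500, 14.0},
        {"ccw translate",    1600, 10.0},
        {"everything else", 20000, 20.5},
    };
    int n = sizeof paths / sizeof paths[0];
    int budget = 6000;                        /* microcode space available */
    int used = 0;
    double covered = 0;

    qsort(paths, n, sizeof paths[0], by_cpu);
    for (int i = 0; i < n; i++) {
        if (used + paths[i].bytes > budget)
            continue;                         /* skip paths that won't fit */
        used += paths[i].bytes;
        covered += paths[i].cpu_pct;
        printf("move %-16s (%5d bytes): cumulative %4.1f%% of kernel CPU\n",
               paths[i].name, paths[i].bytes, covered);
    }
    return 0;
}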

I transfer to SJR on west coast and get to wander around IBM (&
non-IBM) datacenters in silicon valley, including disk
bldg14(engineering) and bldg15(product test) across the street. They
were doing pre-scheduled, 7x24, stand-alone mainframe testing and
mentioned they had tried MVS but it had 15min MTBF (in that
environment), requiring re-ipl. I offer to rewrite I/O supervisor to
make it bullet proof and never fail, allowing any amount of on-demand,
concurrent testing, greatly improving productivity. Bldg15 gets early
engineering mainframes for I/O testing, getting the 1st engineering
3033 (outside POK processor engineering). Testing took only a couple
percent of processor so we scrounge up 3830 controller and 3330 string
to setup our own online service. Then 1978 get engineering 4341 and
branch office hears about it. Then Jan1979, they con me into doing
benchmark for national lab looking at getting 70 for compute farm
(sort of the precursor to the cluster supercomputing and cluster cloud
megadatacenter tsunami).

In the early 80s, I get HSDT, T1 and faster computer links and battles
with communication group (60s, IBM had 2701 that supported T1 links,
70s going to SNA/VTAM and various issues capped links at 56kbits). Also
working with NSF director and was supposed to get $20M to interconnect
the NSF supercomputer centers. Then congress cuts the budget, some
other things happen and eventually release an RFP (in part based on
what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid). As
regional networks connect in, NSFnet becomes the NSFNET backbone,
precursor to the modern internet.

Also in the early 80s, I got permission to give user group talks on
how ECPS was done. After some of the presentations, Amdahl would
corner me with more questions. They had done MACROCODE (370-like
instructions running in microcode mode) to quickly respond to a plethora
of 3033 microcode changes required to run MVS. At the time, they were
then in the process of implementing HYPERVISOR ("multiple domain",
virtual machine subset microcode) ... IBM wasn't able to respond with
LPAR/PRSM until nearly decade later.  trivia: in the wake of FS
implosion, the head of POK was convincing corporate to kill VM370
product, shutdown the development group and transfer all the people to
POK for MVS/XA (Endicott managed to save the VM370 for the mid-range,
but had to recreate a development group from scratch). Some of the
VM370 people had done a simplified virtual machine for MVS/XA
development, VMTOOL ... it required SIE instruction to move in&out of
virtual machine mode, but 3081 didn't have enough microcode space
... so it was constantly "paging" making it quite slow. Then customers
weren't converting from MVS to MVS/XA as planned (a lot of 370/XA was
to compensate for MVS shortcomings), somewhat similar to the earlier
problem getting customers to convert from VS2/SVS to VS2/MVS,
mentioned here
http://www.mxg.com/thebuttonman/boney.asp

However, Amdahl was having more success moving customers to MVS/XA
with "Multiple Domain", being able to run MVS & MVS/XA
concurrently. Note 308x was only going to be multiprocessor (no single
processors) and the initial 2-CPU 3081D had less aggregate MIPS than a
single processor Amdahl. IBM responds by doubling processor cache for
3081K, bringing aggregate MIPS up to about the same as Amdahl single
processor (although MVS docs had MVS 2-CPU support only getting 1.2-1.5
times the throughput of single processor because of enormous MVS 2-CPU
overhead, meaning that MVS on 3081K with same aggregate MIPS still had
less throughput than 1-CPU Amdahl). Then some Amdahl people do 370
emulation on SUN which was also ported to PCs, and some early PC
virtual machine.

At the time of the ARPANET cutover to internetworking on 1Jan1983,
there were approx. 100 IMP nodes and 255 hosts. By comparison the
internal network was rapidly approaching 1000 world-wide (not
SNA/VTAM). Old archive post with list of corporate locations that
added one or more network nodes in 1983
https://www.garlic.com/~lynn/2006k.html#8

1988 branch office asks me if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes fibre-channel standard, "FCS" (not First Customer Ship),
initially 1gbit/sec transfer, full-duplex, aggregate 200mbyte/sec
(including some stuff I had done in 1980). Then POK gets some of their
serial stuff released with ES/9000 as ESCON (when it was already
obsolete, initially 10mbytes/sec, later increased to
17mbytes/sec). Then some POK engineers become involved with FCS and
define a heavy-weight protocol that drastically cuts throughput
(eventually released as FICON). 2010, a z196 "Peak I/O" benchmark
released, getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the
same time an FCS is announced for E5-2600 server blade claiming over a
million IOPS (two such FCS having higher throughput than 104 FICON
running over FCS). Also IBM pubs recommend that SAPs (system assist
processors that actually do I/O) be kept to 70% CPU (or 1.5M IOPS) and
no new CKD DASD has been made for decades, all being simulated on
industry standard fixed-block devices. Note: 2010 E5-2600 server blade
(16 cores, 31BIPS/core) benchmarked at 500BIPS (ten times max
configured Z196).
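
The arithmetic behind the comparison (figures are from the text; the
~50BIPS for a max-configured z196 just follows from the "ten times"
statement):

#include <stdio.h>

int main(void) {
    double z196_peak_iops = 2000000, ficon = 104;
    printf("per FICON: %.0f IOPS\n", z196_peak_iops / ficon);     /* ~19,231 */

    double fcs_iops = 1000000;              /* single FCS on E5-2600 blade */
    printf("FICON needed to match just 2 FCS: %.0f\n",
           2 * fcs_iops / (z196_peak_iops / ficon));              /* ~104    */

    double core_bips = 31, cores = 16;
    printf("E5-2600 blade: %.0f BIPS vs ~50 BIPS max-configured z196\n",
           core_bips * cores);                                    /* ~496    */
    return 0;
}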

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
FCS and/or FICON Posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe to PC

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe to PC
Date: 09 Nov, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#67 Mainframe to PC

modified 360/40 with virtual memory and did CP40 (from '82 SEAS
presentation)
https://www.garlic.com/~lynn/cp40seas1982.txt

note Atlas paging & virtual memory, Atlas reference (gone 403, but
lives on at wayback):
https://web.archive.org/web/20121118232455/http://www.ics.uci.edu/~bic/courses/JaverOS/ch8.pdf
from above:

Paging can be credited to the designers of the ATLAS computer, who
employed an associative memory for the address mapping [Kilburn, et
al., 1962]. For the ATLAS computer, |w| = 9 (resulting in 512 words
per page), |p| = 11 (resulting in 2048 pages), and f = 5 (resulting in
32 page frames). Thus a 2^20-word virtual memory was provided for a
2^14-word machine. But the original ATLAS operating system employed
paging solely as a means of implementing a large virtual memory;
multiprogramming of user processes was not attempted initially, and
thus no process id's had to be recorded in the associative memory. The
search for a match was performed only on the page number p.

... snip ...
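
The quoted figures work out directly from the bit-field sizes; a small
sketch just restating the quote's arithmetic:

#include <stdio.h>

int main(void) {
    int w = 9, p = 11, f = 5;                 /* ATLAS field widths (quoted) */
    printf("words per page      : %d\n", 1 << w);                  /* 512     */
    printf("virtual pages       : %d\n", 1 << p);                  /* 2048    */
    printf("page frames         : %d\n", 1 << f);                  /* 32      */
    printf("virtual memory words: %d (2^%d)\n", 1 << (p + w), p + w); /* 1048576 */
    printf("real memory words   : %d (2^%d)\n", 1 << (f + w), f + w); /* 16384   */
    return 0;
}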

... somewhat analogous to adding virtual memory to all 370s; early last
decade I was asked to track down that decision and found the staff to
the executive making the decision. Basically MVT storage management was
so bad that region sizes had to be specified four times larger than
used; a typical 1mbyte 370/165 could only run four concurrent regions,
insufficient to keep system busy and justified. Going to 16mbyte
virtual address space allowed the number of regions to be increased by
a factor of four (capped at 15 because of 4bit storage protect keys)
... with little or no paging (somewhat like running MVT in a CP67
16mbyte virtual machine). I would periodically drop in on Ludlow doing
the initial implementation (on 360/67): a little bit of code to create
the virtual memory tables and some simple paging (VS2/SVS) ... the
biggest task (similar to CP67) was EXCP/SVC0, which was being passed
channel programs with virtual addresses while channels required real
addresses ... a copy of the channel program had to be made substituting
real addresses for virtual (he borrows CCWTRANS from CP67 for merging
into EXCP).
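
A minimal sketch (assumptions only, not the actual CP67 CCWTRANS or
EXCP code) of what copying a channel program and substituting real
addresses for virtual involves; the CCW layout, translate_and_pin and
the toy page table are hypothetical stand-ins:

#include <stdint.h>

struct ccw {                          /* simplified CCW for illustration */
    uint8_t  op;
    uint32_t data_addr;               /* virtual address in caller's program */
    uint16_t flags;
    uint16_t count;
};

/* toy page table: virtual page number -> real frame number; a real system
 * would walk segment/page tables, fault pages in, and pin them */
static uint32_t page_frame[4096];

static uint32_t translate_and_pin(uint32_t vaddr)
{
    uint32_t vpage  = vaddr >> 12;    /* 4K pages */
    uint32_t offset = vaddr & 0xfff;
    return (page_frame[vpage] << 12) | offset;
}

/* build the shadow (real-address) copy that the channel actually runs */
void build_shadow_channel_program(const struct ccw *vprog, struct ccw *rprog, int n)
{
    for (int i = 0; i < n; i++) {
        rprog[i] = vprog[i];                              /* copy op/flags/count */
        rprog[i].data_addr = translate_and_pin(vprog[i].data_addr);
    }
    /* a real translator also has to split CCWs whose data areas cross page
     * boundaries (using data chaining), follow TIC/command chaining, and
     * unpin the pages when the I/O completes */
}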

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CEO 1993

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CEO 1993
Date: 10 Nov, 2025
Blog: Facebook

1972, Learson tried (and failed) to block bureaucrats, careerists, and
MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of
management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

First half of 70s, was IBM's Future System; FS was totally different
from 370 and was going to completely replace it. During FS, internal
politics was killing off 370 projects and lack of new 370 is credited
with giving the clone 370 makers, their market foothold.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive.

... snip ...

In early 80s, I'm introduced to John Boyd and would sponsor his
briefings at IBM
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
He had
lots of stories, including being very vocal that the electronics
across the trail wouldn't work ... possibly as punishment, he is put
in command of spook base. Boyd biography has spook base a $2.5B
"windfall" for IBM; some refs:
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

Boyd trivia: 89-90, Marine Corps Commandant leverages Boyd in a corps
makeover (at a time when IBM was desperately in need of makeover). Two
yrs later (and 20yrs after Learson's failure), IBM has one of the
largest losses in the history of US companies. IBM was being
reorganized into the 13 "baby blues" in preparation for breaking up
the company (take off on "baby bells" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

We continued to have Boyd conferences at Quantico, Marine Corps Univ
(even after Boyd passed in 1997)

Boyd posts and URLs:
https://www.garlic.com/~lynn/subboyd.html
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CEO 1993

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CEO 1993
Date: 10 Nov, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#69 IBM CEO 1993

Example was late 80s, senior disk engineer got a talk scheduled at
annual communication group world-wide internal conference supposedly
on 3174 performance but opened the talk with the statement that the
communication group was going to be responsible for demise of the disk
division. The disk division was seeing drop in disk sales with data
fleeing mainframe datacenters to more distributed computing friendly
platforms. They had come up with a number of solutions that were
constantly being vetoed by the communication group (with their
corporate responsibility for everything that cross datacenter walls)
trying to preserve their SNA/VTAM paradigm. It wasn't just disks and a
couple years later, the communication group stranglehold on mainframe
datacenters resulted in IBM having one of the largest losses in history
of US companies. The disk division software senior executive's
partial countermeasure was investing in distributed computing
startups that would use IBM disks ... and he would periodically ask us
to drop by his investments to see if we could offer any help.

Note: 1988, Nick Donofrio had approved HA/6000 project, originally for
NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had DEC
VAXCluster support in same source base with unix). We were also having some
number of distributed computing systems ported to HA/CMP (like LLNL's
"LINCS" and NCAR's "Mesa Archival")

S/88 Product Administrator started taking us around to their customers
and also had me write a section for the corporate continuous
availability document (it gets pulled when both AS400/Rochester
and mainframe/POK complain they couldn't meet requirements). We had
coined disaster survivability and geographic
survivability (as counter to disaster/recovery) when out marketing
HA/CMP. One of the visits to 1-800 bellcore development showed that
S/88 would use a century of downtime in one software upgrade while
HA/CMP had a couple extra "nines" (compared to S/88).
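
Standard availability ("nines") arithmetic behind such claims; the
one-hour upgrade outage below is an assumed figure, used only to show
how a single planned outage can burn a century's worth of downtime
budget at the higher nines levels:

#include <stdio.h>
#include <math.h>

int main(void) {
    double year_secs = 365.25 * 24 * 3600;
    for (int nines = 3; nines <= 6; nines++) {
        double avail   = 1.0 - pow(10, -nines);
        double allowed = year_secs * (1.0 - avail);     /* downtime budget */
        printf("%d nines: %.4f%% -> %8.1f sec/year allowed downtime\n",
               nines, 100 * avail, allowed);
    }
    double outage  = 3600;                      /* assumed 1hr upgrade outage */
    double budget6 = year_secs * pow(10, -6);   /* ~31.6 sec/year at 6 nines  */
    printf("1hr outage = %.0f years of a 6-nines budget\n", outage / budget6);
    return 0;
}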

Also previously worked on original SQL/relational, System/R with Jim
Gray and Vera Watson.

Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid92 and 128-system
clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), and we were
told we couldn't work on clusters with more than 4-systems; we then
leave IBM a few months later.

Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to
industry MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

Communication group responsible for demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#emulation
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance
original sql/relational, System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370 Virtual Memory

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370 Virtual Memory
Date: 10 Nov, 2025
Blog: Facebook

Early last decade, I was asked to track down decision to add virtual
memory to all 370s; found staff to executive making the
decision. Basically MVT storage management was so bad that region
sizes had to be specified four times larger than used; as a result a
1mbyte 370/165 could only run four regions concurrently
... insufficient to keep system busy and justified. Going to 16mbyte
virtual address space would allow number regions to be increased by
factor of four times (modulo 4bit storage protect keys capping it at
15) with little or no paging (sort of like running MVT in CP67 16mbyte
virtual machine) ... aka VS2/SVS. Ludlow was doing initial
implementation on 360/67 in POK and I would stop in
periodically. Basically a little bit of code to build the virtual
memory tables and very simple paging code (since they figured that
paging rate would never exceed 5). Biggest piece of code was
EXCP/SVC0, same problem as CP67 ... applications would invoke SVC0
with channel programs with virtual addresses and channels required real
addresses. He borrows CCWTRANS from CP67 to craft into
EXCP/SVC0. Pieces of email exchange in this old archived post
https://www.garlic.com/~lynn/2011d.html#73

I had started posting that systems were getting larger and faster than
disks were getting faster ... and VS2 was working hard to get around
the 15 limit imposed by 4bit storage protect key ... going to a
separate 16mbyte virtual address space for every region ... aka
VS2/MVS. MVS heavily inherited OS/360 pointer passing APIs, they map
an 8mbyte image of the MVS kernel into every 16mbyte address space (so
kernel calls can take the calling API address pointer and access
calling parameters directly, leaving 8mbytes for
applications). However each subsystem was also moved to its own private
16mbyte address space, and to pass a pointer to a parameter list they
created the 1mbyte common segment area ("CSA") image in every virtual
address space (leaving 7mbytes for applications). Then because common
area requirements were somewhat proportional to number of subsystems
and concurrent applications, common segment area CSA explodes to
common system area (still "CSA") and by 3033 was pushing 5-6 mbytes,
leaving 2-3 mbytes (but threatening to become 8mbytes, leaving zero
for programs ... aka MVS theoretically could have unlimited number of
virtual address spaces, but as the number increased, CSA requirement
would expand to take over everything besides the kernel image). 370/XA
access registers were to address this problem: calls to a subsystem
would switch the caller's address space to secondary and load the
subsystem address space as primary ... allowing subsystems to address
the caller's virtual address space (in secondary, w/o using CSA). When
the subsystem returns, the caller's address space pointer in secondary
would be moved back to primary. Because it was taking so long to get to
MVS/XA, a subset of access registers were retrofitted to 3033 as
"dual-address space" mode.

Early 80s, I wrote a tome that disk relative system throughput had
declined by order of magnitude since 360 was announced (disks got 3-5
times faster, but systems got 40-50 times faster). Disk division
executive took exception and assigned the division performance group
to refute my claims. After a couple weeks, they basically came back
and said I had slightly understated the problem. They then respun the
analysis for SHARE presentation about how to better configure disks
for improved system throughput (16Aug1984, SHARE 63, B874).

CSC posts (responsible for CP40 & CP67 virtual machines)
https://www.garlic.com/~lynn/subtopic.html#545tech

a few recent posts mentioning CSA, dual-address space, and B874 share
talk
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370 Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370 Virtual Memory
Date: 11 Nov, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#71 IBM 370 Virtual Memory

When "Future System" implodes, there is mad rush to get stuff back
into 370 product pipelines, including kicking off quick&dirty
3033&3081 efforts in parallel. Then the head of POK was convincing
corporate to kill VM370 product, shutdown the VM370 development group
and transfer all the people to POK for MVS/XA. They weren't planning
on telling the people until very last minute (to minimize the number
that might escape into the Boston/Cambridge area). The information
managed to leak and some number managed to escape (DEC VAX/VMS was in
its infancy and joke was that head of POK was major contributor to
VMS). Endicott eventually manages to acquire the VM370 product
mission, but had to recreate a development group from scratch.

The decision to add virtual memory to all 370s included doing
VM370. Some of the CSC CP67/CMS people split off and move to the 3rd
flr, taking over the IBM Boston Programming Center for the VM370
development group (then, as the group exceeds space on the 3rd flr,
they move out to the empty IBM SBC bldg at Burlington Mall, off
128). In the morph from CP67->VM370 lots of stuff was simplified and/or
dropped (including multiprocessor support).

When I graduated and joined CSC, one of my hobbies was enhanced
production operating systems for internal datacenters ... and the
internal online sales&marketing support HONE systems were one of my
1st (and long time) customers. Then in 1974, I start adding bunch of
stuff back into VM370R2-base for my internal CSC/VM (including kernel
restructure needed for SMP/multiprocessor, but not the actual SMP
support). Then move my CSC/VM to VM370R3-base and add SMP back in,
initially for US HONE, so they can upgrade all their 158s&168s to
2-CPU (getting twice the throughput of single processor, at the time
MVS documentation said its 2-CPU support only got 1.2-1.5 times
throughput of 1-CPU).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67l, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
370/125 5-CPU posts
https://www.garlic.com/~lynn/submain.html#bounce

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370 Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370 Virtual Memory
Date: 11 Nov, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#71 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2025e.html#72 IBM 370 Virtual Memory

As undergraduate in the 60s at univ, I rewrote large amount of CP67,
including the page replacement algorithm and page I/O. It was global
LRU based (at a time when the academic literature was all about "local
LRU" algorithms). After joining IBM, I upgrade my internal CP67 to
global LRU (and also use it for my CSC/VM and SJR/VM internal
systems). In the early 70s, CSC had 768kbyte 360/67 (104 pageable 4k
pages) and IBM Grenoble Scientific Center had 1mbyte 360/67 (155
pageable 4k pages) and modified CP67 to correspond to the 60s "Local
LRU" literature ... and both centers ran similar workloads. CSC ran
75-80 users with better throughput and interactive response than the
Grenoble 35 users (with only 104 4k pages compared to Grenoble's 155
pages).
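
To make the global-vs-local distinction concrete, a generic global
"clock" (second-chance) replacement sketch (a generic illustration
only, not the actual CP67 code): the scan covers all page frames
regardless of which virtual machine owns them, rather than only the
faulting task's own frames:

#include <stdbool.h>

#define NFRAMES 104                 /* e.g. the 768kbyte 360/67 figure above */

struct frame { bool referenced; int owner; int vpage; };
static struct frame frames[NFRAMES];
static int hand;                    /* clock hand, persists across calls */

int select_victim(void)
{
    for (;;) {
        struct frame *f = &frames[hand];
        int victim = hand;
        hand = (hand + 1) % NFRAMES;
        if (f->referenced)
            f->referenced = false;  /* give it a second chance, clear ref bit */
        else
            return victim;          /* steal this frame, whoever owns it */
    }
}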

At Dec81 ACM SIGOPS meeting, Jim Gray asks me if I can help a Tandem
co-worker get their Stanford PhD (one of the people that had worked on
Stanford ORVYL for 360/67). It involved global LRU page replacement,
and the forces from the late 60s ACM local LRU work were lobbying to
block giving a PhD for anything involving global LRU. I went to send a
copy of the CSC & Grenoble performance data (that could compare the two
strategies on the same hardware and same system) but executives blocked
me from sending it for almost a year. I hoped it was punishment for
being blamed for online computer conferencing (and not meddling in the
academic dispute).

trivia: In late 70s & early 80s, I was blamed for online computer
conferencing on the internal network (larger than arpanet/internet
from just about the beginning until sometime mid/late 80s, about the
same time it was forced to convert to SNA/VTAM). It really took off
spring of 1981, when I distributed a trip report of a visit to Jim Gray
at Tandem; only about 300 directly participated, but claims that 25,000
were reading (folklore is that when corporate executive committee was
told, 5of6 wanted to fire me). Some of the results were official
software and officially sanctioned, moderated discussion groups. Also
a researcher was hired to study how I communicated, sat in the back of
my office for 9months, took notes on face-to-face, telephone, got
copies of all incoming&outgoing email (one stat was over 270 different
people/week), logs of instant messages. Results were IBM research
papers, conference talks and papers, books, and Stanford PHD (joint
between language and computer AI).

From IBM Jargon:

Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

... snip ...

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
paging, replacement algorithms, I/O posts
https://www.garlic.com/~lynn/subtopic.html#clock
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

some posts mentioning the paging, local/global LRU, Grenoble, Tandem,
Stanford, computer conferencing
https://www.garlic.com/~lynn/2025c.html#17 IBM System/R
https://www.garlic.com/~lynn/2024g.html#107 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#34 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024b.html#95 Ferranti Atlas and Virtual Memory
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2022f.html#119 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#45 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2018f.html#63 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#62 LRU ... "global" vs "local"
https://www.garlic.com/~lynn/2014l.html#22 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2013k.html#70 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013i.html#30 By Any Other Name
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012g.html#21 Closure in Disappearance of Computer Scientist
https://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370 Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370 Virtual Memory
Date: 11 Nov, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025e.html#71 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2025e.html#72 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2025e.html#73 IBM 370 Virtual Memory

trivia: before VS2/SVS & VS2/MVS ... Boeing Huntsville had run into
the MVT storage management problem early and modified MVTR13 to run in
virtual memory mode on 360/67 (w/o doing any actual paging, but using
address reorganization as countermeasure). Lots of places had gotten
360/67s for TSS/360 but TSS/360 wasn't coming to production ... so they
started out running with OS/360 as 360/65. CSC had modified a 360/40
with virtual memory and did CP40/CMS ... which morphs into CP67/CMS
when 360/67 started shipping, standard with virtual memory (and
numerous customers then started installing CP67). Univ. of Michigan
(MTS) and Stanford (ORVYL) had done their own virtual memory
systems. Boeing Huntsville had configured their 2-CPU 360/67 as two
360/65s running MVT.

I had taken two credit hr intro Fortran/computers and at end of
semester I was hired to rewrite 1401 MPIO for 360/30. Univ. was
getting 360/67 for TSS/360, replacing 709/1401 and got 360/30
temporarily (replacing 1401) pending 360/67 shipping. Univ shutdown
datacenter on weekends and I would have the whole place dedicated,
although 48hrs w/o sleep made Monday classes hard. They gave me a pile
of hardware & software manuals and I got to design and implement my
own monitor, device drivers, interrupt handlers, error recovery,
storage management, etc ... and within a few weeks had 2000 card
360/30 assembler program. Within a year of taking intro class, the
360/67 arrives and I'm hired fulltime responsible for OS/360. Student
Fortran job ran under second on 709, but initially over a minute on
360. I install HASP and it cuts the time in half. I then start redoing
STAGE2 SYSGEN, ordering placement of datasets and PDS members to
optimize arm seek and multi-track search, cutting another 2/3rds to
12.9secs. It never got better than 709 until I install UofWaterloo
WATFOR. CSC then comes out and installs CP67 and I mostly played with
it during my weekend window, starting to rewrite lots of CP67.

Later, before I graduate, I'm hired fulltime into small group in
Boeing CFO office to help with the formation of Boeing Computer
Services, consolidate all dataprocessing into an independent business
unit. I think Renton datacenter largest in the world, 360/65s arriving
faster than they could be installed, boxes constantly staged in
hallways around machine room (joke that Boeing was getting 360/65s
like other companies got keypunches). Lots of politics between Renton
director and CFO, who only had a 360/30 up at Boeing Field for payroll
(although they enlarge the machine room for a 360/67 for me to play
with when I wasn't doing other stuff). Then the 2-CPU 360/67 is
brought up to Seattle from Boeing Huntsville. Then when I graduate, I
join IBM CSC, rather than staying with Boeing CFO.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Some recent posts mentioning Univ. student fortran, watfor, Boeing
CFO, renton, huntsville, mvt
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems

--
virtualization experience starting Jan1968, online at home since Mar1970

Interactive response

From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive response
Date: 14 Nov, 2025
Blog: Facebook

One of my hobbies after joining IBM was highly optimized operating
systems for internal datacenters (moving a lot of stuff that I had done
as undergraduate to CP67L for internal systems). One of the 1st (and
long time) internal customers was the internal online sales&marketing
support HONE systems. With the decision to add virtual memory to all
370s, there was also the decision to do VM370, and some of the CSC
people (from the 4th flr) take over the IBM Boston Programming group on
the 3rd flr. In the morph of CP67->VM370, a lot of features were
simplified and/or dropped (like multiprocessor support). In 1974, I
started moving stuff to VM370R2-base for CSC/VM (including the kernel
reorg needed for SMP, but not the actual SMP support). Then for
VM370R3-base CSC/VM, I put in SMP support, initially for HONE systems,
so they could upgrade their 158&168 systems from 1-CPU to 2-CPU (2-CPU
throughput twice that of 1-CPU, at a time when MVS docs were claiming
2-CPU only had 1.2-1.5 times the throughput of 1-CPU).

I transfer out to SJR, and CSC/VM becomes SJR/VM; with further
enhancements it was getting .11sec system response for internal
systems. In early 80s, there were increasing studies showing quarter
second response improved productivity. 3272/3277 had .086 hardware
response (plus .11sec system response resulted in .196sec response
seen by users). Then 3274/3278 was introduced with lots of 3278
hardware moved back to 3274 controller, cutting 3278 manufacturing
costs and significantly driving up coax protocol chatter
... increasing hardware response to .3sec to .5sec depending on amount
of data (making it impossible to achieve quarter second). Letters to
the 3278 product administrator complaining about interactive computing
got a response that 3278 wasn't intended for interactive computing but
data entry (sort of electronic keypunch). 3272/3277 required .164sec
system response (for human to see quarter second). I don't believe any
TSO users ever noticed 3278 issues, since they rarely ever saw even
one sec system response.
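
The arithmetic behind the above (figures from the text):

#include <stdio.h>

int main(void) {
    double target = 0.25;                       /* quarter-second goal        */
    double hw3277 = 0.086, system = 0.11;
    printf("3277: %.3f + %.3f = %.3fs seen by user\n",
           hw3277, system, hw3277 + system);    /* 0.196s, under target       */
    printf("3277: system response must be <= %.3fs to stay under %.2fs\n",
           target - hw3277, target);            /* 0.164s                     */

    double hw3278_lo = 0.3, hw3278_hi = 0.5;    /* 3274/3278 hardware alone   */
    printf("3278: hardware %.1f-%.1fs already exceeds the %.2fs target\n",
           hw3278_lo, hw3278_hi, target);
    return 0;
}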

When I arrived at SJR, it had a MVT 370/195 .... but was shortly
scheduled to get a MVS/168 and a VM/158 (they had a VM/145 that had
been used to develop original SQL/relational RDBMS, System/R, I worked
on it with Jim Gray and Vera Watson). The 168/158 had couple strings
of 3330 with two channel switches connecting to both systems, but
strict orders that MVS packs could never be mounted on 158 3330
strings. One morning operations had mounted a MVS 3330 on a VM370
string, and within a few minutes operations were getting irate calls
from all over the bldg about CMS response. One of the issues is MVS
PDS datasets can have multi-cylinder directories that were searched
with multi-track search ... a PDS module load can have a couple
multi-track searches, requiring .317sec for each full cylinder search
(which locks the controller and all associated drives for the duration
of the search). Demands that operations move the offending 3330 was
answered that it would be done 2nd shift. We then have a highly
optimized (for VM) single-pack VS1 mounted on "MVS" drive and start a
PDS directory operation ... and even though the 158 was heavily loaded
... it was able to bring MVS168 nearly to a halt. Operations then
agree to move the "MVS" 3330 if we move VS1 (VS1 had about 1/4 the MVS
pathlength to go from interrupt to redrive of queued channel program)

Later I would be periodically brought into customers' multi-CEC MVS
configurations after all the company MVS experts had given up. Example
was a 4-CEC 168 shared-data configuration for one of the largest
national grocers; loading store controller apps would almost come to a
crawl during peak load. I was brought into a class room with large
piles of system activity reports. After 30mins or so, I noticed that a
specific 3330's aggregate activity across all systems was peaking at
7/sec (during worst performance). Turns out it was the (shared) store
controller PDS dataset ... it peaked at an aggregate of two store
controller app loads/sec for the hundreds of stores across the
US. Each store controller app load first required an avg of two
multi-track searches (one .317secs plus .158secs, or .475secs).
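
The arithmetic behind the .317sec and two-loads/sec figures, using the
standard 3330 geometry (19 tracks/cylinder at 3600rpm):

#include <stdio.h>

int main(void) {
    double rev = 60.0 / 3600;                   /* 16.7ms per revolution      */
    int tracks = 19;                            /* 3330 tracks per cylinder   */
    double full = tracks * rev;                 /* ~0.317s full-cyl search    */
    printf("full-cylinder multi-track search: %.3fs\n", full);
    printf("avg PDS load (1 full + 1 half-cyl search): %.3fs\n", full + full / 2);
    /* controller and its drives are locked out for the search duration, so a
     * shared PDS tops out at roughly 1/0.475 ~ 2 app loads/sec across all CECs */
    printf("peak store-controller app loads/sec: %.1f\n", 1.0 / (full + full / 2));
    return 0;
}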

Other trivia: when 1st transfer to SJR, I get to wander around IBM
(and non-IBM) datacenters in silicon valley, including disk
bldg14(/engineering) and bldg15(/product test) across the street. They
were doing prescheduled, 7/24, stand alone testing and mentioned that
they had tried MVS, but it had 15min MTBF (in that environment)
requiring manual re-IPL. I offer to rewrite I/O supervisor to make it
bullet-proof and never fail, allowing any amount of on-demand,
concurrent testing, greatly improving productivity. I also got
"channel redrive" pathlength down to about 1/20th that of MVS (and
1/5th of optimized VS1) ... aka from interrupt to redirve of queued
channel program). I then write an internal research report on the I/O
integrity work and happen to mention MVS 15min MTBF ... bringing down
the wrath of the MVS organization on my head.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

some recent posts mentioning interactive response, 3272/3277,
3274/3278
https://www.garlic.com/~lynn/2025e.html#31 IBM 3274/3278
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#47 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025b.html#115 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025.html#127 3270 Controllers and Terminals
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024f.html#12 3270 Terminals
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023f.html#78 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023e.html#0 3270
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?
https://www.garlic.com/~lynn/2022h.html#96 IBM 3270
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021c.html#92 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing Computer Services

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing Computer Services
Date: 15 Nov, 2025
Blog: Facebook

trivia: As undergraduate at the univ, I had been hired fulltime by the
univ responsible for os/360. Then before I graduate, I was hired
fulltime into small group in the Boeing CFO office to help with the
formation of Boeing Computer Services (consolidate all Boeing
dataprocessing into independent business unit ... as well as offering
services to non-Boeing entities). I thought Renton datacenter was
largest in the world, 360/65s arriving faster than they could be
installed, boxes constantly staged in hallways around machine room
(joke that Boeing was getting 360/65s like other companies got
keypunches). Renton did have one 360/75. Lots of politics between
Renton director and CFO who only had a 360/30 up at Boeing field for
payroll (although they enlarge the machine room to install a 360/67
for me to play with when I wasn't doing other stuff).

Boeing Huntsville had previously gotten a 2-CPU 360/67 with a bunch of
2250s (originally for TSS/360, but ran with MVTR13, they had run into
the MVT storage management problem, and modified MVTR13 to run in
virtual memory mode as partial solution, same problem that was later
used to justify adding virtual memory to all 370s) which was then
brought up to Seattle. There was disaster plan to replicate Renton up
at the new 747 plant in Everett (Mt Rainier heats up and the
resulting mud slide takes out Renton).

747#3 was flying the skies of Seattle getting FAA flt certification. There
was 747 cabin mockup just south of Boeing field and tours would claim
there would be so many 747 passengers that 747 would never be served
by fewer than four jetways.

After graduating I join IBM CSC (instead of staying with CFO)
... mid-70s, a CSC co-worker had left IBM and joined the BCS
gov. services office in DC ... and I would visit him ... one visit, he
was running the analysis for USPS justifying a penny increase in 1st
class postal stamps.

"BIS" was sold to SAIC in 1999.

a few posts mentioning boeing computer services:
https://www.garlic.com/~lynn/2025e.html#7 Mainframe skills
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022g.html#63 IBM DPD
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022.html#30 CP67 and BPS Loader
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#54 Learning PDP-11 in 2021
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021b.html#5 Availability
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021.html#48 IBM Quota
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2019e.html#153 At Boeing, C.E.O.'s Stumbles Deepen a Crisis
https://www.garlic.com/~lynn/2019e.html#151 OT:  Boeing to temporarily halt manufacturing of 737 MAX
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM I/O & DASD

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM I/O & DASD
Date: 16 Nov, 2025
Blog: Facebook

1980, STL (now SVL) was bursting at the seams and they were moving 300
people from the IMS group to an offsite bldg, with dataprocessing back
to the STL datacenter. They tried "remote 3270" but found the human
factors unacceptable. I get con'ed into doing channel extender support
so they can place channel-attached 3270 controllers at the offsite
bldg. They found no perceptible difference in human factors between
inside STL and offsite.

STL had been placing 3270 controllers spread across all channels with
3830 DASD controllers. They then found the 168s for the offsite IMS
group (with channel extenders) increased throughput by 10-15% (aka the
3270 controllers had really high channel busy, but the
channel-extender boxes had significantly lower channel busy for the
same amount of 3270 traffic, reducing channel contention with the DASD
controllers and increasing throughput). There was then some
consideration of putting all 3270 controllers on channel extenders
(even those inside STL).
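
A minimal back-of-envelope sketch of the contention effect (Python;
the 30% and 5% channel-busy figures are purely illustrative
assumptions, not measurements from the post):

# crude sketch: how channel busy from 3270 controller traffic stretches
# DASD I/O service on a shared channel (illustrative utilizations only)
def dasd_slowdown(channel_busy):
    # rough queueing approximation: expected stretch ~ 1/(1 - utilization)
    return 1.0 / (1.0 - channel_busy)

for label, busy in [("local 3270 controller", 0.30), ("channel-extender box", 0.05)]:
    print(label, round(dasd_slowdown(busy), 2))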

1988, the branch office asks if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes the fibre-channel standard ("FCS", including some stuff I had
done in 1980), initially 1gbit/sec transfer, full-duplex, aggregate
200mbyte/sec. POK then releases some serial stuff with ES/9000 as
ESCON (when it is already obsolete), initially 10mbytes/sec, later
upgraded to 17mbytes/sec.

Then some POK engineers become involved with FCS and define a
heavy-weight protocol for FCS that radically reduces throughput,
eventually released as FICON. The latest public benchmark I've seen is
the 2010 z196 "Peak I/O" getting 2M IOPS using 104 FICON (about 20K
IOPS/FICON). About the same time, an FCS is released for E5-2600
server blades claiming over a million IOPS (two such FCS having higher
throughput than 104 FICON). Note: IBM docs recommend that SAPs (system
assist processors that do the I/O) be kept to 70% CPU, which would be
about 1.5M IOPS. Also, no CKD DASD has been made for decades, all
being simulated on industry-standard fixed-block devices.
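
The per-FICON arithmetic behind those figures, using only the numbers
quoted above (a quick Python check, not an IBM benchmark):

z196_peak_iops = 2_000_000                   # 2010 z196 "Peak I/O" benchmark
ficon_count = 104
print(z196_peak_iops / ficon_count)          # ~19.2K IOPS per FICON
fcs_blade_iops = 1_000_000                   # single FCS claimed for E5-2600 blades
print(2 * fcs_blade_iops >= z196_peak_iops)  # two FCS match/exceed all 104 FICON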

trivia: a max-configured z196 had an industry benchmark (number of
program iterations compared to the industry reference platform) of
50BIPS (and went for $30M). An E5-2600 server blade had the (same)
industry benchmark of 500BIPS (and an IBM list price of $1815).
Industry press then had articles that server component vendors were
shipping half their product directly to large cloud operations (that
assemble their own server blades), and IBM unloads its server blade
business.
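
The price/performance gap implied by those two data points (again,
just Python arithmetic on the figures above):

z196_bips, z196_price = 50, 30_000_000
blade_bips, blade_price = 500, 1815
print(z196_price / z196_bips)    # $600,000 per BIPS for max-configured z196
print(blade_price / blade_bips)  # ~$3.63 per BIPS for an E5-2600 blade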

posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning DASD, FBA, CKD, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd
posts mentioning channel extender
https://www.garlic.com/~lynn/submisc.html#channel.extender
posts mentioning FCS and/or FICON
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360, Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360, Future System
Date: 16 Nov, 2025
Blog: Facebook

Amdahl wins the battle to make ACS 360-compatible. Then ACS/360 is
killed (folklore was concern that it would advance the state-of-the-art
too fast) and Amdahl leaves IBM (before Future System); end of ACS/360:
https://people.computing.clemson.edu/~mark/acs_end.html

After joining IBM, one of my hobbies was doing enhanced production
operating systems for internal datacenters (the online sales&marketing
support HONE system was one of the 1st and longtime customers) ... and
wandering around internal datacenters ... I spent some amount of time
at user group meetings (like SHARE) and wandering around
customers. The director of one of the largest (customer) financial
datacenters liked me to drop in and talk technology. At one point, the
branch manager horribly offended the customer and in retaliation, they
ordered an Amdahl machine (a lonely Amdahl clone 370 in a vast sea of
"blue"). Up until then Amdahl had been selling into the univ. &
tech/scientific markets, but clone 370s had yet to break into the IBM
true-blue commercial market ... and this would be the first. I got
asked to go spend 6m-12m on site at the customer (to help obfuscate
the reason for the Amdahl order?). I talked it over with the customer,
who said that while he would like to have me there, it would have no
effect on the decision, so I declined the offer. I was then told the
branch manager was a good sailing buddy of the IBM CEO and I could
forget a career, promotions, raises.

During the Future System period (FS was completely different from 370
and was going to completely replace 370s), internal politics was
killing off 370 projects, which is credited with giving the clone 370
makers (including Amdahl) their market foothold. Then when FS
implodes, there was a mad rush to get stuff back into the 370 product
pipelines, including the quick&dirty 3033&3081 efforts.
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

The 370 emulator minus the FS microcode was eventually sold in 1980 as
the IBM 3081. The ratio of the amount of circuitry in the 3081 to its
performance was significantly worse than other IBM systems of the
time; its price/performance ratio wasn't quite so bad because IBM had
to cut the price to be competitive. The major competition at the time
was from Amdahl Systems -- a company founded by Gene Amdahl, who left
IBM shortly before the FS project began, when his plans for the
Advanced Computer System (ACS) were killed. The Amdahl machine was
indeed superior to the 3081 in price/performance and spectacularly
superior in terms of performance compared to the amount of circuitry.

... snip ...

Originally, 308x was only going to be multiprocessor, with no
single-processor systems. The initial 2-CPU 3081D had less aggregate
MIPS than an Amdahl single processor. The 3081 processor cache sizes
were doubled for the 2-CPU 3081K, bringing its aggregate MIPS rate up
to about the same as the Amdahl single processor. However, MVS docs
said that 2-CPU throughput was only 1.2-1.5 times the throughput of
1-CPU (high overhead and inefficient MVS multiprocessor support), aka
MVS 3081K 2-CPU throughput was only .6-.75 times the throughput of MVS
on an Amdahl 1-CPU (even though the MIPS rates were about the same).
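
The .6-.75 range follows from the numbers above; a quick Python check,
assuming the 3081K 2-CPU aggregate MIPS roughly equals the Amdahl
1-CPU MIPS (so one 3081K CPU is about half the Amdahl CPU):

for mvs_mp_factor in (1.2, 1.5):       # MVS 2-CPU throughput vs one of its own CPUs
    ratio = mvs_mp_factor / 2.0        # vs an Amdahl 1-CPU with ~2x the per-CPU MIPS
    print(mvs_mp_factor, "->", ratio)  # prints 0.6 and 0.75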

Other trivia: In the wake of the FS implosion, I was asked to help
with a 16-CPU 370 and we con the 3033 processor engineers into helping
(a lot more interesting than remapping 168 logic to 20% faster
chips). Everybody thought it was really great until somebody tells the
head of POK that it could be decades before POK's favorite son
operating system ("MVS") has (effective) 16-CPU support (POK doesn't
ship a 16-CPU system until after the beginning of the new century).

trivia: I got sucked into the 16-CPU effort, in part, because I had
recently finished 2-CPU support, initially for HONE, so they could add
a 2nd CPU to all their 158 & 168 systems (and see twice the throughput
of their 1-CPU systems).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

Some recent posts mentioning end of ACS:
https://www.garlic.com/~lynn/2025e.html#60 Doing both software and hardware
https://www.garlic.com/~lynn/2025e.html#43 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#42 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#16 CTSS, Multics, Unix, CSC
https://www.garlic.com/~lynn/2025e.html#11 Interesting timing issues on 1970s-vintage IBM mainframes
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025d.html#84 IBM Water Cooled Systems
https://www.garlic.com/~lynn/2025d.html#76 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#61 Amdahl Leaves IBM
https://www.garlic.com/~lynn/2025d.html#56 IBM 3081
https://www.garlic.com/~lynn/2025d.html#52 Personal Computing
https://www.garlic.com/~lynn/2025d.html#25 IBM Management
https://www.garlic.com/~lynn/2025d.html#7 IBM ES/9000
https://www.garlic.com/~lynn/2025c.html#112 IBM Virtual Memory (360/67 and 370)
https://www.garlic.com/~lynn/2025c.html#79 IBM System/360
https://www.garlic.com/~lynn/2025c.html#49 IBM And Amdahl Mainframe
https://www.garlic.com/~lynn/2025c.html#33 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#21 Is Parallel Programming Hard, And, If So, What Can You Do About It?
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#100 IBM Future System, 801/RISC, S/38, HA/CMP
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081
https://www.garlic.com/~lynn/2025b.html#69 Amdahl Trivia
https://www.garlic.com/~lynn/2025b.html#57 IBM Downturn, Downfall, Breakup
https://www.garlic.com/~lynn/2025b.html#56 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#35 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2025.html#100 Clone 370 Mainframes
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2025.html#53 Canned Software and OCO-Wars
https://www.garlic.com/~lynn/2025.html#34 The Greatest Capitalist Who Ever Lived: Tom Watson Jr. and the Epic Story of How IBM Created the Digital Age
https://www.garlic.com/~lynn/2025.html#32 IBM 3090
https://www.garlic.com/~lynn/2025.html#27 360/65 and 360/67
https://www.garlic.com/~lynn/2025.html#22 IBM Future System
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#76 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#32 What is an N-bit machine?
https://www.garlic.com/~lynn/2024g.html#26 IBM Move From Leased To Sales
https://www.garlic.com/~lynn/2024g.html#24 2001/Space Odyssey
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#122 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024f.html#24 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024f.html#23 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024e.html#115 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#109 Seastar and Iceberg
https://www.garlic.com/~lynn/2024e.html#100 360, 370, post-370, multiprocessor
https://www.garlic.com/~lynn/2024e.html#65 Amdahl
https://www.garlic.com/~lynn/2024e.html#37 Gene Amdahl
https://www.garlic.com/~lynn/2024e.html#18 IBM Downfall and Make-over
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024d.html#101 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#100 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#66 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#65 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#52 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024c.html#20 IBM Millicode
https://www.garlic.com/~lynn/2024c.html#0 Amdahl and IBM ACS
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#103 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#91 7Apr1964 - 360 Announce
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2024.html#98 Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2024.html#11 How IBM Stumbled onto RISC

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 8100, SNA, OSI, TCP/IP, Amadeus

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 8100, SNA, OSI, TCP/IP, Amadeus
Date: 17 Nov, 2025
Blog: Facebook

Evans asked my wife to audit the 8100 ... shortly later the 8100 was
decommitted. Later she did a short stint as chief architect of Amadeus
(EU airline system built off of Eastern "System One" ... 370
ACP/TPF). She sided with the EU on x.25 and the IBM communication
group had her replaced. It didn't do them much good, Amadeus went with
x.25 anyway and my wife's replacement was replaced.

OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems
Interconnection standards to become the global protocol for computer
networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt

Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture (Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."


... snip ...

Other trivia: early 80s, got the HSDT project, T1 and faster computer
links, and battles with the communication group (in the 60s, IBM had
the 2701 supporting T1 links; the 70s move to SNA/VTAM and various
issues capped links at 56kbits). Also working with the NSF director
and was supposed to get $20M to interconnect the NSF supercomputer
centers. Then congress cuts the budget, some other things happen and
eventually an RFP is released (in part based on what we already had
running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.


... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying the IBM CEO) with support from other gov. agencies ... but
that just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning
bid). As regional networks connect in, NSFnet becomes the NSFNET
backbone, precursor to the modern internet.

The communication group was also trying to block the release of
mainframe TCP/IP support. When that failed, they changed tactics and
said that since they had corporate responsibility for everything that
crossed datacenter walls, it had to be released through them. What
shipped got aggregate 44kbytes/sec using nearly a whole 3090
processor. I then added RFC1044 support and in some tuning tests at
Cray Research, between a Cray and a 4341, got sustained 4341 channel
media throughput using only a modest amount of 4341 processor
(something like a 500 times improvement in bytes moved per instruction
executed).
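
For reference, the metric behind the "500 times" figure is sustained
bytes moved divided by the instructions spent moving them; a minimal
Python sketch, where the 30 MIPS figure for a 3090 CPU is an
illustrative assumption (not from the post):

def bytes_per_instruction(bytes_per_sec, cpu_fraction, mips):
    # sustained throughput divided by instructions/sec spent on its behalf
    return bytes_per_sec / (cpu_fraction * mips * 1_000_000)

# base case quoted above: aggregate 44kbytes/sec on nearly a whole 3090
# processor (hypothetical 30 MIPS per 3090 CPU)
print(bytes_per_instruction(44_000, 1.0, 30))

Plugging in the measured RFC1044 rate and 4341 CPU fraction is what
yields the roughly 500 times ratio.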

Late 80s, my wife was asked to co-author a response to a gov. agency
RFI ... where she included a 3-tier network architecture. Then we were
out making customer executive presentations, highlighting 3-tier,
Ethernet, TCP/IP, Internetworking, high-speed routers, etc (and taking
arrows in the back from the SNA, SAA, & Token-Ring forces).

I was on Chessin's XTP TAB and, because there were several
gov. agencies involved, took XTP to ANSI X3S3.3 as HSP. Eventually
X3S3.3 said that ISO didn't let them standardize anything that didn't
conform to the OSI model. XTP/HSP didn't because 1) it supported
internetworking, which doesn't exist in OSI, 2) it bypassed the level
3/4 interface, and 3) it went directly to the LAN MAC interface, which
doesn't exist in OSI, sitting somewhere in the middle of level
3. There was a joke that IETF (internet) had a requirement for two
interoperable implementations before progressing in the standards
process, while ISO didn't even require that a standard be
implementable.

1988, Nick Donofrio approved HA/6000, originally for NYTimes to move
their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename
it HA/CMP when I start doing technical/scientific cluster scale-up
with national labs (LANL, LLNL, NCAR, etc) and commercial cluster
scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix, that
had DEC VAXCluster support in the same source base as their unix
support; I do a distributed lock manager (DLM) with the VAXCluster API
and lots of scale-up improvements).

The S/88 Product Administrator started taking us around to their
customers and also had me write a section for the corporate continuous
availability document (it gets pulled when both AS400/Rochester and
mainframe/POK complain they couldn't meet the requirements). Had
coined the terms disaster survivability and geographic survivability
(as counter to disaster/recovery) when out marketing HA/CMP. One of
the visits, to 1-800 number Bellcore development, showed that the S/88
would use up a century of downtime in a single software upgrade while
HA/CMP had a couple extra "nines" (compared to S/88).

Had also previously worked on the original SQL/relational, System/R,
with Jim Gray and Vera Watson.

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive
Hester tells Ellison that we would have 16-system clusters mid92 and
128-system clusters ye92. Mid-Jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-Jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), and we were
told we couldn't work on clusters with more than 4 systems; we then
leave IBM a few months later.

There was some speculation that it would eat the mainframe in the
commercial market. 1993 benchmarks (number of program iterations
compared to the MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
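
The cluster figures are just the per-system number scaled up; a quick
Python check on the numbers listed above:

rs6000_990_mips = 126
print(16 * rs6000_990_mips)    # ~2016 MIPS, i.e. ~2BIPS for a 16-system cluster
print(128 * rs6000_990_mips)   # ~16128 MIPS, i.e. ~16BIPS for 128 systems
print(8 * 51)                  # 408 MIPS aggregate for the 8-CPU ES/9000-982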


After leaving IBM, I was called in as a consultant to a small
client/server startup. Two former Oracle employees (that had been in
the Hester/Ellison meeting) are there, responsible for something
called "commerce server", and they want to do payment transactions on
the server. The startup had also invented a technology they called
"SSL" that they wanted to use; the result is now sometimes called
"electronic commerce". I had responsibility for everything between the
e-commerce web servers and the payment networks. I then do a "Why
Internet Isn't Business Critical Dataprocessing" presentation (based
on the documents, procedures and software I did for e-commerce) that
(IETF Internet RFC/standards editor) Postel sponsored at ISI/USC.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
xtp/hsp posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
3tier posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
ecommerce payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

