List of Archived Posts
2025 Newsgroup Postings (10/06 - )
- Mainframe and Cloud
- Mainframe skills
- PS2 Microchannel
- Switching On A VAX
- Mainframe skills
- Kuwait Email
- VM370 Teddy Bear
- Mainframe skills
- IBM Somers
- IBM Interactive Response
- IBM Interactive Response
- Interesting timing issues on 1970s-vintage IBM mainframes
- Interesting timing issues on 1970s-vintage IBM mainframes
- IBM CP67 Multi-level Update
- IBM DASD, CKD and FBA
- IBM DASD, CKD and FBA
- CTSS, Multics, Unix, CSC
- IBM DASD, CKD and FBA
- IBM Mainframe TCP/IP and RFC1044
- IBM Mainframe TCP/IP and RFC1044
- IBM HASP & JES2 Networking
- IBM Token-Ring
- IBM Token-Ring
- IBM Token-Ring
- IBM Mainframe TCP/IP and RFC1044
- Opel
- Opel
- Opel
- IBM Germany
- IBM Thin Film Disk Head
- IBM Germany
- IBM 3274/3278
- What Is A Mainframe
- What Is A Mainframe
- Linux Clusters
- Linux Clusters
- Linux Clusters
- Linux Clusters
- Amazon Explains How Its AWS Outage Took Down the Web
- Amazon Explains How Its AWS Outage Took Down the Web
- IBM Boca and IBM/PCs
- IBM 360/85
- IBM 360/85
- IBM 360/85
- IBM SQL/Relational
Mainframe and Cloud
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe and Cloud
Date: 06 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/2025d.html#112 Mainframe and Cloud
... note online/cloud tends to have capacity much greater than avg use
... to meet peak on-demand use which could be an order of magnitude
greater. cloud operators had heavily optimized server blade systems
costs (including assembling their own systems for a fraction of brand
name servers) ... and power consumption was increasingly becoming a
major expense. There was then increasing pressure on makers of server
components to optimize power use as well as allowing power use to drop
to zero when idle ... but instant on to meet on-demand requirements.
large cloud operation can have a score (or more) of megadatacenters
around the world, each with half million or more server blades, and
each server blade with ten times the processing of a max. configured
mainframe ... and enormous automation; a megadatacenter with 70-80
staff (upwards of 10,000 or more systems per staff). In the past there
were articles about being able to use a credit card to spin up,
on-demand for a couple of hrs, a cloud ("cluster") supercomputer (that
ranked in the top 40 in the world).
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
GML was invented at the IBM Cambridge Science Center in 1969 (about
the same time that the CICS product appeared) ... after a decade it
morphs into ISO standard SGML and after another decade morphs into
HTML at CERN.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
CICS/BDAM posts
https://www.garlic.com/~lynn/submain.html#bdam
--
virtualization experience starting Jan1968, online at home since Mar1970
Mainframe skills
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe skills
Date: 06 Oct, 2025
Blog: Facebook
In the CP67->VM370 morph (after the decision to add virtual memory to
all 370s), lots of stuff was simplified and/or dropped (including
multiprocessor support) ... adding virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73
Then in 1974 with a VM370R2, I start adding a bunch of stuff back in
for my internal CSC/VM (including kernel re-org for multiprocessor,
but not the actual SMP support). Then with a VM370R3, I add SMP back
in, originally for (online sales&marketing support) US HONE so
they could upgrade all their 168s to 2-CPU 168s (with a little sleight
of hand getting twice the throughput).
Then with the implosion of Future System
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
I get asked to help with a 16-CPU 370, and we con the 3033 processor
engineers into helping in their spare time (a lot more interesting
than remapping 168 logic to 20% faster chips). Everybody thought it
was great until somebody tells the head of POK that it could be
decades before POK's favorite son operating system ("MVS") had
(effective) 16-CPU support (MVS docs at the time saying 2-CPU systems
had 1.2-1.5 times the throughput of a single CPU; POK doesn't ship a
16-CPU system until after the turn of the century). Then the head of
POK invites some of us to never visit POK again and directs the 3033
processor engineers, heads down and no distractions.
2nd half of the 70s, transferring to SJR on the west coast, I worked
with Jim Gray and Vera Watson on the original SQL/relational,
System/R (done on VM370); we were able to tech transfer it ("under the
radar" while the corporation was pre-occupied with "EAGLE") to
Endicott for SQL/DS. Then when "EAGLE" implodes, there was a request
for how fast System/R could be ported to MVS ... which was eventually
released as DB2, originally for decision-support *only*.
I also got to wander around IBM (and non-IBM) datacenters in silicon
valley, including DISK bldg14 (engineering) and bldg15 (product test)
across the street. They were running pre-scheduled, 7x24, stand-alone
testing and had mentioned recently trying MVS, but it had 15min MTBF
(requiring manual re-ipl) in that environment. I offer to redo the I/O
system to make it bullet proof and never fail, allowing any amount of
on-demand testing, greatly improving productivity. Bldg15 then gets
the 1st engineering 3033 outside POK processor engineering ... and
since testing only took a percent or two of CPU, we scrounge up a 3830
controller and 3330 string to set up our own private online
service. Then bldg15 also gets an engineering 4341 in 1978 and somehow
the branch office hears about it and in Jan1979 I'm con'ed into doing
a 4341 benchmark for a national lab that was looking at getting 70 for
a compute farm (leading edge of the coming cluster supercomputing
tsunami).
A decade later in 1988, I got approval for HA/6000, originally for
NYTimes to port their newspaper system (ATEX) from DEC VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when we start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Ingres, Sybase, Informix that have DEC
VAXCluster support in the same source base with UNIX). The IBM S/88
Product Administrator was also taking us around to their customers and
also had me write a section for the corporate continuous availability
strategy document (it gets pulled when both Rochester/AS400 and
POK/mainframe complain).
Early Jan92, in a meeting with the Oracle CEO, AWD executive Hester
tells Ellison that we would have 16-system clusters by mid92 and
128-system clusters by ye92. Mid-Jan92, we convince FSD to bid HA/CMP
for gov. supercomputers. Late-Jan92, cluster scale-up is transferred
for announce as the IBM Supercomputer (for technical/scientific
*only*) and we were told we couldn't work on clusters with more than
four systems (we leave IBM a few months later).
There was some speculation that it would eat the mainframe in the
commercial market. 1993 benchmarks (number of program iterations
compared to the industry MIPS reference platform; cluster arithmetic
sketched below):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
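The cluster figures above are just the single-system benchmark number
scaled by the number of systems (ignoring any cluster overhead); a
minimal python sketch of that arithmetic, using only the numbers
quoted above:

# 1993 benchmark arithmetic, numbers from the list above
es9000_982_mips = 8 * 51                  # 8 CPUs at 51 MIPS/CPU -> 408 MIPS
rs6000_990_mips = 126                     # single-system MIPS
cluster16_mips  = 16 * rs6000_990_mips    # ~2,016 MIPS, i.e. ~2 BIPS
cluster128_mips = 128 * rs6000_990_mips   # ~16,128 MIPS, i.e. ~16 BIPS
print(es9000_982_mips, cluster16_mips, cluster128_mips)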
The former executive we had reported to goes over to head up
Somerset/AIM (Apple, IBM, Motorola), single chip RISC with M88k
bus/cache (enabling clusters of shared memory multiprocessors).
i86 chip makers then do a hardware layer that translates i86
instructions into RISC micro-ops for actual execution (largely
negating the throughput difference between RISC and i86); 1999
industry benchmark:
• IBM PowerPC 440: 1,000MIPS
• Pentium3: 2,054MIPS (twice PowerPC 440)
The early numbers are actual industry benchmarks; the later ones are
derived from IBM pubs giving percent change since the previous
generation (sketched after the list below):
z900, 16 processors 2.5BIPS (156MIPS/core), Dec2000
z990, 32 cores, 9BIPS, (281MIPS/core), 2003
z9, 54 cores, 18BIPS (333MIPS/core), July2005
z10, 64 cores, 30BIPS (469MIPS/core), Feb2008
z196, 80 cores, 50BIPS (625MIPS/core), Jul2010
EC12, 101 cores, 75BIPS (743MIPS/core), Aug2012
z13, 140 cores, 100BIPS (710MIPS/core), Jan2015
z14, 170 cores, 150BIPS (862MIPS/core), Aug2017
z15, 190 cores, 190BIPS (1000MIPS/core), Sep2019
z16, 200 cores, 222BIPS (1111MIPS/core), Sep2022
z17, 208 cores, 260BIPS (1250MIPS/core), Jun2025
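The aggregate percent changes implied by the IBM-pub method mentioned
before the list can be read back from the figures themselves; a
minimal python sketch (aggregate BIPS only; note core counts also grew
between generations, so per-core changes are smaller):

# implied generation-to-generation aggregate change, from the list above
gens = [("z900", 2.5), ("z990", 9), ("z9", 18), ("z10", 30),
        ("z196", 50), ("EC12", 75), ("z13", 100), ("z14", 150),
        ("z15", 190), ("z16", 222), ("z17", 260)]
for (prev, pbips), (name, bips) in zip(gens, gens[1:]):
    print(f"{prev} -> {name}: +{100 * (bips / pbips - 1):.0f}% aggregate BIPS")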
Also in 1988, the branch office asks if I could help LLNL (national
lab) standardize some serial stuff they were working with, which
quickly becomes the fibre-channel standard, "FCS" (not First Customer
Ship), initially 1gbit/sec transfer, full-duplex, aggregate
200mbyte/sec (including some stuff I had done in 1980). Then POK gets
some of their
serial stuff released with ES/9000 as ESCON (when it was already
obsolete, initially 10mbytes/sec, later increased to
17mbytes/sec). Then some POK engineers become involved with FCS and
define a heavy-weight protocol that drastically cuts throughput
(eventually released as FICON).
2010, a z196 "Peak I/O" benchmark was released, getting 2M IOPS using
104 FICON (20K IOPS/FICON). About the same time an FCS was announced
for an E5-2600 server blade claiming over a million IOPS (two such FCS
having higher throughput than 104 FICON). Also IBM pubs recommend that
SAPs (system assist processors that actually do the I/O) be kept to
70% CPU (or 1.5M IOPS), and no new CKD DASD has been made for decades,
all being simulated on industry standard fixed-block devices. Note:
the 2010 E5-2600 server blade (16 cores, 31BIPS/core) benchmarked at
500BIPS (ten times max configured z196).
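The comparison above is mostly division of the quoted figures; a
minimal python sketch of the arithmetic (all numbers taken from the
text above):

# z196 "Peak I/O" vs FCS arithmetic
z196_peak_iops = 2_000_000
ficon_count    = 104
print(z196_peak_iops // ficon_count)     # ~19,230 IOPS per FICON (~20K)

fcs_iops = 1_000_000                     # claimed for one FCS on an E5-2600 blade
print(2 * fcs_iops > z196_peak_iops)     # two FCS out-run all 104 FICON -> True

sap_cap_iops = 1_500_000                 # IBM pubs: keep SAPs to 70% CPU
print(sap_cap_iops < z196_peak_iops)     # recommended cap is below the peak -> True

print(16 * 31)                           # E5-2600 blade: ~500 BIPS (vs ~50 for z196)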
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
original SQL/relational posts
https://www.garlic.com/~lynn/submain.html#systemr
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
Fibre Channel Standard ("FCS") &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
DASD, CKD, FBA, multi-track posts
https://www.garlic.com/~lynn/submain.html#dasd
--
virtualization experience starting Jan1968, online at home since Mar1970
PS2 Microchannel
From: Lynn Wheeler <lynn@garlic.com>
Subject: PS2 Microchannel
Date: 07 Oct, 2025
Blog: Facebook
IBM AWD had done their own cards for the PC/RT workstation (16bit
AT-bus), including a 4mbit token-ring (T/R) card. For the RS/6000
w/microchannel, they were told they couldn't do their own cards, but
had to use the (heavily performance kneecapped by the communication
group) PS2 microchannel cards. It turned out that the PC/RT 4mbit T/R
card had higher card throughput than the PS2 16mbit T/R card (joke
that a PC/RT 4mbit T/R server would have higher throughput than an
RS/6000 16mbit T/R server).
Almaden research had been heavily provisioned with IBM CAT wiring,
assuming 16mbit T/R use. However they found that 10mbit Ethernet LAN
(running over IBM CAT wiring) had lower latency and higher aggregate
throughput than 16mbit T/R LAN. Also, $69 10mbit Ethernet cards had
much higher throughput than $800 PS2 microchannel 16mbit T/R cards
(joke that the communication group was trying to severely hobble
anything other than 3270 terminal emulation).
30yrs of PC market
https://arstechnica.com/features/2005/12/total-share/
Note the above article makes reference to the success of IBM PC
clones emulating the success of IBM mainframe clones. A big part of
the IBM mainframe clone success was the IBM Future System effort in
the 1st half of the 70s (going to completely replace 370 mainframes;
internal politics was killing off 370 efforts and the claim is that
the lack of new 370s during the period is what gave the 370 clone
makers their market foothold)
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
PC/RT 4mbit T/R, PS2 16mbit T/R, 10mbit Ethernet
https://www.garlic.com/~lynn/2025d.html#81 Token-Ring
https://www.garlic.com/~lynn/2025d.html#8 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#2 Mainframe Networking and LANs
https://www.garlic.com/~lynn/2025c.html#114 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#88 IBM SNA
https://www.garlic.com/~lynn/2025c.html#56 IBM OS/2
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#41 SNA & TCP/IP
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring
https://www.garlic.com/~lynn/2024g.html#101 IBM Token-Ring versus Ethernet
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024e.html#64 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#76 Another IBM Downturn
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#83 IBM's Near Demise
https://www.garlic.com/~lynn/2023b.html#50 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#77 IBM/PC and Microchannel
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#18 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021d.html#15 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring
--
virtualization experience starting Jan1968, online at home since Mar1970
Switching On A VAX
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Switching On A VAX
Newsgroups: alt.folklore.computers
Date: Tue, 07 Oct 2025 16:49:38 -1000
Lars Poulsen <lars@beagle-ears.com> writes:
As someone who spent some time on async terminal drivers, for both
TTYs and IBM 2741-family terminals as well as in the communications
areas of OS/360 (minimally), Univac 1100 EXEC-8, RSX-11M, VMS and
embedded systems on PDP-11/10, Z80 and MC68000, I can maybe contribute
some perspective here.
In the 60s, lots of science/technical places and univs were sold
360/67s (w/virtual memory) for tss/360 ... but when tss/360 didn't
come to production ... lots of places just used it as a 360/65 with
os/360 (Michigan and Stanford wrote their own virtual memory systems
for 360/67).
Some of the CTSS/7094 people went to the 5th flr to do Multics, others
went to the 4th flr to the IBM science center and did virtual
machines, the internal network, lots of other stuff. CSC originally
wanted a 360/50 to do virtual memory hardware mods, but all the spare
50s were going to FAA/ATC and they had to settle for a 360/40 to
modify, doing (virtual machine) CP40/CMS. Then when the 360/67
standard with virtual memory became available, CP40/CMS morphs into
CP67/CMS.
The univ was getting a 360/67 to replace 709/1401 and I had taken a
two credit hr intro to fortran/computers; at the end of the semester I
was hired to rewrite 1401 MPIO for the 360/30 (temporarily replacing
the 1401 pending the 360/67). Within a yr of taking the intro class,
the 360/67 arrives and I'm hired fulltime responsible for OS/360 (the
univ. shut down the datacenter on weekends and I got it dedicated,
however 48hrs w/o sleep made monday classes hard).
Eventually CSC comes out to install CP67 (3rd after CSC itself and MIT
Lincoln Labs) and I mostly get to play with it during my 48hr weekend
dedicated time. I initially work on pathlengths for running OS/360 in
virtual machine. Test stream ran 322secs on real machine,
initially 856secs in virtual machine (CP67 CPU 534secs), after
a couple months I have reduced CP67 CPU from 534secs to 113secs. I
then start rewriting the dispatcher, (dynamic adaptive resource
manager/default fair share policy) scheduler, paging, adding ordered
seek queuing (from FIFO) and mutli-page transfer channel programs
(from FIFO and optimized for transfers/revolution, getting 2301 paging
drum from 70-80 4k transfers/sec to channel transfer peak of 270). Six
months after univ initial install, CSC was giving one week class in
LA. I arrive on Sunday afternoon and asked to teach the class, it
turns out that the people that were going to teach it had resigned the
Friday before to join one of the 60s CP67 commercial online spin-offs.
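The ordered queuing / transfers-per-revolution idea above can be
shown in a few lines; a minimal python sketch under stated assumptions
(the slots-per-track count is a hypothetical placeholder, and this
illustrates the ordering idea, not the actual CP67 channel-program
code):

# instead of one FIFO 4k transfer per I/O, order the queued page
# requests by rotational slot position and chain them into a single
# multi-page channel program, so several pages move per revolution
SLOTS_PER_TRACK = 9    # hypothetical page-slot count

def build_transfer_order(pending):
    """pending: list of (slot_number, page_id), in arrival order."""
    ordered = sorted(pending, key=lambda req: req[0] % SLOTS_PER_TRACK)
    return [page_id for _slot, page_id in ordered]

print(build_transfer_order([(7, "A"), (2, "B"), (5, "C"), (1, "D")]))
# ['D', 'B', 'C', 'A'] -- serviced in rotational order, not arrival order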
Original CP67 came with 1052 & 2741 terminal support with automagic
terminal identification, using the SAD CCW to switch the controller's
port terminal-type scanner. The univ. had some number of TTY33&TTY35
terminals and I add TTY ASCII terminal support integrated with the
automagic terminal type identification. I then wanted to have a single
dial-in number ("hunt group") for all terminals. It didn't quite work;
IBM had taken a short cut and hard-wired the line speed for each
port. This kicks off a univ effort to do our own clone controller:
built a channel interface board for an Interdata/3 programmed to
emulate the IBM controller, with the addition that it could do auto
line-speed/(dynamic auto-baud). It was later upgraded to an
Interdata/4 for the channel interface with a cluster of Interdata/3s
for port interfaces. Interdata (and later Perkin-Elmer) was selling it
as a clone controller and four of us get written up as responsible for
(some part of) the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
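The dynamic auto-baud mentioned above can be illustrated with a small
python sketch; this is a generic version of the technique (timing the
start bit of the first character and snapping to the nearest standard
rate), not a claim about the actual Interdata firmware:

STANDARD_RATES = [110.0, 134.5, 150.0, 300.0]   # TTY33/35, 2741, etc.

def detect_baud(start_bit_seconds):
    implied = 1.0 / start_bit_seconds           # implied bits per second
    return min(STANDARD_RATES, key=lambda r: abs(r - implied))

print(detect_baud(1 / 134.5))   # 2741-style line -> 134.5
print(detect_baud(0.0091))      # ~110 baud TTY   -> 110.0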
The univ. library gets an ONR grant to do an online catalog and some
of the money is used for a 2321 datacell. IBM also selects it as a
betatest site for the original CICS product and supporting CICS is
added to my tasks.
Then before I graduate, I'm hired fulltime into a small group in the
Boeing CFO office to help with the formation of Boeing Computer
Services (consolidate all dataprocessing into an independent business
unit). I think the Renton datacenter was the largest in the world
(360/65s arriving faster than they could be installed, boxes
constantly staged in the hallways around the machine room; joke that
Boeing was getting 360/65s like other companies got keypunch
machines). Lots of politics between the Renton director and the CFO,
who only had a 360/30 up at Boeing Field for payroll (although they
enlarge the machine room to install a 360/67 for me to play with when
I wasn't doing other stuff). Renton did have a (lonely) 360/75 (among
all the 360/65s) that was used for classified work (black rope around
the area, heavy black felt draped over the console lights & 1403s,
with guards at the perimeter when running classified). After I
graduate, I join IBM CSC in Cambridge (rather than staying with the
Boeing CFO).
One of my hobbies after joining IBM CSC was enhanced production
operating systems for internal datacenters. At one time I had a
rivalry with the 5th flr over whether they had more total
installations (internal, development, commercial, gov, etc) running
Multics or I had more internal installations running my internal
CSC/VM.
A decade later, I'm at SJR on the west coast and working with Jim Gray
and Vera Watson on the original SQL/relational implementation
System/R. I also had numerous internal datacenters running my internal
SJR/VM system ... getting .11sec trivial interactive system
response. This was at the time of several studies showing .25sec
response giving improved productivity.
The 3272/3277 controller/terminal had .089 hardware response (plus
the .11 system response resulted in .199 response, meeting the .25sec
criteria). The 3277 still had a half-duplex problem: attempting to hit
a key at the same time as a screen write, the keyboard would lock and
you would have to stop and reset it. YKT was making a FIFO box
available; unplug the 3277 keyboard from the head, plug in the FIFO
box and plug the keyboard into the FIFO ... which avoided the
half-duplex keyboard lock.
Then IBM produced the 3274/3278 controller/terminal where lots of
electronics were moved back into the controller, reducing the cost to
make the 3278, but significantly increasing coax protocol chatter
... increasing hardware response to .3-.5secs depending on how much
data was written to the screen. Letters to the 3278 product
administrator complaining got back the response that the 3278 wasn't
designed for interactive computing ... but for data entry.
clone (pc ... "plug compatible") controller built w/interdata posts
https://www.garlic.com/~lynn/submain.html#360pcm
csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech
original sql/relational, "system/r" posts
https://www.garlic.com/~lynn/submain.html#systemr
some posts mentioning undergraduate work at univ & boeing
https://www.garlic.com/~lynn/2025d.html#112 Mainframe and Cloud
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025d.html#91 IBM VM370 And Pascal
https://www.garlic.com/~lynn/2025d.html#69 VM/CMS: Concepts and Facilities
https://www.garlic.com/~lynn/2025c.html#115 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#103 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#0 System Response
posts mentioning interactive response
https://www.garlic.com/~lynn/2025d.html#102 Rapid Response
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2023d.html#27 IBM 3278
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015g.html#58 [Poll] Computing favorities
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2013g.html#21 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012d.html#19 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2012.html#15 Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2012.html#13 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009q.html#72 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009q.html#53 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009e.html#19 Architectural Diversity
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2003k.html#22 What is timesharing, anyway?
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
--
virtualization experience starting Jan1968, online at home since Mar1970
Mainframe skills
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe skills
Date: 08 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#1 Mainframe skills
re: bldg15 3033; during FS (that was going to completely replace
370), internal politics was killing off 370 efforts and the lack of
new 370s during the period is credited with giving the 370 clone
makers (including Amdahl) their marketing foothold. When FS implodes
there was a mad rush to get stuff back into the 370 product pipelines,
including kicking off the quick&dirty 3033&3081 efforts in
parallel. For the 303x channel director, they took a 158 engine with
just the integrated channel microcode (and no 370 microcode). A 3031
was two 158 engines, one with just the channel microcode and the 2nd
with just the 370 microcode. A 3032 was a 168 redone to use the 303x
channel director. A 3033 started out as 168 logic remapped to 20%
faster chips.
One of the early bldg15 engineering 3033 problems was that channel
director boxes would hang and somebody would have to manually re-impl
the hung channel director box. Discovered a variation on the missing
interrupt handler, where CLRCH done quickly for all six channel
addresses of the hung box ... would force the box to re-impl.
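A minimal python sketch of the recovery idea just described (the
helper names are hypothetical stand-ins, not a real API): a
missing-interrupt-handler style timeout notices the hung channel
director, then the clear-channel is issued in quick succession to all
six channel addresses of that box, forcing it to re-impl:

import time

MIH_TIMEOUT = 30.0   # seconds without an expected interrupt (placeholder value)

def recover_channel_director(box_channels, waiting_since, issue_clrch):
    """box_channels: the six channel addresses of one 303x channel director;
    issue_clrch: hypothetical callback standing in for the CLRCH operation."""
    if time.time() - waiting_since < MIH_TIMEOUT:
        return False                 # not (yet) treated as hung
    for chan in box_channels:        # all six addresses, back to back
        issue_clrch(chan)
    return True                      # the box should now force a re-impl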
posts mentioning getting to play disk engineer in bldgs 14&15:
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning 303x CLRCH force re-impl:
https://www.garlic.com/~lynn/2025d.html#58 IBM DASD, CKD, FBA
https://www.garlic.com/~lynn/2024b.html#48 Vintage 3033
https://www.garlic.com/~lynn/2024b.html#11 3033
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#91 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2021i.html#85 IBM 3033 channel I/O testing
https://www.garlic.com/~lynn/2021b.html#2 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2014m.html#74 SD?
https://www.garlic.com/~lynn/2011o.html#23 3270 archaeology
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2000c.html#69 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/97.html#20 Why Mainframes?
--
virtualization experience starting Jan1968, online at home since Mar1970
Kuwait Email
From: Lynn Wheeler <lynn@garlic.com>
Subject: Kuwait Email
Date: 09 Oct, 2025
Blog: Facebook
From long ago and far away; one of my hobbies after joining IBM was
enhanced production operating systems for internal datacenters and the
online sales&marketing HONE systems were (one of the 1st and a) long
time customer ... in the mid-70s, all the US HONE datacenters were
consolidated in Palo Alto (trivia: when FACEBOOK 1st moved into
silicon valley, it was into a new bldg built next door to the former
consolidated US HONE datacenter), then HONE systems started cropping
up all over the world.
A co-worker at the science center was responsible for the wide-area
network
https://en.wikipedia.org/wiki/Edson_Hendricks
referenced by one of the CSC 1969 GML inventors:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
which morphs into the corporate internal network (larger than
arpanet/internet from just about the beginning until sometime mid/late
80s ... about the time it was forced to convert to SNA/VTAM);
technology also used for the corporate sponsored univ. BITNET
https://en.wikipedia.org/wiki/BITNET
also EARN in Europe
https://en.wikipedia.org/wiki/European_Academic_Research_Network#EARN
I also would visit various datacenters around silicon valley,
including TYMSHARE:
https://en.wikipedia.org/wiki/Tymshare
which started providing their CMS-based online computer conferencing
system in Aug1976, "free" to the mainframe user group SHARE as VMSHARE
... archives here
http://vm.marist.edu/~vmshare
I cut a deal with Tymshare to get a monthly tape dump of all VMSHARE
files for putting up on the internal network and systems (including
HONE). The following is email from an IBM sales/marketing branch
office employee in Kuwait:
Date: 14 February 1983, 09:44:58 CET
From: Fouad xxxxxx
To: Lynn Wheeler
Subject: VMSHARE registration
Hello , I dont know if you are the right contact , but I really dont
know whom to ask.
My customer wants to get into TYMNET and VMSHARE.
They have a teletype and are able to have a dial-up line to USA.
How can they get a connection to TYMNET and a registration to VMSHARE.
The country is KUWAIT, which is near to SAUDI ARABIA
Can you help me
thanks
... snip ...
TYMSHARE's TYMNET:
https://en.wikipedia.org/wiki/Tymnet
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
some recent posts mentioning VMSHARE:
https://www.garlic.com/~lynn/2025b.html#14 IBM Token-Ring
https://www.garlic.com/~lynn/2024d.html#74 Some Email History
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024b.html#90 IBM User Group Share
https://www.garlic.com/~lynn/2024b.html#87 Dialed in - a history of BBSing
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024.html#109 IBM User Group SHARE
https://www.garlic.com/~lynn/2022g.html#16 Early Internet
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#37 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022c.html#8 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#126 Google Cloud
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2021j.html#71 book review: Broad Band: The Untold Story of the Women Who Made the Internet
--
virtualization experience starting Jan1968, online at home since Mar1970
VM370 Teddy Bear
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370 Teddy Bear
Date: 09 Oct, 2025
Blog: Facebook
The SHARE MVS mascot was the "turkey" (because MVS/TSO was so bad) as
opposed to the VM370 mascot, the teddy bear.
https://www.jaymoseley.com/hercules/downloads/pdf/$OSTL33.pdf
pg8:
And let us not forget that the performance of the first releases of
MVS was so bad that the MVS people in the SHARE user group adopted the
Turkey as their symbol.
... vs the SHARE VM370 mascot:
The symbol of the VM group was the Teddy Bear since it was said to be
better, warmer, and more user-friendly than MVS.
trivia1: 1974, CERN did a SHARE presentation comparing MVS/TSO and
VM370/CMS ... copies inside IBM were stamped "IBM Confidential -
Restricted" (2nd highest security classification), available on a
need to know basis only.
trivia2: customers were not migrating to MVS as planned (I was at the
initial SHARE when this was played):
http://www.mxg.com/thebuttonman/boney.asp
trivia3: after FS imploded:
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
the head of POK managed to convince corporate to kill the VM370
product, shut down the development group and transfer all the people
to POK for MVS/XA (Endicott eventually managed to save the VM370
product mission for the mid-range, but had to recreate a development
group from scratch). They weren't planning on telling the people until
the very last minute, to minimize the number that might escape. It
managed to leak early and several managed to escape (it was during the
infancy of DEC VAX/VMS ... before it even first shipped, and the joke
was that the head of POK was a major contributor to VMS). There was
then a hunt for the source of the leak; fortunately for me, nobody
gave up the leaker.
Melinda's history
https://www.leeandmelindavarian.com/Melinda#VMHist
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67l, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
Mainframe skills
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe skills
Date: 10 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#1 Mainframe skills
https://www.garlic.com/~lynn/2025e.html#4 Mainframe skills
... I had coined the terms "disaster survivability" and "geographic
survivability" (as counter to disaster/recovery) when out marketing
HA/CMP.
trivia: as an undergraduate, when the 360/67 arrived (replacing
709/1401) within a year of my taking the 2 credit hr intro to
fortran/computers, I was hired fulltime responsible for os/360 (360/67
run as 360/65, tss/360 never came to production). then before I
graduate, I was hired fulltime into a small group in the Boeing CFO
office to help with the formation of Boeing Computer Services
(consolidate all dataprocessing into an independent business unit). I
think Renton was the largest datacenter in the world ... 360/65s
arriving faster than they could be installed, boxes constantly staged
in the hallways around the machine room (joke that Boeing was getting
360/65s like other companies got keypunches).
The disaster plan was to replicate Renton up at the new 747 plant at
Paine Field (in Everett) as a countermeasure to Mt. Rainier heating up
and the resulting mud slide taking out Renton.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance
recent posts mentioning Boeing disaster plan to replicate Renton up at
Paine Field:
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#31 Mainframe Datacenter
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#101 Operating System/360
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023.html#66 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022g.html#63 IBM DPD
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022.html#30 CP67 and BPS Loader
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2021k.html#55 System Availability
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#54 Learning PDP-11 in 2021
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021.html#48 IBM Quota
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Somers
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Somers
Date: 11 Oct, 2025
Blog: Facebook
Late 80s, a senior disk engineer gets a talk scheduled at the annual,
internal, world-wide communication group conference, supposedly on
3174 performance. However, the opening was that the communication
group was going to be responsible for the demise of the disk
division. The disk division was seeing a drop in disk sales with data
fleeing mainframe datacenters to more distributed computing friendly
platforms. The disk division had come up with a number of solutions,
but they were constantly being vetoed by the communication group (with
their corporate ownership of everything that crossed the datacenter
walls) trying to protect their dumb terminal paradigm. The senior disk
division software executive's partial countermeasure was investing in
distributed computing startups that would use IBM disks (he would
periodically ask us to drop in on his investments to see if we could
offer any assistance).
Early 90s, the head of SAA (in the late 70s, I had worked with him on
the ECPS microcode assist for 138/148) had a top flr, corner office in
Somers and we would periodically drop in to talk about some of his
people. We were out doing customer executive presentations on
Ethernet, TCP/IP, 3-tier networking, high-speed routers, etc, and
taking barbs in the back from SNA&SAA members. We would also
periodically drop in on other Somers residents asking whether they
shouldn't be doing something about the way the company was heading.
The communication group stranglehold on mainframe datacenters wasn't
just disks; IBM has one of the largest losses in the history of US
corporations and was being reorganized into the 13 "baby blues" in
preparation for breaking up the company ("baby blues" a take-off on
the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
20yrs earlier, Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
90s, we were doing work for the large AMEX mainframe datacenters
spin-off and for the former AMEX CEO.
3-tier, ethernet, tcp/ip, etc posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
recent posts mentioning IBM Somers:
https://www.garlic.com/~lynn/2025d.html#98 IBM Supercomputer
https://www.garlic.com/~lynn/2025d.html#86 Cray Supercomputer
https://www.garlic.com/~lynn/2025d.html#73 Boeing, IBM, CATIA
https://www.garlic.com/~lynn/2025d.html#68 VM/CMS: Concepts and Facilities
https://www.garlic.com/~lynn/2025d.html#51 Computing Clusters
https://www.garlic.com/~lynn/2025d.html#24 IBM Yorktown Research
https://www.garlic.com/~lynn/2025d.html#7 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#1 Chip Design (LSM & EVE)
https://www.garlic.com/~lynn/2025c.html#116 Internet
https://www.garlic.com/~lynn/2025c.html#110 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#104 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#98 5-CPU 370/125
https://www.garlic.com/~lynn/2025c.html#93 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#69 Tandem Computers
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000
https://www.garlic.com/~lynn/2025c.html#40 IBM & DEC DBMS
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025b.html#118 IBM 168 And Other History
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#100 IBM Future System, 801/RISC, S/38, HA/CMP
https://www.garlic.com/~lynn/2025b.html#92 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#78 IBM Downturn
https://www.garlic.com/~lynn/2025b.html#41 AIM, Apple, IBM, Motorola
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025b.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025.html#119 Consumer and Commercial Computers
https://www.garlic.com/~lynn/2025.html#96 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#86 Big Iron Throughput
https://www.garlic.com/~lynn/2025.html#74 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2025.html#37 IBM Mainframe
https://www.garlic.com/~lynn/2025.html#23 IBM NY Buildings
https://www.garlic.com/~lynn/2025.html#21 Virtual Machine History
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Interactive Response
Date: 11 Oct, 2025
Blog: Facebook
I took a 2 credit hr intro to fortran/computers and at the end of the
semester was hired to rewrite 1401 MPIO in assembler for the 360/30
... the univ was getting a 360/67 for tss/360 to replace 709/1401 and
got a 360/30 temporarily (partly to get some 360 experience) pending
availability of the 360/67. The univ. shut down the datacenter on
weekends and I would have it dedicated (although 48hrs w/o sleep made
monday classes hard). I was given a pile of hardware & software
manuals and got to design and implement my own monitor, device
drivers, interrupt handlers, storage management, error recovery, etc
... and within a few weeks had a 2000 card program.
Then within a yr of taking the intro class, the 360/67 arrives and
I'm hired fulltime responsible for os/360 (tss/360 never came to
production, so it ran as a 360/65).
CSC comes out to the univ for the CP67/CMS (precursor to VM370/CMS)
install (3rd after CSC itself and MIT Lincoln Labs) and I mostly get
to play with it during my 48hr weekend dedicated time. I initially
work on pathlengths for running OS/360 in a virtual machine. The test
stream ran 322secs on the real machine, initially 856secs in the
virtual machine (CP67 CPU 534secs); after a couple months I have
reduced CP67 CPU from 534secs to 113secs. I then start rewriting the
dispatcher/scheduler (dynamic adaptive resource manager/default fair
share policy), paging, adding ordered seek queuing (from FIFO) and
multi-page transfer channel programs (from FIFO and optimized for
transfers/revolution, getting the 2301 paging drum from 70-80 4k
transfers/sec to a channel transfer peak of 270). Six months after the
univ initial install, CSC was giving a one week class in LA. I arrive
on Sunday afternoon and am asked to teach the class; it turns out that
the people that were going to teach it had resigned the Friday before
to join one of the 60s CP67 commercial online spin-offs.
Before I graduate, I'm hired fulltime into a small group in the
Boeing CFO office to help with the formation of Boeing Computer
Services (consolidate all dataprocessing into an independent business
unit ... including offering services to non-Boeing entities). When I
graduate, I leave to join CSC (instead of staying with the CFO).
One of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters ... and in the morph of CP67->VM370 a
lot of stuff was dropped and/or simplified, and every few years I
would be asked to redo stuff that had been dropped and/or rewritten
(... in part, the dynamic adaptive default policy calculated
dispatching order based on resource utilization over the last several
minutes compared to target resource utilization established by each
user's priority and the number of users).
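A minimal python sketch of the dynamic adaptive / fair-share idea
described above (the field names and numbers are illustrative, not the
VM370 code): dispatch order comes from comparing each user's recent
consumption against a target share derived from priority and the
number of competing users:

def dispatch_order(users, total_capacity):
    """users: dicts with 'name', 'priority' (higher = larger share) and
    'recent_use' (resource consumed over the last several minutes)."""
    total_priority = sum(u["priority"] for u in users)
    def use_vs_target(u):
        target = total_capacity * u["priority"] / total_priority
        return u["recent_use"] / target     # <1.0 means under fair share
    return sorted(users, key=use_vs_target) # furthest under target goes first

users = [{"name": "A", "priority": 2, "recent_use": 30},
         {"name": "B", "priority": 1, "recent_use": 5},
         {"name": "C", "priority": 1, "recent_use": 40}]
print([u["name"] for u in dispatch_order(users, 100)])   # ['B', 'A', 'C']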
Late 80s, the OS2 team was told to contact the VM370 group (because
VM370 dispatching was much better than OS2's) ... the request was
passed between the various groups before being forwarded to me.
An example I didn't have much control over: late 70s, IBM San Jose
Research got an MVS 168 and a VM370 158, replacing an MVT 195. My
internal VM370s were getting 90th percentile .11sec interactive system
response (with the 3272/3277 hardware response of .086sec, users saw
.196sec ... better than the .25sec requirement mentioned in various
studies). All the SJR 3830 controllers and 3330 strings had
dual-channel connections to both systems, but there were strong rules
that no MVS 3330s could be mounted on VM370 strings. One morning
operators mounted an MVS 3330 on a VM370 string and within minutes
they were getting irate calls from all over the bldg complaining about
response. The issue was MVS has an OS/360 heritage of multi-track
search for PDS directory searches ... an MVS multi-cylinder PDS
directory search can have multiple full multi-track cylinder searches
that lock out the (vm370) controller for the duration (60revs/sec,
19tracks/search, .317secs lockout per multi-track search I/O). The
demand to move the pack was answered with: they would get around to it
on 2nd shift.
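The .317sec figure above is simple rotation arithmetic; a minimal
python sketch using only the numbers in the text:

revs_per_sec   = 60        # 3330 spindle speed (3600 rpm)
tracks_per_cyl = 19        # one revolution searched per track
lockout = tracks_per_cyl / revs_per_sec
print(round(lockout, 3))   # 0.317 sec of controller lockout per search I/O
# a multi-cylinder PDS directory search repeats this cylinder after
# cylinder, with the shared controller locked out for each full pass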
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource dispatch/scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
commercial virtual machine service posts
https://www.garlic.com/~lynn/submain.html#timeshare
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
some posts mentioning .11sec system response and .086sec
3272/3277 hardware response for .196sec
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015g.html#58 [Poll] Computing favorities
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012.html#15 Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2012.html#13 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009q.html#72 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009q.html#53 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009e.html#19 Architectural Diversity
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Interactive Response
Date: 12 Oct, 2025
Blog: Facebook
re: https://www.garlic.com/~lynn/2025e.html#9 IBM Interactive Response
... further detail ... during the .317sec multi-track search, the
vm-side could build up several queued I/O requests for other vm 3330s
on the busy controller (SM+BUSY) ... when it ends, the vm-side might
get in one of the queued requests ... before MVS hits it with another
multi-track search ... and so the vm-side might see queued I/O
requests accumulating, waiting for nearly a second (or more).
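A rough python sketch of that buildup (the VM arrival rate here is a
hypothetical number for illustration): if only about one queued
request gets through per .317sec lockout window, any VM I/O rate above
roughly 3/sec to that controller leaves the queue growing:

lockout = 19 / 60                  # sec per full-cylinder multi-track search
vm_arrivals_per_sec = 10           # hypothetical VM I/O rate to the controller
serviced_per_sec = 1 / lockout     # ~3.2 requests squeezed through per second
backlog = (vm_arrivals_per_sec - serviced_per_sec) * 1.0
print(round(backlog))              # ~7 requests still waiting after one second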
... trivia: also after transferring to San Jose, I got to wander
around IBM (& non-IBM) datacenters in silicon valley, including disk
bldg14 (engineering) and bldg15 (product test) across the street. They
were running pre-scheduled, 7x24, stand-alone mainframe testing and
mentioned that they had recently tried MVS, but it had 15min MTBF
(requiring manual re-ipl) in that environment. I offered to rewrite
the I/O supervisor to make it bullet proof and never fail, allowing
any amount of on-demand, concurrent testing ... greatly improving
productivity. I write an internal IBM paper on the I/O integrity work
and mention the MVS 15min MTBF ... bringing down the wrath of the MVS
organization on my head. Later, a few months before 3880/3380 FCS, FE
(field engineering) had a test of 57 simulated errors that were likely
to occur; MVS was failing in all 57 cases (requiring manual re-ipl)
and in 2/3rds of the cases there was no indication of what caused the
failure.
posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning MVS failing in test of 57 simulated errors
https://www.garlic.com/~lynn/2025d.html#58 IBM DASD, CKD, FBA
https://www.garlic.com/~lynn/2025d.html#45 Some VM370 History
https://www.garlic.com/~lynn/2025d.html#35 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#34 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#19 370 Virtual Memory
https://www.garlic.com/~lynn/2025c.html#92 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025b.html#44 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2024e.html#35 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing
https://www.garlic.com/~lynn/2024c.html#75 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024.html#114 BAL
https://www.garlic.com/~lynn/2024.html#88 IBM 360
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#115 IBM RAS
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023d.html#97 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#72 Some Virtual Machine History
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#80 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#70 IBM 3380 disks
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019b.html#53 S/360
https://www.garlic.com/~lynn/2018d.html#86 3380 failures
--
virtualization experience starting Jan1968, online at home since Mar1970
Interesting timing issues on 1970s-vintage IBM mainframes
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Interesting timing issues on 1970s-vintage IBM mainframes
Newsgroups: alt.folklore.computers
Date: Sun, 12 Oct 2025 08:24:39 -1000
James Dow Allen <user4353@newsgrouper.org.invalid> writes:
Interest was minimal in the (startling?) fact that the main 145 oscillator
is immediately passed through an XOR gate. But I'll persist and mention
interesting(?) facts about the clocks on the Big IBM Multiprocessors.
The 370/168 was, arguably, the Top Of The Line among IBM mainframes
in the mid-1970s. Sure, there was a 370 Model 195 but it was almost just
a gimmick: Salesmen might say "You're complaining about the $3 Million
price-tag on our wonderful Model 168?
Just be thankful I'm not trying to sell you a Model 195!"
After joining IBM, the 195 group talks me into helping with
hyperthreading the 195. The 195 had out-of-order execution, but
conditional branches drained the pipeline ... so most codes only ran
at half the rated speed. Hyperthreading, simulating a 2-CPU
multiprocessor, possibly would keep the hardware fully busy ... the
hyperthreading patent is mentioned in this piece about the death of
ACS/360 (Amdahl had won the battle to make ACS 360-compatible; the
ACS/360 was then killed, folklore being that executives felt it would
advance the state of the art too fast and the company would lose
control of the market).
https://people.computing.clemson.edu/~mark/acs_end.html
modulo MVT (VS2/SVS & VS2/MVS) SMP documentation (heavy-weight
multiprocessor overhead): 2-CPU throughput only 1.2-1.5 times single
processor throughput.
early last decade, I was asked to track down the decision to add
virtual memory to all 370s (pieces of it originally posted here and in
the ibm-main NGs) ... adding virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73
Basically MVT storage management was so bad that region sizes had to
be specified four times larger than used ... as a result a typical
1mbyte 370/165 only ran four concurrent regions, insufficient to keep
the system busy and justified. Going to a single 16mbyte virtual
address space (i.e. VS2/SVS ... sort of like running MVT in a CP67
16mbyte virtual machine) allowed the number of concurrent regions to
be increased by a factor of four (capped at 15 because of the 4-bit
storage protect keys) with little or no paging.
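A back-of-envelope sketch restating that arithmetic (the 256kbyte
specified / 64kbyte actually-touched split is my illustrative
assumption, not a measured figure):

    # illustrative arithmetic only -- region sizes are assumed for the sketch
    MB = 1024 * 1024
    real_storage = 1 * MB             # typical 370/165
    region_spec  = 256 * 1024         # size that had to be *specified* under MVT
    region_used  = region_spec // 4   # only ~1/4 actually touched

    # MVT: specified regions must fit in real storage
    mvt_regions = real_storage // region_spec          # -> 4

    # VS2/SVS: specified regions live in a single 16mbyte virtual address
    # space, but the 4-bit storage protect key caps concurrent regions at 15
    svs_regions = min((16 * MB) // region_spec, 15)     # -> 15

    # the pages actually touched by 15 regions still roughly fit in real
    # storage, hence "little or no paging"
    print(mvt_regions, svs_regions, svs_regions * region_used <= real_storage)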
It was deemed that it wasn't worth the effort to add virtual memory
to the 370/195 and all new 195 work was killed.
Then there was the FS effort, going to completely replace 370
(internal politics was killing off 370 efforts, and claims are that
the lack of new 370s during FS gave the clone 370 makers their market
foothold).
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
Note the 370/165 averaged 2.1 machine cycles per 370 instruction. For
the 168, they significantly increased main memory size & speed and
the microcode was optimized, resulting in an avg of 1.6 machine
cycles per instruction. Then for the 168-3, they doubled the size of
the processor cache, increasing rated MIPS from 2.5MIPS to 3.0MIPS.
With the implosion of FS there was mad rush to get stuff back into the
370 product pipelines, kicking off the quick&dirty 3033 and 3081
efforts. The 3033 started off remapping 168 logic to 20% faster chips
and then optimized the microcode getting it down to avg of one machine
cycle per 370 instruction.
I was also talked into helping with a 16-CPU SMP/multiprocessor
effort and we conned the 3033 processor engineers into helping (a lot
more interesting than remapping 168 logic). Everybody thought it was
great until somebody reminds the head of POK that POK's favorite son
operating system ("VS2/MVS") with its heavy-weight multiprocessor
overhead was only getting 1.2-1.5 times the throughput of a single
processor with 2 CPUs (and the overhead increased significantly as
the number of CPUs increased ... POK doesn't ship a 16-CPU machine
until after the turn of the century). Then the head of POK invites
some of us to never visit POK again and directs the 3033 processor
engineers, heads down and no distractions.
trivia: when I graduate and join IBM Cambridge Science Center, one of
my hobbies was enhanced production operating systems and one of my
first (and long time) customers was the online sales&marketing HONE
systems. With the decision to add virtual memory to all 370s, there
was also a decision to form a development group to do VM370. In the
morph of CP67->VM370, lots of stuff was simplified and/or dropped
(including multiprocessor support). 1974, I start adding stuff back
into a VM370R2-base for my internal CSC/VM (including kernel re-org
for SMP, but not the actual SMP support). Then for VM370R3-base
CSC/VM, I add multiprocessor support back in, originally for HONE so
they could upgrade their 168s to 2-CPU systems (with some
sleight-of-hand and cache affinity, getting twice the throughput of a
single processor).
other trivia: US HONE had consolidated all their datacenters in
silicon valley. When FACEBOOK first moved into silicon valley, it was
into a new bldg built next door to the former consolidated US HONE
datacenter.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
--
virtualization experience starting Jan1968, online at home since Mar1970
Interesting timing issues on 1970s-vintage IBM mainframes
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Interesting timing issues on 1970s-vintage IBM mainframes
Newsgroups: alt.folklore.computers
Date: Sun, 12 Oct 2025 11:21:54 -1000
James Dow Allen <user4353@newsgrouper.org.invalid> writes:
About 1978, the 370/168 was superseded by the 3032 and 3033. These were
a disappointment for anyone infatuated with blinking lights and fancy
switches and dials. The console disappeared entirely except for an ordinary
CRT, keyboard, light-pen and a tiny number of lights and buttons (e.g. "IPL").
This trend began a few years earlier when the fancy front-panel of
the 370/155 was replaced with a boring CRT/light-pen for the 370/158.
re:
https://www.garlic.com/~lynn/2025e.html#11 Interesting timing issues on 1970s-vintage IBM mainframes
when FS imploded, they start on the 3033 (remap 168 logic to 20%
faster chips). They take a 158 engine with just the integrated
channel microcode (and no 370 microcode) for the 303x channel
director. A 3031 was two 158 engines, one for the channel director
(integrated channel microcode) and a 2nd with just the 370 microcode.
The 3032 was a 168-3 reworked to use the 303x channel director for
external channels.
I had transferred out to the west coast and got to wander around IBM
(and non-IBM) datacenters in silicon valley, including disk bldg14
(engineering) and bldg15 (product test) across the street. They were
running pre-scheduled, 7x24, stand-alone mainframe testing and
mentioned that they had recently tried MVS, but it had 15min MTBF
(requiring manual re-IPL) in that environment. I offer to rewrite I/O
supervisor making it bullet-proof and never fail, allowing any amount
of on-demand, concurrent testing ... greatly improving productivity.
Then bldg15 gets the 1st engineering 3033 outside POK processor
engineering. Testing was only taking a percent or two of CPU, so we
scrounge up a 3830 controller and a string of 3330 drives and set up
our own private online service.
One of the things found was that the engineering channel directors
(158 engines) still had a habit of periodically hanging, requiring
manual re-impl. We then discovered that if you hit all six channels
of a channel director quickly with CLRCH, it would force an automagic
re-impl ... so we modified the missing interrupt handler to deal with
a hung channel director.
getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
some recent posts mentioning MVS 15min MTBF
https://www.garlic.com/~lynn/2025e.html#10 IBM Interactive Response
https://www.garlic.com/~lynn/2025e.html#1 Mainframe skills
https://www.garlic.com/~lynn/2025d.html#107 Rapid Response
https://www.garlic.com/~lynn/2025d.html#98 IBM Supercomputer
https://www.garlic.com/~lynn/2025d.html#87 IBM 370/158 (& 4341) Channels
https://www.garlic.com/~lynn/2025d.html#78 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#71 OS/360 Console Output
https://www.garlic.com/~lynn/2025d.html#68 VM/CMS: Concepts and Facilities
https://www.garlic.com/~lynn/2025d.html#58 IBM DASD, CKD, FBA
https://www.garlic.com/~lynn/2025d.html#45 Some VM370 History
https://www.garlic.com/~lynn/2025d.html#35 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#34 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#19 370 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#11 IBM 4341
https://www.garlic.com/~lynn/2025d.html#1 Chip Design (LSM & EVE)
https://www.garlic.com/~lynn/2025c.html#107 IBM San Jose Disk
https://www.garlic.com/~lynn/2025c.html#101 More 4341
https://www.garlic.com/~lynn/2025c.html#92 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#78 IBM 4341
https://www.garlic.com/~lynn/2025c.html#62 IBM Future System And Follow-on Mainframes
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#47 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#42 SNA & TCP/IP
https://www.garlic.com/~lynn/2025c.html#29 360 Card Boot
https://www.garlic.com/~lynn/2025c.html#12 IBM 4341
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025b.html#112 System Throughput and Availability II
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#82 IBM 3081
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#44 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#20 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025.html#122 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#113 2301 Fixed-Head Drum
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#86 Big Iron Throughput
https://www.garlic.com/~lynn/2025.html#77 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#71 VM370/CMS, VMFPLC
https://www.garlic.com/~lynn/2025.html#59 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#29 IBM 3090
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM CP67 Multi-level Update
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CP67 Multi-level Update
Date: 12 Oct, 2025
Blog: Facebook
Some of the MIT CTSS/7094 people went to the 5th flr to do
MULTICS. Others went to the IBM Science Center on the 4th flr and did
virtual machines (1st modified 360/40 w/virtual memory and did
CP40/CMS, morphs into CP67/CMS when 360/67 standard with virtual
memory becomes available), science center wide-area network (that
grows into corporate internal network, larger than arpanet/internet
from science-center beginning until sometime mid/late 80s; technology
also used for the corporate sponsored univ BITNET), invented GML 1969
(precursor to SGML and HTML), lots of performance tools, etc.
I took a 2 credit hr intro to fortran/computers and at the end of the
semester was hired to rewrite 1401 MPIO in assembler for 360/30 ...
univ was getting a 360/67 for tss/360 to replace 709/1401 and got a
360/30 temporarily pending availability of the 360/67 (part of
getting some 360 experience). Univ. shutdown the datacenter on
weekends and I would have it dedicated (although 48hrs w/o sleep made
monday classes hard). I was given a pile of hardware & software
manuals and got to design and implement my own monitor, device
drivers, interrupt handlers, storage management, error recovery, etc
... and within a few weeks had a 2000 card program. The 360/67
arrived within a year of taking the intro class and I was hired
fulltime responsible for OS/360 (tss/360 never came to production, so
it ran as 360/65). Student Fortran jobs ran under a second on the
709. Initially MFTR9.5 ran well over a minute. I install HASP,
cutting the time in half. I then start redoing the MFTR11 STAGE2
SYSGEN to carefully place datasets and PDS members to optimize arm
seek and multi-track search, cutting another 2/3rds to 12.9secs.
Student Fortran never got better than the 709 until I install
UofWaterloo WATFOR (on the 360/65 it ran at 20,000 cards/min or 333
cards/sec; student Fortran jobs were typically 30-60 cards).
CSC came out to the univ for the CP67/CMS (precursor to VM370/CMS)
install (3rd install after CSC itself and MIT Lincoln Labs) and I
mostly get to play with it during my 48hr weekend dedicated time. I
initially work on pathlengths for running OS/360 in a virtual
machine. The OS test stream ran 322secs on the real machine and
initially 856secs in a virtual machine (i.e. 534secs of CP67 CPU on
top of the 322secs); after a couple months I have reduced that CP67
CPU from 534secs to 113secs. I then start rewriting the dispatcher,
the (dynamic adaptive resource manager/default fair share policy)
scheduler, paging, adding ordered seek queuing (replacing FIFO) and
multi-page transfer channel programs (replacing FIFO and optimized
for transfers/revolution, getting the 2301 paging drum from 70-80 4k
transfers/sec up to a channel transfer peak of 270). Six months after
the univ initial install, CSC was giving a one week class in LA. I
arrive on Sunday afternoon and am asked to teach the class; it turns
out that the people that were going to teach it had resigned the
Friday before to join one of the 60s CP67 commercial online
spin-offs.
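The ordered seek queuing mentioned above replaced FIFO queuing of arm
requests; a minimal sketch of the idea (in python rather than the
actual CP67 assembler, and the elevator-style sweep shown here is
just one reading of "ordered seek"):

    # sketch of ordered ("elevator") seek queuing vs FIFO -- illustrative only
    import bisect

    class OrderedSeekQueue:
        def __init__(self):
            self.pending = []        # pending requests kept sorted by cylinder
            self.arm = 0             # current arm position (cylinder)
            self.up = True           # sweep direction

        def add(self, cylinder):
            bisect.insort(self.pending, cylinder)

        def next_request(self):
            if not self.pending:
                return None
            if self.up:
                # continue sweep toward higher cylinders if any remain ahead
                for i, cyl in enumerate(self.pending):
                    if cyl >= self.arm:
                        self.arm = self.pending.pop(i)
                        return self.arm
                self.up = False      # nothing ahead, reverse the sweep
            # sweeping down: take the highest pending cylinder at/below the arm
            for i in range(len(self.pending) - 1, -1, -1):
                if self.pending[i] <= self.arm:
                    self.arm = self.pending.pop(i)
                    return self.arm
            self.up = True           # nothing below either, reverse again
            self.arm = self.pending.pop(0)
            return self.arm

With FIFO the arm thrashes back and forth across the pack; servicing
requests in cylinder order on each sweep cuts the aggregate seek time.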
Initially CP67 source was delivered on OS/360: source modified and
assembled, the txt decks collected and marked with a stripe & name
across the top, all fitting in a card tray; the BPS loader was placed
in front and IPLed ... which would write the memory image to disk for
system IPL. A couple months later, a new release was resident on CMS
... modifications kept in CMS "UPDATE" files, with an exec that
applied the update and generated a temp file that was assembled. A
system generation exec "punched" the txt decks, spooled to a virtual
reader that was then IPLed.
After graduating and joining CSC, one of my hobbies was enhanced
production operating systems ("CP67L") for internal datacenters (the
online sales&marketing support HONE systems were one of the first,
and long time, customers). With the decision to add virtual memory to
all 370s, there was also a decision to do CP67->VM370 and some of the
CSC people went to the 3rd flr, taking over the IBM Boston
Programming Center for the VM370 group. CSC developed a set of CP67
updates that provided (simulated) VM370 virtual machines ("CP67H").
Then there was a set of CP67 updates that ran on the 370 virtual
memory architecture ("CP67I"). At CSC, because there were profs,
staff, and students from Boston area institutions using the CSC
systems, CSC would run "CP67H" in a 360/67 virtual machine (to
minimize unannounced 370 virtual memory leaking).
CP67L ran on real 360/67
... CP67H ran in a CP67L 360/67 virtual machine
...... CP67I ran in a CP67H 370 virtual machine
CP67I was in general use a year before the 1st engineering 370 (with
virtual memory) was operational ... in fact, IPL'ing CP67I on the
real machine was the test case.
As part of the CP67L, CP67H, CP67I effort, the CMS Update execs were
improved to support multi-level update operation (later multi-level
update support was added to various editors). Three engineers came
out from San Jose and added 2305 & 3330 support to CP67I, creating
CP67SJ, which was widely used on internal machines, even after VM370
was available.
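A much-simplified sketch of the multi-level idea (the real CMS UPDATE
control cards and sequence-number handling are richer than this; the
"./ I" and "./ D" cards here only show the general shape, with
replace treated as delete-plus-insert):

    # stacked ("multi-level") source updates -- simplified sketch, not CMS UPDATE
    def apply_update(source, update):
        """source: list of (seqno, line); update: list of card-image strings."""
        out = list(source)
        i = 0
        while i < len(update):
            card = update[i]
            if card.startswith("./ D"):          # delete records seq1..seq2
                parts = card.split()
                s1 = int(parts[2])
                s2 = int(parts[3]) if len(parts) > 3 else s1
                out = [(q, l) for (q, l) in out
                       if q is None or not (s1 <= q <= s2)]
                i += 1
            elif card.startswith("./ I"):        # insert data cards after seqno
                seq = int(card.split()[2])
                data, i = [], i + 1
                while i < len(update) and not update[i].startswith("./ "):
                    data.append(update[i])
                    i += 1
                pos = next(j for j, (q, _) in enumerate(out) if q == seq) + 1
                out[pos:pos] = [(None, d) for d in data]
            else:
                i += 1
        return out

    def apply_multilevel(source, update_levels):
        # each level is applied to the output of the previous level, which is
        # what made selectively including/excluding levels of change practical
        for upd in update_levels:
            source = apply_update(source, upd)
        return source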
Mid-80s, Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist
asked if I could send her the original exec multi-level update
implementation. I had a large archive dating back to the 60s on
triple redundant tapes in the IBM Almaden Research tape library. It
was fortunate she asked when she did, since within a few weeks
Almaden had an operational problem mounting random tapes as scratch
and I lost nearly a dozen tapes, including the triple redundant tape
archive.
In the morph of CP67->VM370, a lot of stuff was simplified and/or
dropped (including multiprocessor support). 1974, I start adding a
lot of stuff back into a VM370R2-base for my internal CSC/VM
(including kernel re-org for SMP, but not the actual SMP support).
Then with a VM370R3-base, I add multiprocessor support into CSC/VM,
initially for HONE so they could upgrade all their 168 systems to
2-CPU (getting twice the throughput of 1-CPU systems). HONE trivia:
All the US HONE datacenters had been consolidated in Palo Alto ...
when FACEBOOK 1st moved into silicon valley, it was into a new bldg
built next door to the former consolidated US HONE datacenter.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
some CP67L, CP67H, CP67I, CP67SJ, CSC/VM posts
https://www.garlic.com/~lynn/2025d.html#91 IBM VM370 And Pascal
https://www.garlic.com/~lynn/2025.html#122 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#121 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM DASD, CKD and FBA
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM DASD, CKD and FBA
Date: 13 Oct, 2025
Blog: Facebook
When I offered the MVS group fully tested FBA support ... they said I
needed to show $26M in incremental new revenue (some $200M in sales)
to cover the cost of education and documentation. However, since IBM
at the time was selling every disk it made, FBA support would just
move some CKD sales to FBA ... and I also wasn't allowed to use
life-time savings as part of the business case.
All disk technology was actually moving to FBA ... it can be seen in
the 3380 formulas for records/track calculations, having to round
record sizes up to a multiple of a fixed cell size. Now no real CKD
disks have been made for decades, all being simulated on industry
standard fixed-block devices. A big part of FBA was error correcting
technology performance ... part of recent FBA technology moving from
512byte blocks to 4k blocks.
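A sketch of that records/track point (the cell size, cells/track, and
per-record overhead below are placeholders, not the published 3380
constants; the point is only that key and data lengths get rounded up
to whole fixed-size cells, i.e. fixed-block underneath):

    # placeholder constants -- NOT the published 3380 numbers
    import math

    CELL_SIZE       = 32     # bytes per fixed cell (assumed)
    CELLS_PER_TRACK = 1500   # usable cells per track (assumed)
    RECORD_OVERHEAD = 15     # per-record overhead in cells (assumed)

    def cells(nbytes):
        return math.ceil(nbytes / CELL_SIZE) if nbytes else 0

    def records_per_track(keylen, datalen):
        per_record = RECORD_OVERHEAD + cells(keylen) + cells(datalen)
        return CELLS_PER_TRACK // per_record

    # an 80-byte record costs the same track space as a 96-byte record,
    # because both round up to the same number of whole cells
    print(records_per_track(0, 80), records_per_track(0, 96))   # same value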
trivia: 80s, large corporations were ordering hundreds of vm/4341s at
a time for deploying out in departmental areas (sort of the leading
edge of the coming distributed departmental computing tsunami).
Inside IBM, conference rooms were becoming scarce, being converted
into departmental vm/4341 rooms. MVS looked at the big explosion in
sales and wanted a piece of the market. However the only new
non-datacenter disks were FBA/3370 ... which eventually got CKD
emulation as the 3375. However it didn't do MVS much good;
distributed departmental dataprocessing was looking at scores of
systems per support person ... while MVS was scores of support people
per system.
Note: ECKD was originally the channel commands for Calypso, the 3880
speed matching buffer allowing 3mbyte/sec 3380s to be used with
existing 1.5mbyte/sec channels ... it went through significant
teething problems ... lots and lots of sev1s.
Other trivia: I had transferred from CSC out to SJR on west coast and
got to wander around IBM (and non-IBM) datacenters in silicon valley,
including disk bldg14 (engineering) and bldg15 (product test) across
the street. They were running pre-scheduled, 7x24, stand-alone
mainframe testing and mentioned that they had recently tried MVS, but
it had 15min MTBF (requiring manual re-ipl) in that environment. I
offered to rewrite I/O supervisor to make it bullet proof and never
fail allowing any amount of on-demand, concurrent testing ... greatly
improving productivity. I write an internal IBM paper on the I/O
integrity work and mention the MVS 15min MTBF ... bringing down the
wrath of the MVS organization on my head. Later, a few months before
3880/3380 FCS (first customer ship), FE (field engineering) had a
test of 57 simulated errors that were likely to occur; MVS was
failing in all 57 cases (requiring manual re-ipl) and in 2/3rds of
the cases there was no indication of what caused the failure.
DASD, CKD, FBA, and multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
getting to play disk engineer in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
a few posts mentioning calypso, eckd, mtbf
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2024g.html#3 IBM CKD DASD
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2015f.html#89 Formal definition of Speed Matching Buffer
a few posts mentioning FBA fixed-block 512 4k
https://www.garlic.com/~lynn/2021i.html#29 OoO S/360 descendants
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2017f.html#39 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2014g.html#84 real vs. emulated CKD
https://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2010d.html#9 PDS vs. PDSE
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM DASD, CKD and FBA
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM DASD, CKD and FBA
Date: 13 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#14 IBM DASD, CKD and FBA
semi-related: 1988, an IBM branch office asks if I could help LLNL
(national lab) standardize some serial stuff they were working with
... which quickly becomes the fibre-channel standard ("FCS",
including some stuff I had done in 1980), initially 1gbit/sec
transfer, full-duplex, aggregate 200mbyte/sec. Then POK finally gets
their serial stuff released (when it is already obsolete), initially
10mbyte/sec, later improved to 17mbyte/sec.
Some POK engineers then become involved with FCS and define a
heavy-weight protocol that radically reduces throughput, eventually
released as FICON.
The latest public benchmark I've seen is the 2010 z196 "Peak I/O",
getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the same time
an FCS was announced for E5-2600 server blades claiming over a
million IOPS (two such FCS having higher throughput than 104 FICON).
Also IBM pubs recommend that SAPs (system assist processors that
actually do I/O) be kept to 70% CPU (or 1.5M IOPS).
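Re-deriving the comparison from the numbers above:

    # arithmetic behind the z196 "Peak I/O" vs FCS comparison above
    z196_peak_iops = 2_000_000
    ficon_count    = 104
    print(z196_peak_iops // ficon_count)      # ~19,230 IOPS/FICON (the "20K")

    fcs_iops = 1_000_000                      # "over a million IOPS" per FCS
    print(2 * fcs_iops >= z196_peak_iops)     # two FCS match the 104-FICON peak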
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
CTSS, Multics, Unix, CSC
From: Lynn Wheeler <lynn@garlic.com>
Subject: CTSS, Multics, Unix, CSC
Date: 13 Oct, 2025
Blog: Facebook
Some of the MIT CTSS/7094 people went to the 5th flr for MULTICS
http://www.bitsavers.org/pdf/honeywell/large_systems/multics/
Note that the original UNIX had been done at AT&T ... somewhat after
they became disenchanted with MIT Multics ... UNIX is supposedly a
take-off on the name MULTICS and a simplification.
https://en.wikipedia.org/wiki/Multics#Unix
Others from MIT CTSS/7094 went to the IBM Cambridge Scientific Center
on the 4th flr and did virtual machines, science center wide-area
network that morphs into the internal network (larger than
arpanet/internet from just about the beginning until sometime mid/late
80s, about the time it was forced to convert to SNA/VTAM), invented
GML in 1969 (decade morphs into ISO standard SGML, after another
decade morphs into HTML at CERN), bunch of other stuff.
I was at a univ that had gotten a 360/67 for tss/360. The 360/67
(replacing 709/1401) arrives within a year of my taking a 2 credit hr
intro to fortran/computers and I'm hired fulltime responsible for
OS/360 (tss/360 didn't come to production, so it ran as 360/65).
Later CSC comes out to install CP67 (3rd install after CSC itself and
MIT Lincoln Labs). Nearly two decades later I'm dealing with some
UNIX source and notice some similarity between UNIX code and that
early CP67 (before I started rewriting a lot of the code) ...
possibly indicating some common heritage back to CTSS. Before I
graduate, I'm hired fulltime into a small group in the Boeing CFO
office to help with the formation of Boeing Computer Services
(consolidate all dataprocessing into an independent business unit,
including offering services to non-Boeing entities). Then when I
graduate, I join CSC, instead of staying with the CFO.
At IBM, one of my hobbies was enhanced production operating systems
for internal datacenters. There was some friendly rivalry between the
4th & 5th flrs ... it wasn't fair to compare the total number of
MULTICS installations with the total number of IBM customer virtual
machine installations, or even the number of internal IBM virtual
machine installations, but at one point I could show more of my
internal installations than all the MULTICS that ever existed.
Amdahl wins the battle to make ACS 360-compatible. Then ACS/360 is
killed (folklore was concern that it would advance the
state-of-the-art too fast) and Amdahl then leaves IBM (before Future
System); end of ACS/360:
https://people.computing.clemson.edu/~mark/acs_end.html
During Future System (1st half of 70s), going to totally replace 370
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
Internal politics was killing off 370 efforts and the lack of new
370s is credited with giving the clone 370 makers (including Amdahl)
their market foothold. When FS implodes there is a mad rush to get
stuff back into the 370 product pipelines, including kicking off the
quick&dirty 3033&3081 efforts in parallel.
Besides the hobby of doing enhanced production operating systems for
internal datacenters ... and wandering around internal datacenters
... I spent some amount of time at user group meetings (like SHARE)
and wandering around customers. The director of one of the largest
(customer) financial datacenters liked me to drop in and talk
technology. At one point, the branch manager horribly offended the
customer and in retaliation, they ordered an Amdahl machine (a lonely
Amdahl clone 370 in a vast sea of "blue"). Up until then Amdahl had
been selling into the univ. & tech/scientific markets, but clone 370s
had yet to break into the IBM true-blue commercial market ... and
this would be the first. I was asked to go spend 6m-12m on site at
the customer (to help obfuscate the reason for the Amdahl order?). I
talked it over with the customer, who said that while he would like
to have me there, it would have no effect on the decision, so I
declined the offer. I was then told the branch manager was a good
sailing buddy of the IBM CEO and I could forget a career, promotions,
raises.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
other posts mentioning I could forget career, promotions, raises
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025d.html#61 Amdahl Leaves IBM
https://www.garlic.com/~lynn/2025c.html#49 IBM And Amdahl Mainframe
https://www.garlic.com/~lynn/2025c.html#35 IBM Downfall
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s
https://www.garlic.com/~lynn/2025.html#64 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#19 60s Computers
https://www.garlic.com/~lynn/2024f.html#122 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024f.html#23 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024f.html#1 IBM (Empty) Suits
https://www.garlic.com/~lynn/2024e.html#65 Amdahl
https://www.garlic.com/~lynn/2023g.html#42 IBM Koolaid
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023c.html#56 IBM Empty Suits
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#45 IBM 3081 TCM
https://www.garlic.com/~lynn/2022g.html#66 IBM Dress Code
https://www.garlic.com/~lynn/2022g.html#59 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#60 IBM CEO: Only 60% of office workers will ever return full-time
https://www.garlic.com/~lynn/2022e.html#14 IBM "Fast-Track" Bureaucrats
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022d.html#21 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#88 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021k.html#105 IBM Future System
https://www.garlic.com/~lynn/2021j.html#93 IBM 3278
https://www.garlic.com/~lynn/2021j.html#4 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#81 IBM Downturn
https://www.garlic.com/~lynn/2021e.html#66 Amdahl
https://www.garlic.com/~lynn/2021e.html#63 IBM / How To Stuff A Wild Duck
https://www.garlic.com/~lynn/2021e.html#15 IBM Internal Network
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
https://www.garlic.com/~lynn/2021.html#39 IBM Tech
https://www.garlic.com/~lynn/2021.html#8 IBM CEOs
https://www.garlic.com/~lynn/2018e.html#27 Wearing a tie cuts circulation to your brain
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2017b.html#2 IBM 1970s
https://www.garlic.com/~lynn/2016h.html#86 Computer/IBM Career
https://www.garlic.com/~lynn/2016e.html#95 IBM History
https://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM DASD, CKD and FBA
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM DASD, CKD and FBA
Date: 13 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#14 IBM DASD, CKD and FBA
https://www.garlic.com/~lynn/2025e.html#15 IBM DASD, CKD and FBA
co-worker at CSC was responsible for the CP67-based wide-area network
for the science centers
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
... CP67-based wide-area network referenced by one of the inventors
of GML at the science center
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
... morphs into the IBM internal network, larger than
arpanet/internet from just about the beginning until sometime
mid/late 80s when it was forced to convert to SNA/VTAM. The
technology was also used for the corporate sponsored univ BITNET (&
EARN in Europe) ... and for the VM/4341 distributed departmental
systems ... until the change over to large PC and workstation servers
(again in part because of IBM pressure to move to SNA/VTAM).
then late 80s, senior disk engineer gets a talk scheduled at annual,
internal, world-wide communication group conference, supposedly on
3174 performance. However, the opening was that the communication
group was going to be responsible for the demise of the disk
division. The disk division was seeing drop in disk sales with data
fleeing mainframe datacenters to more distributed computing friendly
platforms. The disk division had come up with a number of solutions,
but they were constantly being vetoed by the communication group (with
their corporate ownership of everything that crossed the datacenter
walls) trying to protect their dumb terminal paradigm. Senior disk
software executive partial countermeasure was investing in
distributed computing startups that would use IBM disks (he would
periodically ask us to drop in on his investments to see if we could
offer any assistance).
The communication group stranglehold on mainframe datacenters wasn't
just disk and a couple years later, IBM has one of the largest losses
in the history of US companies and was being reorganized into the 13
"baby blues" in preparation for breaking up the company ("baby blues"
take-off on the "baby bell" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
20yrs earlier, Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET/EARN posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Mainframe TCP/IP and RFC1044
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe TCP/IP and RFC1044
Date: 13 Oct, 2025
Blog: Facebook
The communication group was fighting to block release of mainframe
TCP/IP support .... when they lost, they changed their tactics and
said that since they had corporate ownership of everything that
crossed the datacenter walls, it had to be released through them;
what shipped got aggregate 44kbytes/sec using nearly a whole 3090
processor. It was eventually ported to MVS by simulating some VM370
diagnose instructions.
I then added support for RFC1044 and in some tuning tests at Cray
Research between a Cray and a 4341, got sustained 4341 channel
throughput using only a modest amount of 4341 processor (something
like a 500 times improvement in bytes moved per instruction
executed). Part of the difference was that the 8232 was configured as
a bridge .... while RFC1044 supported a mainframe channel attached
TCP/IP router (for about the same price as an 8232, a channel
attached router could support a dozen ethernet interfaces, T1&T3,
FDDI, and others).
Also con'ed one of the high-speed router vendors into adding support
for the RS6000 "SLA" (serial link adapter ... sort of an enhanced
ESCON: faster, full-duplex, capable of aggregate 440mbits/sec; they
had to buy the "SLA" chips from IBM) and planning for the
fibre-channel standard ("FCS"). Part of the original issue was that
RS6000 SLAs would only talk to other RS6000s.
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Mainframe TCP/IP and RFC1044
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe TCP/IP and RFC1044
Date: 14 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#18 IBM Mainframe TCP/IP and RFC1044
note early 90s, CSG hired a silicon valley contractor to implement
TCP/IP directly in VTAM. What he initially demo'ed had TCP much
faster than LU6.2. He was then told that everybody knows a "proper"
TCP/IP implementation is much slower than LU6.2, and they would only
be paying for a "proper" implementation.
A senior disk engineer got a talk scheduled at the annual, internal,
world-wide communication group conference and opened with the
statement that the communication group was going to be responsible
for the demise of the disk division ... the datacenter stranglehold
wasn't just the disk division and a couple years later IBM has one of
the largest losses in the history of US companies ... and was being
re-orged into the 13 "baby blues" in preparation for breaking up the
company ... see lots more in a post comment in a "public" mainframe
group:
https://www.garlic.com/~lynn/2025e.html#17 IBM DASD, CKD and FBA
... also OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open
Systems Interconnection standards to become the global protocol for
computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture(Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."
... snip ...
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM HASP & JES2 Networking
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM HASP & JES2 Networking
Date: 14 Oct, 2025
Blog: Facebook
recent comment in another post (in this group)
https://www.garlic.com/~lynn/2025e.html#17 IBM DASD, CKD and FBA
a co-worker at the science center was responsible for the science
center wide-area network that morphs into the corporate internal
network; the technology was also used for the corporate sponsored
univ BITNET (& EARN in Europe).
When they went to try and announce VNET/RSCS for customers, it was
blocked by the head of POK; this was after the FS implosion
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
and the head of POK was in the process of convincing corporate to
kill the VM370 product, shut down the development group and transfer
all the people to POK for MVS/XA ... and so he was never going to
agree to announcing new VM370-related products (Endicott eventually
manages to acquire the VM370 product mission, but had to recreate a
development group from scratch). This was also after the 23jun1969
unbundling announce and charging for software (with the requirement
that software revenue had to cover original development, and ongoing
support and maint). The JES2 network code (from HASP, which
originally carried "TUCC" in cols 68-71) couldn't meet the revenue
requirement ... the standard process was to forecast sales at low,
medium, & high ($300/$600/$1200 per month) prices ... and there was
no NJE price at which revenue met the requirement. They then came up
with the idea of announcing JES2 Networking & RSCS/VNET as a combined
product (merged expenses and revenue) ... where the RSCS/VNET revenue
(which had an acceptable forecast at $30/month) was able to cover
JES2 networking.
RSCS/VNET was a clean layered design and an NJE emulation driver was
easy to do to connect JES2 into the RSCS/VNET network. However, JES2
still had to be restricted to boundary nodes: 1) the original HASP
used spare entries in the 255-entry pseudo device table, usually
160-180, while the internal network was approaching 768 nodes (and
NJE would trash traffic where the origin or destination node wasn't
defined); 2) it also somewhat intermixed network & job control info
in the header fields, and traffic between two JES2 systems at
different release levels had a habit of crashing the destination MVS.
As a result a body of RSCS/VNET emulated NJE drivers grew up that
could recognize header versions and, if necessary, reorganize the
fields to keep a destination MVS system from crashing (there is the
infamous case of a new San Jose JES2 system crashing a Hursley MVS
system ... blamed on the Hursley RSCS/VNET because they hadn't
updated the Hursley RSCS/VNET NJE driver with the most recent updates
to account for the new San Jose JES2).
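A heavily simplified sketch of what such a boundary NJE emulation
driver was doing (the field names, versions, and node table here are
hypothetical, not the real NJE record formats; the point is only
dropping traffic for undefined nodes and normalizing headers rather
than letting them crash the destination MVS):

    # hypothetical sketch -- not the real NJE record formats or field names
    KNOWN_NODES = {"SANJOSE", "HURSLEY", "CAMBRIDG"}   # locally defined nodes
    DEST_VERSION = 2                            # header version dest JES2 expects

    def normalize(header, dest_version):
        # keep only fields the destination understands, stamp its version
        keep = ("origin", "dest", "version", "jobinfo")
        out = {k: v for k, v in header.items() if k in keep}
        out["version"] = dest_version
        return out

    def forward(traffic):
        hdr = traffic["header"]
        # NJE trashed traffic whose origin/destination node wasn't defined;
        # the gateway drops (or reroutes) it instead
        if hdr["origin"] not in KNOWN_NODES or hdr["dest"] not in KNOWN_NODES:
            return None
        if hdr.get("version") != DEST_VERSION:
            hdr = normalize(hdr, DEST_VERSION)
        return {"header": hdr, "body": traffic["body"]}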
There is also the story of trying to set up offshift use between San
Jose STL (west coast) and Hursley (England) with a double-hop
satellite circuit. Initially it was brought up between two RSCS/VNET
systems and everything worked fine. Then an STL executive (steeped in
MVS) insisted the circuit be brought up between two JES2 systems ...
and nothing worked. They then dropped back to the two RSCS/VNET
systems and everything worked. The executive then commented that
RSCS/VNET must be too dumb to realize it wasn't really working
(despite valid traffic flowing fine in both directions).
other trivia: at the univ, I had taken a two credit hr intro to
Fortran/Computers and at the end of the semester was hired to rewrite
1401 MPIO for the 360/30. The univ was getting a 360/67 for tss/360
and got a 360/30 temporarily replacing the 1401 until the 360/67 was
available. Within a year of taking the intro class, the 360/67
arrives and I'm hired fulltime responsible for OS/360 (tss/360 never
came to production). Student Fortran jobs had run under a second on
the 709 and well over a minute w/MFTr9.5. I install HASP, which cuts
the time in half. I then redo the MFTr11 STAGE2 SYSGEN to carefully
place datasets and order PDS members to optimize arm seek and
multi-track search, cutting another 2/3rds to 12.9secs (student
fortran never got better than the 709 until I install UofWaterloo
WATFOR). Later for MVT18 HASP, I strip out 2780 support (to reduce
real storage requirements) and add in 2741&TTY terminal support and
an editor that implemented the CMS EDITOR syntax (code rewritten from
scratch since the environments were so different) for CRJE support.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HASP/JES2, ASP/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
23jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Token-Ring
Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 15 Oct, 2025
Blog: Facebook
AWD had done their own cards for the PC/RT (16bit PC/AT bus) ...
including a 4mbit Token-Ring card. For the RS/6000 (w/microchannel,
32bit bus), AWD was told they couldn't do their own cards, but had to
use the (communication group heavily performance kneecapped) PS2
cards. It turns out the PS2 microchannel 16mbit Token-Ring card had
lower throughput than the PC/RT 4mbit Token-Ring card (i.e. the joke
that a PC/RT server with 4mbit T/R would have higher throughput than
an RS/6000 server with 16mbit T/R).
The new Almaden Research bldg had been heavily provisioned with CAT
wiring assuming 16mbit token-ring, but they found a 10mbit ethernet
(over CAT wiring) LAN had lower latency and higher aggregate
throughput than a 16mbit token-ring LAN. They also found $69 10mbit
ethernet cards had higher throughput than the $800 PS2 16mbit
token-ring cards. We were out giving customer executive presentations
on TCP/IP, 10mbit ethernet, 3-tier architecture, high-performance
routers, and distributed computing (including comparisons with
standard IBM offerings) and taking misinformation barbs in the back
from the SNA, SAA, & token-ring forces. The Dallas E&S center
published something purported to be 16mbit T/R compared to Ethernet
... but the only (remotely valid) explanation I could give was that
they compared to the early 3mbit ethernet prototype predating the
listen-before-transmit (CSMA/CD) part of the Ethernet protocol
standard.
About the same time, senior disk engineer gets talk scheduled at
annual, internal, world-wide communication group conference,
supposedly on 3174 performance. However, his opening was that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing drop in disk sales with
data fleeing mainframe datacenters to more distributed computing
friendly platforms. The disk division had come up with a number of
solutions, but they were constantly being vetoed by the communication
group (with their corporate ownership of everything that crossed the
datacenter walls) trying to protect their dumb terminal paradigm.
Senior disk software executive partial countermeasure was investing in
distributed computing startups that would use IBM disks (he would
periodically ask us to drop in on his investments to see if we could
offer any assistance). The communication group stranglehold on
mainframe datacenters wasn't just disk and a couple years later, IBM
has one of the largest losses in the history of US companies and was
being reorganized into the 13 "baby blues" in preparation for breaking
up the company ("baby blues" take-off on the "baby bell" breakup
decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Token-Ring
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 15 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#21 IBM Token-Ring
also OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open
Systems Interconnection standards to become the global protocol for
computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture(Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."
... snip ...
2nd half of the 80s, I served on (Chessin's) XTP TAB and there were
several gov agencies participating in XTP ... so took XTP (as "HSP")
to the (ISO chartered) ANSI X3S3.3 standards group (transport and
network). Eventually we were told they were only allowed to do
standards for things that conform to the OSI model; XTP didn't
because it 1) supported internetworking (a non-existent layer between
3&4), 2) bypassed the layer 3/4 interface going directly to the LAN
MAC, and 3) supported LAN MAC, a non-existent layer somewhere in the
middle of layer 3. The joke at the time was that Internet/IETF
required two interoperable implementations before standards
progression while ISO didn't even require a standard to be
implementable.
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Token-Ring
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 15 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#21 IBM Token-Ring
https://www.garlic.com/~lynn/2025e.html#22 IBM Token-Ring
trivia: 1988, Nick Donofrio approved HA/6000, originally for the
NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when we start doing technical/scientific cluster scale-up with
national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up
with RDBMS vendors (Oracle, Sybase, Ingres, Informix; they had DEC
VAXCluster support in the same source base with unix). The S/88
Product Administrator started taking us around to their customers and
also had me write a section for the corporate continuous availability
document (it gets pulled when both AS400/Rochester and mainframe/POK
complain they couldn't meet the requirements). Also previously worked
on the original SQL/relational, System/R, with Jim Gray and Vera
Watson.
Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive
Hester tells Ellison that we would have 16-system clusters mid92 and
128-system clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP
for gov. supercomputers. Late-jan1992, HA/CMP is transferred for
announce as IBM Supercomputer (for technical/scientific *ONLY*), and
we were told we couldn't work on anything with more than 4-system
clusters; we leave IBM a few months later.
Had left IBM and was brought in as a consultant to a small
client/server startup by two former Oracle employees (that I had
worked with on RDBMS and who were in the Ellison/Hester meeting), who
were there responsible for something called "commerce server" and
wanted to do payment transactions on the server. The startup had also
invented this technology called SSL they wanted to use; the result is
now sometimes called "electronic commerce". I had responsibility for
everything between webservers and payment networks. Based on the
procedures, documentation and software I had to do for electronic
commerce, I did a talk on "Why Internet Wasn't Business Critical
Dataprocessing" that (Internet/IETF standards editor) Postel
sponsored at ISI/USC.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
Original SQL/relational, System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Mainframe TCP/IP and RFC1044
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe TCP/IP and RFC1044
Date: 15 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#18 IBM Mainframe TCP/IP and RFC1044
https://www.garlic.com/~lynn/2025e.html#19 IBM Mainframe TCP/IP and RFC1044
yes, NSC RFC1044. I got sucked into doing the NSC channel extender
support first for IBM STL. In 1980 ... STL was bursting at the seams
and they were moving 300 people from the STL IMS group to an offsite
bldg with dataprocessing back to STL. They had tried "remote 3270"
and found the human factors totally unacceptable. I then implemented
the channel extender support for NSC and they found no perceptible
difference between offsite and inside STL. An unanticipated
side-effect was that it improved system throughput by 10-15%. STL had
spread the 3270 controllers across the same channels with the 3830
disk controllers ... the NSC channel extender significantly reduced
the channel busy for the same amount of 3270 activity ... improving
disk and overall system throughput. There was some consideration of
then configuring all STL systems with channel extenders (even for
3270s inside STL).
Then NSC tried to get IBM to release my support, but there were some
people in POK working on serial and they got it vetoed (because they
were worried it might affect justifying the release of their serial
stuff).
1988, the branch office asks if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes the fibre-channel standard, "FCS" (not First Customer Ship),
initially 1gbit/sec transfer, full-duplex, aggregate 200mbyte/sec
(including some stuff I had done in 1980). Then POK finally gets
their serial stuff released with ES/9000 as ESCON (when it was
already obsolete; initially 10mbytes/sec, later increased to
17mbytes/sec). Then some POK engineers become involved with FCS and
define a heavy-weight protocol that drastically cuts throughput
(eventually released as FICON). 2010, a z196 "Peak I/O" benchmark was
released, getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the
same time an FCS was announced for E5-2600 server blades claiming
over a million IOPS (two such FCS having higher throughput than 104
FICON). Also IBM pubs recommend that SAPs (system assist processors
that actually do I/O) be kept to 70% CPU (or 1.5M IOPS), and no new
CKD DASD has been made for decades, all being simulated on industry
standard fixed-block devices.
channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970
Opel
From: Lynn Wheeler <lynn@garlic.com>
Subject: Opel
Date: 16 Oct, 2025
Blog: Facebook
related; 1972, Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
Early/mid 70s was IBM's Future System effort; FS was totally
different from 370 and was going to completely replace it. During FS,
internal politics was killing off 370 projects and the lack of new
370s is credited with giving the clone 370 makers (including Amdahl)
their market foothold.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
20yrs after Learson's failure, IBM has one of the largest losses in
the history of US companies. IBM was being reorganized into the 13
"baby blues" in preparation for breaking up the company (take off on
"baby bells" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist
some more from token-ring post yesterday in (public) mainframe group:
https://www.garlic.com/~lynn/2025e.html#23 IBM Token-Ring
https://www.garlic.com/~lynn/2025e.html#22 IBM Token-Ring
https://www.garlic.com/~lynn/2025e.html#21 IBM Token-Ring
A senior disk engineer gets a talk scheduled at the annual, internal,
world-wide communication group conference, supposedly on 3174
performance. However, his opening was that the communication group was
going to be responsible for the demise of the disk division. The disk
division was seeing drop in disk sales with data fleeing mainframe
datacenters to more distributed computing friendly platforms. The disk
division had come up with a number of solutions, but they were
constantly being vetoed by the communication group (with their
corporate ownership of everything that crossed the datacenter walls)
trying to protect their dumb terminal paradigm.
Senior disk software executive partial countermeasure was investing in
distributed computing startups that would use IBM disks (he would
periodically ask us to drop in on his investments to see if we could
offer any assistance). The communication group stranglehold on
mainframe datacenters wasn't just disk and a couple years later, IBM
has one of the largest losses in the history of US companies
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
--
virtualization experience starting Jan1968, online at home since Mar1970
Opel
From: Lynn Wheeler <lynn@garlic.com>
Subject: Opel
Date: 17 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#25 Opel
1972, Learson tried (and failed) to block bureaucrats, careerists, and
MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of
management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
20yrs after Learson's failure, IBM has one of the largest losses in
the history of US companies. IBM was being reorganized into the 13
"baby blues" in preparation for breaking up the company (take off on
"baby bells" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
trivia: AMEX & KKR were in competition for private equity, LBO
take-over of RJR.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
and KKR wins. KKR runs into some troubles and hire away AMEX president
to help. Then IBM board hires away former AMEX president to try and
save IBM from the breakup.
Other trivia: the year that IBM has one of the largest losses in the
history of US companies, AMEX spins off its mainframe datacenters and
financial transaction outsourcing business in the largest IPO up until
that time (as First Data). Then 15yrs later, KKR does a private equity
LBO (in the largest LBO up until that time) of FDC (before selling it
off to Fiserv)
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
--
virtualization experience starting Jan1968, online at home since Mar1970
Opel
From: Lynn Wheeler <lynn@garlic.com>
Subject: Opel
Date: 17 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#25 Opel
https://www.garlic.com/~lynn/2025e.html#26 Opel
disclaimer: turn of the century I'm FDC Chief Scientist. One of FDC
datacenters is handling credit card outsourcing for half of all credit
card accounts in the US. It was 40+ max configured mainframes (@$30M,
aggregate >$1.2B), none older than 18 months, constant rolling
upgrades. The number of mainframes was necessary to handle account
settlement in the overnight batch window. They had hired a (EU)
performance consultant to look at the 450K-statement COBOL (credit
card processing) application that runs on all systems.
In the 70s, one of the CSC co-workers had developed an APL-based
analytical system model ... which was made available on (the online
sales&marketing) HONE (when I joined IBM, one of my hobbies was
enhanced production operating systems for internal datacenters and
HONE was one of my 1st and long time customers) as Performance
Predictor (branch enters customer configuration and workload
information and then can ask what-if questions about effect of
changing configuration and/or workloads). The consolidated US HONE
single-system-image also uses a modified version of performance
predictor to make system (logon) load balancing decisions. The
consultant had acquired a descendant of the Performance Predictor
(during IBM's troubles in the early 90s when lots of stuff was being
spun off). He managed to identify a 7% improvement (of the >$1.2B). In
parallel, I'm using some other CSC performance technology from the 70s
and identify a 14% improvement (of the >$1.2B) for aggregate 21%
improvement.
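Regarding the logon load-balancing use of the predictor mentioned
above, a minimal illustration of the idea (Python; the model function
and numbers are stand-ins, not the actual Performance Predictor):
  # send the new logon to whichever system a (stand-in) what-if model
  # predicts will give the best response
  def predicted_response(sys):            # stand-in analytic model
      return sys["base"] * (1.0 + sys["users"] / sys["capacity"])
  def pick_system(systems):
      return min(systems, key=predicted_response)
  hone = [{"name": "sys1", "base": 0.2, "users": 180, "capacity": 200},
          {"name": "sys2", "base": 0.2, "users": 120, "capacity": 200}]
  print(pick_system(hone)["name"])        # sys2 (lower predicted response)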
Mar/Apr '05 eserver magazine article (gone 404, at wayback machine),
some info somewhat garbled
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
posts mentioning Performance Predictor and FDC 450k cobol statement app
https://www.garlic.com/~lynn/2025c.html#19 APL and HONE
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2014b.html#83 CPU time
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Germany
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Germany
Date: 18 Oct, 2025
Blog: Facebook
When I graduated and joined the IBM Cambridge Science Center, one of
my hobbies was enhanced production operating systems for internal
datacenters and (online branch office sales&marketing support) US HONE
systems was one of the first (and long time) customers. Then got one
of my overseas business trips, to both Paris for the 1st non-US HONE
install and to Boeblingen (which put me up in a small business
traveler's hotel in a residential district, where the hotel charged
four times the telco tariff, like $60).
During FS in the early 70s, internal politics were killing 370 efforts
and the lack of new 370 is credited with giving the clone 370 makers
their market foothold. After Future System implodes, there is a mad
rush to get stuff back into the 370 product pipelines (including
quick&dirty 3033&3081 efforts)
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
About the same time, Endicott cons me into helping with the 138/148
ECPS microcode assist and I was also con'ed into helping with a 125-II
five CPU implementation (the 115 has a nine-position microprocessor
memory bus, all the microprocessors the same, including the one
running 370, just different microcode loads; the 125 is identical
except the microprocessor running 370 microcode is 50% faster).
Endicott then complains that the 5-CPU 125 would overlap the
throughput of the 148/ECPS ... and at the escalation meeting, I had to
argue both sides of the table (but the 125 5-CPU gets shut down).
Later in 70s I had transferred to SJR on the west coast and get to
wander around IBM (and non-IBM) datacenters in silicon valley,
including disk bldg14/engineering and bldg15/product test across the
street. They were running 7x24, pre-scheduled, stand-alone testing and
mentioned that they had recently tried MVS, but it had 15min MTBF (in
that environment) requiring manual re-ipl. I offer to rewrite I/O
system to make it bullet proof and never fail, allowing any amount of
on-demand testing, greatly improving productivity. Bldg15 then got 1st
engineering 3033 (1st outside POK processor engineering) and since I/O
testing only used a percent or two of CPU, we scrounge up a 3830 and a
3330 string for our own, private online service. At the time, air
bearing simulation (for thin-film head design) was only getting a
couple turn arounds a month on SJR 370/195. We set it up on bldg15
3033 (slightly less than half 195 MIPS) and they could get several
turn arounds a day. Used in 3370FBA then in 3380CKD
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/
trivia: 3380 was already transitioning to fixed-block, can be seen in
the records/track formulas where record size had to be rounded up to
multiple of fixed "cell" size.
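A minimal sketch of that cell-rounding effect (Python; the cell,
track, and overhead numbers are hypothetical placeholders, not the
actual 3380 track-capacity formula):
  import math
  CELL_BYTES = 32          # hypothetical fixed cell size
  TRACK_CELLS = 1500       # hypothetical cells per track
  OVERHEAD_CELLS = 15      # hypothetical per-record count/gap overhead
  def records_per_track(record_bytes):
      data_cells = math.ceil(record_bytes / CELL_BYTES)  # round up to cells
      return TRACK_CELLS // (OVERHEAD_CELLS + data_cells)
  print(records_per_track(96))   # 3 data cells -> 83 records/track
  print(records_per_track(97))   # one more byte, 4 cells -> 78 records/track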
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
cp/67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
125 5-cpu project
https://www.garlic.com/~lynn/submain.html#bounce
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Thin Film Disk Head
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Thin Film Disk Head
Date: 19 Oct, 2025
Blog: Facebook
2nd half of 70s transferring to SJR on west coast, I worked with Jim
Gray and Vera Watson on original SQL/relational, System/R (done on
VM370); we were able to do tech transfer ("under the radar" while
corporation was pre-occupied with "EAGLE") to Endicott for
SQL/DS. Then when "EAGLE" implodes, there was request for how fast
could System/R be ported to MVS ... which was eventually released as
DB2, originally for decision-support *only*.
I also got to wander around IBM (and non-IBM) datacenters in silicon
valley, including DISK bldg14 (engineering) and bldg15 (product test)
across the street. They were running pre-scheduled, 7x24, stand-alone
testing and had mentioned recently trying MVS, but it had 15min MTBF
(requiring manual re-ipl) in that environment. I offer to redo I/O
subsystem to make it bullet proof and never fail, allowing any amount
of on-demand testing, greatly improving productivity. Bldg15 then gets
1st engineering 3033 outside POK processor engineering ... and since
testing only took percent or two of CPU, we scrounge up 3830
controller and 3330 string to setup our own private online service.
Air bearing simulation (for thin film heads) was getting a couple turn
arounds a month on the SJR MVT 370/195. We set it up on the bldg15
3033 and it was able to get as many turn arounds a day as they wanted.
Then bldg15 also gets an engineering 4341 in 1978 and somehow a branch
hears about it and in Jan1979 I'm con'ed into doing a 4341 benchmark
for a national lab that was looking at getting 70 for compute farm
(leading edge of the coming cluster supercomputing tsunami).
first thin film head was 3370
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/
then used for 3380; original 3380 had 20 track spacings between each
data track, then cut the spacing in half for double the capacity, then
cut the spacing again for triple the capacity (3380K). The "father of
risc" then talks me into helping with a "wide" disk head design,
read/write 16 closely spaced data tracks in parallel (plus two servo
tracks, one on each side of 16 data track groupings). Problem was data
rate would have been 50mbytes/sec at a time when mainframe channels
were still 3mbytes/sec. However 40mbyte/sec disk arrays were becoming
common and Cray channel had been standardized as HIPPI (100mbyte/sec)
https://en.wikipedia.org/wiki/HIPPI
1988, IBM branch asks if I could help LLNL (national lab) standardize
some serial stuff they were working with, which quickly becomes fibre
channel standard ("FCS", initially 1gbit, full-duplex, got RS/6000
cards capable of 200mbytes/sec aggregate for use with 64-port FCS
switch). In 1990s, some serial stuff that POK had been working with
for at least the previous decade is released as ESCON (when it is
already obsolete, 10mbytes/sec, upgraded to 17mbytes/sec). Then some
POK engineers become involved with FCS and define heavy weight
protocol that significantly reduces ("native") throughput, which ships
as "FICON".
Latest public benchmark I've seen was z196 "Peak I/O" getting 2M IOPS
using 104 FICON (20K IOPS/FICON). About the same time a FCS is
announced for E5-2600 blades claiming over a million IOPS (two such
FCS having higher throughput than 104 FICON). Also IBM pubs
recommended that SAPs (system assist processors that do actual I/O) be
held to 70% CPU (or around 1.5M IOPS) and no CKD DASD has been made
for decades, all being simulated on industry standard fixed-block
devices.
https://en.wikipedia.org/wiki/Fibre_Channel
posts mentioning working with Jim Gray and Vera Watson on original SQL/relational, System/R
https://www.garlic.com/~lynn/submain.html#systemr
posts mentioning getting to work in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Germany
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Germany
Date: 20 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#28 IBM Germany
After the 125 5-cpu got shut down, I was asked to help with a 16-cpu
370 and we con the 3033 processor engineers into helping with it in
their spare time (a lot more interesting than remapping 168 logic to
20% faster chips). Everybody thought it was great until somebody tells
the head of POK that it could be decades until POK's favorite son
operating system ("MVS") had (effective) 16-cpu support (POK doesn't
ship a 16-CPU system until after the turn of the century). At the
time, MVS documents had 2-CPU support only getting 1.2-1.5 times the
throughput of 1-CPU (because of the heavy weight multiprocessor
support). Head of POK then invites some of us to never visit POK
again and instructs the 3033 processor engineers to keep heads down,
no distractions.
Contributing was that the head of POK was in the process of
convincing corporate to kill the VM370 product, shut down the
development group and transfer all the people to POK for
MVS/XA. Endicott eventually
manages to save the VM370 product mission (for the mid-range), but has
to recreate a development group from scratch.
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 3274/3278
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3274/3278
Date: 20 Oct, 2025
Blog: Facebook
3278 moved a lot of electronics back into the (shared) 3274 controller
(reducing 3278 manufacturing cost), but driving up protocol chatter
and latency. 3272/3277 had .086 hardware response. 3274/3278 had .3-.5
hardware response (depending on amount of data transferring). Later
3277 IBM/PC emulation board had 4-5 times the throughput of 3278
emulation board. Letters to the 3278 product administrator got replies
that 3278 wasn't meant for interactive computing, but data entry
(i.e. electronic keypunch).
Back when 3278 was introduced there were published studies showing
that .25sec response improved productivity. I had several systems
(after joining IBM one of my hobbies was enhanced production operating
systems for internal datacenters) that had .11sec interactive system
response ... with the 3277's .086, users saw .196 (meeting the .25
requirement). It wasn't possible with 3278.
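The arithmetic behind that, as a tiny Python sketch (numbers from the
text):
  system_response = 0.11          # interactive system response
  t3277, t3278_lo, t3278_hi = 0.086, 0.3, 0.5   # terminal hardware response
  print(round(system_response + t3277, 3))      # 0.196, under the .25 target
  print(round(system_response + t3278_lo, 3))   # 0.41, already over .25
  print(round(system_response + t3278_hi, 3))   # 0.61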
trivia: One of my 1st (and long time) internal customers was branch
office online sales&marketing support HONE systems, 1st CP67l ... then
CSC/VM and SJR/VM. Got one of the early 3274s in bldg15 ... and it was
frequently hanging up, requiring re-impl. I had modified missing
interrupt handler to deal with early engineering 3033 channel director
that would hang and required re-impl ... discovering if I quickly
executed CLRCH for all six channel addresses, it would automagically
re-impl. Then discovered something similar for 3274, doing HDV/CLRIO
in tight loop for all 3274 subchannel addresses, it would (also)
re-impl
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
getting to play disk engineer in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
posts mentioning using CLRCH and/or HDV/CLRIO to force re-impl
https://www.garlic.com/~lynn/2025d.html#58 IBM DASD, CKD, FBA
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2011o.html#23 3270 archaeology
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/97.html#20 Why Mainframes?
posts mentioning comparing 3272/3277 & 3274/3278 response
https://www.garlic.com/~lynn/2025d.html#102 Rapid Response
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015g.html#58 [Poll] Computing favorities
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012d.html#19 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2012.html#13 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009q.html#72 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009q.html#53 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009e.html#19 Architectural Diversity
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
--
virtualization experience starting Jan1968, online at home since Mar1970
What Is A Mainframe
From: Lynn Wheeler <lynn@garlic.com>
Subject: What Is A Mainframe
Date: 21 Oct, 2025
Blog: Facebook
Early numbers are from actual industry benchmarks (number of program
iterations compared to industry standard MIPS/BIPS reference
platform); later numbers use IBM pubs giving percent change since the
previous generation:
z900, 16 processors, 2.5BIPS (156MIPS/core), Dec2000
z990, 32 cores, 9BIPS (281MIPS/core), 2003
z9, 54 cores, 18BIPS (333MIPS/core), July2005
z10, 64 cores, 30BIPS (469MIPS/core), Feb2008
z196, 80 cores, 50BIPS (625MIPS/core), Jul2010
EC12, 101 cores, 75BIPS (743MIPS/core), Aug2012
z13, 140 cores, 100BIPS (710MIPS/core), Jan2015
z14, 170 cores, 150BIPS (862MIPS/core), Aug2017
z15, 190 cores, 190BIPS (1000MIPS/core), Sep2019
z16, 200 cores, 222BIPS (1111MIPS/core), Sep2022
z17, 208 cores, 260BIPS (1250MIPS/core), Jun2025
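The per-core figures in that list are just aggregate BIPS divided by
cores; a quick Python check of a couple of rows (same arithmetic for
the rest):
  for name, cores, bips in [("z196", 80, 50), ("z15", 190, 190)]:
      print(name, round(bips * 1000 / cores), "MIPS/core")
  # z196 625 MIPS/core, z15 1000 MIPS/core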
In 1988, the IBM branch office asks if I could help LLNL (national
lab) standardize some serial stuff they were working with, which
quickly becomes fibre-channel standard, "FCS" (not First Customer
Ship), initially 1gbit/sec transfer, full-duplex, aggregate
200mbyte/sec
(including some stuff I had done in 1980).
Then POK gets some of their serial stuff released with ES/9000 as
ESCON (when it was already obsolete, initially 10mbytes/sec, later
increased to 17mbytes/sec). Then some POK engineers become involved
with FCS and define a heavy-weight protocol that drastically cuts
throughput (eventually released as FICON). 2010, a z196 "Peak I/O"
benchmark released, getting 2M IOPS using 104 FICON (20K
IOPS/FICON). About the same time an FCS is announced for E5-2600
server blades claiming over a million IOPS (two such FCS having higher
throughput than 104 FICON). Also IBM pubs recommend that SAPs (system
assist processors that actually do I/O) be kept to 70% CPU (or 1.5M
IOPS) and no new CKD DASD has been made for decades, all being
simulated on industry standard fixed-block devices.
Note: 2010 E5-2600 server blade (16 cores, 31BIPS/core) benchmarked at
500BIPS (ten times max configured Z196). At the time, commonly used in
cloud megadatacenters (each having half million or more server blades)
trivia: in the wake of the Future System implosion in the mid-70s,
there is mad rush to get stuff back into the 370 product pipelines and
I get asked to help with a 16-CPU 370 effort and we con the 3033
processor (started out remapping 168 logic to 20% faster chips)
engineers into working on it in their spare time (lot more interesting
than the 168 logic remapping). Everybody thought it was great until
somebody tells head of POK (IBM high-end 370) that it could be decades
before POK's favorite son operating system ("MVS") had ("effective")
16-CPU support (MVS docs had 2-CPU throughput only 1.2-1.5 times
throughput of single CPU because of its high-overhead multiprocessor
support, POK doesn't ship 16-CPU system until after turn of
century). Head of POK then invites some of us to never visit POK again
and directs the 3033 processor engineers heads down and no
distractions.
Also 1988, Nick Donofrio approves HA/6000, originally for NYTimes to
move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename
it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Ingres, Sybase, Informix that have DEC
VAXCluster support in same source base with UNIX). IBM S/88 Product
Administrator was also taking us around to their customers and also
had me write a section for corporate continuous availability strategy
document (it gets pulled when both Rochester/AS400 and POK/mainframe
complain).
Early Jan92 meeting with Oracle CEO, AWD executive Hester tells
Ellison that we would have 16-system clusters by mid92 and 128-system
clusters by ye92. Mid-jan92 convince FSD to bid HA/CMP for
gov. supercomputers. Late-jan92, cluster scale-up is transferred for
announce as IBM Supercomputer (for technical/scientific *only*) and we
were told we couldn't work on clusters with more than four systems (we
leave IBM a few months later).
Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to the
industry MIPS/BIPS reference platform):
ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
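For reference, the per-CPU and cluster figures are straight
division/multiplication of the single-system numbers (quick Python
check):
  print(408 / 8)      # ES/9000-982: 51 MIPS per CPU
  print(126 * 16)     # 16-system RS6000/990: 2,016 MIPS, i.e. ~2 BIPS
  print(126 * 128)    # 128-system: 16,128 MIPS, i.e. ~16 BIPS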
The former executive we had reported to goes over to head up
Somerset/AIM (Apple, IBM, Motorola), single chip RISC with M88k
bus/cache (enabling clusters of shared memory multiprocessors).
i86 chip makers then do a hardware layer that translates i86 instructions
into RISC micro-ops for actual execution (largely negating throughput
difference between RISC and i86); 1999 industry benchmark:
IBM PowerPC 440: 1,000MIPS
Pentium3: 2,054MIPS (twice PowerPC 440)
Fibre Channel Standard ("FCS") &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance
DASD, CKD, FBA, multi-track posts
https://www.garlic.com/~lynn/submain.html#dasd
--
virtualization experience starting Jan1968, online at home since Mar1970
What Is A Mainframe
From: Lynn Wheeler <lynn@garlic.com>
Subject: What Is A Mainframe
Date: 21 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#32 What Is A Mainframe
During FS period, internal politics was killing off 370 efforts
(claims that lack of new 370s during FS gave the 370 clone makers
their market foothold).
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
when FS implodes there was mad rush to get stuff back into the 370
product pipelines, including kicking off quick&dirty 3033&3081
efforts in parallel. Original 3081D (2-CPU) aggregate MIPS was less
than Amdahl 1-CPU system. Then IBM doubles the processors' cache size,
making 3081K 2-CPU about the same MIPS as Amdahl 1-CPU (modulo 3081K
2-CPU MVS throughput was only .6-.75 the throughput of Amdahl 1-CPU,
because of its significant multiprocessor overhead). Then because
ACP/TPF didn't have SMP, tightly-coupled, multiprocessor support,
there was concern that the whole ACP/TPF market would move to
Amdahl. The 3081 2nd CPU was in the middle of the box; the concern
was that just removing it (for 1-CPU 3083) would make the box top
heavy and prone to falling over (so the box had to be rewired to move
CPU0 to the middle).
trivia: before MS/DOS:
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M (name take-off on IBM CP/67)
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP/67 at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
... more trivia:
I took a two-credit-hr intro to fortran/computers and at the end of
the semester, I was hired to rewrite 1401 MPIO for 360/30. Univ. was
getting a 360/67 for tss/360 replacing 709/1401, with a 360/30
temporarily replacing the 1401 pending availability of the 360/67
... and within a few weeks I had a 2000 card 360 assembler program
(univ. shut down the datacenter on weekends and I had the datacenter
dedicated, although 48hrs w/o sleep made monday classes hard). Within
a year of taking the intro class, I was hired fulltime responsible for
OS/360 (the 360/67 running as a 360/65, tss/360 didn't come to
production). Jan, 1968, CSC came out to install
CP/67 (precursor to VM370), 3rd installation after CSC itself and MIT
Lincoln Lab. I mostly got to play with it during my dedicated weekend
time, initially working on pathlengths for running OS/360
in virtual machine. OS/360 benchmark ran 322secs on real
machine, initially 856secs in virtual machine (CP67 CPU
534secs), after a couple months I have reduced that CP67 CPU from
534secs to 113secs. I then start rewriting the dispatcher, (dynamic
adaptive resource manager/default fair share policy) scheduler,
paging, adding ordered seek queuing (from FIFO) and multi-page
transfer channel programs (from FIFO and optimized for
transfers/revolution, getting 2301 paging drum from 70-80 4k
transfers/sec to channel transfer peak of 270; see the sketch after
this paragraph). Six months after univ
initial install, CSC was giving one week class in LA. I arrive on
Sunday afternoon and asked to teach the class, it turns out that the
people that were going to teach it had resigned the Friday before to
join one of the 60s CP67 commercial online spin-offs.
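A minimal sketch of the two queuing ideas above (plain Python, not
the actual CP67 code; the request fields and the nine-slot figure in
the example are hypothetical):
  # (1) ordered seek queuing: keep disk requests sorted by arm position
  #     instead of FIFO; (2) drum paging: chain all pending page transfers
  #     in rotational-slot order so several complete per revolution
  def ordered_seek_insert(queue, request, current_cyl):
      queue.append(request)
      queue.sort(key=lambda r: abs(r["cyl"] - current_cyl))
  def drum_channel_program(pending, slots_per_rev):
      by_slot = sorted(pending, key=lambda r: r["slot"] % slots_per_rev)
      return [(r["op"], r["slot"], r["page"]) for r in by_slot]
  reqs = [{"op": "read", "slot": 7, "page": 18},
          {"op": "write", "slot": 2, "page": 52}]
  print(drum_channel_program(reqs, 9))   # transfers ordered by rotational slot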
Originally CP67 delivered with 1052&2741 terminal support and
automagic terminal type (switching terminal type port scanner with SAD
CCW). Univ. had some ASCII TTYs and I add TTY integrated with
automagic terminal type. I then wanted to have a single terminal
dial-in phone number ("hunt group"), but it didn't work since IBM had
taken a short cut and hard-wired line-speed for each port. This kicks
off the univ clone controller effort: build a channel interface board
for an Interdata/3 programmed to emulate an IBM controller with the
addition of auto-baud terminal line support. This is upgraded to an
Interdata/4 for the channel interface and clusters of Interdata/3s for
ports. Interdata (and then Perkin-Elmer) sells it as a clone
controller and four of us are written up as responsible for (some part
of) the clone controller business
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
CP67l, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
--
virtualization experience starting Jan1968, online at home since Mar1970
Linux Clusters
Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Linux Clusters
Date: 21 Oct, 2025
Blog: Facebook
Basically cluster supercomputers and cloud clusters are similar, with
large numbers of linux servers all tied together; a recent article
notes the top 500 supercomputers are all linux clusters. A large cloud
operation can have a score or more of megadatacenters around the
world, each megadatacenter with half million or more linux server
blades and each server blade with ten times the processing of a max
configured mainframe (and enormous automation, a megadatacenter run
with 70-80 staff). A decade ago there were articles about being able
to use a credit card at a cloud operation to automagically spin up
blades for a 3hr supercomputer ranking in the top 40 in the world.
trivia: 1988, IBM branch asks if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes fibre channel standard ("FCS", initially 1gbit, full-duplex,
got RS/6000 cards capable of 200mbytes/sec aggregate for use with
64-port FCS switch). In 1990s, some serial stuff that POK had been
working with for at least the previous decade is released as ESCON
(when it is already obsolete, 10mbytes/sec, upgraded to
17mbytes/sec). Then some POK engineers become involved with FCS and
define heavy weight protocol that significantly reduces ("native")
throughput, which ships as "FICON". Latest public benchmark I've seen
was z196 "Peak I/O" getting 2M IOPS using 104 FICON (20K
IOPS/FICON). About the same time a FCS is announced for E5-2600 server
blades claiming over a million IOPS (two such FCS having higher
throughput than 104 FICON). Also IBM pubs recommended that SAPs
(system assist processors that do actual I/O) be held to 70% CPU (or
around 1.5M IOPS) and no CKD DASD has been made for decades, all
being simulated on industry standard fixed-block devices.
The max configured z196 benchmarked at 50BIPS and went for $30M. An
E5-2600 server blade benchmarked at 500BIPS (ten times z196, same
industry benchmark, number of program iterations compared to industry
MIPS/BIPS reference platform) and had an IBM base list price of $1815
(shortly later industry pubs had blade server component makers
shipping half their products directly to cloud operations that
assemble their own servers at 1/3rd the cost of brand name servers
... and IBM sells off its server business).
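The price/performance gap implied by those numbers, spelled out
(Python, figures from the text):
  print(30_000_000 / 50)   # z196: $600,000 per BIPS
  print(1815 / 500)        # E5-2600 blade: $3.63 per BIPS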
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
--
virtualization experience starting Jan1968, online at home since Mar1970
Linux Clusters
Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Linux Clusters
Date: 24 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#34 Linux Clusters
Also 1988, Nick Donofrio approved HA/6000, originally for NYTimes to
move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I
rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had DEC
VAXCluster support in same source base with unix). S/88 Product
Administrator started taking us around to their customers and also had
me write a section for the corporate continuous availability document
(it gets pulled when both AS400/Rochester and mainframe/POK complain
they couldn't meet requirements). Also previously worked on original
SQL/relational, System/R with Jim Gray and Vera Watson.
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid92 and 128-system
clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), and we were
told couldn't work on clusters with more than 4-systems, then leave
IBM a few months later.
Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to
MIPS/BIPS reference platform):
ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
Had left IBM and was brought in as consultant to small client/server
startup by two former Oracle employees (whom I had worked with on
RDBMS and who were in the Ellison/Hester meeting) who were there
responsible for
something called "commerce server" and they wanted to do payment
transactions on the server. The startup had also invented this
technology called SSL they wanted to use, it is now sometimes called
"electronic commerce". I had responsibility for everything between
webservers and payment networks. Based on procedures, documentation
and software I had to do for electronic commerce, I did a talk on "Why
Internet Wasn't Business Critical Dataprocessing" that (Internet/IETF
standards editor) Postel sponsored at ISI/USC.
The executive we reported to had gone over to head-up Somerset/AIM
(Apple, IBM, Motorola) and does single chip Power/PC ... using
Motorola 88K bus&cache ... enabling multiprocessor configurations
... enabling beefing up clusters with multiprocessor systems.
i86 chip makers then do a hardware layer that translates i86 instructions
into RISC micro-ops for actual execution (largely negating throughput
difference between RISC and i86); 1999 industry benchmark:
IBM PowerPC 440: 1,000MIPS
Pentium3: 2,054MIPS (twice PowerPC 440)
Early benchmark numbers are from actual industry benchmarks; later
numbers use IBM pubs giving percent change since the previous
generation:
z900, 16 processors, 2.5BIPS (156MIPS/core), Dec2000
z990, 32 cores, 9BIPS (281MIPS/core), 2003
z9, 54 cores, 18BIPS (333MIPS/core), July2005
z10, 64 cores, 30BIPS (469MIPS/core), Feb2008
z196, 80 cores, 50BIPS (625MIPS/core), Jul2010
EC12, 101 cores, 75BIPS (743MIPS/core), Aug2012
z13, 140 cores, 100BIPS (710MIPS/core), Jan2015
z14, 170 cores, 150BIPS (862MIPS/core), Aug2017
z15, 190 cores, 190BIPS (1000MIPS/core), Sep2019
z16, 200 cores, 222BIPS (1111MIPS/core), Sep2022
z17, 208 cores, 260BIPS (1250MIPS/core), Jun2025
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
original SQL/relational posts
https://www.garlic.com/~lynn/submain.html#systemr
electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
posts mentioning being asked Jan1979 to do 4341 benchmark for national lab
looking at getting 70 for compute farm (sort of the leading edge of the coming
cluster supercomputing tsunami):
https://www.garlic.com/~lynn/2025b.html#67 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#48 VAX MIPS whatever they were, indirection in old architectures
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2021j.html#94 IBM 3278
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018d.html#42 Mainframes and Supercomputers, From the Beginning Till Today
https://www.garlic.com/~lynn/2017j.html#88 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017i.html#62 64 bit addressing into the future
https://www.garlic.com/~lynn/2017c.html#87 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#44 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016e.html#116 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#65 PL/I advertising
https://www.garlic.com/~lynn/2016d.html#64 PL/I advertising
https://www.garlic.com/~lynn/2015g.html#98 PROFS & GML
https://www.garlic.com/~lynn/2015f.html#35 Moving to the Cloud
https://www.garlic.com/~lynn/2015.html#78 Is there an Inventory of the Inalled Mainframe Systems Worldwide
https://www.garlic.com/~lynn/2014j.html#37 History--computer performance comparison chart
https://www.garlic.com/~lynn/2014g.html#83 Costs of core
https://www.garlic.com/~lynn/2013i.html#2 IBM commitment to academia
https://www.garlic.com/~lynn/2012j.html#2 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012d.html#41 Layer 8: NASA unplugs last mainframe
--
virtualization experience starting Jan1968, online at home since Mar1970
Linux Clusters
From: Lynn Wheeler <lynn@garlic.com>
Subject: Linux Clusters
Date: 24 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#34 Linux Clusters
https://www.garlic.com/~lynn/2025e.html#35 Linux Clusters
In the 70s, with the implosion of Future System (internal politics had
been killing off 370 efforts, claim is the lack of new 370s during FS
is credited with giving clone 370 makers their market foothold), there
was mad rush to get stuff back into the 370 product pipelines,
including kicking off quick&dirty 3033&3081 efforts in parallel.
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
I get asked to help with a 16-CPU 370, and we con the 3033 processor
engineers into helping in their spare time (a lot more interesting
than remapping 168 logic to 20% faster chips). Everybody thought it
was great until somebody tells the head of POK that it could be
decades before POK favorite son operating system ("MVS") had
(effective) 16-CPU support (MVS docs at the time saying 2-CPU systems
only had 1.2-1.5 times throughput of single CPU because of heavy SMP
overhead, aka MVS 2-CPU 3081K at same aggregate MIPS as Amdahl single
processor, only had .6-.75 the throughput); POK doesn't ship 16-CPU
system until after turn of century. Then head of POK invites some of
us to never visit POK again and directs 3033 processor engineers heads
down and no distractions.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
Linux Clusters
From: Lynn Wheeler <lynn@garlic.com>
Subject: Linux Clusters
Date: 24 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#34 Linux Clusters
https://www.garlic.com/~lynn/2025e.html#35 Linux Clusters
https://www.garlic.com/~lynn/2025e.html#36 Linux Clusters
related; 1972, Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
Early/mid 70s, was IBM's Future System; FS was totally different from
370 and was going to completely replace it. During FS, internal
politics was killing off 370 projects and lack of new 370 is credited
with giving the clone 370 makers (including Amdahl), their market
foothold.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
Late 80s, a senior disk engineer gets talk scheduled at annual,
internal, world-wide communication group conference, supposedly on
3174 performance. However, his opening was that the communication
group was going to be responsible for the demise of the disk
division. The disk division was seeing drop in disk sales with data
fleeing mainframe datacenters to more distributed computing friendly
platforms. The disk division had come up with a number of solutions,
but they were constantly being vetoed by the communication group (with
their corporate ownership of everything that crossed the datacenter
walls) trying to protect their dumb terminal paradigm. Senior disk
software executive partial countermeasure was investing in
distributed computing startups that would use IBM disks (he would
periodically ask us to drop in on his investments to see if we could
offer any assistance).
The communication group stranglehold on mainframe datacenters wasn't
just disk and a couple years later (20yrs after Learson's failure),
IBM has one of the largest losses in the history of US companies, and
was being reorganized into the 13 "baby blues" in preparation for
breaking up the company (take off on "baby bells" breakup decade
earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
--
virtualization experience starting Jan1968, online at home since Mar1970
Amazon Explains How Its AWS Outage Took Down the Web
From: Lynn Wheeler <lynn@garlic.com>
Subject: Amazon Explains How Its AWS Outage Took Down the Web
Date: 25 Oct, 2025
Blog: Facebook
Amazon Explains How Its AWS Outage Took Down the Web
https://www.wired.com/story/amazon-explains-how-its-aws-outage-took-down-the-web/
1988, Nick Donofrio approved HA/6000, originally for NYTimes to move
their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename
it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had DEC
VAXCluster support in same source base with unix). S/88 Product
Administrator started taking us around to their customers and also had
me write a section for the corporate continuous availability
document (it gets pulled when both AS400/Rochester and mainframe/POK
complain they couldn't meet requirements). Also previously worked on
original SQL/relational, System/R with Jim Gray and Vera Watson.
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid92 and 128-system
clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), and we were
told couldn't work on clusters with more than 4-systems, then leave
IBM a few months later.
Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to
MIPS/BIPS reference platform):
ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
Had left IBM and was brought in as consultant to small client/server
startup by two former Oracle employees (whom I had worked with on
RDBMS and who were in the Ellison/Hester meeting) who were there
responsible for
something called "commerce server" and they wanted to do payment
transactions on the server. The startup had also invented this
technology called SSL they wanted to use, it is now sometimes called
"electronic commerce". I had responsibility for everything between
webservers and payment networks. Based on procedures, documentation
and software I had to do for electronic commerce, I did a talk on "Why
Internet Wasn't Business Critical Dataprocessing" (including it took
3-10 times the original application effort to turn something into a
"service") that (Internet/IETF standards editor) Postel sponsored at
ISI/USC.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
posts mentioning talk "Why Internet Wasn't Business Critical
Dataprocessing":
https://www.garlic.com/~lynn/2025e.html#35 Linux Clusters
https://www.garlic.com/~lynn/2025e.html#23 IBM Token-Ring
https://www.garlic.com/~lynn/2025d.html#111 ARPANET, NSFNET, Internet
https://www.garlic.com/~lynn/2025d.html#42 IBM OS/2 & M'soft
https://www.garlic.com/~lynn/2025b.html#97 Open Networking with OSI
https://www.garlic.com/~lynn/2025b.html#41 AIM, Apple, IBM, Motorola
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024g.html#80 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#71 Netscape Ecommerce
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
--
virtualization experience starting Jan1968, online at home since Mar1970
Amazon Explains How Its AWS Outage Took Down the Web
From: Lynn Wheeler <lynn@garlic.com>
Subject: Amazon Explains How Its AWS Outage Took Down the Web
Date: 25 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#38 Amazon Explains How Its AWS Outage Took Down the Web
Summary of the Amazon DynamoDB Service Disruption in the Northern Virginia (US-EAST-1) Region
https://aws.amazon.com/message/101925/
While doing electronic commerce, I was working with some contractors
that were also doing work for GOOGLE in its early infancy. They
initially were doing DNS updates for load balancing ... but that
resulted in all sorts of DNS issues. They then modified the Google
perimeter routers to support the load balancing function.
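Not what was actually deployed, just a minimal sketch of the
DNS-style round-robin idea (Python; hostnames and addresses are
hypothetical), and why cached answers make it coarse compared to
doing the balancing in the perimeter routers:
  import itertools
  servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical addresses
  rotation = itertools.cycle(range(len(servers)))
  def resolve(hostname):
      # each lookup returns the list rotated one position; clients use the
      # first entry, spreading load -- but resolvers cache answers, so the
      # spread is only approximate
      start = next(rotation)
      return servers[start:] + servers[:start]
  for _ in range(3):
      print(resolve("www.example.com"))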
other trivia: In the early 80s, got HSDT project, T1 and faster
computer links (and arguments with the communication group; in the
60s, IBM had 2701s that supported T1 links, but in the 70s with the
transition to SNA/VTAM and other issues, links were capped at 56kbits;
FSD did get S1 Zirpel T1 cards for gov. customers that were having
their 2701s failing). Was also working with the NSF director and was
supposed to get
$20M to interconnect the NSF supercomputer centers. Then congress cuts
the budget, some other things happen, then RFP was released (in part
based on what we already had running). NSF 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning
bid). As regional networks connect in, NSFnet becomes the NSFNET
backbone, precursor to the modern internet.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
Posts mentioning Series/1 Zirpel T1 cards:
https://www.garlic.com/~lynn/2025d.html#47 IBM HSDT and SNA/VTAM
https://www.garlic.com/~lynn/2025c.html#70 Series/1 PU4/PU5 Support
https://www.garlic.com/~lynn/2025b.html#120 HSDT, SNA, VTAM, NCP
https://www.garlic.com/~lynn/2025b.html#114 ROLM, HSDT
https://www.garlic.com/~lynn/2025b.html#96 OSI/GOSIP and TCP/IP
https://www.garlic.com/~lynn/2025b.html#93 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#43 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#40 IBM APPN
https://www.garlic.com/~lynn/2025b.html#18 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#15 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#99 Terminals
https://www.garlic.com/~lynn/2024g.html#79 Early Email
https://www.garlic.com/~lynn/2024e.html#34 VMNETMAP
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023c.html#35 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2018b.html#9 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2016h.html#26 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016d.html#27 Old IBM Mainframe Systems
https://www.garlic.com/~lynn/2015e.html#83 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2014f.html#24 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013j.html#60 Mainframe vs Server - The Debate Continues
https://www.garlic.com/~lynn/2013j.html#43 8080 BASIC
https://www.garlic.com/~lynn/2013j.html#37 8080 BASIC
https://www.garlic.com/~lynn/2013g.html#71 DEC and the Bell System?
https://www.garlic.com/~lynn/2011g.html#75 We list every company in the world that has a mainframe computer
https://www.garlic.com/~lynn/2010e.html#83 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2009j.html#4 IBM's Revenge on Sun
https://www.garlic.com/~lynn/2008l.html#63 Early commercial Internet activities (Re: IBM-MAIN longevity)
https://www.garlic.com/~lynn/2008e.html#45 1975 movie "Three Days of the Condor" tech stuff
https://www.garlic.com/~lynn/2007f.html#80 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006n.html#25 sorting was: The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2004l.html#7 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004g.html#37 network history
https://www.garlic.com/~lynn/2003m.html#28 SR 15,15
https://www.garlic.com/~lynn/2003d.html#13 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Boca and IBM/PCs
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Boca and IBM/PCs
Date: 25 Oct, 2025
Blog: Facebook
30yrs of PC market ("IBM" PCs increasingly dominated by "clones")
https://arstechnica.com/features/2005/12/total-share/
I had been posting to the internal PC forums the quantity-one prices
for clones advertised in the Sunday SJMN ... totally different from
Boca projections. The head of POK mainframe is moved to Boca to head
up PCs. They contract with Dataquest (since bought by Gartner) for a
study of the PC market future, including a video-taped roundtable of
Silicon Valley experts. I'd known the Dataquest person for a decade
and was asked to be one of the Silicon Valley "experts" (they promised
to garble my identity so Boca wouldn't recognize me as an IBM
employee), and after clearing it with my IBM management, I agreed to
participate.
trivia: Late 70s and early 80s, I had been blamed for online computer
conferencing on the internal network (precursor to social media,
larger than arpanet/internet from just about the beginning until
sometime mid/late 80s --- about the time it was forced to convert to
SNA/VTAM). Only about 300 actually participated, but claims were that
25,000 were reading. When the corporate executive committee was told
about it, folklore is 5of6 wanted to fire me. Results included
officially sanctioned software & forums and a researcher paid to sit
in the back of my office for nine months studying how I communicated:
face-to-face, telephone, all incoming & outgoing email, and logs of
instant messages (results were IBM reports, papers and conference
talks, books, and a Stanford PhD joint with language and computer AI).
AWD (workstation division) had done their own cards for PC/RT (16bit
PC/AT bus) ... including 4mbit Token-Ring card. For the RS/6000
(w/microchannel, 32bit bus), AWD was told they couldn't do their own
cards, but had to use the (communication group heavily performance
kneecapped) PS2 cards. It turns out the PS2 microchannel 16mbit
Token-Ring card had lower throughput than the PC/RT 4mbit Token-Ring
card (i.e. joke that PC/RT server with 4mbit T/R would have higher
throughput than RS/6000 server with 16mbit T/R).
The new Almaden Research bldg had been heavily provisioned with CAT
wiring assuming 16mbit token-ring, but they found a 10mbit ethernet
LAN (over CAT wiring) had lower latency and higher aggregate
throughput than a 16mbit token-ring LAN. They also found that $69
10mbit ethernet cards had higher throughput than the $800 PS2
microchannel 16mbit token-ring
cards. We were out giving customer executive presentations on TCP/IP,
10mbit ethernet, 3-tier architecture, high-performance routers, and
distributed computing (including comparisons with standard IBM
offerings), and taking misinformation barbs in the back from the SNA,
SAA, & token-ring forces. The Dallas E&S center published something
purported to be a comparison of 16mbit T/R and Ethernet ... but the
only (remotely valid) explanation I could come up with was that they
compared against the early 3mbit ethernet prototype predating the
listen-before-transmit (CSMA/CD) part of the Ethernet protocol
standard.
About the same time, a senior disk engineer gets a talk scheduled at
the annual, internal, world-wide communication group conference,
supposedly on 3174 performance. However, his opening was that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing a drop in disk sales, with
data fleeing mainframe datacenters to more distributed-computing
friendly platforms. The disk division had come up with a number of
solutions, but they were constantly being vetoed by the communication
group (with their corporate ownership of everything that crossed the
datacenter walls) trying to protect their dumb terminal paradigm. The
senior disk software executive's partial countermeasure was investing
in distributed computing startups that would use IBM disks (he would
periodically ask us to drop in on his investments to see if we could
offer any assistance). The communication group stranglehold on
mainframe datacenters wasn't just disks, and a couple of years later
IBM has one of the largest losses in the history of US companies and
was being reorganized into the 13 "baby blues" in preparation for
breaking up the company ("baby blues" being a take-off on the "baby
bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM communication group predicted to be responsible for demise of disk
division
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 360/85
Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/85
Date: 26 Oct, 2025
Blog: Facebook
I was blamed for online computer conferencing (precursor to social
media) in the late 70s and early 80s on the internal network (larger
than arpanet/internet from just about the beginning until sometime
mid/late 80s when it was forced to convert to SNA/VTAM). It really
took off in the spring of 1981 when I distributed a trip report of a
visit to Jim Gray at Tandem (only about 300 actually participated, but
claims were that 25,000 were reading). When the corporate executive
committee was told, there was something of an uproar (folklore is 5of6
wanted to fire me), with some task forces that resulted in official
online conferencing software and officially sanctioned moderated
forums ... also a researcher was paid to study how I communicated,
sitting in the back of my office for 9 months taking notes on my
conversations (and also getting copies of all incoming/outgoing email
and logs of all instant messages), resulting in research reports,
papers, conference talks, books and a Stanford PhD joint with language
and computer AI. One of the observations:
Date: 04/23/81 09:57:42
To: wheeler
your ramblings concerning the corp(se?) showed up in my reader
yesterday. like all good net people, i passed them along to 3 other
people. like rabbits interesting things seem to multiply on the
net. many of us here in pok experience the sort of feelings your mail
seems so burdened by: the company, from our point of view, is out of
control. i think the word will reach higher only when the almighty $$$
impact starts to hit. but maybe it never will. its hard to imagine one
stuffed company president saying to another (our) stuffed company
president i think i'll buy from those inovative freaks down the
street. '(i am not defending the mess that surrounds us, just trying
to understand why only some of us seem to see it).
bob tomasulo and dave anderson, the two poeple responsible for the
model 91 and the (incredible but killed) hawk project, just left pok
for the new stc computer company. management reaction: when dave told
them he was thinking of leaving they said 'ok. 'one word. 'ok. ' they
tried to keep bob by telling him he shouldn't go (the reward system in
pok could be a subject of long correspondence). when he left, the
management position was 'he wasn't doing anything anyway. '
in some sense true. but we haven't built an interesting high-speed
machine in 10 years. look at the 85/165/168/3033/trout. all the same
machine with treaks here and there. and the hordes continue to sweep
in with faster and faster machines. true, endicott plans to bring the
low/middle into the current high-end arena, but then where is the
high-end product development?
... snip ... top of post, old email index
trivia: One of my hobbies was enhanced production operating systems
for internal datacenters, including disk bldg14/engineering and
bldg15/product test across the street. Bldg15 got early engineering
processors for I/O testing and got an (Endicott) engineering 4341 in
1978. The branch office heard about it and in Jan1979 con'ed me into
doing benchmarks for a national lab looking at getting 70 for a
compute farm (sort of the leading edge of the coming cluster
supercomputing tsunami).
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal networking posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
IBM System/360 Model 85 Functional Characteristics ©1967 (2.6 MB)
https://bitsavers.org/pdf/ibm/360/functional_characteristics/A22-6916-0_System_360_Model_85_Functional_Characteristics_1967.pdf
other
https://en.wikipedia.org/wiki/IBM_System/360_Model_85
https://en.wikipedia.org/wiki/Solid_Logic_Technology
https://en.wikipedia.org/wiki/Cache_(computing)
https://en.wikipedia.org/wiki/Microcode
https://en.wikipedia.org/wiki/IBM_System/370_Model_165
https://en.wikipedia.org/wiki/Floating-point_arithmetic
https://en.wikipedia.org/wiki/IBM_System/360
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 360/85
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/85
Date: 26 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#41 IBM 360/85
Amdahl wins the battle to make ACS 360-compatible. Then ACS/360 is
killed (folklore was concern that it would advance the
state-of-the-art too fast) and Amdahl then leaves IBM (before Future
System); end of ACS/360:
https://people.computing.clemson.edu/~mark/acs_end.html
Besides my hobby of doing enhanced production operating systems for
internal datacenters ... and wandering around internal datacenters
... I spent some amount of time at user group meetings (like SHARE)
and wandering around customers. The director of one of the largest
(customer) financial datacenters liked me to drop in and talk
technology. At one point, the branch manager horribly offended the
customer and, in retaliation, they ordered an Amdahl machine (a lonely
Amdahl clone 370 in a vast sea of "blue"). Up until then Amdahl had
been selling into the univ. & tech/scientific markets, but clone 370s
had yet to break into the IBM true-blue commercial market ... and this
would be the first. I got asked to go spend 6m-12m on site at the
customer (to help obfuscate the reason for the Amdahl order?). I
talked it over with the customer, who said that while he would like to
have me there, it would have no effect on the decision, so I declined
the offer. I was then told the branch manager was a good sailing buddy
of the IBM CEO and I could forget a career, promotions, raises.
Early 70s, there was the Future System project (and internal politics
was killing off 370 efforts; the claim is that the lack of new 370s
during FS contributed to giving clone 370 makers their market
foothold)
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
When FS implodes, there was a mad rush to get stuff back into the 370
product pipelines, including kicking off quick&dirty 3033&3081 efforts
in parallel. I got asked to help with a 16-CPU 370, and we con the
3033 processor engineers into helping in their spare time (a lot more
interesting than remapping 168 logic to 20% faster chips). Everybody
thought it was great until somebody tells the head of POK that it
could be decades before the POK favorite son operating system ("MVS")
had (effective) 16-CPU support (MVS docs at the time saying 2-CPU
systems only had 1.2-1.5 times the throughput of a single CPU because
of heavy SMP overhead). Then the head of POK invites some of us to
never visit POK again and directs the 3033 processor engineers heads
down and no distractions.
The original 3081D (2-CPU) aggregate MIPS was less than an Amdahl
1-CPU system. Then IBM doubles the processors' cache size, making the
3081K 2-CPU about the same MIPS as the Amdahl 1-CPU. An MVS 2-CPU
3081K, at the same aggregate MIPS as the Amdahl single processor, only
had .6-.75 the throughput (because of MVS multiprocessor overhead).
POK doesn't ship a 16-CPU system until after the turn of the century.
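As a back-of-envelope check on where the .6-.75 figure comes from, a
minimal sketch with assumed round numbers (illustrative only, not
actual benchmark data):

  # hypothetical round numbers, only to illustrate the ratio arithmetic
  amdahl_1cpu_mips = 14.0                       # assumed Amdahl single processor
  ibm_3081k_aggregate = 14.0                    # 3081K 2-CPU at about the same aggregate MIPS
  ibm_3081k_per_cpu = ibm_3081k_aggregate / 2   # each 3081K CPU ~half the Amdahl CPU

  # MVS 2-CPU throughput was only 1.2-1.5 times a single CPU (SMP overhead)
  mvs_low  = 1.2 * ibm_3081k_per_cpu            # ~8.4 "effective" MIPS
  mvs_high = 1.5 * ibm_3081k_per_cpu            # ~10.5 "effective" MIPS

  print(mvs_low / amdahl_1cpu_mips, mvs_high / amdahl_1cpu_mips)   # ~0.6 and ~0.75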
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
some ACS/360 and Amdahl clone 370s posts
https://www.garlic.com/~lynn/2025e.html#16 CTSS, Multics, Unix, CSC
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025d.html#61 Amdahl Leaves IBM
https://www.garlic.com/~lynn/2025c.html#49 IBM And Amdahl Mainframe
https://www.garlic.com/~lynn/2024f.html#122 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#23 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2022g.html#59 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022b.html#88 Computer BUNCH
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#105 IBM Future System
https://www.garlic.com/~lynn/2021e.html#66 Amdahl
https://www.garlic.com/~lynn/2021.html#39 IBM Tech
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 360/85
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/85
Date: 26 Oct, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025e.html#41 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#42 IBM 360/85
1970, HAWK, 30MIPS processor
Date: 05/12/81 13:46:19
To: wheeler
RE: Competiton for resources in IBM
Before Bob Tomasulo left to go to work for STC, he told me many
stories about IBM. Around 1970, there was a project called HAWK. It
was to be a 30 MIPS uniprocessor. Bob was finishing up on the 360/91,
and wanted to go work on HAWK as his next project. He was told by
management that "there were already too many good people working over
there, and they really couldn't afford to let another good person work
on it"! They forced him to work on another project that was more
"deserving" of his talents. Bob never forgave them for that.
I guess IBM feels that resources are to be spread thinly, and that no
single project can have lots of talent on it. Any project that has
lots of talent will be raided sooner or later.
... snip ... top of post, old email index
Amdahl had won the battle to make ACS 360-compatible. Then ACS/360 is
killed (folklore was concern that it would advance the
state-of-the-art too fast) and Amdahl then leaves IBM (before Future
System); end of ACS/360:
https://people.computing.clemson.edu/~mark/acs_end.html
Shortly after joining IBM, I was asked to help with adding
multithreading (see patent refs in the ACS web page) to the 370/195.
The 195 had out-of-order execution, but no branch prediction (or
speculative execution), and conditional branches drained the pipeline
so most code only ran at half rated speed. Adding another i-stream,
simulating a 2nd CPU with each i-stream running at half speed, could
keep all the execution units running at full speed (modulo MVT&MVS
2-CPU multiprocessor support only getting 1.2-1.5 times the throughput
of a single CPU, because of the multiprocessor implementation). Then
the decision was made to add virtual memory to all 370s, and it was
decided that it wasn't practical to add virtual memory to the 370/195
(so the multithreading was killed).
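A minimal sketch of the throughput argument, with purely illustrative
numbers (not actual 370/195 figures): a second i-stream fills the
issue-slot bubbles left while branches drain the pipeline, even though
each stream still looks like a half-speed CPU.

  # hypothetical numbers only, to illustrate the argument (not 370/195 data)
  rated_mips   = 10.0        # assumed rated speed of the execution units
  branch_stall = 0.5         # issue slots lost while conditional branches drain the pipe

  one_istream = rated_mips * (1 - branch_stall)    # ~5 MIPS: "half rated speed"

  # a 2nd i-stream (hardware appears as two half-speed CPUs) fills the bubbles:
  two_istreams_hw = rated_mips                     # ~10 MIPS of hardware throughput

  # but MVT/MVS 2-CPU support only delivered 1.2-1.5x a single CPU:
  two_istreams_mvs = (1.2 * one_istream, 1.5 * one_istream)   # ~6 to ~7.5 MIPS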
Early 70s, there was the Future System project (and internal politics
was killing off new 370 efforts; the claim is that the lack of new
370s during FS contributed to giving clone 370 makers their market
foothold)
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
HAWK may have been killed like Amdahl's ACS/360, or because of the
virtual memory decision (like the 370/195 multithreading), or because
of Future System (I don't know).
Besides my hobby of doing enhanced production operating systems for
internal datacenters ... and wandering around internal datacenters
... I spent some amount of time at user group meetings (like SHARE)
and wandering around customers. The director of one of the largest
(customer) financial datacenters liked me to drop in and talk
technology. At one point, the branch manager horribly offended the
customer and, in retaliation, they ordered an Amdahl machine (a lonely
Amdahl clone 370 in a vast sea of "blue"). Up until then Amdahl had
been selling into the univ. & tech/scientific markets, but clone 370s
had yet to break into the IBM true-blue commercial market ... and this
would be the first. I got asked to go spend 6m-12m on site at the
customer (to help obfuscate the reason for the Amdahl order?). I
talked it over with the customer, who said that while he would like to
have me there, it would have no effect on the decision, so I declined
the offer. I was then told the branch manager was a good sailing buddy
of the IBM CEO and I could forget a career, promotions, raises.
When FS implodes, there was a mad rush to get stuff back into the 370
product pipelines, including kicking off quick&dirty 3033&3081 efforts
in parallel. I got asked to help with a 16-CPU 370, and we con the
3033 processor engineers into helping in their spare time (a lot more
interesting than remapping 168 logic to 20% faster chips). Everybody
thought it was great until somebody tells the head of POK that it
could be decades before the POK favorite son operating system ("MVS")
had (effective) 16-CPU support (MVS docs at the time saying 2-CPU
systems only had 1.2-1.5 times the throughput of a single CPU because
of heavy SMP overhead). Then the head of POK invites some of us to
never visit POK again and directs the 3033 processor engineers heads
down and no distractions.
The original 3081D (2-CPU) aggregate MIPS was less than an Amdahl
1-CPU system. Then IBM doubles the processors' cache size, making the
3081K 2-CPU about the same MIPS as the Amdahl 1-CPU. An MVS 2-CPU
3081K, at the same aggregate MIPS as the Amdahl single processor, only
had .6-.75 the throughput (because of MVS multiprocessor overhead).
POK doesn't ship a 16-CPU system until after the turn of the century.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
some posts mentioning 370/195 multithreading and virtual memory
https://www.garlic.com/~lynn/2025c.html#112 IBM Virtual Memory (360/67 and 370)
https://www.garlic.com/~lynn/2025c.html#79 IBM System/360
https://www.garlic.com/~lynn/2025b.html#35 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2025.html#32 IBM 3090
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#24 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024e.html#115 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024d.html#101 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#66 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#20 IBM Millicode
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2023f.html#89 Vintage IBM 709
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#12 Computer Server Market
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#60 370/195
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195
https://www.garlic.com/~lynn/2019d.html#62 IBM 370/195
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2017g.html#39 360/95
https://www.garlic.com/~lynn/2017c.html#26 Multitasking, together with OS operations
https://www.garlic.com/~lynn/2017.html#90 The ICL 2900
https://www.garlic.com/~lynn/2017.html#3 Is multiprocessing better then multithreading?
https://www.garlic.com/~lynn/2016h.html#7 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015c.html#26 OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainframe - Forbes
https://www.garlic.com/~lynn/2012d.html#73 Execution Velocity
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM SQL/Relational
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM SQL/Relational
Date: 27 Oct, 2025
Blog: Facebook
2nd half of the 70s, transferring to SJR on the west coast, I worked
with Jim Gray and Vera Watson on the original SQL/relational
implementation, System/R (done on VM370; Backus' office was just down
the hall and Codd's office was on the floor above). Jim refs:
https://jimgray.azurewebsites.net/
https://en.wikipedia.org/wiki/Jim_Gray_(computer_scientist)
and Vera Watson
https://en.wikipedia.org/wiki/Vera_Watson
... System/R
https://en.wikipedia.org/wiki/IBM_System_R
We were able to tech transfer ("under the radar" while the corporation
was preoccupied with "EAGLE") to Endicott for SQL/DS. Then when
"EAGLE" implodes, there was a request for how fast System/R could be
ported to MVS ... which was eventually released as DB2, originally for
decision-support *only*. I also got to wander around IBM (and non-IBM)
datacenters in Silicon Valley, including disk bldg14 (engineering) and
bldg15 (product test) across the street. They were running
pre-scheduled, 7x24, stand-alone testing and had mentioned recently
trying MVS, but it had 15min MTBF (requiring manual re-ipl) in that
environment. I offer to redo the I/O system to make it bulletproof and
never fail, allowing any amount of on-demand testing, greatly
improving productivity. Bldg15 then gets the 1st engineering 3033
outside POK processor engineering ... and since testing only took a
percent or two of CPU, we scrounge up a 3830 controller and 3330
string to set up our own private online service. Then bldg15 also gets
an engineering 4341 in 1978 and somehow the branch hears about it, and
in Jan1979 I'm con'ed into doing a 4341 benchmark for a national lab
that was looking at getting 70 for a compute farm (leading edge of the
coming cluster supercomputing tsunami).
In 1988, Nick Donofrio approved HA/6000, originally for the NYTimes to
transfer their newspaper system (ATEX) from DEC VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when we start doing technical/scientific cluster scale-up with
national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up
with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had
VAXCluster support in the same source base with Unix. I do a
distributed lock manager supporting VAXCluster semantics (and
especially Oracle and Ingres have a lot of input on significantly
improving scale-up performance). The S/88 Product Administrator
started taking us around to their customers and also had me write a
section for the corporate continuous availability document (it gets
pulled when both AS400/Rochester and mainframe/POK complain they
couldn't meet the requirements).
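As an illustration of the "VAXCluster semantics" mentioned above, a
minimal sketch of the classic six-mode DLM lock compatibility check
(my assumption of the standard VMS-style lock modes; the COMPAT table
and can_grant helper are hypothetical names, not the actual HA/CMP
code):

  # standard VMS-style DLM lock modes: NL (null), CR (concurrent read),
  # CW (concurrent write), PR (protected read), PW (protected write), EX (exclusive)
  COMPAT = {
      "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
      "CR": {"NL", "CR", "CW", "PR", "PW"},
      "CW": {"NL", "CR", "CW"},
      "PR": {"NL", "CR", "PR"},
      "PW": {"NL", "CR"},
      "EX": {"NL"},
  }

  def can_grant(requested, held_modes):
      # a new request is granted only if it is compatible with every mode already held
      return all(requested in COMPAT[h] for h in held_modes)

  print(can_grant("PR", ["CR", "PR"]))   # True  - shared readers coexist
  print(can_grant("EX", ["CR"]))         # False - exclusive must wait for readers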
Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive
Hester tells Ellison that we would have 16-system clusters mid92 and
128-system clusters ye92. Mid-Jan1992, we convinced FSD to bid HA/CMP
for gov. supercomputers. Late-Jan1992, HA/CMP is transferred for
announce as the IBM Supercomputer (for technical/scientific *ONLY*),
and we were told we couldn't work on clusters with more than 4
systems. We leave IBM a few months later.
There was some speculation that cluster scale-up would eat the
mainframe in the commercial market. 1993 benchmarks (number of program
iterations compared to the MIPS reference platform):
ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
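The cluster figures are just the per-system number scaled by cluster
size (a quick check of the arithmetic above):

  rs6000_990_mips = 126
  print(16 * rs6000_990_mips)    # ~2,016 MIPS, i.e. the ~2BIPS 16-system cluster
  print(128 * rs6000_990_mips)   # ~16,128 MIPS, i.e. the ~16BIPS 128-system cluster
  print(8 * 51)                  # 408 MIPS for the 8-CPU ES/9000-982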
The executive we reported to had gone over to head up Somerset/AIM
(Apple, IBM, Motorola) and does the single-chip Power/PC ... using the
Motorola 88K bus&cache ... enabling multiprocessor configurations
... further beefing up clusters with multiple processors/system.
other info in this recent thread
https://www.garlic.com/~lynn/2025e.html#41 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#42 IBM 360/85
https://www.garlic.com/~lynn/2025e.html#43 IBM 360/85
In between System/R (plus getting to play disk engineer in
bldgs14&15) and HA/CMP ... early 80s, I got the HSDT project, T1 and
faster computer links (both terrestrial and satellite) and battles
with CSG (in the 60s, IBM had the 2701 supporting T1; in the 70s, with
SNA/VTAM and its issues, links were capped at 56kbit ... and I mostly
had to resort to non-IBM hardware). The internal IBM network required
link encryptors; I hated what I had to pay for T1 encryptors, and
faster ones were hard to find. I became involved with an effort whose
objective was to handle at least 6mbytes/sec and cost less than $100
to build. The corporate crypto group first claimed it seriously
weakened the crypto standard and couldn't be used. It took me 3 months
to figure out how to explain what was happening: rather than weaker,
it was much stronger ... but it was a hollow victory. I was then told
that only one organization in the world was allowed to use such
crypto; I could make as many as I wanted, but they all had to be sent
to them. That was when I realized there were three kinds of crypto:
1) the kind they don't care about, 2) the kind you can't do, 3) the
kind you can only do for them.
I was also working with the NSF director and was supposed to get $20M
to interconnect the NSF Supercomputer centers. Then congress cuts the
budget, some other things happen, and eventually an RFP is released
(in part based on what we already had running). NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid).
As regional networks connect in, NSFnet becomes the NSFNET backbone,
precursor to the modern internet.
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
some posts mentioning three kinds of crypto
https://www.garlic.com/~lynn/2024e.html#125 ARPANET, Internet, Internal Network and DES
https://www.garlic.com/~lynn/2024e.html#28 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#75 Joe Biden Kicked Off the Encryption Wars
https://www.garlic.com/~lynn/2024b.html#36 Internet
https://www.garlic.com/~lynn/2023f.html#79 Vintage Mainframe XT/370
https://www.garlic.com/~lynn/2023b.html#57 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2022b.html#109 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021e.html#75 WEB Security
https://www.garlic.com/~lynn/2021e.html#58 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#17 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#70 IBM/BMI/MIB
https://www.garlic.com/~lynn/2021b.html#57 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2019b.html#23 Online Computer Conferencing
https://www.garlic.com/~lynn/2017c.html#69 ComputerWorld Says: Cobol plays major role in U.S. government breaches
https://www.garlic.com/~lynn/2016c.html#57 Institutional Memory and Two-factor Authentication
https://www.garlic.com/~lynn/2015c.html#85 On a lighter note, even the Holograms are demonstrating
https://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
--
virtualization experience starting Jan1968, online at home since Mar1970