List of Archived Posts

2025 Newsgroup Postings (10/06 - )

Mainframe and Cloud
Mainframe skills
PS2 Microchannel
Switching On A VAX

Mainframe and Cloud

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe and Cloud
Date: 06 Oct, 2025
Blog: Facebook

re:
https://www.garlic.com/2025d.html#112 Mainframe and Cloud

... note online/cloud tends to have capacity much greater than avg use
... to meet peak on-demand use which could be an order of magnitude
greater. cloud operators had heavily optimized server blade system
costs (including assembling their own systems for a fraction of brand
name servers) ... and power consumption was increasingly becoming a
major expense. There was then increasing pressure on makers of server
components to optimize power use, as well as allowing power use to drop
to zero when idle ... but instant on to meet on-demand requirements.

A large cloud operation can have a score (or more) of megadatacenters
around the world, each with half a million or more server blades, and
each server blade with ten times the processing of a max-configured
mainframe ... and enormous automation; a megadatacenter runs with 70-80
staff (upwards of 10,000 or more systems per staff). In the past there
were articles about being able to use a credit card to spin up,
on-demand for a couple of hrs, a cloud ("cluster") supercomputer (that
ranked in the top 40 in the world).

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

GML was invented at the IBM Cambridge Science Center in 1969 (about
the same time the CICS product appeared) ... after a decade it morphs
into ISO standard SGML and after another decade morphs into HTML at
CERN.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
CICS/BDAM posts
https://www.garlic.com/~lynn/submain.html#bdam

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe skills

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe skills
Date: 06 Oct, 2025
Blog: Facebook

In the CP67->VM370 morph (after the decision to add virtual memory to
all 370s), lots of stuff was simplified and/or dropped (including
multiprocessor support). Then in 1974 with VM370R2, I start adding a
bunch of stuff back in for my internal CSC/VM (including kernel re-org
for multiprocessor, but not the actual SMP support). Then with VM370R3,
I add SMP back in, originally for (online sales&marketing support) US
HONE so they could upgrade all their 168s to 2-CPU 168s (with a little
sleight of hand getting twice the throughput).

Then with the implosion of Future System
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
I get asked to help with a 16-CPU 370, and we con the 3033 processor
engineers into helping in their spare time (a lot more interesting
than remapping 168 logic to 20% faster chips). Everybody thought it
was great until somebody tells the head of POK that it could be
decades before POK's favorite son operating system ("MVS") had
(effective) 16-CPU support (MVS docs at the time saying 2-CPU systems
had 1.2-1.5 times the throughput of a single CPU; POK doesn't ship a
16-CPU system until after the turn of the century). The head of POK
then invites some of us to never visit POK again and directs the 3033
processor engineers: heads down and no distractions.

In the 2nd half of the 70s, transferring to SJR on the west coast, I
worked with Jim Gray and Vera Watson on the original SQL/relational,
System/R (done on VM370); we were able to tech transfer it ("under the
radar" while the corporation was pre-occupied with "EAGLE") to Endicott
for SQL/DS. Then when "EAGLE" implodes, there was a request for how
fast System/R could be ported to MVS ... which was eventually released
as DB2, originally for decision-support *only*.

I also got to wander around IBM (and non-IBM) datacenters in silicon
valley, including DISK bldg14 (engineering) and bldg15 (product test)
across the street. They were running pre-scheduled, 7x24, stand-alone
testing and had mentioned recently trying MVS, but it had a 15min MTBF
(requiring manual re-ipl) in that environment. I offer to redo the I/O
system to make it bulletproof and never fail, allowing any amount of
on-demand testing, greatly improving productivity. Bldg15 then gets the
1st engineering 3033 outside POK processor engineering ... and since
testing only took a percent or two of CPU, we scrounge up a 3830
controller and 3330 string to set up our own private online
service. Then bldg15 also gets an engineering 4341 in 1978 and somehow
a branch office hears about it, and in Jan1979 I'm con'ed into doing a
4341 benchmark for a national lab that was looking at getting 70 for a
compute farm (leading edge of the coming cluster supercomputing
tsunami).

A decade later in 1988, I got approval for HA/6000, originally for
NYTimes to port their newspaper system (ATEX) from DEC VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS
vendors (Oracle, Ingres, Sybase, Informix that had DEC VAXCluster
support in the same source base with UNIX). The IBM S/88 Product
Administrator was also taking us around to their customers and also had
me write a section for the corporate continuous availability strategy
document (it gets pulled when both Rochester/AS400 and POK/mainframe
complain).

In an early Jan92 meeting with the Oracle CEO, AWD executive Hester
tells Ellison that we would have 16-system clusters by mid92 and
128-system clusters by ye92. Mid-Jan92, we convince FSD to bid HA/CMP
for gov. supercomputers. Late-Jan92, cluster scale-up is transferred
for announce as IBM Supercomputer (for technical/scientific *only*) and
we were told we couldn't work on clusters with more than four systems
(we leave IBM a few months later).

There was some speculation that it would eat the mainframe in the
commercial market. 1993 benchmarks (number of program iterations
compared to the industry MIPS reference platform; cluster aggregate
arithmetic sketched after the list):

• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
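
The cluster numbers are just the single-system 990 number multiplied
out, assuming (optimistically) linear scale-up; a minimal Python sketch
of the arithmetic:

# minimal sketch of the 1993 numbers above; assumes linear cluster scale-up
es9000_982 = 8 * 51            # 8 CPUs x 51 MIPS/CPU = 408 MIPS
rs6000_990 = 126               # single-system MIPS
print(16 * rs6000_990)         # 2016 MIPS, ~2 BIPS for 16 systems
print(128 * rs6000_990)        # 16128 MIPS, ~16 BIPS for 128 systems
print((128 * rs6000_990) / es9000_982)  # ~40x a max-configured ES/9000-982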

The former executive we had reported to goes over to head up
Somerset/AIM (Apple, IBM, Motorola), single-chip RISC with the M88k
bus/cache (enabling clusters of shared-memory multiprocessors).

i86 chip makers then do a hardware layer that translates i86
instructions into RISC micro-ops for actual execution (largely negating
the throughput difference between RISC and i86); 1999 industry
benchmarks:

• IBM PowerPC 440: 1,000MIPS
• Pentium3: 2,054MIPS (twice PowerPC 440)

The early numbers are actual industry benchmarks; the later ones are
derived from IBM pubs giving percent change since the previous
generation (per-core arithmetic sketched after the list):

z900, 16 processors 2.5BIPS (156MIPS/core), Dec2000
z990, 32 cores, 9BIPS, (281MIPS/core), 2003
z9, 54 cores, 18BIPS (333MIPS/core), July2005
z10, 64 cores, 30BIPS (469MIPS/core), Feb2008
z196, 80 cores, 50BIPS (625MIPS/core), Jul2010
EC12, 101 cores, 75BIPS (743MIPS/core), Aug2012
z13, 140 cores, 100BIPS (710MIPS/core), Jan2015
z14, 170 cores, 150BIPS (862MIPS/core), Aug2017
z15, 190 cores, 190BIPS (1000MIPS/core), Sep2019
z16, 200 cores, 222BIPS (1111MIPS/core), Sep2022
z17, 208 cores, 260BIPS (1250MIPS/core), Jun2025
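
The per-core MIPS figures are just total BIPS divided by core count
(the list entries are rounded, so a few may differ slightly); a minimal
Python sketch using a few of the numbers above:

# per-core MIPS = (BIPS * 1000) / cores, using figures from the list above
for name, cores, bips in (("z900", 16, 2.5), ("z196", 80, 50), ("z17", 208, 260)):
    print(name, round(bips * 1000 / cores), "MIPS/core")
# prints: z900 156, z196 625, z17 1250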

Also in 1988, the branch office asks if I could help LLNL (national
lab) standardize some serial stuff they were working with, which
quickly becomes the fibre-channel standard, "FCS" (not First Customer
Ship), initially 1gbit/sec transfer, full-duplex, aggregate
200mbyte/sec (including some stuff I had done in 1980). Then POK gets
some of their serial stuff released with ES/9000 as ESCON (when it was
already obsolete, initially 10mbytes/sec, later increased to
17mbytes/sec). Then some POK engineers become involved with FCS and
define a heavy-weight protocol that drastically cuts throughput
(eventually released as FICON).

In 2010, a z196 "Peak I/O" benchmark was released, getting 2M IOPS
using 104 FICON (20K IOPS/FICON). About the same time an FCS is
announced for the E5-2600 server blade claiming over a million IOPS
(two such FCS having higher throughput than 104 FICON). Also, IBM pubs
recommend that SAPs (system assist processors that actually do the I/O)
be kept to 70% CPU (or about 1.5M IOPS), and no new CKD DASD has been
made for decades, all being simulated on industry-standard fixed-block
devices. Note: the 2010 E5-2600 server blade (16 cores, 31BIPS/core)
benchmarked at 500BIPS (ten times a max-configured z196).
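
A minimal Python sketch of the IOPS and BIPS arithmetic above (the
per-FCS million-IOPS figure is the claim quoted above, not a
measurement of mine):

# z196 "Peak I/O" vs FCS arithmetic, using the figures quoted above
peak_iops = 2_000_000                  # z196 benchmark total, over 104 FICON
print(peak_iops // 104)                # 19230 IOPS/FICON (rounded to 20K above)
fcs_iops = 1_000_000                   # "over a million" IOPS claimed per FCS
print(2 * fcs_iops >= peak_iops)       # True: two FCS match all 104 FICON
print(500 // 50)                       # 10: E5-2600 blade BIPS vs max z196 BIPS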

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
original SQL/relational posts
https://www.garlic.com/~lynn/submain.html#systemr
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
Fibre Channel Standard ("FCS") &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
DASD, CKD, FBA, multi-track posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

PS2 Microchannel

From: Lynn Wheeler <lynn@garlic.com>
Subject: PS2 Microchannel
Date: 07 Oct, 2025
Blog: Facebook

IBM AWD had done their own cards for the PC/RT workstation (16bit
AT-bus), including a 4mbit token-ring (T/R) card. For RS/6000
w/microchannel, they were told they couldn't do their own cards, but
had to use the (heavily performance-kneecapped by the communication
group) PS2 microchannel cards. It turned out that the PC/RT 4mbit
token-ring card had higher card throughput than the PS2 16mbit
token-ring card (the joke was that a PC/RT 4mbit T/R server would have
higher throughput than an RS/6000 16mbit T/R server).

Almaden research had been heavily provisioned with IBM CAT wiring,
assuming 16mbit T/R use. However, they found that 10mbit Ethernet LAN
(running over the IBM CAT wiring) had lower latency and higher
aggregate throughput than 16mbit T/R LAN. Also, $69 10mbit Ethernet
cards had much higher throughput than $800 PS2 microchannel 16mbit T/R
cards (the joke was the communication group trying to severely hobble
anything other than 3270 terminal emulation).

30yrs of PC market
https://arstechnica.com/features/2005/12/total-share/

Note the above article makes reference to the success of IBM PC clones
emulating the success of IBM mainframe clones. A big part of the IBM
mainframe clone success was the IBM Future System effort in the 1st
half of the 70s (it was going to completely replace 370 mainframes;
internal politics was killing off 370 efforts, and the claim is that
the lack of new 370s during the period is what gave the 370 clone
makers their market foothold).
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

PC/RT 4mbit T/R, PS2 16mbit T/R, 10mbit Ethernet
https://www.garlic.com/~lynn/2025d.html#81 Token-Ring
https://www.garlic.com/~lynn/2025d.html#8 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#2 Mainframe Networking and LANs
https://www.garlic.com/~lynn/2025c.html#114 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#88 IBM SNA
https://www.garlic.com/~lynn/2025c.html#56 IBM OS/2
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#41 SNA & TCP/IP
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring
https://www.garlic.com/~lynn/2024g.html#101 IBM Token-Ring versus Ethernet
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024e.html#64 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#76 Another IBM Downturn
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#83 IBM's Near Demise
https://www.garlic.com/~lynn/2023b.html#50 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#77 IBM/PC and Microchannel
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#18 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021d.html#15 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring

--
virtualization experience starting Jan1968, online at home since Mar1970

Switching On A VAX

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Switching On A VAX
Newsgroups: alt.folklore.computers
Date: Tue, 07 Oct 2025 16:49:38 -1000

Lars Poulsen <lars@beagle-ears.com> writes:

As someone who spent some time on async terminal drivers, for both
TTYs and IBM 2741-family terminals as well as in the communications
areas of OS/360 (minimally), Univac 1100 EXEC-8, RSX-11M, VMS and
embedded systems on PDP-11/10, Z80 and MC68000, I can maybe contribute
some perspective here.


In the 60s, lots of science/technical places and universities were
sold the 360/67 (w/virtual memory) for tss/360 ... but when tss/360
didn't come to production ... lots of places just used it as a 360/65
with os/360 (Michigan and Stanford wrote their own virtual memory
systems for the 360/67).

Some of the CTSS/7094 people went to the 5th flr to do Multics; others
went to the 4th flr to the IBM science center and did virtual machines,
the internal network, and lots of other stuff. CSC originally wanted a
360/50 to do virtual memory hardware mods, but all the spare 50s were
going to FAA/ATC, so they had to settle for a 360/40 to modify and did
(virtual machine) CP40/CMS. Then when the 360/67, standard with virtual
memory, became available, CP40/CMS morphs into CP67/CMS.

The univ was getting a 360/67 to replace 709/1401 and I had taken a
two credit hr intro to fortran/computers; at the end of the semester I
was hired to rewrite 1401 MPIO for the 360/30 (temporarily replacing
the 1401 pending the 360/67). Within a yr of taking the intro class,
the 360/67 arrives and I'm hired fulltime responsible for OS/360
(Univ. shut down the datacenter on weekends and I got it dedicated,
however 48hrs w/o sleep made monday classes hard).

Eventually CSC comes out to install CP67 (the 3rd install, after CSC
itself and MIT Lincoln Labs) and I mostly get to play with it during my
48hr weekend dedicated time. I initially work on pathlengths for
running OS/360 in a virtual machine. The test stream ran 322secs on the
real machine, initially 856secs in the virtual machine (CP67 CPU
534secs); after a couple months I have reduced CP67 CPU from 534secs to
113secs. I then start rewriting the dispatcher, the (dynamic adaptive
resource manager/default fair share policy) scheduler, and paging,
adding ordered seek queuing (from FIFO) and multi-page transfer channel
programs (from FIFO single-page transfers to chains optimized for
transfers/revolution, getting the 2301 paging drum from 70-80 4k
transfers/sec to a channel transfer peak of 270; drum arithmetic
sketched below). Six months after the univ's initial install, CSC was
giving a one-week class in LA. I arrive on Sunday afternoon and am
asked to teach the class; it turns out that the people who were going
to teach it had resigned the Friday before to join one of the 60s CP67
commercial online spin-offs.
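
A hedged back-of-envelope Python sketch of the 2301 numbers (the
~17.5ms/revolution figure is my assumption; the 70-80 and 270
transfers/sec are from the text above): with FIFO single-page requests
each I/O pays roughly half a revolution of rotational latency plus the
transfer, while rotationally-ordered chained channel programs can pick
up every queued page as it passes under the heads, approaching the
media rate.

# back-of-envelope, assuming ~17.5ms per 2301 revolution (my assumption)
rev = 0.0175                                 # seconds per revolution
pages_per_rev = 270 * rev                    # ~4.7 4k pages at full media rate
fifo = 1 / (rev / 2 + rev / pages_per_rev)   # half-rev latency + one transfer
chained = pages_per_rev / rev                # chained/ordered: media-rate limit
print(round(fifo), round(chained))           # ~80 vs 270 4k transfers/sec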

The original CP67 came with 1052 & 2741 terminal support with
automagic terminal identification, using the SAD CCW to switch the
controller's port terminal-type scanner. The univ. had some number of
TTY33&TTY35 terminals, and I add TTY ASCII terminal support integrated
with the automagic terminal-type identification. I then wanted a single
dial-in number ("hunt group") for all terminals. It didn't quite work:
IBM had taken a short cut and hard-wired the line speed for each
port. This kicks off a univ effort to do our own clone controller,
building a channel interface board for an Interdata/3 programmed to
emulate the IBM controller, with the addition that it could do auto
line-speed (dynamic auto-baud). It was later upgraded to an Interdata/4
for the channel interface with a cluster of Interdata/3s for port
interfaces. Interdata (and later Perkin-Elmer) sold it as a clone
controller, and four of us get written up as responsible for (some part
of) the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

The univ. library gets an ONR grant to do an online catalog and some
of the money is used for a 2321 datacell. IBM also selects it as a
betatest site for the original CICS product, and supporting CICS is
added to my tasks.

Then before I graduate, I'm hired fulltime into a small group in the
Boeing CFO office to help with the formation of Boeing Computer
Services (consolidating all dataprocessing into an independent business
unit). I think the Renton datacenter was the largest in the world
(360/65s arriving faster than they could be installed, boxes constantly
staged in hallways around the machine room; the joke was that Boeing
was getting 360/65s like other companies got keypunch machines). Lots
of politics between the Renton director and the CFO, who only had a
360/30 up at Boeing Field for payroll (although they enlarge the
machine room to install a 360/67 for me to play with when I wasn't
doing other stuff). Renton did have a (lonely) 360/75 (among all the
360/65s) that was used for classified work (black rope around the area,
heavy black felt draped over console lights & 1403s, with guards at the
perimeter when running classified). After I graduate, I join IBM CSC in
Cambridge (rather than staying with the Boeing CFO).

One of my hobbies after joining IBM CSC was enhanced production
operating systems for internal datacenters. At one time there was a
rivalry with the 5th flr over whether they had more total installations
(internal, development, commercial, gov, etc) running Multics or I had
more internal installations running my internal CSC/VM.

A decade later, I'm at SJR on the west coast and working with Jim Gray
and Vera Watson on the original SQL/relational implementation,
System/R. I also had numerous internal datacenters running my internal
SJR/VM system ... getting .11sec trivial interactive system
response. This was at the time of several studies showing that .25sec
response improved productivity.

The 3272/3277 controller/terminal had .089sec hardware response (plus
the .11sec system response resulted in .199sec response, meeting the
.25sec criterion). The 3277 still had a half-duplex problem: attempting
to hit a key at the same time as a screen write, the keyboard would
lock and you would have to stop and reset it. YKT was making a FIFO box
available: unplug the 3277 keyboard from the head, plug in the FIFO
box, and plug the keyboard into the FIFO box ... which avoided the
half-duplex keyboard lock.

Then IBM produced the 3274/3278 controller/terminal, where lots of the
electronics were moved back into the controller, reducing the cost to
make the 3278 but significantly increasing coax protocol chatter
... increasing hardware response to .3-.5secs depending on how much
data was written to the screen. Letters to the 3278 product
administrator complaining got back the response that the 3278 wasn't
designed for interactive computing ... but for data entry.
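
A minimal Python sketch of the response arithmetic above (the .25sec
figure is the productivity-study criterion mentioned earlier):

# terminal hardware response + system response vs the .25sec criterion
system = 0.11                            # SJR/VM trivial interactive response
print(round(system + 0.089, 3))          # 3272/3277: 0.199 sec, under .25
print(round(system + 0.3, 3), round(system + 0.5, 3))  # 3274/3278: 0.41-0.61 sec, over .25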

clone (pc ... "plug compatible") controller built w/interdata posts
https://www.garlic.com/~lynn/submain.html#360pcm
csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech
original sql/relational, "system/r" posts
https://www.garlic.com/~lynn/submain.html#systemr

some posts mentioning undergraduate work at univ & boeing
https://www.garlic.com/~lynn/2025d.html#112 Mainframe and Cloud
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025d.html#91 IBM VM370 And Pascal
https://www.garlic.com/~lynn/2025d.html#69 VM/CMS: Concepts and Facilities
https://www.garlic.com/~lynn/2025c.html#115 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#103 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#0 System Response

--
virtualization experience starting Jan1968, online at home since Mar1970

