List of Archived Posts
2025 Newsgroup Postings (05/11 - )
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Living Wage
- IBM System/R
- IBM System/R
- IBM 4341
- IBM 4341
- IBM 4341
- Cluster Supercomputing
- Cluster Supercomputing
- IBM System/R
- Is Parallel Programming Hard, And, If So, What Can You Do About It?
- APL and HONE
- Is Parallel Programming Hard, And, If So, What Can You Do About It?
- Is Parallel Programming Hard, And, If So, What Can You Do About It?
- IBM 8100
- IBM 4361 & DUMPRX
- IBM AIX
- 360 Card Boot
- IBM Downfall
- IBM 360 Programming
- IBM 360 Programming
- 360 Card Boot
- Is Parallel Programming Hard, And, If So, What Can You Do About It?
- IBM Downfall
- IBM Downfall
- IBM Downfall
- TCP/IP, Ethernet, Token-Ring
- IBM Downfall
- IBM Downfall
- IBM Mainframe
- Is Parallel Programming Hard, And, If So, What Can You Do About It?
- IBM 3090
- IBM & DEC DBMS
- SNA & TCP/IP
- SNA & TCP/IP
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 11 May, 2025
Blog: Facebook
Some other details (from recent post) ... related to quarter second
response
https://www.garlic.com/~lynn/2025b.html#115 SHARE, MVT, MVS, TSO
In early MVS days, CERN did an MVS/TSO vs VM370/CMS comparison, with the
analysis presented at SHARE in 1974 ... inside IBM, copies of the
presentation were stamped "IBM Confidential - Restricted" (2nd highest
security classification), available only on a "need to know" basis (for
those that didn't get a copy directly at SHARE)
MVS/TSO trivia: late 70s, SJR got a 370/168 for MVS and a 370/158 for
VM/370 (replacing the MVT 370/195), with several strings of 3330s all on
two-channel-switch 3830s connecting to both systems .... but the
strings & controllers were labeled MVS or VM/370, with strict rules that
MVS make no use of the VM/370 controllers/strings. One morning, an MVS
3330 was placed on a VM/370 3330 string, and within a few minutes
operations was getting irate phone calls from all over the bldg about
what had happened to response. Analysis showed that the problem was the
MVS 3330 (the OS/360 filesystem's extensive use of multi-track search
locks up the controller and all drives on that controller) that had been
placed on the VM/370 3330 string, and there were demands that the
offending MVS 3330 be moved. Operations said they would have to wait
until offshift. Then a single-pack VS1 system (highly optimized for
running under VM370, with hand-shaking) was put up on an MVS string and
brought up on the loaded 370/158 VM370 ... and was able to bring the MVS
168 to a crawl, alleviating a lot of the problems for VM370 users
(operations almost immediately agreed to move the offending MVS 3330).
Trivia: one of my hobbies after joining IBM was highly optimized
operating systems for internal datacenters. In the early 80s, there were
increasing studies showing quarter second response improved
productivity. The 3272/3277 had .086sec hardware response. Then the
3274/3278 was introduced with lots of the 3278 electronics moved back
into the 3274 controller, cutting 3278 manufacturing costs but
significantly driving up coax protocol chatter ... increasing hardware
response to .3sec-.5sec depending on the amount of data (making quarter
second impossible to achieve). Letters to the 3278 product administrator
complaining about interactive computing got a response that the 3278
wasn't intended for interactive computing but for data entry (sort of an
electronic keypunch). The 3272/3277 required .164sec system response for
a human to see quarter second (.086 + .164 = .25). Fortunately I had
numerous IBM systems in silicon valley with (90th percentile) .11sec
system response. I don't believe any TSO users ever noticed the 3278
issues, since they rarely ever saw even one second system
response. Later, IBM/PC 3277 emulation cards had 4-5 times the
upload/download throughput of 3278 emulation cards.
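A minimal sketch of the perceived-response arithmetic above (terminal
hardware response plus host system response; figures are the ones quoted
in this post):

# perceived response = terminal hardware response + host system response
def perceived(hw_response, sys_response):
    return hw_response + sys_response

# 3272/3277: .086 hw response; needs .164 system response to hit quarter second
print(perceived(0.086, 0.164))   # 0.25  -- quarter second achievable
# with the .11sec (90th percentile) internal systems mentioned above
print(perceived(0.086, 0.11))    # 0.196 -- comfortably under quarter second
# 3274/3278: .3-.5 hw response, over quarter second even with zero system time
print(perceived(0.3, 0.0))       # 0.3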
Future System was going to completely replace 370s:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
and internal politics were killing off 370 efforts (the lack of new 370s
is credited with giving the clone 370 makers their market foothold).
When FS imploded there was a mad rush to get stuff back into the 370
product pipelines, including kicking off the quick&dirty 3033&3081
efforts in parallel. The head of POK was also lobbying corporate to kill
the VM370 product, shut down the development group and transfer all the
people to POK for MVS/XA (Endicott eventually manages to save the VM370
product mission, but has to recreate a development group from scratch).
Endicott starts on XEDIT for release to customers. I send Endicott
email asking whether they might consider one of the internal 3270
fullscreen editors instead, which were much more mature, had more
function, and were faster. RED, NED, XEDIT, EDGAR, etc. had similar
capability ("EDIT" was the old CP67/CMS editor) ... but a simple CPU
usage test that I did (summary from '79) of the same set of operations
on the same file by all the editors showed the following CPU use (at the
time, "RED" was my choice):
RED 2.91/3.12
EDIT 2.53/2.81
NED 15.70/16.52
XEDIT 14.05/14.88
EDGAR 5.96/6.45
SPF 6.66/7.52
ZED 5.83/6.52
Endicott's reply was that it was the RED author's fault that it was so
much better than XEDIT, and therefore it should be his responsibility to
bring XEDIT up to RED's level.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
internal CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
posts mentioning RED, NED, XEDIT, EDGAR, SPF, ZED:
https://www.garlic.com/~lynn/2024.html#105 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2011p.html#112 SPF in 1978
https://www.garlic.com/~lynn/2011m.html#41 CMS load module format
https://www.garlic.com/~lynn/2006u.html#26 Assembler question
https://www.garlic.com/~lynn/2003d.html#22 Which Editor
some posts mentioning .11sec system response and 3272/3277
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025.html#127 3270 Controllers and Terminals
https://www.garlic.com/~lynn/2025.html#75 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024f.html#12 3270 Terminals
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024.html#92 IBM, Unix, editors
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023e.html#0 3270
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021c.html#92 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing
https://www.garlic.com/~lynn/2017e.html#26 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2014m.html#127 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2014h.html#106 TSO Test does not support 65-bit debugging?
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2014g.html#23 Three Reasons the Mainframe is in Trouble
https://www.garlic.com/~lynn/2013l.html#65 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012n.html#37 PDP-10 and Vax, was System/360--50 years--the future?
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2011d.html#53 3270 Terminal
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 11 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
Trivia: in the 90s, i86 chip makers implemented on-the-fly, pipelined
translation of i86 instructions to RISC micro-ops for execution, largely
negating the performance difference with RISC systems. Also
Somerset/AIM (Apple, IBM, Motorola) was formed to do a single-chip
801/RISC (with the Motorola 88k bus and cache supporting multiprocessor
configurations). The industry benchmark is the number of program
iterations compared to a reference platform (for a MIPS rating); 1999:
• single core IBM PowerPC 440, 1BIPS
• single core Pentium3, 2.054BIPS
and Dec2000:
• IBM z900, 16 processors 2.5BIPS (156MIPS/processor)
2010:
• IBM z196, 80 processors, 50BIPS (625MIPS/processor)
• E5-2600 server blade (two 8-core XEON chips) 500BIPS (30BIPS/core)
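A quick check of the per-processor/per-core arithmetic in the figures
above:

# aggregate BIPS figures quoted above, divided out per processor/core
z900_2000 = 2.5 / 16      # ~0.156 BIPS (156 MIPS) per processor
z196_2010 = 50.0 / 80     # 0.625 BIPS (625 MIPS) per processor
e5_2600   = 500.0 / 16    # ~31 BIPS per core (two 8-core XEON chips, quoted ~30)
print(z900_2000, z196_2010, e5_2600)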
Note: no CKD DASD have been made for decades, all being emulated on
industry fixed-block devices (increasingly SSD).
Cache miss/memory latency, when measured in count of processor cycles,
is similar to 60s disk latency measured in count of 60s processor
cycles (memory is the new disk). The current equivalent of 60s
multitasking is things like out-of-order execution, branch prediction,
speculative execution, etc (and, to further improve things, translating
CPU instructions into RISC micro-ops for actual execution scheduling).
Note that individual instruction timings can take multiple cycles
(translation, breaking into multiple parts, etc) ... but there is a
large amount of concurrent pipelining .... so a processor can complete
one instruction per cycle, even while it might take 10-50 cycles to
process each individual instruction.
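A minimal sketch of that latency-vs-throughput point, assuming a
hypothetical 20-cycle per-instruction latency (the text says 10-50) and
one new instruction issued per cycle:

LATENCY = 20     # hypothetical cycles from issue to completion per instruction
N = 1000         # instructions issued, one per cycle

completion_cycles = [issue + LATENCY for issue in range(N)]
total_cycles = completion_cycles[-1]        # cycle when the last instruction completes
print(total_cycles, N / total_cycles)       # ~1020 cycles, ~0.98 instructions/cycle
# once the pipeline is full, completions arrive one per cycle even though each
# individual instruction still takes LATENCY cycles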
60s undergraduate, took 2 credit hr intro to fortran/computers, end of
semester was hired to reimplement 1401 MPIO on 360/30. Univ was
getting 360/67 for tss/360, replacing 709/1401 and temporarily got
360/30 replacing 1401. Univ. shutdown datacenter on weekends and I had
whole place dedicated, although 48hrs w/o sleep affected monday
classes. I was given pile of hardware & software manuals and got to
design/implement my own monitor, device drivers, interrupt handlers,
error recovery, storage management, etc ... and within a few weeks had
2000 card assembler program. The 360/67 arrived within a year of taking
the intro class (tss/360 never came to production, so it ran as a
360/65) and I was hired fulltime responsible for os/360. Student fortran
ran in under a second on the 709 (tape->tape) but over a minute on
os/360. I install HASP
and cut the time in half. I then start redoing MFTR11 STAGE2 SYSGEN,
carefully placing datasets and PDS members to optimize seeks and
multi-track search, cutting another 2/3rds to 12.9secs. Student
Fortran never got better than 709 until I install UofWaterloo WATFOR
(single step monitor, batch card tray of jobs, ran at 20,000 cards/min
on 360/65).
CSC comes out to install CP67/CMS (precursor to VM370, 3rd site after
CSC itself and MIT Lincoln Labs). I mostly get to play with it during
my weekend dedicated time, starting out by rewriting pathlengths for
running os360 in a virtual machine: the os360 job stream ran 322secs on
the real hardware and initially 856secs in a virtual machine (CP67 CPU
534secs); after a couple months I had reduced CP67 CPU from 534secs to
113secs. I then start rewriting the dispatcher, scheduler, and paging,
adding ordered seek queuing (replacing FIFO) and multi-page transfer
channel programs (replacing FIFO, optimized for transfers/revolution,
getting the 2301 paging drum from 70-80 4k transfers/sec to a peak of 270).
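The virtual machine overhead arithmetic above works out directly; a
small check (the ~435sec figure after the pathlength rewrite is an
inference, not a number from the original measurements):

bare_metal = 322          # os360 job stream on the real hardware, secs
virtual    = 856          # same job stream under CP67, secs
cp67_cpu   = virtual - bare_metal
print(cp67_cpu)           # 534 -- matches the CP67 CPU figure quoted
# after the pathlength rewrite, CP67 CPU dropped from 534 to 113 secs
print(bare_metal + 113)   # ~435 secs elapsed, an inferred estimate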
CP67 originally came with auto terminal type with support for 1052 &
2741 terminals. Univ. had some TTY, so I integrate in ASCII support
(including auto terminal type support). I then want to have a single
dial-in number ("hunt group") for all terminals, but IBM had taken a
short-cut: while the port scanner terminal type could be changed, the
baud rate was hardwired for each port. That kicks off a project to do a
clone controller: build a channel interface board for an Interdata/3
programmed to emulate the IBM controller ... but with port auto-baud
(later upgraded to an Interdata/4 for the channel interface and a
cluster of Interdata/3s for the port interfaces). Interdata (and later
Perkin-Elmer) sell it as a clone controller, and four of us get written
up as responsible for (some part of) the IBM clone business.
I then add terminal support to HASP for MVT18, with an editor
emulating "CMS EDIT" syntax for simple CRJE.
In prior life, my wife was in the GBURG JES group reporting to Crabby
& one of the ASP "catchers" for JES3; also co-author of JESUS (JES
Unified System) specification (all the features of JES2 & JES3
that the respective customers couldn't live w/o, for various reasons
never came to fruition). She was then con'ed into transfer to POK
responsible for mainframe loosely-coupled architecture
(Peer-Coupled Shared Data). She didn't remain long because of 1)
periodic battles with the communication group trying to force her to use
VTAM for loosely-coupled operation and 2) little uptake (until much
later with SYSPLEX and Parallel SYSPLEX), except for IMS hot-standby
(she has a story about asking Vern Watts whose permission he would ask;
he replied "nobody" ... he would just tell them when it was all done).
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
360&370 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
HASP/JES2, ASP/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
IBM Cambridge Science Center
https://www.garlic.com/~lynn/subtopic.html#545tech
Peer-Coupled Shared Data Architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
other trivia: I got to wander around silicon valley datacenters after
transferring to SJR, including disk bldg14/engineering and bldg15/product
test, across the street. They were running prescheduled, 7x24,
stand-alone testing and mentioned that they had recently tried MVS
... but it had 15min MTBF (in that environment) requiring manual
re-ipl. I offer to rewrite I/O supervisor to be bullet proof and never
fail, allowing any amount of on-demand, concurrent testing, greatly
improving productivity. I also drastically cut the queued I/O redrive
pathlength (1/10th MVS time from interrupt to redrive SIOF) and
significantly enhance multi-channel path efficiency (in addition to
never fail).
IBM had a guideline that a new generation product had to have
performance (should be more, but) not more than 5% less than the
previous one. Initial testing of the 3880 showed it failed. It supported
"data streaming" channels (previous channels did an end-to-end
hand-shake for every byte; data streaming cut overhead by going to
multiple bytes per hand-shake, higher datarate but less processing)
... so they were able to get away with a much slower processor than in
the 3830. However, the slower processor significantly increased
controller channel busy for every other kind of operation, including
from the end of channel program data transfer to presenting the ending
interrupt (a significant increase in the time from SIOF to channel
program ending interrupt), reducing aggregate I/O throughput. In an
attempt to mask the problem, they changed the 3880 to present the ending
interrupt early and do the final controller cleanup overlapped with the
operating system interrupt processing overhead (modulo the niggling
problem of finding a controller error and needing to present "unit
check" with an interrupt).
Whole thing tested fine with MVS ... the enormous MVS interrupt to
redrive pathlength was more than enough to mask the 3880 controller
fudge. However, the 3880 fudge didn't work for me: I would hit the 3880
with a redrive SIOF long before it was done; it then had to respond with
CU-busy, and I had to requeue the request and wait for the CUE interrupt
(indicating the controller was really free).
I periodically pontificated that a lot of the XA/370 architecture was
there to mask MVS issues (and my 370 redrive pathlength was close to the
elapsed time of the XA/370 hardware redrive).
A slightly different issue ... getting within a few months of 3880 first
customer ship (FCS), FE (field engineering) had a test of 57 simulated
errors that were likely to occur, and MVS was (still) failing in all 57
cases (requiring manual re-ipl), and in 2/3rds of the cases there was no
indication of what caused the failure.
I then did an (IBM internal only) research report on all the I/O
integrity work and it was impossible to believe the enormous grief that
the MVS organization caused me for mentioning the MVS 15min MTBF.
... trivia: MVS wrath at my mentioning VM370 "never fail" & MVS "15min
MTBF" ... remember, POK had only recently convinced corporate to kill
VM370, shutdown the group and transfer all the people to POK for
MVS/XA. Endicott had managed to save the VM370 product mission (for the
mid-range) but was still recreating a development group from scratch and
bringing it up to speed ... comments about IBM code quality during the
period can be found in the vmshare archive
http://vm.marist.edu/~vmshare
getting to play disk engineer in bldgs14/15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
... also, as undergraduate in the 60s, univ hired me fulltime
responsible for OS/360 on their 360/67 (running as 360/65) ... had
replaced 709/1401. Student fortran ran under a second on the 709;
initially on OS/360, it was well over a minute. I install HASP, cutting the time in
half. I then start redoing (MFTR11) stage2 sysgen to carefully place
datasets and PDS members to optimize arm seek and multi-track search,
cutting another 2/3rds to 12.9secs. Student fortran never got better
than 709 until I install UofWaterloo WATFOR.
It turns out a major part of that 12.9 secs was that OS/360 had a major
implementation goal of running on minimal real-storage configurations,
so things like the file OPEN SVC involved a long string of SVCLIB
modules that had to be sequentially loaded ... I got tens of seconds of
performance improvement by carefully placing those SVCLIB members (both
for the multi-track search of the PDS directory, and for the actual
loading).
One of my problems was PTFs that replaced SVCLIB and LINKLIB PDS members
and disturbed the careful placement; student fortran would start inching
up towards 20secs (from 12.9) and I would have to do a mini-sysgen to
restore the ordering.
some other recent posts mentioning student fortran, 12.9secs, WATFOR
https://www.garlic.com/~lynn/2025b.html#121 MVT to VS2/SVS
https://www.garlic.com/~lynn/2025b.html#98 Heathkit
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online
https://www.garlic.com/~lynn/2025.html#103 Mainframe dumps and debugging
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2025.html#26 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#8 IBM OS/360 MFT HASP
https://www.garlic.com/~lynn/2024g.html#98 RSCS/VNET
https://www.garlic.com/~lynn/2024g.html#69 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024g.html#62 Progenitors of OS/360 - BPS, BOS, TOS, DOS (Ways To Say How Old
https://www.garlic.com/~lynn/2024g.html#54 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#29 Computer System Performance Work
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024f.html#15 CSC Virtual Machine Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#98 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024e.html#2 DASD CKD
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#117 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
an anecdote: in 1979, a major national grocer (hundreds of stores
organized in regions) was having a severe performance problem ... and
after bringing through all the standard IBM corporate performance
experts, they got around to asking me. The datacenter had several CECs
in a loosely-coupled configuration (each CEC handling the stores from a
couple dedicated regions). I'm brought into a classroom with tables
piled high with CEC system activity reports. After more than 30mins, I
notice a specific 3330 DASD peaking at 7-8 I/Os per second (activity
summed across all the CEC activity reports) during the worst performance
periods. I asked what it was. It was shared DASD (across all CECs) with
the store controller apps PDS dataset, which had a 3cyl PDS directory.
Then it was obvious ... every store controller app load, for hundreds of
stores, was doing a multi-track search of the directory averaging
1.5cyls. At 60revs/sec and 19 tracks per full cyl, the first search I/O
takes 19/60=.317sec and the 2nd search averages 9.5/60=.158sec ... so
the multi-track search for each store controller app load takes .475sec
(and multi-track search locks the device, controller, and channel for
the duration). That effectively limited the shared DASD to two store
controller app loads per second (for the hundreds of stores), an avg of
four multi-track search I/Os taking .951secs (during which DASD,
controller, and channel were blocked) ... the other 3-4 I/Os per second
being the rest of the 7-8 I/Os per second (for the shared DASD across
all CECs).
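A small sketch of the multi-track-search arithmetic above (3330 figures
as quoted: 60 revs/sec, 19 tracks/cylinder, searches averaging 1.5
cylinders of the 3-cylinder PDS directory):

REVS_PER_SEC = 60             # 3330 rotation rate
TRACKS_PER_CYL = 19
first_search  = TRACKS_PER_CYL / REVS_PER_SEC        # full-cylinder search, ~.317 sec
second_search = (TRACKS_PER_CYL / 2) / REVS_PER_SEC  # avg half-cylinder search, ~.158 sec
per_app_load  = first_search + second_search         # ~.475 sec, path locked the whole time
print(round(per_app_load, 3))
# two app loads saturate the path: four search I/Os, ~.95 sec of every second blocked
print(round(2 * per_app_load, 3))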
The solution was to partition the store controller app PDS dataset into
multiple datasets and provide a dedicated set on a non-shared 3330 (and
non-shared controller) for each CEC.
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response
pure random: in 1988, the branch office asks if I could help LLNL
(national lab) standardize some serial stuff they were working with
(including long runs from machine rooms to high performance large
graphics in offices), which quickly becomes the fibre-channel standard
("FCS", including some stuff I had done in 1980; initially 1gbit
transfer, full-duplex, aggregate 200mbyte/sec). Then POK finally gets
their serial stuff shipped with ES/9000 as ESCON (when it is already
obsolete, 17mbytes/sec). Then POK becomes involved with FCS and defines
a heavy-weight protocol that eventually ships as FICON.
The latest public benchmark I found was z196 "Peak I/O" getting 2M IOPS
using 104 FICON (about 20,000 IOPS/FCS). About the same time, an FCS was
announced for E5-2600 server blades claiming over a million IOPS (two
such FCS have higher throughput than the 104 FICON). Also, IBM pubs
recommended that SAPs (system assist processors, which actually do the
I/O) be kept to 70% CPU (or around 1.5M IOPS). And no CKD DASD has been
made for decades, all being simulated on industry-standard fixed-block
devices.
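Worked out, the per-link and SAP arithmetic quoted above:

peak_iops = 2_000_000
ficon_links = 104
print(peak_iops / ficon_links)   # ~19,230 IOPS per FICON, the "about 20,000" figure
# SAP guideline: keep the I/O processors to 70% busy
print(0.70 * peak_iops)          # 1.4M IOPS, roughly the "around 1.5M" figure quoted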
FICON ... overview
https://en.wikipedia.org/wiki/FICON
IBM System/Fibre Channel
https://www.wikiwand.com/en/articles/IBM_System/Fibre_Channel
Fibre Channel
https://www.wikiwand.com/en/articles/Fibre_Channel
FICON is a protocol that transports ESCON commands, used by IBM
mainframe computers, over Fibre Channel. Fibre Channel can be used to
transport data from storage systems that use solid-state flash memory
storage medium by transporting NVMe protocol commands.
... snip ...
Evolution of the System z Channel
https://web.archive.org/web/20170829213251/https://share.confex.com/share/117/webprogram/Handout/Session9931/9934pdhj%20v0.pdf
the above mentions zHPF, a little more similar to what I had done in
1980 and also to the original native FCS; early documents claimed
something like a 30% throughput improvement ... pg39 claims an increase
in 4k IOs/sec for z196 from 20,000/sec per FCS to 52,000/sec and then
92,000/sec.
https://web.archive.org/web/20160611154808/https://share.confex.com/share/116/webprogram/Handout/Session8759/zHPF.pdf
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
posts mentioning zHPF
https://www.garlic.com/~lynn/2025b.html#111 System Throughput and Availability II
https://www.garlic.com/~lynn/2025b.html#110 System Throughput and Availability
https://www.garlic.com/~lynn/2025.html#81 IBM Bus&TAG Cables
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2022c.html#54 IBM Z16 Mainframe
https://www.garlic.com/~lynn/2018f.html#21 IBM today
https://www.garlic.com/~lynn/2017d.html#88 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#1 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2016g.html#28 Computer hard drives have shrunk like crazy over the last 60 years -- here's a look back
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012m.html#13 Intel Confirms Decline of Server Giants HP, Dell, and IBM
https://www.garlic.com/~lynn/2012m.html#11 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response
https://www.garlic.com/~lynn/2025c.html#5 Interactive Response
Some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS,
others went to the IBM Cambridge Science Center on the 4th flr to do
virtual machines, the internal network, and a bunch of performance work
(some evolving into capacity planning), and to invent GML in 1969 (after
a decade it morphs into the ISO standard SGML, and after another decade
into HTML at CERN). CSC first wanted a 360/50 to hardware-modify with
virtual memory, but all the spare 50s were going to FAA/ATC, so they had
to settle for a 360/40 and did CP40/CMS. When the 360/67, standard with
virtual memory, becomes available, CP40/CMS morphs into CP67/CMS
(precursor to VM370).
3272/3277 had hardware response of .086 ... and I had a bunch of
VM370s inside IBM that had 90th percentile of .11 seconds ... giving
.196secs for human response. For the 3274/3278, they move a lot of
hardware to the 3274 reducing 3278 manufacturing cost but really
driving up coax protocol latency and hardware response becomes
.3sec-.5sec, depending on amount of data. Letters to 3278 "product
administrator" got response that 3278 wasn't for interactive
computing, but data entry (MVS/TSO users never notice because it was
really rare that they saw even 1sec system response).
PROFS trivia: PROFS group went around gathering internal apps to wrap
menus around and picked up very early copy of VMSG for the email
client. Then when the VMSG author tried to offer them a much enhanced
version, they wanted him shut down & fired. It all quieted down when he
demonstrated his initials in a non-displayed field in every email. After
that, he only shared his source with me and one other person.
While I was at SJR, I also worked with Jim Gray and Vera Watson on the
original SQL/relational (System/R, originally all done on VM370). Then
when the company was preoccupied with the next great DBMS, "EAGLE"
... we managed to do the tech transfer to Endicott for SQL/DS. Later,
when "EAGLE" imploded, there was a request for how fast System/R could
be ported to MVS ... eventually released as DB2 (originally for decision
support only). In the fall of 1980, Jim left IBM for Tandem and tried to
palm off a bunch of stuff on me.
Also in the early 80s, I got the HSDT project, T1 and faster computer
links (both terrestrial and satellite, even a double-hop satellite link
between the west coast and Europe), and lots of conflicts with the
communication group (in the 60s, IBM had the 2701 telecommunication
controller supporting T1; the 70s move to SNA/VTAM and its issues capped
controller links at 56kbits/sec). I was working with the NSF director
and was supposed to get $20M to interconnect the NSF supercomputer
centers; then congress cuts the budget, some other things happen, and
then an RFP is released, in part based on what we already had running.
The communication group was fiercely fighting off client/server and
distributed computing and we weren't allowed to bid.
trivia: NSF director tried to help by writing the company a letter
(3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and
director of Research, copying IBM CEO) with support from other
gov. agencies ... but that just made the internal politics worse (as
did claims that what we already had operational was at least 5yrs
ahead of the winning bid). As regional networks connect in, it becomes
the NSFNET backbone, precursor to the modern internet.
The communication group also tried to block release of mainframe TCP/IP
support; when that failed, they said that since they had corporate
ownership of everything that crossed datacenter walls, it had to be
shipped through them; what shipped got aggregate 44kbytes/sec using
nearly a whole 3090 processor.
I then do enhancements for RFC1044 support, and in some tuning tests at
Cray Research between a Cray and a 4341, got sustained 4341 channel
throughput using only a modest amount of 4341 CPU (something like a 500
times improvement in bytes moved per instruction executed).
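A sketch of how the bytes-moved-per-instruction comparison is computed;
the MIPS ratings, transfer rate, and CPU fractions below are
hypothetical placeholders chosen only for illustration (roughly
consistent with the ~500x quoted above), not figures from the original:

def bytes_per_instruction(bytes_per_sec, mips, cpu_fraction):
    # instructions spent on TCP/IP = rated MIPS * fraction of the CPU consumed
    return bytes_per_sec / (mips * 1_000_000 * cpu_fraction)

base    = bytes_per_instruction(44_000, 10.0, 1.0)    # 44KB/s, whole (assumed 10 MIPS) 3090 processor
rfc1044 = bytes_per_instruction(1_000_000, 1.2, 0.4)  # assumed ~1MB/s channel rate, modest 4341 use
print(base, rfc1044, rfc1044 / base)                  # ratio depends entirely on the assumptions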
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM systems for internal datacenters posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response
https://www.garlic.com/~lynn/2025c.html#5 Interactive Response
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
Lots of places got 360/67s for tss/360, but most places just used them
as 360/65s for OS/360. CSC did CP67, and a lot of places started using
360/67s for CP67/CMS. UofMichigan did their own virtual memory system
(MTS) for 360/67 (later ported to MTS/370). Stanford did their own
virtual memory system for 360/67 which included WYLBUR .... which was
later ported to MVS.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
some posts mentioning 360/67, cp67/cms, michigan, mts, stanford, wylbur
https://www.garlic.com/~lynn/2024b.html#65 MVT/SVS/MVS/MVS.XA
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023b.html#34 Online Terminals
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2018b.html#94 Old word processors
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2015c.html#52 The Stack Depth
https://www.garlic.com/~lynn/2014c.html#71 assembler
https://www.garlic.com/~lynn/2013e.html#63 The Atlas 2 and its Slave Store
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2010j.html#67 Article says mainframe most cost-efficient platform
https://www.garlic.com/~lynn/2008h.html#78 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2006i.html#4 Mainframe vs. xSeries
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response
https://www.garlic.com/~lynn/2025c.html#5 Interactive Response
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025c.html#7 Interactive Response
ORVYL and WYLBUR
https://www.slac.stanford.edu/spires/explain/manuals/ORVMAN.HTML
... this wiki entry looks like work in progress
https://en.wikipedia.org/wiki/Talk%3AORVYL_and_WYLBUR
Orvyl is a time sharing monitor that took advantage of the paging
capability of the IBM 360/67 at Stanford's Campus Computing center. It
was written by Roger Fajman with major improvements by Richard Carr, I
believe over the summer in 1968. Wylbur is a text editor and
time-sharing system. Wylbur was originally written by John Borgelt
alongside Richard, again, I believe, in the summer of 1968. Milten
monitored and supervised all the computer terminal input ports that
allowed multiple users access to Wylbur and Orvyl. John Halperin wrote
Milten. Joe Wells for Wylbur and John Halperin for Milten converted
them to run under MVT on the 360/91 at SLAC and eventually OS/VS2 when
SLAC obtained the two 360/168 computers. Joe made major improvements
to Wylbur including the 'Exec' file capability that allowed one to
script and run Wylbur Commands. He also built automatic file recovery
for when the entire MVT/MVS system crashed which was not infrequent.
This made Joe very popular with the SLAC physics community. John
extended Milten to operate hundreds of terminals using an IBM3705
communications controller. These changes were eventually back-ported
to the campus version of Wylbur when Orvyl was retired.
... snip ...
... trivia: "Metz", frequently mentioned in the wiki, is (also) the
person from the online "bit.listserv.ibm-main" who asked me to track
down the decision to add virtual memory to all 370s ... archived post
w/pieces of email exchange with the staff member to the executive making
the decision
https://www.garlic.com/~lynn/2011d.html#73
... trivia2: At Dec81 SIGOPS, Jim Gray (I had worked with Jim & Vera
Watson on the original SQL/Relational, System/R, before he left for
Tandem in fall 1980) asked me if I could help Richard (a Tandem
co-worker) get his Stanford PhD; it involved global LRU page replacement
... and there was an ongoing battle with the "local LRU page
replacement" forces. I had a huge amount of data from the 60s & early
70s with both "global" and "local" implementations done for CP67. Late
70s & early 80s, I had been blamed for online computer conferencing on
the internal network; it really took off spring 1981 when I distributed
a trip report about a visit to Jim at Tandem. While only 300 directly
participated, claims were that 25,000 were reading, and folklore is that
when the corporate executive committee was told, 5of6 wanted to fire
me. In any case, IBM executives blocked me from sending the reply for
nearly a year.
... trivia3: In prior life, my wife was in Gburg JES group, reporting
to Crabby and one of the catchers for ASP (to turn into JES3) and
co-author of JESUS (JES Unified System) specification (all the
features of JES2 & JES3 that the respective customers couldn't
live w/o; for various reasons it never came to fruition). She was then
con'ed into transfer to POK responsible for mainframe loosely-coupled
architecture (Peer-Coupled Shared Data). She didn't remain long
because 1) periodic battles with communication group trying to force
her to use VTAM for loosely-coupled operation and 2) little uptake
(until much later with SYSPLEX and Parallel SYSPLEX), except for IMS
hot-standby (she has a story about asking Vern Watts whose permission he
would ask; he replied "nobody" ... he would just tell them when it was
all done).
virtual memory, page replacement, paging posts
https://www.garlic.com/~lynn/subtopic.html#clock
original sql/relational implementation, System/R
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
HASP/JES2, ASP/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
loosely-coupled, hot-standby posts
https://www.garlic.com/~lynn/submain.html#shareddata
--
virtualization experience starting Jan1968, online at home since Mar1970
Living Wage
From: Lynn Wheeler <lynn@garlic.com>
Subject: Living Wage
Date: 13 May, 2025
Blog: Facebook
In the 90s, congress asked GAO for studies of paying workers below a
living wage ... the GAO report found it cost (city/state/federal) govs
an avg of $10K/worker/year .... basically working out to an indirect
gov. subsidy to those employers. The interesting thing is that it has
been 30yrs since that report ... and I have yet to see congress this
century ask the GAO to update the study.
https://www.gao.gov/assets/hehs-95-133.pdf
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM System/R
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/R
Date: 15 May, 2025
Blog: Facebook
I worked with Jim Gray and Vera Watson on System/R ... had a joint
study with BofA, getting 60 VM/4341s for betatest. Then helped with
transferring technology to Endicott for SQL/DS ... "under the radar"
when IBM was preoccupied with the next great DBMS, "EAGLE" ... then
when "EAGLE" implodes, there was a request for how fast System/R could
be ported to MVS, eventually released as DB2, originally for decision
support only.
recent posts about doing some work same time working on System/R
https://www.garlic.com/~lynn/2025b.html#90 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#112 System Throughput and Availability II
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
... then in 1988 I got the HA/6000 project, originally for NYTimes to
move their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I
then rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS
vendors (Oracle, Sybase, Ingres, Informix, which had VAXcluster support
in the same source base with Unix; I do a distributed lock manager with
VAXCluster semantics to ease the ports; IBM Toronto was still a long way
from having a simple relational for OS2). Then the S/88 product
administrator starts taking us around to their customers and gets me to
write a section for the corporate continuous availability strategy
document (it gets pulled when both Rochester/as400 and POK/mainframe
complain they can't meet the objectives).
other system/r details
http://www.mcjones.org/System_R/
https://en.wikipedia.org/wiki/IBM_System_R
trivia: 1st relational (non-sql) shipped; Some of the MIT CTSS/7094
people go to the 5th flr to do MULTICS, others went to the IBM
cambridge science center on the 4th flr to do virtual machines
(modifying 360/40 with virtual memory and doing cp40/cms, CP40 morphs
into CP67 when 360/67 standard with virtual memory becomes available),
internal network, invent GML (in 1969, which later morphs into SGML
standard and HTML at CERN), performance work (some of which morphs
into capacity planning), etc. Then when decision is made to add
virtual memory to all 370s, some of the science center splits off from
CSC and takes over the IBM Boston Programming Center on the 3rd flr
for the VM370 development group (and all of SJR System/R was done on
VM370). Multics ships the 1st relational product,
https://en.wikipedia.org/wiki/Multics_Relational_Data_Store
https://www.mcjones.org/System_R/mrds.html
and I have transferred out to San Jose Research from CSC.
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
posts getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster
survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM System/R
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/R
Date: 15 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#10 IBM System/R
other trivia: both multics (also 1st relational product) and tss/360
were single level store ... also adapted for future system and later
s/38. Note one of the last nails in the FS coffin was analysis by the
IBM Houston Science Center that if 370/195 applications were redone for
an FS machine made out of the fastest available hardware technology, it
would have the throughput of a 370/145 (about a 30 times slowdown; for
the S/38 market, there was significant hardware technology headroom
between the market requirements and the available hardware).
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
I continued to work on 360&370 all during FS, including periodically
ridiculing what they had done (after joining IBM, I did an internal
page-mapped filesystem for CP67/CMS showing at least three times the
throughput ... and claimed I learned what not to do from TSS/360)
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
page mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 4341
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 15 May, 2025
Blog: Facebook
I had early engineering 4341 (E5) in bldg15 ... running one of my
enhanced operating systems. The branch office finds out and in Jan1979
asks me to do a benchmark for a national lab looking at getting 70 for a
compute farm (sort of the leading edge of the coming cluster
supercomputing tsunami). The E5 even had its processor clock slowed down
by 20%, and the benchmark was still successful.
Then in the early 80s, large corporations start making orders for
hundreds of vm4341s at a time for placement out in non-datacenter
departmental areas (sort of the leading edge of the coming distributed
computing tsunami).
When I transferred from Cambridge Science Center to San Jose Research,
I get to wander around datacenters in silicon valley, including disk
bldg14/engineering and bldg15/product test, across the street. They
are doing 7x24, prescheduled stand alone testing and mentioned that
they had recently tried MVS, but it had 15min MTBF (requiring manual
re-ipl in that environment). I offer to rewrite I/O supervisor to be
bullet proof and never fail allowing any amount of on-demand,
concurrent testing, greatly improving productivity.
Bldg15 gets very early engineering systems for I/O testing and got the
first engineering 3033 outside POK processor development. I/O testing
only takes a percent or two of the 3033, so we scrounge a 3830 and a
3330 string to set up our own private online service (and run 3270 coax
under the street to my office in SJR/bldg28). Then bldg15 gets an early
engineering 4341 and I joke with the 4341 people in Endicott that I have
significantly more 4341 availability than they do.
I'm also working with Jim Gray and Vera Watson on the original
SQL/relational System/R ... all work done on VM370 ... and get a
System/R pilot at BofA ordering 60 VM/4341s for distributed operation.
We then do the System/R technology transfer to Endicott for SQL/DS.
In spring 1979, some USAFDC (in the Pentagon) wanted to come by to talk
to me about 20 VM/4341 systems; the visit kept being delayed, and by the
time they came by (six months later), it had grown from 20 to 210.
I also get HSDT project, T1 (1.5mbit/sec) and faster computer links
(both terrestrial and satellite) and lots of conflict with
communication group (60s, IBM 2701 telecommunication controller
supported T1, but the 70s transition to SNA/VTAM and the associated
issues seemed to cap controllers at 56kbit/sec links) ... and I was
looking at more reasonable speeds for distributed operation.
Mid-80s, the communication group is fighting off client/server and
distributed computing (preserving the dumb terminal paradigm) and trying
to block the mainframe release of TCP/IP support. When they lose, they
claim that since they have corporate responsibility for everything that
crosses datacenter walls, it has to be released through them. What ships
gets aggregate 44kbytes/sec using nearly a whole 3090 processor. I then
do the changes for RFC1044 support and in some tuning tests at Cray
Research between a Cray and a 4341, get sustained 4341 channel
throughput using only a modest amount of 4341 processor (something like
a 500 times improvement in bytes moved per instruction executed).
I had helped Endicott with the ECPS microcode assist for the 138/148
(then also used for the 4300 followons) .... old archive post with the
initial analysis selecting 6kbytes of 370 kernel instruction paths for
moving to microcode.
https://www.garlic.com/~lynn/94.html#21
Endicott then wants to pre-install VM370 on every machine shipped
... however POK was in the process of convincing corporate to kill the
vm370 product, shut down the development group and transfer all the
people to POK for MVS/XA ... and it is vetoed. Endicott eventually
manages to save the VM370 product mission for the mid-range, but has
to recreate a development group from scratch (and was never able to
get permission to pre-install vm370 on every machine).
Posts getting to play disk engineer in bldgs 14/15
https://www.garlic.com/~lynn/subtopic.html#disk
post mentioning CP67L, CSC/VM, SJR/VM systems for internal datacenters
https://www.garlic.com/~lynn/submisc.html#cscvm
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 4341
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 15 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#12 IBM 4341
Long ago and far away; Science Center had added original (non-dat) 370
instructions support to CP67 for (vanilla) 370 virtual machine
option. Then after the decision to add virtual memory to all 370s, there
was a joint project between Cambridge and Endicott (a distributed
development project using the CSC CP67-based science center wide-area
network as it was evolving into the corporate internal network) to
expand the CP67 370 virtual machine support to the full 370 virtual
memory architecture, which was "CP67-H"; then there was a modification
of CP67 to run on the 370 virtual memory architecture, which was
"CP67-I". Because Cambridge also had profs, staff, and students from
Boston/Cambridge institutions using the system, CP67-L ran on the real
360/67, CP67-H ran in a CP67-L 360/67 virtual machine, and CP67-I ran in
a CP67-H 370 virtual machine (a countermeasure to leaking unannounced
370 virtual memory). This was in regular operation a year before the
first engineering machine (370/145) with virtual memory was operational,
with CMS running in a CP67-I virtual machine (CP67-I was also used to
verify the engineering 370/145 virtual memory implementation) ... aka
CMS running in CP67-I 370 virtual machine
CP67-I running in CP67-H 370 virtual machine
CP67-H running in a CP67-L 360/67 virtual machine
CP67-L running on real 360/67 (non-IBMers restricted here)
Later three San Jose engineers came out to Cambridge and added 2305 &
3330 device support to CP67-I ... for CP67-SJ ... which was in wide
use on (internal real) 370 virtual memory machines. As part of all
this, original multi-level source update support had also been added
to CMS.
trivia: I was asked to track down the decision to add virtual memory to
all 370s ... and found a staff member to the executive making the
decision. Basically MVT storage management was so bad that region sizes
had to be specified four times larger than actually used. As a result, a
typical 1mbyte 360/165 was limited to four concurrent regions,
insufficient to keep the system busy and justified. Moving MVT into a
16mbyte virtual address space (similar to running MVT in a CP67 virtual
machine) allowed increasing the number of concurrently running regions
by a factor of four (capped at 15 because of the 4-bit storage protect
keys) with little or no paging (VS2/SVS).
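The region arithmetic above, worked out as a small sketch:

regions_real = 4      # concurrent regions a typical 1mbyte 360/165 could run under MVT
overspec     = 4      # region sizes specified ~4x larger than actually used
# with a 16mbyte virtual address space the same specifications support 4x the regions
regions_virtual = regions_real * overspec             # 16
protect_key_cap = 15  # storage protect keys limit the concurrent regions
print(min(regions_virtual, protect_key_cap))          # 15 (VS2/SVS)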
Some of the MIT CTSS/7094 people had gone to the 5th flr for multics,
others went to the 4th flr for the IBM Science Cener. With the
decision to add virtual memory to all 370s, some split off from the
science center and take over the IBM Boston Programming Center on the
3rd flr for the VM370 development group. The VM370 product work
continued on in parallel with the CP67-H, CP67-I, CP67-SJ.
other trivia: part of SE training had been as part of a group onsite at
the customer. After the 23jun1969 unbundling announce and starting to
charge for SE services, they couldn't figure out how not to charge for
trainee SEs at the customer site. HONE CP67 systems were deployed
supporting online branch office SEs practicing with guest operating
systems in virtual machines. With the original announce of 370, the HONE
CP67 systems were upgraded with the non-dat 370 virtual machine
support. The Science Center had also ported APL\360 to CMS for CMS\APL
... and HONE started offering sales & marketing support APL-based
applications ... which eventually comes to dominate all HONE activity
(and guest operating system use withered away). After graduating and
joining the science center, one of my hobbies was enhanced production
operating systems for internal datacenters, and HONE was an early (and
long time) customer.
Note: In the morph from CP67->VM370, lots of features were
simplified and/or dropped. For VM370R2, I started adding CP67
enhancements into my CSC/VM for internal datacenters. Then for a
VM370R3-based CSC/VM, I added in other features, including
multiprocessor support (originally for the US HONE systems that had
been consolidated in silicon valley), so they could add a 2nd
processor to each system.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
23jun69 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
misc recent past posts mentioning Endicott and CP67-H,I,SJ
https://www.garlic.com/~lynn/2025b.html#6 2301 Fixed-Head Drum
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#10 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#88 Virtual Machines
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 4341
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 16 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#12 IBM 4341
https://www.garlic.com/~lynn/2025c.html#13 IBM 4341
The CSC port of APL\360 to CMS\APL ... required redoing APL\360
storage management. APL\360 had swapped 16kbyte (sometimes 32kbyte)
workspaces; APL would assign a new workspace location for every
assignment statement (even if the item already existed), quickly
exhaust all workspace storage, and then need to do garbage collection
and compact assigned storage. Since the complete workspace was swapped
... it wasn't a big problem. The initial move to CMS\APL with
demand-paged large workspaces resulted in enormous page
thrashing. APIs for system services (like file I/O) were also added;
the combination enabled lots of real world applications.
Then came the move from CP67 to VM370 and consolidating all US HONE
datacenters in silicon valley ... across the back parking lot from the
IBM Palo Alto Science Center. PASC had done lots more work on APL for
what became (VM370/CMS) APL\CMS ... PASC also did the 370/145 APL
microcode assist ... increasing performance by a factor of ten (HONE
couldn't really use it since HONE needed the large multi-mbyte real
memory of 370/168s). PASC also improved FORTRAN H optimization
(available internally as FORTRAN-Q, eventually released to customers
as FORTRAN-HX) and helped HONE move some of the larger compute
intensive HONE APPs to FORTRAN that could be called from APL\CMS.
The consolidated US HONE configured the multiple systems into a large
"single-system-image", loosely-coupled, shared DASD operation with
load-balancing and fail-over across the complex (the largest IBM SSI
configuration, internally or at customers). Then I added
multiprocessor support to a VM370R3-based CSC/VM ... so HONE could add
a 2nd processor to each system ... (with some cache affinity and other
hacks) the two processor systems were getting twice the throughput of
the previous single processor operation (at a time when MVS
documentation claimed their two processor operation only had 1.2-1.5
times the throughput of single processor operation). I made a joke
some 30+ yrs later when similar SSI capability was released to
customers ("from the annals of release no software before its time").
PASC also helped with HONE SEQUOIA, a few hundred kilobytes of APL
code that was integrated into the shared memory image of the APL
executable (so only a single copy existed for all users) ... it
basically provided a high-level menu environment for the
sales&marketing users (hiding most details and operation of CMS and
APL).
There was a scenario that was repeated a couple times in the later 70s
and early 80s, where a branch manager was promoted to executive in DPD
hdqtrs (with HONE reporting to them) and was aghast to discover that
HONE was VM370-based (and not MVS). They would believe that if they
directed HONE to move everything to an MVS base, their career would be
made ... almost every other activity stopped while the attempt was
made to get an MVS HONE operational ... after a year or so, it would
be declared a success, heads would roll uphill, and things would
return to normal VM370 operation. Towards the middle of the 80s,
somebody decided that they couldn't move from VM370 to MVS because of
HONE's use of my (by then) internal SJR/VM ... and that it might be
possible if it was done in two stages: mandate that HONE move to a
standard product VM (because what would happen to the whole IBM
sales&marketing empire if I was hit by a bus) before attempting the
move to MVS.
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM systems
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE systems
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
Cluster Supercomputing
From: Lynn Wheeler <lynn@garlic.com>
Subject: Cluster Supercomputing
Date: 17 May, 2025
Blog: Facebook
I was doing a lot with an early engineering IBM 4341, and in Jan1979 a
branch office found out and cons me into doing a benchmark for a
national lab that was looking at getting 70 for a compute farm (sort
of the leading edge of the coming cluster supercomputing tsunami). The
engineering 4341 had its processor clock slowed down 20% (as they
worked out kinks in timing), but the benchmark was still successful
(it was a fortran benchmark from a 60s CDC6600 ... and the engineering
4341 benchmark ran about the same as the 60s CDC6600).
A couple years later, got the HSDT project, T1 (1.5mbits/sec) and
faster computer links (both terrestrial and satellite) and lots of
conflict with the communication products group (in the 60s, IBM was
selling the 2701 telecommunication controller that supported T1; the
70s move to SNA/VTAM and the associated issues appeared to cap
controllers at 56kbit links). Was also working with the NSF director
and was supposed to get $20M to interconnect the NSF supercomputer
centers. Then congress cuts the budget, some other things happen and
eventually an RFP is released (in part based on what we already had
running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning
bid). As regional networks connect in, it becomes the NSFNET backbone,
precursor to the modern Internet.
The communication group was also fiercely fighting off client/server
and distributed computing and trying to block release of IBM mainframe
TCP/IP support. When that failed, they said that since they had
corporate responsibility for everything that crossed datacenter walls,
it had to be released through them; what shipped got aggregate
44kbytes/sec using nearly whole 3090 CPU. I then add RFC1044 support
and in some tuning tests at Cray Research between Cray and 4341, got
sustained 4341 channel throughput, using only modest amount of 4341
CPU (something like 500 times improvement in bytes moved per
instruction executed).
1988 got the HA/6000 project, originally for NYTimes to move their
newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with
national labs (LANL, LLNL, NCAR/UCAR, etc) and commercial cluster
scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that
have VAXCluster support in the same source base with UNIX. IBM S/88
product administrator starts taking us around to their customers and
also has me do a section for the corporate strategic continuous
availability strategy document (it gets pulled when both
Rochester/AS400 and POK/mainframe complain that they couldn't meet
requirements). Then was also working with LLNL porting their
LINCS/UNITREE filesystem to HA/CMP and NCAR/UCAR spin-off Mesa
Archival filesystem to HA/CMP.
Also 1988, IBM branch office asks if I can help LLNL (national lab)
standardize some serial stuff they were working with ... which quickly
becomes fibre-standard channel ("FCS", including some stuff I had done
in 1980, initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec)
... some competition with LANL standardization of Cray 100mbyte/sec
for HIPPI (and later serial version).
Early Jan1992, cluster scale-up meeting with Oracle CEO, IBM/AWD
executive Hester tells Ellison that we would have 16-system clusters
by mid-92 and 128-system clusters by ye-92. Then late Jan1992, cluster
scale-up is transferred for announce as IBM Supercomputer (for
technical/scientific *ONLY*) and we are told that we can't work on
anything with more than four processors (we leave IBM a few months
later).
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster
survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
FCS &(/or) FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
Cluster Supercomputing
From: Lynn Wheeler <lynn@garlic.com>
Subject: Cluster Supercomputing
Date: 17 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#15 Cluster Supercomputing
About the same time as being asked to help LLNL with what becomes FCS,
the branch office also asked if I could get involved in SCI
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface
A decade later I did some consulting for Steve Chen (who had designed
the Cray XMP & YMP),
https://en.wikipedia.org/wiki/Steve_Chen_(computer_engineer)
https://en.wikipedia.org/wiki/Cray_X-MP
https://en.wikipedia.org/wiki/Cray_Y-MP
but by that time he was Sequent CTO (before IBM bought Sequent and
shut it down).
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
Sequent had used SCI for a (numa) 256 i486 machine
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA
FCS &(/or) FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
some archived posts mentioning SCI
https://www.garlic.com/~lynn/2024g.html#85 IBM S/38
https://www.garlic.com/~lynn/2024e.html#90 Mainframe Processor and I/O
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#16 370/125 VM/370
https://www.garlic.com/~lynn/2023e.html#78 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2022g.html#91 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#29 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022b.html#67 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022.html#118 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2021i.html#16 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#45 OoO S/360 descendants
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing
https://www.garlic.com/~lynn/2019d.html#81 Where do byte orders come from, Nova vs PDP-11
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#32 Cluster Systems
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
https://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2016e.html#45 How the internet was invented
https://www.garlic.com/~lynn/2016c.html#70 Microprocessor Optimization Primer
https://www.garlic.com/~lynn/2016b.html#74 Fibre Channel is still alive and kicking
https://www.garlic.com/~lynn/2016.html#19 Fibre Chanel Vs FICON
https://www.garlic.com/~lynn/2014m.html#176 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014m.html#95 5 Easy Steps to a High Performance Cluster
https://www.garlic.com/~lynn/2014d.html#18 IBM ACS
https://www.garlic.com/~lynn/2014.html#85 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#50 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013m.html#96 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2013m.html#94 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2013m.html#70 architectures, was Open source software
https://www.garlic.com/~lynn/2013g.html#49 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2012p.html#13 AMC proposes 1980s computer TV series Halt & Catch Fire
https://www.garlic.com/~lynn/2011p.html#40 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011p.html#39 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2010.html#92 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2010.html#41 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#59 Problem with XP scheduler?
https://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2008i.html#3 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#2 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2006y.html#38 Wanted: info on old Unisys boxen
https://www.garlic.com/~lynn/2006q.html#24 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM System/R
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/R
Date: 18 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#10 IBM System/R
https://www.garlic.com/~lynn/2025c.html#11 IBM System/R
At Dec81 SIGOPS, Jim asked me if I could help Carr (a Tandem
co-worker) get his Stanford PhD; it involved Global LRU page
replacement ... and there was an ongoing battle with the "Local LRU
page replacement" forces. I had a huge amount of data from the 60s &
early 70s with both "global" and "local" implementations done for
CP67. As an undergraduate in the 60s, I rewrote lots of CP67 (virtual
machine precursor to VM370), including doing Global LRU ... about the
time there was a bunch of ACM literature appearing about "Local
LRU". Then in the early 70s, the IBM Grenoble Scientific Center
modified CP67 with "Local LRU" and a "working set dispatcher". Grenoble
had a 1024kbyte memory 360/67, 155 pages after fixed storage, and 35
users. CSC was running my implementation on a 768kbyte memory 360/67,
104 pages after fixed storage, with 75-80 users with similar
workloads, but better throughput and interactive response.
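A toy comparison (Python, illustrative only, not the CP67
implementation) of the global vs local argument: with one shared pool
of page frames, a user with a small working set leaves frames free for
a user with a large one, while a fixed per-user partition can't adapt.
The working-set sizes and frame counts below are made up:

# global vs local LRU page replacement, fault counts on a synthetic trace
from collections import OrderedDict
import random

def lru_faults(trace, frames):
    cache, faults = OrderedDict(), 0
    for page in trace:
        if page in cache:
            cache.move_to_end(page)            # most recently used
        else:
            faults += 1
            if len(cache) >= frames:
                cache.popitem(last=False)      # evict least recently used
            cache[page] = True
    return faults

random.seed(1)
trace_a = [("A", random.randrange(80)) for _ in range(20000)]   # big working set
trace_b = [("B", random.randrange(10)) for _ in range(20000)]   # small working set
mixed = [p for pair in zip(trace_a, trace_b) for p in pair]     # interleave users

global_faults = lru_faults(mixed, 100)                           # one shared pool
local_faults = lru_faults(trace_a, 50) + lru_faults(trace_b, 50) # fixed split
print("global LRU faults:", global_faults)
print("local  LRU faults:", local_faults)

With these made-up numbers the shared pool ends up with little more
than cold-start faults, while the fixed split keeps thrashing the
large working set.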
Late 70s & early 80s, I had been blamed for online computer
conferencing on the internal network; it really took off spring 1981
when I distributed a trip report about a visit to Jim at Tandem. While
only 300 directly participated, claims were that 25,000 were reading,
and folklore is that when the corporate executive committee was told,
5 of 6 wanted to fire me. In any case, IBM executives blocked me from
sending my Global/Local reply for nearly a year (19Oct1982).
Page I/O, Global LRU replacement, virtual memory posts
https://www.garlic.com/~lynn/subtopic.html#clock
Dynamic Adaptive Resource Managment (fairshare) scheduler posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
some past (CP67, global/local, Cambridge/Grenoble) refs
https://www.garlic.com/~lynn/2025b.html#115 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#98 Heathkit
https://www.garlic.com/~lynn/2024g.html#107 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#34 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024b.html#95 Ferranti Atlas and Virtual Memory
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2018f.html#63 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#62 LRU ... "global" vs "local"
https://www.garlic.com/~lynn/2016c.html#0 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2014l.html#22 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2013k.html#70 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
https://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)
--
virtualization experience starting Jan1968, online at home since Mar1970
Is Parallel Programming Hard, And, If So, What Can You Do About It?
Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Parallel Programming Hard, And, If So, What Can You Do About It?
Newsgroups: comp.arch
Date: Sun, 18 May 2025 14:10:00 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Think of what a cache is for in the first place. The only reason they work
is because of the "principle of locality". This can also be expressed as
saying that typical patterns of data access by application programs follow
a Pareto distribution, less formally known by monikers like the "80/20
rule" or the "90/10 rule".
IBM "added" full-track "-13" cache to 3880 dasd control for 3380 disk
(ten records/track) ... claiming 90% "hit rate". Issue was that there
was a lot of sequential file reading ... the 1st record read for track
would be a "miss" but bring in the whole track, resulting in the next
nine reads being "hits".
System services offered an option for applications doing sequential
i/o to specify full-track i/o (into processor memory) ... which would
result in a zero hit rate for the controller cache (the IBM standard
batch operating system did contiguous allocation on file creation).
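The arithmetic behind both claims, as a back-of-envelope sketch
(Python, illustrative record and track counts):

records_per_track = 10
tracks_read = 1000

# record-at-a-time sequential reads: first record of each track misses
# (controller stages the whole track), the remaining nine hit
misses = tracks_read
hits = tracks_read * (records_per_track - 1)
print("controller cache hit rate:", hits / (hits + misses))   # 0.9

# if the application issues one full-track I/O per track instead,
# every request is for a new track and the controller cache never hits
print("hit rate with application full-track reads: 0.0")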
About the same time, we did a system mod that did highly efficient
trace/capture of every record operation, which was deployed on
numerous production systems. The traces were then fed to a
sophisticated simulator that could vary algorithms, caches, kinds of
caches, sizes of caches, distribution of caches, etc.
Given a fixed amount of cache storage, it was always better to have a
global system cache ... than partitioned/distributed caches, except in
a few edge cases. Example: if the device track cache could be used to
immediately start transferring data, rather than having to rotate to
the start of the track before starting the transfer.
posts mentioning record activity trace/capture
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory
https://www.garlic.com/~lynn/2022b.html#83 IBM 3380 disks
https://www.garlic.com/~lynn/2022.html#83 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2013d.html#11 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012c.html#47 nested LRU schemes
https://www.garlic.com/~lynn/2011.html#71 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#70 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by dataset
https://www.garlic.com/~lynn/2007.html#3 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006y.html#35 The Future of CPUs: What's After Multi-Core?
more recent posts mentioning 3880-13 or 3880-23
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory
https://www.garlic.com/~lynn/2023g.html#7 Vintage 3880-11 & 3880-13
https://www.garlic.com/~lynn/2022b.html#83 IBM 3380 disks
https://www.garlic.com/~lynn/2022.html#83 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#97 IBM Disks
https://www.garlic.com/~lynn/2017b.html#32 Virtualization's Past Helps Explain Its Current Importance
https://www.garlic.com/~lynn/2014l.html#81 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2014i.html#96 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2013d.html#3 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012d.html#78 megabytes per second
https://www.garlic.com/~lynn/2012d.html#75 megabytes per second
https://www.garlic.com/~lynn/2012d.html#72 megabytes per second
https://www.garlic.com/~lynn/2012c.html#47 nested LRU schemes
https://www.garlic.com/~lynn/2012c.html#34 nested LRU schemes
https://www.garlic.com/~lynn/2011.html#68 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#67 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms
https://www.garlic.com/~lynn/2010i.html#20 How to analyze a volume's access by dataset
https://www.garlic.com/~lynn/2010g.html#55 Mainframe Executive article on the death of tape
https://www.garlic.com/~lynn/2010g.html#11 Mainframe Executive article on the death of tape
https://www.garlic.com/~lynn/2010.html#51 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010.html#47 locate mode, was Happy DEC-10 Day
--
virtualization experience starting Jan1968, online at home since Mar1970
APL and HONE
From: Lynn Wheeler <lynn@garlic.com>
Subject: APL and HONE
Date: 19 May, 2025
Blog: Facebook
23jun1969, IBM unbundling starts to charge for (application) software
(making the case that kernel software was still free), SE services,
maint., etc. SE training used to include being part of a large group
at a customer site; however after unbundling, they couldn't figure out
how to not charge for SE training time. As a result, CP67 HONE
datacenters were set up where branch office SEs could login to HONE
and practice with guest operating systems running in virtual
machines. One of my hobbies after joining IBM was enhanced production
operating systems for internal datacenters and HONE was one of the
first (and long time) customers.
Cambridge Science Center also ports APL\360 to CMS as CMS\APL, redoing
storage management (APL\360 was 16kbyte, sometimes 32kbyte, swapped
workspaces; a new location was assigned for variables on every
assignment, even if they already existed, quickly running through the
workspace, then garbage collecting and coalescing everything to a
contiguous area to start again; the move to CMS\APL with demand-paged
hundreds-of-kilobyte to megabyte workspaces resulted in severe page
thrashing) and adding APIs for system services (like file I/O),
enabling lots of real world applications.
HONE then started offering APL-based sales&marketing support
applications, which came to dominate all HONE activity (and guest
operating system practice just withered away). With the propagation of
clone HONE datacenters around the world, HONE was easily the largest
APL operation in the world.
After the decision to add virtual memory to all 370s, it was also
decided to morph CP67 into VM370, and HONE consolidates all US
datacenters in silicon valley (across the back parking lot from the
IBM Palo Alto Science Center), upgrading from CP67/CMS to VM370/CMS
(trivia: when facebook 1st moves into silicon valley, it is into a new
bldg built next door to the former consolidated US HONE
datacenter). PASC also does the APL microcode assist for the 370/145
and releases APL\CMS (claiming a ten times performance improvement,
equivalent to a 370/168). HONE still needed real 370/168s for the
larger real memory sizes. Non-CMS went from APL\360 to APL\SV and then
VS/APL, which replaces APL\CMS on VM370. PASC was also responsible for
the internal FORTRAN-Q optimization (eventually released as FORTRAN-HX
for customers) and also helps HONE with invoking some of the
reprogrammed sales&marketing FORTRAN APPs from APL.
One of the other CSC members, in the early 70s, had done an analytical
System Model in APL, which was made available on HONE as the
Performance Predictor; SEs could enter customer workload and
configuration information and ask "what-if" questions about
workload&configuration changes. After the IBM troubles in the early
90s and the unloading of all sorts of stuff, a descendant of the
Performance Predictor was acquired by a performance consultant in
Europe, who ran it through an APL->C translator and used it for large
system performance consulting. Turn of the century I was doing some
performance work for an operation doing financial outsourcing, a
datacenter with 40+ max-configured IBM mainframes (@$30M, constant
upgrades, no system older than 18 months), all running the same 450K
statement Cobol program (which had had a large group responsible for
decades for performance care and feeding). I was still able to find a
14% improvement and the other consultant (w/performance predictor)
found another 7%.
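For flavor only (my illustration in Python, not the actual APL
Performance Predictor), the kind of "what-if" answer such an analytic
model gives; the service time, arrival rate, and server counts below
are made up:

# crude analytic queueing what-if: response time before and after a change
def response_time(service_sec, arrivals_per_sec, servers):
    # rough M/M/1-style estimate treating the servers as one pooled resource
    utilization = arrivals_per_sec * service_sec / servers
    if utilization >= 1.0:
        return float("inf")                      # saturated
    return service_sec / (1.0 - utilization)

base = response_time(service_sec=0.05, arrivals_per_sec=15, servers=1)
whatif = response_time(service_sec=0.05, arrivals_per_sec=15, servers=2)
print("current response:", round(base, 3), "sec")
print("what-if 2nd processor:", round(whatif, 3), "sec")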
23jun1969 Unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
some recent posts mentioning performance predictor
https://www.garlic.com/~lynn/2025b.html#68 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing
https://www.garlic.com/~lynn/2024c.html#6 Testing
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024b.html#18 IBM 5100
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023g.html#43 Wheeler Scheduler
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#33 Copyright Software
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#7 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history
--
virtualization experience starting Jan1968, online at home since Mar1970
Is Parallel Programming Hard, And, If So, What Can You Do About It?
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Parallel Programming Hard, And, If So, What Can You Do About It?
Newsgroups: comp.arch
Date: Tue, 20 May 2025 16:38:31 -1000
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
I presume you know that the 3880 controller did not do what today we
call command queuing, so I think you were referring to a potential
queue in the host. That being the case, the controller doesn't know
if there is a queue or not. So given that, why not start reading
record 1 on the next track. If a request comes in, you can abandon
the read to service the request - no harm, no foul. If there isn't,
and you subsequently get a request for that track, it's a big win.
The only potential loss is if you get a request for the track that was
LRU and got pushed out of the cache.
re:
https://www.garlic.com/~lynn/2025c.html#18 Is Parallel Programming Hard, And, If So, What Can You Do About It?
over-optimizing full-track read-ahead could lock out other tasks that
had competing requirements for other parts of the disk.
trivia: early 70s, IBM decided to add virtual memory to all
370s. Early last decade I was asked to track down the decision and
found a staff member to the executive making the decision. Basically
MVT (IBM's high end, major batch system) storage management was so bad
that (multiprogramming) region sizes had to be specified four times
larger than actually used; as a result a typical (high-end) 1mbyte
370/165 only ran four regions concurrently, insufficient to keep the
system busy and justify its cost. Running MVT in a 16mbyte virtual
address space (sort of like running MVT in a CP67 16mbyte virtual
machine) would allow the number of concurrent regions to be increased
by a factor of four (capped at 15 because of the 4bit storage protect
key) with little or no paging. Later, as high-end systems got larger,
they needed more than 15 concurrently running regions ... and so
switched from VS2/SVS (single 16mbyte virtual address space) to
VS2/MVS (a separate 16mbyte virtual address space for each "region";
the progression was MVT->VS2/SVS->VS2/MVS)
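The numbers in the paragraph above as a small worked sketch (Python;
the "actually used" region size is illustrative):

# Region sizes had to be specified ~4x what was actually touched, so only
# four 256kbyte regions fit a 1mbyte 370/165.  With regions in a 16mbyte
# virtual address space, only the storage actually touched has to fit in
# real memory, so ~4x as many regions run with little or no paging --
# capped at 15 by the 4bit storage protect key.
real_memory = 1024 * 1024
specified_per_region = 256 * 1024            # illustrative: 4x actual use
actually_used = specified_per_region // 4    # ~64kbyte really touched

print("regions when specified size must fit real memory:",
      real_memory // specified_per_region)   # 4
print("regions when only touched storage must fit real memory:",
      real_memory // actually_used)          # 16
print("storage protect key cap:", 2**4 - 1)  # 15 (key 0 reserved for system)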
along the way, I had been pontificating that DASD (disk) relative
system throughput had been decreasing ... in the 1st part of the 80s,
I turned out an analysis that in the 15yr period since the IBM 360 1st
shipped, DASD/disk relative system throughput had declined by an order
of magnitude (i.e. DASD got 4-5 times faster while systems got 40-50
times faster). Some DASD division executive took exception and
assigned the division performance group to refute the claim ... after
a few weeks, they came back and basically said I had slightly
understated the issue. The performance group then respun the analysis
as a user group presentation on how to configure disks and filesystems
to improve system throughput (SHARE63, B874, 16Aug1984).
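The order-of-magnitude claim as simple arithmetic (Python, using the
rough 40-50x and 4-5x figures above):

system_speedup = 45      # systems roughly 40-50x faster over ~15yrs
dasd_speedup = 4.5       # DASD roughly 4-5x faster over the same period
print("DASD throughput relative to system:",
      round(dasd_speedup / system_speedup, 2))   # ~0.1, i.e. ~10x relative decline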
The 1970 IBM 2305 fixed-head disk controller supported 8 separate
pseudo device addresses ("multiple exposure") for each 2305 disk
... each able to have a channel program that the controller could
optimize. In 1975, I was asked to help enhance a low-end 370 that had
integrated channels and integrated device controllers ... and I wanted
to upgrade the microcode so I could just update a queue of channel
programs that the (integrated microcode) controller could optimize
(wasn't allowed to ship the product).
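A minimal sketch (Python, not the actual integrated-controller
microcode) of what a queue of channel programs lets the controller do:
order pending requests by arm position instead of strict arrival
order. The cylinder numbers are made up:

# simple one-direction sweep ("elevator") from the current arm position
def elevator_order(pending_cylinders, arm_at):
    ahead = sorted(c for c in pending_cylinders if c >= arm_at)
    behind = sorted((c for c in pending_cylinders if c < arm_at), reverse=True)
    return ahead + behind

queue = [300, 12, 150, 311, 7, 201]
print("FIFO order     :", queue)
print("elevator order :", elevator_order(queue, arm_at=140))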
Later I wanted to add "multiple exposure" support to the 3830
(precursor to the 3880) for 3350 (movable arm) disks. An IBM east
coast group working on emulated electronic memory disks considered
that it might compete and got it vetoed (sometime later they got shut
down; they were told IBM was selling all the electronic memory it
could make as higher markup processor memory).
getting to play disk engineer in (SJ DASD) bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk
--
virtualization experience starting Jan1968, online at home since Mar1970
Is Parallel Programming Hard, And, If So, What Can You Do About It?
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Parallel Programming Hard, And, If So, What Can You Do About It?
Newsgroups: comp.arch
Date: Wed, 21 May 2025 07:06:26 -1000
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
Only if the cores and/or "hardware threads" do not interfere with one
another? Fwiw, an example of an embarrassingly parallel algorithm is
computing the Mandelbrot set. Actually, this reminds me of the "alias"
problem with Intel hyper threading in the past.
re:
https://www.garlic.com/~lynn/2025c.html#18 Is Parallel Programming Hard, And, If So, What Can You Do About It?
https://www.garlic.com/~lynn/2025c.html#20 Is Parallel Programming Hard, And, If So, What Can You Do About It?
shortly after graduating and joining IBM, I got roped into helping
with hyperthreading the 370/195. It had pipelined, out-of-order
execution, but conditional branches drained the pipeline and most code
ran the system at only half rated throughput. Two hardware i-streams
... each running at half throughput, would (might) keep the system at
full throughput.
hardware hyperthreading is mentioned in this account of Amdahl winning
the battle to make ACS 360-compatible (folklore is it was shut down
because IBM was concerned that it would advance the state-of-the-art
too fast and IBM would lose control of the market, and Amdahl leaves
IBM).
https://people.computing.clemson.edu/~mark/acs_end.html
Then the decision was made to add virtual memory to all 370s, it was
decided it would be too difficult to add it to the 370/195, and all
new 195 activity was shut down (note the operating system for the 195
was MVT and its shared-memory multiprocessor support on 360/65MP was
only getting 1.2-1.5 times the throughput of a single processor, so
running the 195 as a simulated multiprocessor with two i-streams would
only be more like .6 times fully rated throughput; all the hardware
might be running at 100%, but the SMP overhead would limit productive
throughput). Trivia: the multiprocessor overhead continues up through
MVS.
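The ".6 times" arithmetic spelled out (Python, using the figures
above):

single_stream = 0.5     # one i-stream on branchy code: ~half rated throughput
smp_scaling = 1.2       # MVT two-processor throughput vs one (1.2-1.5 range)
print("effective throughput vs fully rated:",
      single_stream * smp_scaling)   # ~0.6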
also after joining IBM, one of my hobbies was enhanced production
operating systems for internal datacenters and the online
sales&marketing support HONE systems were an early (& long time)
customer. Then with the decision to add virtual memory to all 370s,
there was also a decision to do VM370, and in the morph of CP67->VM370
a lot of things were simplified and/or dropped (including
multiprocessor support). I then start adding stuff back into VM370 and
initially do multiprocessor support for the HONE 168s so they can add
a 2nd processor to all their systems (and managed to get twice the
single processor throughput with some cache affinity hacks and other
stuff).
In the mid-70s, after Future Systems implodes,
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
I get roped into helping with a 370 16-CPU multiprocessor design. It
was going fine until somebody tells head of POK (high end 370
processors) that it could be decades before POK's favorite son
operating system (now "MVS") had ("effective") 16-cpu support (POK
doesn't ship a 16-CPU system until after the turn of the century)
... and some of us are invited to never visit POK again.
SMP, tightly-coupled, shared memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 8100
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 8100
Date: 22 May, 2025
Blog: Facebook
In a prior life, my wife was asked by Evans to audit/review the 8100
... shortly thereafter it was canceled.
Later communication group was fiercely fighting off client/server and
distributed computing (trying to preserve their dumb terminal design)
and block release of mainframe tcp/ip support. When that was
overturned, they said that since they had corporate ownership of
everything that crossed datacenter walls it had to be released through
them; what shipped got aggregate 44kbytes/sec using nearly whole 3090
processor. I then added RFC1044 support and in some tuning tests at
Cray Research between Cray and 4341, got sustained 4341 channel
throughput using only modest amount of 4341 processor (something like
500 times increase in bytes moved per instruction executed).
I had gotten the HSDT project in the early 80s, T1 (1.5mbits/sec,
full-duplex, aggregate 300kbytes/sec) and faster computer links, and
lots of conflict with the communication group (in the 60s, IBM had the
2701 telecommunication controller that supported T1 links; however the
transition to SNA/VTAM in the 70s and the associated issues seemed to
cap controllers at 56kbit/sec links). trivia: also EU T1, 2mbits/sec,
full-duplex, aggregate 400kbytes/sec.
posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt
posts mentioning RFC1044
https://www.garlic.com/~lynn/subnetwork.html#1044
some other posts mentioning 8100
https://www.garlic.com/~lynn/2025b.html#4 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2024g.html#56 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2023c.html#60 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2022g.html#62 IBM DPD
https://www.garlic.com/~lynn/2021f.html#89 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2015e.html#86 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2015.html#71 Remembrance of things past
https://www.garlic.com/~lynn/2014e.html#20 IBM 8150?
https://www.garlic.com/~lynn/2013l.html#32 model numbers; was re: World's worst programming environment?
https://www.garlic.com/~lynn/2013b.html#57 Dualcase vs monocase. Was: Article for the boss
https://www.garlic.com/~lynn/2012l.html#82 zEC12, and previous generations, "why?" type question - GPU computing
https://www.garlic.com/~lynn/2012h.html#66 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2011p.html#66 Migration off mainframe
https://www.garlic.com/~lynn/2011m.html#28 Supervisory Processors
https://www.garlic.com/~lynn/2011d.html#31 The first personal computer (PC)
https://www.garlic.com/~lynn/2011.html#0 I actually miss working at IBM
https://www.garlic.com/~lynn/2008k.html#22 CLIs and GUIs
https://www.garlic.com/~lynn/2007f.html#55 Is computer history taught now?
https://www.garlic.com/~lynn/2005q.html#46 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics
https://www.garlic.com/~lynn/2002q.html#53 MVS History
https://www.garlic.com/~lynn/2001b.html#75 Z/90, S/390, 370/ESA (slightly off topic)
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 4361 & DUMPRX
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4361 & DUMPRX
Date: 23 May, 2025
Blog: Facebook
After the 3033 was out the door, the processor engineers start on
trout/3090 ... and also begin adapting a 4331 as the 3092 service
processor with a highly modified VM370 release 6 (and all service
screens done in CMS IOS3270). Then the 3092 was upgraded from a 4331
to a pair of 4361s.
trivia: early in REX days (before the rename to REXX and customer
release), I wanted to demonstrate that REX wasn't just another pretty
scripting language ... the demonstration was to redo a large assembler
application (IPCS dump analysis) in REX, working half time over three
months, with ten times the function and ten times the performance
(finished early, so I built a library of automated scripts that looked
for common failure signatures). I thought it would replace the
existing version (especially since it was in use by nearly every
internal datacenter and by PSRs), but for some reason it wasn't.
Eventually I did get permission to give talks at user group meetings
on how I did the implementation ... and within a few months similar
implementations started appearing.
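Not the original REX/DUMPRX code (and in Python rather than REX); just
the flavor of automated scripts scanning a formatted dump listing for
common failure signatures. The signature names and patterns here are
hypothetical:

import re

SIGNATURES = {
    "free storage chain clobbered": re.compile(r"FREE\s+CHAIN.*INVALID", re.I),
    "failure in dispatcher":        re.compile(r"ABEND.*DMKDSP", re.I),
    "page I/O error loop":          re.compile(r"PAGING\s+ERROR", re.I),
}

def scan_dump(lines):
    hits = [name for name, pattern in SIGNATURES.items()
            if any(pattern.search(line) for line in lines)]
    return hits or ["no known signature matched - manual analysis"]

sample = ["...", "ABEND AT DMKDSP+1A2", "..."]     # made-up dump line
print(scan_dump(sample))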
Then got email from the 3092 group asking if they can include it with
release of 3090.
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
... all 3090 machines came with at least two FBA3370 (for 3092), even
MVS systems which never had FBA support.
Date: 23 December 1986, 10:38:21 EST
To: wheeler
Re: DUMPRX
Lynn, do you remember some notes or calls about putting DUMPRX into an
IBM product? Well .....
From the last time I asked you for help you know I work in the
3090/3092 development/support group. We use DUMPRX exclusively for
looking at testfloor and field problems (VM and CP dumps). What I
pushed for back aways and what I am pushing for now is to include
DUMPRX as part of our released code for the 3092 Processor Controller.
I think the only things I need are your approval and the source for
RXDMPS.
I'm not sure if I want to go with or without XEDIT support since we do
not have the new XEDIT.
In any case, we (3090/3092 development) would assume full
responsibility for DUMPRX as we release it. Any changes/enhancements
would be communicated back to you.
If you have any questions or concerns please give me a call. I'll be
on vacation from 12/24 through 01/04.
... snip ... top of post, old email index
4361:
https://web.archive.org/web/20220121184235/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_2423PH4361.html
DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx
some recent posts mentioning dumprx & 3092
https://www.garlic.com/~lynn/2024f.html#114 REXX
https://www.garlic.com/~lynn/2024e.html#21 360/50 and CP-40
https://www.garlic.com/~lynn/2024d.html#81 APL and REXX Programming Languages
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#16 REXX and DUMPRX
https://www.garlic.com/~lynn/2024b.html#1 Vintage REXX
https://www.garlic.com/~lynn/2024.html#60 IOS3270 Green Card and DUMPRX
https://www.garlic.com/~lynn/2023g.html#69 Assembler & non-Assembler For System Programming
https://www.garlic.com/~lynn/2023g.html#54 REX, REXX, and DUMPRX
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023g.html#38 Computer "DUMPS"
https://www.garlic.com/~lynn/2023f.html#45 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#28 IBM Reference Cards
https://www.garlic.com/~lynn/2023e.html#32 3081 TCMs
https://www.garlic.com/~lynn/2023d.html#74 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#29 IBM 3278
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#59 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#41 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#26 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#101 PSR, IOS3270, 3092, & DUMPRX
https://www.garlic.com/~lynn/2022h.html#34 Mainframe Development Language
https://www.garlic.com/~lynn/2022g.html#7 3880 DASD Controller
https://www.garlic.com/~lynn/2022f.html#69 360/67 & DUMPRX
https://www.garlic.com/~lynn/2022e.html#2 IBM Games
https://www.garlic.com/~lynn/2022d.html#108 System Dumps & 7x24 operation
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021j.html#84 Happy 50th Birthday, EMAIL!
https://www.garlic.com/~lynn/2021j.html#24 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021h.html#55 even an old mainframer can do it
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021d.html#2 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#58 MAINFRAME (4341) History
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM AIX
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AIX
Date: 24 May, 2025
Blog: Facebook
1988 got the HA/6000 project, originally for NYTimes to move their
newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR/UCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix) that have VAXCluster
support in the same source base with UNIX. IBM S/88 product administrator
starts taking us around to their customers and also has me do a
section for the corporate continuous availability strategy document
(it gets pulled when both Rochester/AS400 and POK/mainframe complain
that they couldn't meet requirements). Then was also working with LLNL
porting their LINCS/UNITREE filesystem to HA/CMP and NCAR/UCAR
spin-off Mesa Archival filesystem to HA/CMP.
Also 1988, IBM branch office asks if I can help LLNL (national lab)
standardize some serial stuff they were working with ... which quickly
becomes fibre-standard channel ("FCS", including some stuff I had done
in 1980, initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec)
... some competition with LANL standardization of Cray 100mbyte/sec
for HIPPI (and later serial version).
About the same time (1988) as being asked to help LLNL with what
becomes FCS (which IBM later uses as the base for FICON), the branch
office also asked if I could get involved in SCI
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface
A decade later I did some consulting for Steve Chen (who had designed the Cray XMP & YMP),
https://en.wikipedia.org/wiki/Steve_Chen_(computer_engineer)
https://en.wikipedia.org/wiki/Cray_X-MP
https://en.wikipedia.org/wiki/Cray_Y-MP
but by that time he was Sequent CTO (before IBM bought Sequent and
shut it down).
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
Sequent had used SCI for a (numa) 256 i486 machine
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA
Early Jan1992, cluster scale-up meeting with Oracle CEO, IBM/AWD
executive Hester tells Ellison that we would have 16-system clusters
by mid-92 and 128-system clusters by ye-92. Then late Jan1992, cluster
scale-up is transferred for announce as IBM Supercomputer (for
technical/scientific *ONLY*) and we are told that we can't work on
anything with more than four processors (we leave IBM a few months
later).
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster
survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
FCS &(/or) FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
some posts mentioning ha/cmp, fcs, sci, sequent, chen
https://www.garlic.com/~lynn/2024.html#54 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#22 Vintage Cray
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2022f.html#29 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#32 Cluster Systems
https://www.garlic.com/~lynn/2018d.html#57 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018b.html#53 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
https://www.garlic.com/~lynn/2015g.html#74 100 boxes of computer books on the wall
https://www.garlic.com/~lynn/2014m.html#140 IBM Continues To Crumble
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2009s.html#59 Problem with XP scheduler?
https://www.garlic.com/~lynn/2009e.html#7 IBM in Talks to Buy Sun
https://www.garlic.com/~lynn/2009.html#5 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
--
virtualization experience starting Jan1968, online at home since Mar1970
360 Card Boot
From: Lynn Wheeler <lynn@garlic.com>
Subject: 360 Card Boot
Date: 25 May, 2025
Blog: Facebook
(360) basic programming systems ... were all card based. There was a
BPS "loader" of about 100(?) cards ... behind which you placed TXT
decks ... the output of compilers and assemblers. CSC came out to
install (virtual machine) CP67 (precursor to VM370) at the univ (3rd
installation after CSC itself and MIT Lincoln Labs), and I mostly got
to play with it during my dedicated weekend 48hrs. At the time all the
source was on OS/360, assembled there, and the assembled output TXT
decks placed in a card tray with the BPS loader deck at the front. I
tended to use a felt pen to draw a diagonal stripe across the top of
each individual TXT deck with the module name (making it easy to
replace individual modules). The tray of cards would be placed in the
2540 card reader, dial in "00C", and hit the IPL button. The last
module in the deck was CP67 CPINIT, which would get control from the
BPS loader after all cards were read from the reader, and write the
storage image to disk. It was then possible to dial in the disk
address, hit IPL, and CPINIT would get control and reverse the
process, reading the storage image back into memory.
It was also possible to write a tray of cards to tape and do the
initial IPL from the tape drive (rather than the card reader).
There were also simple 2card, 3card, and 7card loaders. Some assembler
programs could have "PUNCH" statements at the front of the assembler
source, that would punch a 2, 3, or 7 card loader prefixing the
assembled TXT output ... which could be placed in the card reader and
loaded.
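A toy model (Python; not real 360 card images, and module names other
than CPINIT are made up) of the flow just described: IPL from the
reader brings in the BPS loader, the loader places the TXT modules
into storage, and CPINIT saves the storage image to disk so later IPLs
can come from the disk address:

def ipl_from_reader(deck):
    # IPL 00C: hardware brings in the loader, the loader reads the TXT decks
    storage = {}
    loader_cards, modules = deck[0], deck[1:]
    for name, text in modules:
        storage[name] = text                 # loader places each module in storage
    return storage                           # control passes to the last module read

def cpinit_save(storage, disk):
    disk["ipl_image"] = dict(storage)        # CPINIT writes the storage image to disk

def ipl_from_disk(disk):
    return dict(disk["ipl_image"])           # later IPL from disk reverses the process

deck = ["BPS loader", ("CPMAIN", "...txt..."), ("CPINIT", "...txt...")]
disk = {}
storage = ipl_from_reader(deck)
cpinit_save(storage, disk)
assert ipl_from_disk(disk) == storage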
from long ago and far away
https://www.mail-archive.com/ibm-main@bama.ua.edu/msg43867.html
I always called it "Basic Programming System", but officially "Basic
Programming Support" and "Basic Operating System"
http://www.bitsavers.org/pdf/ibm/360/bos_bps/C24-3420-0_BPS_BOS_Programming_Systems_Summary_Aug65.pdf
IBM 360
https://en.wikipedia.org/wiki/IBM_System/360
A little-known and little-used suite of 80-column punched-card utility
programs known as Basic Programming Support (BPS) (jocularly: Barely
Programming Support), a precursor of TOS, was available for smaller
systems.
... snip ...
I had taken a 2credit-hr intro to fortran/computers and at the end of
the semester was hired to rewrite 1401 MPIO in assembler for the
360/30. The univ was getting a 360/67 for tss/360 to replace the
709/1401, and temporarily, pending the 360/67 being available, the
1401 was replaced with a 360/30 (which had 1401 emulation and could
run in 1401 mode, so my rewriting it in 360 assembler wasn't really
needed). The univ shut down the datacenter on weekends and I would
have the place dedicated (although 48hrs w/o sleep made monday classes
difficult). I was given a stack of hardware & software manuals and got
to design and implement my own monitor, device drivers, interrupt
handlers, error recovery, storage management, and after a few weeks
had a 2000 card assembler program that took 30mins to assemble under
os/360 (a stand-alone monitor loaded with the BPS loader that could do
card->tape and tape->printer/punch concurrently). I then used an
assembler option to generate a version using os/360 system services to
run under os/360 (that version took 60mins to assemble, each DCB macro
taking 5-6mins). Within a year of taking the intro class, the 360/67
arrived and I was hired fulltime responsible for os/360 (tss/360 never
came to production use).
some past BPS Loader posts
https://www.garlic.com/~lynn/2025.html#79 360/370 IPL
https://www.garlic.com/~lynn/2024g.html#78 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024b.html#2 Can a compiler work without an Operating System?
https://www.garlic.com/~lynn/2023g.html#83 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2022.html#116 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#114 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#26 Is this group only about older computers?
https://www.garlic.com/~lynn/2022.html#25 CP67 and BPS Loader
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2017g.html#30 Programmers Who Use Spaces Paid More
https://www.garlic.com/~lynn/2007n.html#57 IBM System/360 DOS still going strong as Z/VSE
https://www.garlic.com/~lynn/2007f.html#1 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2006v.html#5 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2005f.html#16 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2003f.html#26 Alpha performance, why?
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#62 PLX
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Downfall
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 25 May, 2025
Blog: Facebook
Note AMEX and KKR were in competition for private-equity,
reverse-IPO(/LBO) buyout of RJR and KKR wins. Barbarians at the Gate
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away president of AMEX to help.
Then IBM has one of the largest losses in the history of US companies
and was being re-orged into the 13 "baby blues" in preparation for
breaking up the company (a take-off on the "baby bell" breakup a
decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
20yrs earlier, 1972, Learson tried (and failed) to block the
bureaucrats, careerists, and MBAs from destroying Watson
culture/legacy, pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
trivia: I was introduced to (USAF retired) John Boyd in the early 80s
and sponsored his briefings at IBM. Then 89/90, the Commandant of the
Marine Corps (approx. same number of people as IBM) leverages Boyd for
a corps makeover (at a time when IBM was desperately in need of
makeover). By the time Boyd passes in 1997, the USAF had pretty much
disowned him and it is the Marines at Arlington (and his effects go
to the Gray Research Center and Library in Quantico); the former commandant
continued to sponsor Boyd conferences at Quantico MCU.
The communication group was fighting off client/server and distributed
computing (trying to preserve the dumb terminal paradigm) and attempted to
block release of mainframe TCP/IP. When that was reversed, they changed
strategy: since they had corporate strategic responsibility for
everything that crossed datacenter walls .... it had to be released
through them; what ships gets aggregate 44kbytes/sec using nearly whole
3090 processor. I then add RFC1044 support and in some tuning tests at
Cray Research between Cray and 4341, I get sustained 4341 channel
throughput using only modest amount of 4341 CPU (something like 500
times improvement in bytes moved per instruction executed).
Earlier in the 80s, I had gotten HSDT, T1 (US&EU T1: 1.5mbits/sec
and 2mbits/sec; full-duplex; aggregate 300kbytes/sec and 400kbytes/sec) and
faster computer links and lots of conflict with the communication group
(60s, IBM had the 2701 telecommunication controller supporting T1, then
with IBM's move to SNA/VTAM in the 70s and the associated issues,
controllers appeared to be capped at 56kbit/sec links).
HSDT was working with the NSF director and was supposed to get $20M to
interconnect the NSF supercomputer centers. Then congress cuts the
budget, some other things happen and eventually an RFP is released (in
part based on what we already had running). NSF 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid).
As regional networks connect in, it becomes the NSFNET backbone
(precursor to modern internet).
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 360 Programming
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Programming
Date: 26 May, 2025
Blog: Facebook
I was undergraduate, but hired fulltime responsible for OS/360. CSC
came out to the univ to install CP67 (3rd installation after CSC itself
and MIT Lincoln Labs; it later morphs into VM370). It had 1052&2741
terminal support and could automagically do terminal type
identification, switching the port scanner type with the controller SAD
CCW. Univ. had some tty/ascii terminals ... and so I add ascii support
(integrated with the automatic terminal type identification). I then
want a single dial-in number ("hunt group") for all terminal types
... it didn't quite work; the IBM controller could change the port
scanner terminal type, but had taken a short cut and hard wired the
line baud rate.
This kicks off a univ clone controller project: build a channel
interface board for an Interdata/3 programmed to emulate the IBM
controller, with the additional ability to do automatic line baud
rate. This is then upgraded with an Interdata/4 for the channel
interface and a cluster of Interdata/3s for the port
interfaces. Interdata (and later Perkin/Elmer) sell it as an IBM clone
controller (and four of us were written up as responsible for some part
of the clone controller business)
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
Turn of the century, I visited an east coast datacenter that handled
most of the point-of-sale credit card terminal dialup calls east of the
Mississippi ... which were handled by a descendant of our Interdata box.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM clone/plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 360 Programming
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Programming
Date: 26 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#27 IBM 360 Programming
very early in the 80s, REX (before it was renamed REXX and released to
customers): I wanted to show it wasn't just another pretty scripting
language ... the objective was, working part-time for 3 months, to
rewrite a large assembler program (IPCS, dump analysis) in REX with ten
times the function and ten times the performance (coding tricks to have
interpreted REX running faster than the assembler). I finished early
and did an automated library that looked for common failure
signatures. I thought it would then ship to customers, but for whatever
reason, it didn't (even though it was in use by nearly every PSR and
the internal datacenters). I eventually get permission to give talks on
how it was implemented at customer user group meetings ... and within a
few months, similar implementations started appearing.
Later 3092 (3090 service processor) group asked about including it as
part of 3092 (almost 40yrs ago):
Date: 23 December 1986, 10:38:21 EST
To: wheeler
Re: DUMPRX
Lynn, do you remember some notes or calls about putting DUMPRX into an
IBM product? Well .....
From the last time I asked you for help you know I work in the
3090/3092 development/support group. We use DUMPRX exclusively for
looking at testfloor and field problems (VM and CP dumps). What I
pushed for back aways and what I am pushing for now is to include
DUMPRX as part of our released code for the 3092 Processor Controller.
I think the only things I need are your approval and the source for
RXDMPS.
I'm not sure if I want to go with or without XEDIT support since we do
not have the new XEDIT.
In any case, we (3090/3092 development) would assume full
responsibility for DUMPRX as we release it. Any changes/enhancements
would be communicated back to you.
If you have any questions or concerns please give me a call. I'll be
on vacation from 12/24 through 01/04.
... snip ... top of post, old email index
somebody did a CMS IOS3270 green card version (trivia: the 3092 ... the
3090 service processor ... was a pair of 4361s running a modified
VM370R6, and all the service screens were IOS3270) .... I've done a
rough translation to HTML:
https://www.garlic.com/~lynn/gcard.html
DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx
--
virtualization experience starting Jan1968, online at home since Mar1970
360 Card Boot
From: Lynn Wheeler <lynn@garlic.com>
Subject: 360 Card Boot
Date: 26 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#25 360 Card Boot
After transferring from IBM Cambridge Science Center to IBM San Jose
Research on the west coast, I got to wander around datacenters in
silicon valley, including disk bldg14/engineering and
bldg15/product-test across the street. They had been doing 7x24,
pre-scheduled, stand-alone testing (ipling FRIEND/? from cards or
tape). They mentioned that they had recently tried MVS, but it had
15min MTBF (requiring manual re-ipl) in that environment. I offer to
rewrite I/O supervisor making it bullet-proof and never fail, enabling
any amount of on-demand, concurrent testing greatly improving
productivity (still could IPL "FRIEND" but from virtual cards and
virtual card reader in virtual machine). I then write an internal-only
research report on the I/O integrity work and happen to mention the
MVS "15min MTBF", bringing down the wrath of the POK MVS organization
on my head.
getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
enhanced production systems for internal datacenter posts, CP67L,
CSC/VM, SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm
--
virtualization experience starting Jan1968, online at home since Mar1970
Is Parallel Programming Hard, And, If So, What Can You Do About It?
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Parallel Programming Hard, And, If So, What Can You Do About It?
Newsgroups: comp.arch
Date: Mon, 26 May 2025 15:36:22 -1000
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
Yup. And IIRC the IBM 3380 had a linear actuator with two heads per
arm, one covering the outer cylinders, the other the inner
cylinders. The tradeoff was shorter seeks, thus smaller seek time but
higher cost due to more heads.
re:
https://www.garlic.com/~lynn/2025c.html#18 Is Parallel Programming Hard, And, If So, What Can You Do About It?
https://www.garlic.com/~lynn/2025c.html#20 Is Parallel Programming Hard, And, If So, What Can You Do About It?
https://www.garlic.com/~lynn/2025c.html#21 Is Parallel Programming Hard, And, If So, What Can You Do About It?
original 3380 had 20 track spacings per data track; they then cut the
spacing in half, doubling the number of tracks per platter (and
doubling the capacity), then cut it again for triple the number of
tracks per platter (and triple the capacity).
doing some analysis moving data from 3350s to 3380s ... avg 3350
accesses per second divided by drive megabytes ... for avg
access/sec/mbyte. 3380 mbytes increased significantly more than
avg. access/sec ... could move all 3350 data to much smaller number of
3380s but with much worse performance/throughput.
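rough back-of-envelope of the accesses/sec/mbyte argument (the drive
capacities and access rates below are illustrative assumptions for the
arithmetic, not measured figures):

# back-of-envelope of the accesses/sec/mbyte argument; capacities and
# access rates are illustrative assumptions, not measured figures
def access_density(accesses_per_sec, mbytes):
    return accesses_per_sec / mbytes          # avg accesses/sec per mbyte

old = access_density(accesses_per_sec=30, mbytes=317)    # assumed 3350-class drive
new = access_density(accesses_per_sec=40, mbytes=1260)   # assumed double-capacity 3380-class
print("%.3f vs %.3f accesses/sec/mbyte" % (old, new))    # ~0.095 vs ~0.032
# capacity grew much faster than accesses/sec; packing the same data on
# fewer drives means fewer arms (total accesses/sec) serving the same load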
at customer user group get-togethers there were discussions about how
to convince the bean counters that performance/throughput sensitive
data needed to have much higher accesses/sec/mbyte. Eventually IBM
offers a 3380 with 1/3rd the data track spacing of the original 3380,
but only enabled for the same number of tracks as the original 3380 (as
a high performance/throughput drive, since the head only had to travel
1/3rd as far).
other trivia: the 2301 fixed head drum was effectively the same as the
2303 fixed head drum, but transferred on four heads in parallel: 1/4
the number of tracks, tracks four times larger, and four times the
transfer rate (still the same avg. rotational delay).
late 60s, the 2305 fixed head disk first appeared with the 360/85 and
block mux channels. There were two models, one with a single head per
data track and one with pairs of heads per data track (half the number
of data tracks and half the total capacity, same number of total
heads). The paired heads were offset 180 degrees and would transfer
from both heads concurrently for double the data rate with a quarter
avg rotational delay (instead of half avg rotational delay).
2305
http://www.bitsavers.org/pdf/ibm/2835/GA26-1589-5_2835_2305_Reference_Oct83.pdf
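worked numbers for the paired-head rotational delay (the revolution
time is an assumed round number; the half-rev vs quarter-rev relation
is the point):

# rotational delay sketch for the paired-head 2305 model
rev_ms = 10.0                             # one revolution, milliseconds (assumption)
single_head_avg = rev_ms / 2              # one head per track: avg wait half a rev
paired_head_avg = rev_ms / 4              # heads offset 180 degrees: avg quarter rev
print(single_head_avg, paired_head_avg)   # 5.0 2.5 ... plus double the data rate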
getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Downfall
Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 27 May, 2025
Blog: Facebook
1972, Learson tried (and failed) to block bureaucrats, careerists, and
MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of
management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
FS was completely different from 370 and was going to completely
replace it (during FS, internal politics was killing off 370 efforts;
the limited new 370 is credited with giving 370 system clone makers
their market foothold). One of the final nails in the FS coffin was
analysis by the IBM Houston Science Center that if 370/195 apps were
redone for an FS machine made out of the fastest available hardware
technology, they would have the throughput of a 370/145 (about 30 times
slowdown)
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
trivia: I continued to work on 360&370 all during FS, periodically
ridiculing what they were doing (which wasn't exactly a career
enhancing activity)
I was introduced to John Boyd in the early 80s and would sponsor his
briefings at IBM. In 89/90, the Marine Corps Commandant leverages Boyd
for makeover of the corps (at a time when IBM was desperately in need
of a makeover). Then IBM has one of the largest losses in the history
of US companies and was being reorganized into the 13 "baby blues" in
preparation for breaking up the company (take-off on "baby bell"
breakup decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
Early 80s, I had submitted an IBM speakup with supporting
documentation that I was significantly underpaid. I got back a response
from the head of HR saying that after a complete review of my entire
employment history, I was being paid exactly what I was supposed to
be. I then took the original and the reply and sent it back with a
cover letter saying I was being asked to interview upcoming graduates
for a new group that would work under my direction ... and they were
getting starting salary offers 1/3rd more than I was making. I never
got a written response, but within a few weeks, I got a 1/3rd raise
(putting me on the same level as the offers to the new college
graduates I was interviewing). Numerous people reminded me that
"business ethics" in IBM was an oxymoron.
Late 80s, AMEX and KKR were in competition for private-equity,
reverse-IPO(/LBO) buyout of RJR and KKR wins. Barbarians at the Gate
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away president of AMEX to help.
About the same time IBM brings in the former president of AMEX as CEO,
AMEX spins off its financial transaction outsourcing business as First
Data (which had previously reported to the new IBM CEO), in what was
the largest IPO up until that time (and included multiple
mega-mainframe datacenters). trivia: turn of the century I was asked to
look at performance at one of these datacenters: greater than 40
max-configured IBM mainframes (@$30M each), constantly rolling updates,
all running the same 450K cobol statement application, the number
needed to finish financial settlement in the overnight batch window
(they had a large performance group that had been responsible for care
and feeding for a couple decades, but had gotten somewhat myopically
focused). Using some different performance analysis technology, I was
able to find a 14% improvement. Interview for IBM System Magazine
(although some history info slightly garbled)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history
Today he's the chief scientist for First Data Corp., and his Web site
extends his influence into the current IBM* business and beyond.
... snip ...
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
misc. archive posts about getting 1/3rd raise
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2017.html#78 IBM Disk Engineering
https://www.garlic.com/~lynn/2014h.html#81 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
https://www.garlic.com/~lynn/2011g.html#12 Clone Processors
https://www.garlic.com/~lynn/2011g.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
https://www.garlic.com/~lynn/2010c.html#82 search engine history, was Happy DEC-10 Day
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Downfall
Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 27 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#31 IBM Downfall
Early 80s, I also get the HSDT project, T1 (1.5mbit/sec) and faster
computer links (both terrestrial and satellite) and lots of conflict
with the communication group (60s, the IBM 2701 telecommunication
controller supported T1, but the 70s transition to SNA/VTAM and the
associated issues seemed to cap controllers at 56kbit/sec links) ... and
was looking at more reasonable speeds for distributed operation. Also,
working with the NSF Director, I was supposed to get $20M to
interconnect the NSF supercomputer
centers. Then congress cuts the budget, some other things happen and
eventually an RFP is released (in part based on what we already had
running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid).
As regional networks connect in, it becomes the NSFNET backbone,
precursor to modern internet.
Mid-80s, the communication group is fighting off client/server and
distributed computing (preserving the dumb terminal paradigm) and
trying to block mainframe release of TCP/IP support. When they lose,
they claim that since they have corporate responsibility for
everything that crosses datacenter walls, it has to be released
through them. What ships gets aggregate 44kbytes/sec using nearly the
whole 3090 processor. I then do changes for RFC1044 support and in
some tuning tests at Cray Research between Cray and 4341, get
sustained 4341 channel throughput, using only a modest amount of 4341
processor (something like 500 times improvement in bytes moved per
instruction executed).
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Downfall
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 28 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#31 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#32 IBM Downfall
One of the things that happened with FS and the internal politics
killing 370: the lack of new 370 products during (& after) FS
was giving the clone 370 system makers their market foothold ... and
IBM sales&marketing had to fall back to "FUD" marketing.
Amdahl had won the battle to make ACS "360 compatible" ... then
ACS/360 was killed (folklore: IBM was concerned that it would advance
the state-of-the-art too fast and IBM would lose control of the
market) ... and Amdahl leaves IBM to form his own computer company
(before FS started). The following also lists some ACS/360 features
that don't show up until more than 20yrs later with ES/9000:
https://people.computing.clemson.edu/~mark/acs_end.html
When FS imploded there was a mad rush to get stuff back into the 370
product pipelines, including kicking off the quick&dirty 303x and 3081
efforts.
For the 303x channel director they took 158 engine with just the
integrated channel microcode (and no 370 microcode). A 3031 was two
158 engines (one with the integrated channel microcode and the other
just the 370 microcode). A 3032 was 168-3 reworked to use the 303x
channel director for external channels. A 3033 started out 168-3 logic
remapped to 20% faster chips.
The 3081 was supposed to be multiprocessor only, using some warmed over
FS technology. The initial 2-CPU 3081D had less aggregate MIPS than a
single CPU Amdahl. They doubled the processor cache size for the 3081K,
bringing it up to about the same aggregate MIPS as the single CPU
Amdahl (however, IBM MVS docs list MVS multiprocessor overhead as only
getting 1.2-1.5 times the throughput of a single processor ... so even
with approx the same aggregate MIPS as the single CPU Amdahl, an MVS
3081K would only have approx. .6-.75 times the throughput)
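rough arithmetic behind the .6-.75 figure (my sketch of the numbers
above, not IBM's own calculation):

# rough arithmetic behind the .6-.75 figure
amdahl_mips = 1.0                     # normalize single-CPU Amdahl to 1.0
per_3081k_cpu = amdahl_mips / 2       # each 3081K CPU ~ half; aggregate ~ equal
for mvs_2cpu_scaling in (1.2, 1.5):   # MVS 2-CPU throughput vs one such CPU
    print(round(mvs_2cpu_scaling * per_3081k_cpu, 2))   # 0.6 and 0.75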
After FS implodes, I had also gotten roped into helping with a 16-CPU
370 and we con'ed the 3033 processor engineers into working on it in
their spare time (a lot more interesting than remapping 168-3 logic to
20% faster chips). Everybody thought it was great until somebody tells
the head of POK that it could be decades before MVS had effective
16-CPU support (i.e. the MVS multiprocessor overhead playing the major
role ... and POK doesn't ship a 16-CPU multiprocessor until after the
turn of the century). The head of POK then asks some of us to never
visit POK again and directs the 3033 processor engineers, heads down
and no distractions.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
TCP/IP, Ethernet, Token-Ring
From: Lynn Wheeler <lynn@garlic.com>
Subject: TCP/IP, Ethernet, Token-Ring
Date: 28 May, 2025
Blog: Facebook
The new IBM Almaden Research bldg was heavily provisioned with CAT
wiring, assuming 16mbit TR .... however they found that ten mbit
Ethernet had higher aggregate LAN throughput over CAT wiring (than
16mbit T/R) and lower latency.
IBM AWD (workstation division) did their own cards for the PC/RT (PC/AT
16bit bus), including 4mbit T/R cards. Then for the RS/6000 (with
microchannel), they were told they couldn't do their own cards, but had
to use standard PS2 microchannel cards. However, the communication
group had severely performance kneecapped the PS2 microchannel cards
... and the microchannel 16mbit T/R cards had lower card throughput
than the PC/RT 4mbit T/R cards. Furthermore, $69 10mbit Ethernet cards
had 8.5mbit/sec card throughput (way higher than the $800 16mbit T/R
microchannel cards).
The IBM communication group was also fiercely fighting off release of
IBM mainframe TCP/IP support; when they lost, they changed their
strategy ... since they had corporate strategic responsibility for
everything that crossed datacenter walls, it had to be released through
them; ... what shipped got aggregate 44kbytes/sec throughput using
nearly the whole 3090 processor. I then added RFC1044 support and in
some tuning tests at Cray Research between Cray and 4341, got sustained
4341 channel throughput using only a modest amount of 4341 CPU
(something like 500 times improvement in bytes moved per instruction
executed).
RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Downfall
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 29 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#31 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#32 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#33 IBM Downfall
After graduating and joining IBM science center, one of my hobbies was
enhanced operating systems for internal datacenters (and the online
branch office sales&marketing support HONE systems were some of my
first and long time customers). I also got to attend customer user
group meetings (like SHARE) and drop by customers. The director of one
of the financial industry's largest IBM datacenters liked me to stop by
and talk technology. Somewhere along the way the IBM branch manager
managed to horribly offend the customer and in retaliation they were
ordering an Amdahl system (a lone Amdahl in a vast sea of IBM
"blue"). Up until that time Amdahl had been selling into the
scientific/technology and university markets, but this one would be the
1st for a "true blue" commercial account. I was then asked to go onsite
for 6-12 months (to help obfuscate why the Amdahl was ordered). I talked
it over with the customer and decided to decline the IBM offer. I was
then told that the branch manager was a good sailing buddy of the IBM
CEO and if I didn't do it, I could forget raises, promotions, and
career. One of the first times that I was told that in IBM, "business
ethics" was an oxymoron.
trivia: some of the MIT CTSS/7094 people went to the 5th flr for
Project MAC and did Multics and others went to the 4th flr for the IBM
science center and did virtual machines, the internal network,
performance tools, invented GML in 1969, etc. There was some friendly
rivalry between the 4th & 5th flrs; at one point I was able to point
out that I had more internal datacenters running my enhanced operating
systems than the total number of Multics installations that ever
existed.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
past posts mentioning branch manager horribly offending customer:
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s
https://www.garlic.com/~lynn/2025.html#64 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#19 60s Computers
https://www.garlic.com/~lynn/2024f.html#122 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024f.html#23 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024e.html#65 Amdahl
https://www.garlic.com/~lynn/2023g.html#42 IBM Koolaid
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#45 IBM 3081 TCM
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022d.html#21 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#88 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Downfall
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 29 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#31 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#32 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#33 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#35 IBM Downfall
When we were doing HA/CMP ... we spent a lot of time with the TA to
the FSD President (he was working 1st shift as TA, and 2nd shift he
was ADA programming for the latest FAA project). In Jan92, we
convinced FSD to go with HA/CMP for gov. supercomputers. A couple
weeks later cluster scaleup was being transferred for announce as IBM
supercomputer (for technical/scientific *only*) and we were told we
weren't allowed to work on anything with more than four processors (we
leave IBM a few months later).
We had been spending so much time in the Marriott on Democracy that
some of them started to think we were Marriott employees.
recent HA/6000, HA/CMP, LANL, LLNL, NCAR, FSC, SCI, etc
https://www.garlic.com/~lynn/2025c.html#24 IBM AIX
... after leaving IBM, we did a project with Fox & Template
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514/
Two mid air collisions 1956 and 1960 make this FAA procurement
special. The computer selected will be in the critical loop of making
sure that there are no more mid-air collisions. Many in IBM want to
not bid. A marketing manager with but 7 years in IBM and less than one
year as a manager is the proposal manager. IBM is in midstep in coming
up with the new line of computers - the 360. Chaos sucks into the fray
many executives- especially the next chairman, and also the IBM
president. A fire house in Poughkeepsie N Y is home to the technical
and marketing team for 60 very cold and long days. Finance and legal
get into the fray after that.
Joe Fox had a 44 year career in the computer business- and was a vice
president in charge of 5000 people for 7 years in the federal division
of IBM. He then spent 21 years as founder and chairman of a software
corporation. He started the 3 person company in the Washington D. C.
area. He took it public as Template Software in 1995, and sold it and
retired in 1999.
... snip ...
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
recently post mentioning HA/CMP & FSD email
https://www.garlic.com/~lynn/2025b.html#92 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#72 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#36 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2024f.html#67 IBM "THINK"
https://www.garlic.com/~lynn/2014d.html#52 [CM] Ten recollections about the early WWW and Internet
other posts mentioning work with Fox & company after leaving IBM
https://www.garlic.com/~lynn/2023d.html#82 Taligent and Pink
https://www.garlic.com/~lynn/2021e.html#13 IBM Internal Network
https://www.garlic.com/~lynn/2021.html#42 IBM Rusty Bucket
https://www.garlic.com/~lynn/2019b.html#88 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2019b.html#73 The Brawl in IBM 1964
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Mainframe
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe
Date: 30 May, 2025
Blog: Facebook
FS was completely different from 370 and was going to completely
replace it (internal politics was killing off 370 efforts during FS,
and the lack of new 370s is credited with giving the 370 clone makers
their market foothold ... along with forcing IBM sales&marketing to
fall back on FUD). One of the last nails in the FS coffin was analysis
by the IBM Houston Science Center that if 370/195 applications were
redone for an FS machine made out of the fastest available technology,
they would have the throughput of a 370/145 (a factor of 30 times
slowdown). When FS finally implodes there is a mad rush to get stuff
back into the 370 product pipelines, including kicking off the
quick&dirty 3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
They took 158 engine with just the integrated channel microcode for
the 303x channel director. A 3031 was two 158 engines, one with just
the integrated channel microcode and 2nd with just the 370
microcode. A 3032 was 168-3 reworked to use 303x channel director. A
3033 started out with 168-3 logic remapped to 20% faster chips.
The 3081 was supposed to be multiprocessor-only, starting with the
3081D that had lower aggregate MIPS than an Amdahl single
processor. They quickly doubled the processor cache sizes for the
3081K, bringing aggregate MIPS up to about the same as the Amdahl
single processor. However, MVS docs were that 2-CPU support only had
1.2-1.5 times the throughput of a single CPU (aka even with the same
aggregate MIPS as the Amdahl single processor, MVS 3081K throughput was
only about .6-.75 times, because of MVS's multiprocessor
overhead). Then they lash two 3081Ks together for a 4-CPU system to try
and get something with more MVS throughput than the single processor
Amdahl machine (MVS multiprocessor overhead increasing as the #CPUs
increased).
Also when FS imploded, I got roped into helping with a 16-CPU 370
multiprocessor (and we con the 3033 processor engineers into working on
it in their spare time, a lot more interesting than remapping 168-3
logic to 20% faster chips). Everybody thought it was really great until
somebody tells the head of POK that it could be decades before MVS had
(effective) 16-CPU support (POK doesn't ship a 16-CPU machine until
almost 25yrs later, after the turn of the century). The head of POK
then invites some of us to never visit POK again and directs the 3033
processor engineers, heads down and no distractions.
1988, Nick Donofrio approves HA/6000, originally for NYTimes to move
their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had VAXcluster
support in same source base with Unix; I do distributed lock manager
with VAXCluster semantics to ease ports; IBM Toronto was still long
way before having simple relational for PS2). Then S/88 product
administrator starts taking us around to their customers and gets me
to write a section for the corporate continuous availability strategy
document (it gets pulled when both Rochester/as400 and POK/mainframe,
complain they can't meet the objectives). Work is also underway to
port LLNL supercomputer filesystem (LINCS) to HA/CMP and working with
NCAR spinoff (Mesa Archive) to platform on HA/CMP.
Early Jan92, we have HA/CMP meeting with Oracle CEO, IBM/AWD executive
Hester tells Ellison that we would have 16-system clusters mid92 and
128-system clusters ye92. Then late Jan92, cluster scale-up is
transferred for announce as IBM Supercomputer (for
technical/scientific *ONLY*) and we are told we weren't allowed to
work with anything that had more than four systems (we leave IBM a few
months later).
FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster
survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
--
virtualization experience starting Jan1968, online at home since Mar1970
Is Parallel Programming Hard, And, If So, What Can You Do About
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Parallel Programming Hard, And, If So, What Can You Do About
It?
Newsgroups: comp.arch
Date: Sat, 31 May 2025 12:53:59 -1000
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
I was too flip in my answer, so here is, I think, a better one. The
"it" to which we are referring here is caching of write data.
So let's look at a possible scenario. Let's say the heads are at
cylinder 100. A write comes in for data that is at cylinder
300. Without write caching, the disk will move the heads to cylinder
300. Now lets say the next request is a read for data at cylinder 150.
If the write had been cached, the disk can handle the read with only a
50 cylinder move, then the write with a 150 cylinder move for a total
of 200 cylinders. Without write caching, the first move is 200
cylinders for the write, followed by 150 back for the read for a total
of 350. Thus the read data, which is presumably more time critical, is
delayed.
Overall, write caching improves performance, but if you don't want it,
then you can essentially not use it, either by forcing the writes to
go to the media, or not using command queuing at all.
Early 70s, as mainstream IBM was converting everything to virtual
memory, I got into a battle. Somebody came up with a (LRU?) page
replacement algorithm that would replace non-changed pages (which
didn't require a write before the replacement read) before changed
pages (which needed a write before being able to fetch the needed
page). Nearly a decade later, they finally realized that they were
replacing highly used, highly shared RO/non-changed pages ... before
replacing private, single-task, changed data pages (before they
realized it was possible to keep a pool of immediately available pages,
i.e. changed pages that had already been pre-written).
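a minimal sketch (illustrative python, not the actual CP/VM replacement
code) of the idea: keep a pool of pre-written page frames so replacement
can stay purely usage-based instead of preferring non-changed (often
shared, high-use) pages:

# minimal sketch of keeping a pool of pre-written page frames
from collections import deque

class Page:
    def __init__(self, pid):
        self.pid = pid
        self.changed = False          # dirty bit

class Replacer:
    def __init__(self, low_water=4):
        self.lru = deque()            # least-recently-used at the left
        self.clean_pool = deque()     # frames already written out, free to steal
        self.low_water = low_water

    def background_cleaner(self, write_page):
        # pre-write changed pages in LRU order so the clean pool never
        # runs dry; replacement order is still purely by usage
        while len(self.clean_pool) < self.low_water and self.lru:
            victim = self.lru.popleft()
            if victim.changed:
                write_page(victim)    # schedule/perform the page-out
                victim.changed = False
            self.clean_pool.append(victim)

    def steal_frame(self):
        # immediately available: no write needed before the frame can
        # be reused for the faulting page's read
        return self.clean_pool.popleft() if self.clean_pool else None

r = Replacer()
r.lru.extend(Page(i) for i in range(8))
r.lru[0].changed = True
r.background_cleaner(write_page=lambda p: None)   # stand-in for paging I/O
print(r.steal_frame().pid)                        # 0: reused without waiting on a write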
ATM financial networks started using the IBM (airline) TPF operating
system ... light-weight, but with a simple ordered arm queuing algorithm
for reads/updates/writes.
Then a little later in the 70s, an IBM tech in LA at a financial
institution redid it, looking at ATM use history and anticipating
account requests (that would result in read/update/write ordering that
hadn't happened yet). Under heavy load, it improved aggregate
throughput (and under lighter load it made little difference) ... sort
of delaying a 300cyl seek anticipating the likelihood of a transaction
(as yet to happen) that would involve a shorter seek.
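a small sketch (illustrative only, not the TPF or the LA bank code) of
ordered arm queuing plus the "anticipation" twist, where a long seek can
be briefly held back if history predicts a closer request:

# illustrative sketch of ordered (shortest-seek-first) arm queuing plus anticipation
def ordered_service(queue, arm_pos):
    # serve the queued cylinder request with the shortest seek
    queue.sort(key=lambda cyl: abs(cyl - arm_pos))
    return queue.pop(0)

def anticipating_service(queue, arm_pos, likely_next, threshold=100):
    # under heavy load: if the best queued request is a long seek but
    # account history predicts a transaction near the current arm
    # position, hold off briefly and serve the anticipated request
    best = min(queue, key=lambda cyl: abs(cyl - arm_pos))
    if (abs(best - arm_pos) > threshold and likely_next is not None
            and abs(likely_next - arm_pos) < threshold):
        return None                    # wait a bit for the predicted request
    queue.remove(best)
    return best

print(ordered_service([300, 150], arm_pos=100))            # 150: shorter seek first
print(anticipating_service([300], 100, likely_next=120))   # None: hold the 300cyl seek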
since sometime in the 80s, (at least) RDBMS have been using "write
caching" (write behind), where a sequential log/journal of "committed"
transactions is written and the actual RDBMS data writes happen in the
background. Failure recovery requires re-reading the log and forcing
pending writes for committed transactions.
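minimal sketch of the log-based write-behind idea (illustrative python,
not any particular RDBMS implementation): commit forces only the
sequential log record, data page writes happen in the background, and
recovery replays the log:

# minimal sketch of log-based write-behind
log = []          # sequential journal (forced/synchronous at commit)
pages = {}        # the "on-disk" data pages
dirty = {}        # committed updates not yet written to their pages

def commit(txn_id, updates):
    log.append(("commit", txn_id, dict(updates)))   # the only forced write
    dirty.update(updates)                           # page writes deferred

def background_writer():
    while dirty:
        page, value = dirty.popitem()
        pages[page] = value           # actual page write, off the commit path

def recover():
    # after a failure, redo committed work that never reached the pages
    for rec, txn_id, updates in log:
        if rec == "commit":
            pages.update(updates)

commit("T1", {"acct#123": 500})
recover()                             # e.g. crash before background_writer ran
print(pages)                          # {'acct#123': 500}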
Originally in a cluster environment, a transaction lock request from a
different system would force any (RDBMS) pending writes before granting
the other system the requested lock. I did a hack where I could append
queued/pending writes to the passing of the transaction lock to a
different system ... in the era of mbyte (shared, multi-system, cluster)
disks and gbyte interconnect.
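a hypothetical sketch of the hack (names and message format invented
for illustration, not the actual lock manager code): piggyback any
still-dirty block image on the lock grant message instead of forcing
the disk write first:

# hypothetical sketch: ship the dirty block with the lock grant
def grant_lock(resource, requester, dirty_cache, send):
    # dirty_cache: {resource: latest block image} of pending writes
    block = dirty_cache.pop(resource, None)
    send(requester, {"type": "grant", "resource": resource, "block": block})
    # the eventual disk write can happen lazily on whichever system

pending = {"page#42": b"latest committed image"}
grant_lock("page#42", "nodeB", pending, lambda node, msg: print(node, msg))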
HA/CMP & RDBMS posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
original sql/rdbms System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
getting to play disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 3090
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3090
Date: 31 May, 2025
Blog: Facebook
1980, STL (since renamed SVL) was bursting at the seams and moving 300
people (and 3270s) from the IMS (DBMS) group to offsite bldg. They
tried "remote 3270s" and found the human factors completely
unacceptable. They con me into doing channel-extender support so they
can place channel-attached 3270 controllers in the offsite bldg, with
no perceptible difference in human factors. A side-effect was that those
mainframe systems' throughput increased 10-15%. STL had been configuring
3270 controllers across all channels shared with DASD controllers. The
channel-extender hardware had significantly lower channel busy (for the
same amount of 3270 activity) than directly channel-attached 3270
controllers, resulting in increased system (DASD) throughput. There was
then some discussion about placing all 3270 controllers on
channel-extenders (even for controllers inside STL). Then there was an
attempt by the hardware vendor to get IBM to release my support;
however, a group in POK was trying to get some serial stuff released
and was concerned that if my stuff was in the field, it would make it
harder to release the POK stuff (and the request was vetoed)
There was a later, similar problem with 3090 and 3880
controllers. While 3880 controllers supported "data streaming"
channels capable of 3mbyte/sec transfer, they had replaced the 3830's
horizontal microprocessor with an inexpensive, slow vertical
microprocessor ... so for everything else (besides doubling the transfer
rate from 1.5mbyte to 3mbyte), the 3880 had much higher channel
busy. 3090 had originally configured the number of 3090 channels to
meet target system throughput (assuming the 3880 was the same as the
3830 but supporting 3mbyte transfer). When they found out how much
worse the 3880 channel busy actually was, they were forced to
significantly increase the number of channels to meet the target
throughput. The increase in the number of channels required an extra
TCM, and 3090 people semi-facetiously joked they would bill the 3880
organization for the increase in 3090 manufacturing costs. Eventually
sales/marketing respins the large increase in the number of 3090
channels as the 3090 being a wonderful I/O machine.
1988, IBM branch office asks me if I can help LLNL (national lab) get
some serial stuff they are working with standardized, which quickly
becomes the fibre-channel standard ("FCS", including some stuff I had
done in 1980), initially 1gbit/sec transfer, full-duplex, aggregate
200mbytes/sec. Then POK finally gets their stuff released (when it is
already obsolete) with ES/9000 as ESCON (initially 10mbytes/sec,
increasing to 17mbytes/sec). Then POK becomes involved in "FCS" and
defines a heavy-weight protocol that significantly reduces the
throughput, which eventually is released as FICON.
The latest public benchmark I've found is z196 "Peak I/O" getting 2M
IOPS using 104 FICON (about 20K IOPS/FICON). About the same time a FCS
was announced for E5-2600 server blades claiming over a million IOPS
(two such FCS with higher throughput than 104 FICON). Note IBM docs
recommended that SAPs (system assist processors that do the actual I/O)
be kept to 70% CPU (which would be more like 1.5M IOPS). Also no CKD
DASD have been made for decades, all being simulated on industry
standard fixed-block devices.
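one reading of the arithmetic above (sketch only; 70% of 2M is 1.4M,
i.e. the "more like 1.5M" in round numbers):

# one reading of the z196 "Peak I/O" arithmetic
peak_iops = 2_000_000
ficon = 104
print(peak_iops // ficon)        # 19230 ... "about 20K IOPS/FICON"
sap_cap = 0.70                   # recommended max SAP busy
print(int(peak_iops * sap_cap))  # 1400000 ... roughly the "more like 1.5M"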
refs:
https://en.wikipedia.org/wiki/ESCON
https://en.wikipedia.org/wiki/Fibre_Channel
... above says 100mbyte/direction in 1997, but we had some in 1992
https://en.wikipedia.org/wiki/FICON
Note IBM channel protocol was half-duplex with an enormous amount of
end-to-end protocol chatter (per each CCW in a channel program and the
associated busy latency) with control units. Native FICON effectively
streamed a download of much of the channel program (equivalent) to the
controller equivalent, eliminating the enormous end-to-end protocol
chatter and the half-duplex busy latency.
Note also max. configured z196 benchmarked at 50BIPS, while there were
E5-2600 server blades benchmarking at 500BIPS (ten times z196 and a
rack of server blades might have 32-64 such blades, potentially 640
times max. configured z196).
getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM & DEC DBMS
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM & DEC DBMS
Date: 01 Jun, 2025
Blog: Facebook
I had lots of time on an early engineering E5/4341 and in Jan1979, an
IBM branch office found out about it and cons me into doing a benchmark
for a national lab looking at getting 70 for a compute farm (sort of
the leading edge of the coming cluster supercomputing tsunami). The
E5/4341 clock was reduced 20% compared to the production models that
would ship to customers. Then a small cluster of five 4341s had higher
throughput than the IBM high-end 3033 mainframe, at much lower cost and
with less floor space and environmentals. Then in the 80s, 4300s were selling
into the same mid-range market as DEC VAX for small unit number
orders. The big difference was large corporations ordering hundreds of
VM/4341s at a time for placing out in departmental areas (sort of the
leading edge of the coming distributed computing tsunami). Spring
1979, some USAFDC (in the Pentagon) wanted to come by to talk to me
about 20 VM/4341 systems; the visit kept being delayed, and by the time
they came by (six months later), it had grown from 20 to 210. Old
archived post with a decade of DEC VAX numbers, sliced&diced by model,
year, US/non-US:
https://www.garlic.com/~lynn/2002f.html#0
Late 70s, besides getting to play disk engineer in bldg14&15, I
was also working with Jim Gray and Vera Watson on the original
SQL/Relational (System/R) and we manage to do tech transfer to
Endicott ("under the radar" while company was preoccupied with the
next great DBMS, "EAGLE"). Then "EAGLE" implodes and request is made
for how fast can System/R be ported to MVS (which is eventually
announced as DB2, originally for decision support only). Jim Gray
departs SJR in fall of 1980 for Tandem and tries to palm off stuff on
me (BofA has an early System/R pilot, looking at getting 60 VM/4341s).
In 1988, Nick Donofrio approves HA/6000, originally for NYTimes to
move their newspaper system off DEC VAXcluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix that had VAXcluster
support in same source base with Unix; I do distributed lock
manager with VAXCluster semantics to ease ports; IBM Toronto was
still long way before having simple relational for PS2). Then S/88
product administrator starts taking us around to their customers and
gets me to write a section for the corporate continuous
availability strategy document (it gets pulled when both
Rochester/as400 and POK/mainframe, complain they can't meet the
objectives). Work is also underway to port LLNL supercomputer
filesystem (LINCS) to HA/CMP and working with NCAR spinoff (Mesa
Archive) to platform on HA/CMP.
Early Jan1992, cluster scale-up meeting with Oracle CEO, IBM/AWD
executive Hester tells Ellison that we would have 16-system clusters
by mid-92 and 128-system clusters by ye-92. I was also working with
IBM FSD and convince them to go with cluster scale-up for government
supercomputer bids ... and they inform the IBM Supercomputer
group. Then late Jan1992, cluster scale-up is transferred for announce as
IBM Supercomputer (for technical/scientific *ONLY*) and we are told we
couldn't work on anything with more than four systems (we leave IBM a
few months later).
IBM concerned that RS/6000 will eat high-end mainframe (industry
benchmark, number of program iterations compared to MIPS reference
platform). 1993:
ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS
The executive we had reported to for HA/CMP goes over to head up
Somerset/AIM (apple, ibm, motorola). RIOS/Power was multi-chip w/o
bus/cache consistency (no SMP). AIM would do a single chip with the
motorola 88k bus/cache (supporting SMP configurations). 1999:
single chip Power/PC 440: 1,000MIPS.
original sql/relational System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster
survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
playing disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
--
virtualization experience starting Jan1968, online at home since Mar1970
SNA & TCP/IP
From: Lynn Wheeler <lynn@garlic.com>
Subject: SNA & TCP/IP
Date: 01 Jun, 2025
Blog: Facebook
there was a claim that some customers had so many 3270 coax runs that
it was starting to exceed bldg load limits ... supposedly motivating
token-ring and CAT wiring.
1980, STL (since renamed SVL) was bursting at the seams and moving 300
people (and 3270s) from the IMS (DBMS) group to offsite bldg. They
tried "remote 3270s" and found the human factors completely
unacceptable. They con me into doing channel-extender support so they
can place channel-attached 3270 controllers in the offsite bldg, with
no perceptible difference in human factors. A side-effect was that
throughput for those mainframe systems increased 10-15%. STL had been
configuring 3270 controllers across all channels shared with DASD
controllers. The channel-extender hardware had significantly lower
channel busy (for the same amount of 3270 activity) than directly
channel-attached 3270 controllers, resulting in increased system (DASD)
throughput. There was
then some discussion about placing all 3270 controllers on
channel-extenders (even for controllers inside STL).
IBM workstation division for PC/RT workstation did their own 4mbit T/R
card ... but for RS/6000 microchannel, they were told they couldn't do
their own cards, but had to use standard PS2 cards. The communication
group was fiercely fighting off client/server and distributed
computing (trying to protect their dumb terminal paradigm) and had
severely performance kneecaped PS2 microchannel cards. The 16mbit T/R
microchannel card had lower throughput than the PC/RT 4mbit T/R
card. Then for the new Almaden bldg, they found that 10mbit Ethernet
over cat wiring had higher aggregate LAN throughput than 16mbit T/R
over same wiring. Also $69 10mbit ethernet card had significantly
higher throughput than the $800 16mbit T/R microchannel cards.
Early in the 80s, I had gotten HSDT, T1 (US&EU T1; 1.5mbits/sec
and 2mbits/sec; full-duplex; aggregate 300kbytes and 400kbytes) and
faster computer links (both terrestrial and satellite) and lots of
conflict with communication group (60s, IBM had 2701 telecommunication
controller supporting T1, then with IBM's move to SNA/VTAM in the 70s
and the associated issues, appeared to cap controllers at 56kbit/sec
links).
HSDT was working with the NSF director and was supposed to get $20M to
interconnect the NSF supercomputer centers. Then congress cuts the
budget, some other things happen and eventually an RFP is released (in
part based on what we already had running). NSF 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid).
As regional networks connect in, it becomes the NSFNET backbone,
precursor to modern internet.
The communication group was also fighting off the release of mainframe
TCP/IP support. When they lost, they changed their tactic: since they
had corporate strategic responsibility for everything that crossed
datacenter walls, it had to be released through them. What shipped got
an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I
then add support for RFC1044 and in some tuning tests at Cray Research
between Cray and 4341, got sustained 4341 channel throughput using only
a modest amount of 4341 processor (something like 500 times improvement
in bytes moved per instruction executed).
Univ. study in the late 80s, found that VTAM LU6.2 pathlength was
something like 160k instructions while a typical (BSD 4.3 tahoe/reno)
UNIX TCP pathlength was 5k instructions.
Later in the 90s, the communication group subcontracted TCP/IP support
(implemented directly in VTAM) to silicon valley contractor. What he
initially demo'ed had TCP running much faster than LU6.2. He was then
told that everybody "knows" that a "proper" TCP implementation is much
slower than LU6.2 ... and they would only be paying for a "proper"
implementation.
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970
SNA & TCP/IP
From: Lynn Wheeler <lynn@garlic.com>
Subject: SNA & TCP/IP
Date: 02 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#41 SNA & TCP/IP
channel-extender vendor?
NSC ... initially they implied that the 710 was full duplex ... but
their software never really used the 710 in that manner ... and I
started getting lots of collisions; needed to use 720 satellite
adapters to simulate full-duplex until they came out with the 715. I
had done the support in 1980, and then the serial group in POK (that
eventually ships ESCON more than a decade later) got the request to
have my software released vetoed (concerned that if it was in the
market it would make it harder to justify releasing their stuff).
funny ... for certain types of transmission errors, I would simulate a
CSW channel check. When IBM wouldn't release my support, NSC reverse
engineered it and duplicated it. This comes up 6-7 yrs later when the
3090 product administrator tracks me down. 3090 channels had been
designed to have 3-5 channel checks over a year period, aggregate for
all 3090s. There was an industry service that collected EREP data from
mainframe customers (both IBM and IBM clone) and published summarized
data; 3090s showed a total of 20 channel checks aggregate for a year
period for all 3090s ... and they attributed the additional 15
reported channel checks to customers running the NSC channel extender
support. The 3090 product administrator asked if I could do something
about it. I did a little research and determined that for
channel-extender purposes, simulating IFCC (interface control check)
would result in the same actions (as channel check) and got NSC to
change their software.
For related info, see RFC1044
https://www.rfc-editor.org/info/rfc1044
for support I added to mainframe TCP/IP.
Note, I could get both T1 and T3 from NSC routers (as well as a dozen+
Ethernet ports, FDDI, and a bunch of other interfaces) ... which is what
I was using in working with the NSF director for the NSF Supercomputer
datacenter support (as well as other gov. agencies). The ESCON spec
was tweaked, improving transmission by about 10% and making it
full-duplex, getting about 40+mbytes/sec aggregate (rather than ESCON's
10mbytes/sec, later improved to 17mbytes/sec) ... which was used in the
RS/6000 for SLA (serial link adapter); however, it was only useful for
talking to other RS/6000s until we con NSC into adding an SLA feature
to their router. This was then upgraded to fibre-channel ("FCS") in 1992.
what got me into it: when I transferred from the science center to San
Jose research in the 70s, I got to wander around silicon valley
datacenters (both IBM and non-IBM), including disk bldg14/engineering
and bldg15/product-test across the street. They were running 7x24,
prescheduled, stand-alone testing and had mentioned they had recently
tried MVS, but it had 15min MTBF (requiring manual re-ipl) in that
environment. I offered to rewrite the I/O supervisor to make it
bullet-proof and never fail, allowing any amount of on-demand,
concurrent testing, greatly improving productivity. STL was one of the
places running my enhanced production operating system, so that
contributed to requesting me to also do channel-extender support. I did
an (internal only) research report mentioning the MVS 15min MTBF,
bringing down the wrath of the MVS organization on my head.
Many standard VM systems were claiming quarter to third of a second
system response (while the MVS systems rarely achieved even one second
response). I was clocking .11sec system response for my systems. In
the early 80s, there were studies showing .25sec response improved
productivity. 3277/3272 had .086sec hardware response ... so needed at
least .164sec system response for .25sec. This was when the 3278
appeared, where lots of the electronics moved back to the 3274
controller, greatly increasing coax protocol chatter, and hardware
response went to .3-.5secs (depending on amount of data) ... making it
impossible to achieve .25sec (unless the mainframe had a time machine
to send transmission into the past). Letters to the 3278 product
administrator got the response that the 3278 wasn't for interactive
computing but data entry.
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970