From: Lynn Wheeler <lynn@garlic.com> Subject: Library Catalog Date: 26 Jul, 2025 Blog: Facebook
re:
By the early 80s, the online NIH NLM had a problem with answers to queries: it would return thousands of answers, and as additional terms were added, out around 6-8 terms it would go bimodal between thousands of answers and zero. Along came the "Grateful Med" query app on Apple ... instead of returning the answers, it returned the count of answers, and the holy grail became finding a query with more than zero and less than 100 answers.
"Grateful Med" refs:
https://pubmed.ncbi.nlm.nih.gov/10304249/
https://pubmed.ncbi.nlm.nih.gov/2407046/
https://pubmed.ncbi.nlm.nih.gov/35102837/
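As a rough illustration of that count-first strategy (hypothetical Python sketch against a toy in-memory catalog, not the actual Grateful Med implementation), each candidate query asks only for the hit count, adding terms until the count lands between 1 and 99:

    # Hypothetical sketch of the count-first search strategy -- the toy
    # catalog and hit_count() stand in for the real NLM catalog query.
    TOY_CATALOG = [
        {"aspirin", "cardiology", "trial"},
        {"aspirin", "headache"},
        {"aspirin", "cardiology", "stroke"},
        # ... thousands more records in a real catalog
    ]

    def hit_count(terms):
        """Count records matching ALL terms (the only thing returned)."""
        return sum(1 for rec in TOY_CATALOG if set(terms) <= rec)

    def refine(candidate_terms, lo=1, hi=99):
        """Add terms one at a time until the hit count lands in [lo, hi].
        Returns (query, count), or None when the count jumps straight from
        'thousands' to zero (the bimodal behavior described above)."""
        query = []
        for term in candidate_terms:
            query.append(term)
            n = hit_count(query)
            if lo <= n <= hi:
                return query, n
            if n == 0:
                return None        # over-constrained
        return None                # still too many answers

    print(refine(["aspirin", "cardiology", "stroke"]))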
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Chip Design (LSM & EVE) Date: 27 Jul, 2025 Blog: Facebook
70s, IBM Los Gatos lab did the LSM (Los Gatos State Machine) ... that ran chip design logic verification, 50k times faster than IBM 3033 ... included clock support that could be used for chips with asynchronous clocks and analog circuits ... like electronic/thin-film disk head chips.
Then in the 80s there was EVE (Endicott Verification Engine) that ran faster and handled larger VLSI chips (than LSM), but assumed synchronous clock designs. Disk Engineering had been moved offsite (temporarily to bldg "86", just south of the main plant site, while bldg "14" was getting seismic retrofit) and got an EVE.
I also had the HSDT project (T1 and faster computer links, both terrestrial and satellite), mostly done out of LSG, that included a custom-designed 3-dish Ku-band satellite system (Los Gatos, Yorktown, and Austin). IBM San Jose had done a T3 Collins digital radio microwave complex (centered on bldg12 on the main plant site). Set up a T1 circuit from bldg29 (LSG) to bldg12, and then bldg12 to bldg86. Austin was in the process of doing the 6-chip RIOS for what becomes RS/6000 ... and being able to get fast turnaround on chip designs between Austin and the bldg86 EVE is credited with helping bring the RIOS chip design in a year early.
trivia: when transferred from Science Center to Research in San Jose, got to wander around Silicon Valley datacenters, including disk engineering/bldg14 and product test/bldg15 across the street. They were running 7x24, prescheduled, stand-alone testing and commented that they had recently tried MVS, but it had 15min MTBF (in that environment), requiring manual reboot. I offered to rewrite the I/O supervisor, making it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing ... greatly improving productivity.
Bldg15 then got an engineering 3033 (first outside of POK 3033 processor
engineering) and since disk testing only used a percent or two of CPU,
we scrounged a 3830 disk controller and 3330 disk drive string and set up
our own private online service. At the time the air-bearing simulation
(for thin-film disk heads) was getting a couple of turnarounds a month on
the SJR 370/195. We set it up on the bldg15 3033 and they were able to get
several turnarounds a day. 3370 was the first thin-film head.
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/
1988, get HA/6000 project (also IBM Los Gatos lab), initially for
NYTimes to migrate their newspaper system (ATEX) off VAXCluster to
RS/6000. I then rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when we start doing technical/scientific cluster scaleup with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (that have VAXCluster support in the same source base with UNIX
... Oracle, Sybase, Ingres, Informix). Was working with Hursley 9333s
and hoping they could be upgraded to interoperate with FCS (planning for
HA/CMP high-end).
Early Jan1992, in a meeting with Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid-92 and 128-system clusters ye-92. Mid Jan1992, presentations with FSD convince them to use HA/CMP cluster scaleup for gov. supercomputer bids. Late Jan1992, cluster scaleup is transferred to be announced as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work with anything that has more than 4-systems (we leave IBM a few months later).
Some concern that cluster scaleup would eat the mainframe .... 1993
MIPS benchmark (industry standard, number of program iterations
compared to reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS
The executive we had been reporting to goes over to head up
Somerset/AIM (Apple, IBM, Motorola) ... single-chip power/pc with
Motorola 88k bus enabling shared-memory, tightly-coupled,
multiprocessor system implementations.
Sometime after leaving IBM, brought into small client/server startup as consultant. Two former Oracle people (that were in the Ellison/Hester meeting) are there responsible for something they call "commerce server" and want to do payment transactions on the server. The startup also invented this technology they call SSL/HTTPS, that they want to use. The result is now frequently called e-commerce. I have responsibility for everything between webservers and the payment networks.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
electronic commerce & payment networks
https://www.garlic.com/~lynn/subnetwork.html#gateway
posts mentioning Los Gatos LSM and EVE (endicott verification engine)
https://www.garlic.com/~lynn/2023f.html#16 Internet
https://www.garlic.com/~lynn/2023b.html#57 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2021i.html#67 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021c.html#53 IBM CEO
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2014b.html#67 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014b.html#5 IBM Plans Big Spending for the Cloud ($1.2B)
https://www.garlic.com/~lynn/2010m.html#52 Basic question about CPU instructions
https://www.garlic.com/~lynn/2007o.html#67 1401 simulator for OS/360
https://www.garlic.com/~lynn/2007l.html#53 Drums: Memory or Peripheral?
https://www.garlic.com/~lynn/2007h.html#61 Fast and Safe C Strings: User friendly C macros to Declare and use C Strings
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2002j.html#26 LSM, YSE, & EVE
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Mainframe Networking and LANs Date: 27 Jul, 2025 Blog: Facebook
Mid-80s, the communication group was fighting release of mainframe tcp/ip support. When they lost, they changed tactics and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then do RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like 500 times improvement in bytes moved per instruction executed).
There were also claims about how much better token-ring was than ethernet. IBM AWD (workstation) had done their own cards for PC/RT (16bit, PC/AT bus) including 4mbit token-ring card. Then for RS/6000 (w/microchannel), they were told they could not do their own cards, but had to use the (communication group heavily performance kneecapped) PS2 cards (example PS2 16mbit T/R card had lower card throughput than the PC/RT 4mbit T/R card).
New Almaden Research bldg was heavily provisioned with IBM CAT wiring, supposedly for 16mbit T/R, but found that running 10mbit ethernet (over the same wiring) had higher aggregate throughput (8.5mbit/sec) and lower latency. Also $69 10mbit ethernet cards had much higher card throughput (8.5mbit/sec) than the $800 PS2 16mbit T/R cards. Also for a 300 workstation configuration, the price difference (300*$800=$240,000 versus 300*$69=$20,700, a difference of $219,300) could get several high-performance TCP/IP routers with IBM (or non-IBM) mainframe channel interfaces, 16 10mbit Ethernet LAN interfaces, Telco T1 & T3 options, 100mbit/sec FDDI LAN options and other features ... say 300 workstations spread across 80 high-performance 10mbit Ethernet LANs.
Late 80s, a senior disk engineer got a talk scheduled at the internal, annual, world-wide communication group conference, supposedly on 3174 performance. However he opened the talk with the comment that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing the mainframe to more distributed-computing-friendly platforms. They had come up with a number of solutions, but they were constantly being vetoed by the communication group (having a stranglehold on mainframe datacenters with their corporate ownership of everything that crossed datacenter walls). The disk division exec's partial countermeasure was investing in distributed computing startups using IBM disks, and we would periodically get asked to drop by the investments to see if we could offer any help.
Wasn't just disks, and a couple of years later, IBM has one of the largest
losses in the history of US companies and was being reorged into the
13 baby blues in preparation for breaking up the company (take-off
on the "baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get a call from the bowels of (corp
hdqtrs) Armonk asking us to help with the corporate breakup. Before we
get started, the board brings in the former AMEX president as CEO to
try and save the company, who (somewhat) reverses the breakup (but it
wasn't long before the disk division was "divested").
other trivia: 1980, STL (since renamed SVL) was bursting at the seams and was moving 300 people (& 3270s) from the IMS group to an offsite bldg, with dataprocessing service back to the STL datacenter. They had tried "remote 3270", but found the human factors totally unacceptable. I get con'ed into doing channel extender support, allowing channel-attached 3270 controllers to be placed at the offsite bldg with no perceptible difference in human factors. An unintended side-effect was those IMS 168-3 systems saw 10-15% improvement in throughput. The issue was STL had been spreading the directly channel-attached 3270 controllers across channels with 3830/3330 disks. The channel extender boxes had much lower channel busy (for the same amount of 3270 activity), reducing interference with disk throughput (and there was some consideration of moving *ALL* 3270 channel-attached controllers to channel extender boxes).
more trivia: After channel-extender, early 80s, I had got HSDT, T1 and
faster computer links (both satellite and terrestrial) and lots of
battles with the communication group (60s, IBM had 2701 supporting T1 but
the 70s move to SNA/VTAM and its issues meant controller links were capped
at 56kbits/sec). Was also working with the NSF director and was supposed to
get $20M to interconnect the NSF supercomputing centers. Then congress
cuts the budget, some other things happen and eventually an RFP is
released (in part based on what we already had running). NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.
1988, IBM branch asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980; initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec). Then POK manages to get their stuff released as ESCON (when it is already obsolete, initially 10mbyte/sec, later upgraded to 17mbyte/sec). Then some POK engineers become involved with "FCS" and define a heavy-weight protocol that significantly reduces throughput, eventually ships as FICON. 2010, z196 "Peak I/O" benchmark gets 2M IOPS using 104 FICON (20K IOPS/FICON). Also 2010, FCS announced for E5-2600 server blades claiming over a million IOPS (two such FCS have higher throughput than 104 FICON). Note: IBM docs recommend SAPs (system assist processors that do actual I/O) be kept to 70% CPU, or about 1.5M IOPS. Also no CKD DASD has been made for decades, all being simulated on industry standard fixed-block devices.
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
Demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Mainframe Networking and LANs Date: 27 Jul, 2025 Blog: Facebook
re:
long ago and far away: co-worker responsible for the science center
wide-area network (that grows into the internal corporate, non-SNA,
network; larger than arpanet/internet from just about the beginning
until sometime mid/late 80s, about the time it was forced to convert to
SNA; the technology had also been used for the corporate-sponsored univ
BITNET). Ref by one of the science center inventors of GML (precursor
to SGML&HTML) in 1969
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
Edson (passed aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Mainframe Networking and LANs Date: 27 Jul, 2025 Blog: Facebook
re:
misc. other details ...
OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems
Interconnection standards to become the global protocol for computer
networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture(Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."
... snip ...
Original JES NJE came from HASP (that had "TUCC" in card cols 68-71) ... and had numerous problems with the internal network. It started out using spare entries in the 255-entry pseudo device table ... usually about 160-180 ... however the internal network had quickly passed 255 entries in the 1st half of the 70s (before NJE & VNET/RSCS release to customers) ... and JES would trash any traffic where the origin or destination node wasn't in its local table. Also the network fields had been somewhat intermixed with job control fields (compared to the cleanly layered VM370 VNET/RSCS) and traffic between MVS/JES systems at different release levels had a habit of crashing the destination MVS (infamous case of Hursley (UK) MVS systems crashing because of changes in a San Jose MVS JES). As a result, MVS/JES systems were restricted to boundary nodes behind a protected VM370/RSCS system (where a library of code had accumulated that knew how to rewrite NJE headers between the origin node and the immediately connected destination node). JES NJE was finally upgraded to support a 999-node network ... but after the internal network had passed 1000 nodes.
HASP, ASP, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
For a time, the person responsible for AWP164 (becomes APPN) and I reported to the same executive ... and I would periodically kid him that he should come over and work on real networking (TCP/IP) because the SNA people would never appreciate him. When it came time to announce APPN, the SNA group "non-concurred" ... the APPN announcement then was carefully rewritten to NOT imply any relationship between APPN and SNA.
Late 80s, univ. did analysis of VTAM LU6.2 ... finding 160k pathlength compared to UNIX workstation (BSD reno/tahoe) TCP ... 5k pathlength.
First half of the 90s, the communication group hired a silicon valley contractor to implement TCP/IP directly in VTAM. What he demonstrated was TCP running much faster than LU6.2. He was then told that "everybody" knows that a "proper" TCP implementation is much slower than LU6.2 ... and they would only be paying for a "proper" TCP implementation.
I had taken a two-credit-hour intro to fortran/computers. The univ was getting a 360/67 for tss/360, replacing 709/1401, but tss/360 didn't come to fruition, so the 360/67 came in within a year of my taking the intro class and I was hired fulltime responsible for OS/360 (univ. shutdown the datacenter on weekends and I had the place dedicated to myself, but 48hrs w/o sleep made my monday classes hard). Then CSC came out to install CP67 (precursor to vm370 virtual machine, 3rd install after CSC itself and MIT Lincoln Labs) and I mostly play with it during my dedicated weekend time. It came with 1052 & 2741 terminal support, including automagic terminal type identification (used SAD CCW to switch the port scanner terminal type). Univ had some number of ASCII terminals (TTY 33&35) and I add TTY terminal support to CP67 (integrated with automagic terminal type id). I then want to have a single dialup number ("hunt group") for all terminals. Didn't quite work; although the port scanner type could be changed, IBM had taken a short cut and hard-wired the line speed.
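A rough sketch of the automagic terminal-type idea (illustrative only, not CP67 code; set_scanner_type and probe below are hypothetical stand-ins for the SAD CCW switch and the write/read used to check for a sane response):

    # Illustrative sketch -- not actual CP67 code.
    TERMINAL_TYPES = ["1052", "2741", "TTY"]

    def identify_terminal(line, set_scanner_type, probe):
        """Cycle the port scanner through terminal types until one answers."""
        for ttype in TERMINAL_TYPES:
            set_scanner_type(line, ttype)   # roughly what the SAD CCW accomplished
            if probe(line, ttype):          # clean response at this type?
                return ttype
        # With line speed hard-wired per port (the IBM shortcut), a terminal
        # dialing a wrong-speed port in the hunt group never probes cleanly.
        return None

    # toy simulation: the caller is really a TTY
    set_type = lambda line, t: None
    probe = lambda line, t: t == line["actual_type"]
    print(identify_terminal({"actual_type": "TTY"}, set_type, probe))   # -> TTY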
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
This kicks off a univ. project to build our own IBM terminal controller:
build a 360 channel interface card for an Interdata/3 programmed to emulate
an IBM 360 controller, with the addition of doing line auto-baud. Then the
Interdata/3 is upgraded to an Interdata/4 for the channel interface and a
cluster of Interdata/3s for port interfaces. Interdata (and later
Perkin-Elmer) sells it as a 360 clone controller, and four of us are
written up for (some part of) the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
trivia: when the ASCII/TTY port scanner first arrived for the IBM controller, it came in a Heathkit box.
Selectric-based terminals ... 1052, 2740, 2741 ... used a tilt/rotate code to select the ball character position to strike the paper. Different balls could have different character sets ... and the support could translate back & forth between whatever character set was used by the computer and the selectric ball that was currently loaded.
Selectric 1961
https://en.wikipedia.org/wiki/IBM_Selectric
Use as a computer terminal
https://en.wikipedia.org/wiki/IBM_Selectric#Use_as_a_computer_terminal
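As a minimal illustration of that back-and-forth translation (the tilt/rotate values below are made up, not an actual Selectric ball layout), one table per ball lets the same support handle different character sets just by swapping tables:

    # Hypothetical example -- tilt/rotate values are illustrative only.
    BALL_TABLE = {       # character -> (tilt, rotate) for one particular ball
        "A": (0, 1), "B": (0, 2), "C": (0, 3),
        "1": (1, 1), "2": (1, 2), "3": (1, 3),
    }
    REVERSE = {tr: ch for ch, tr in BALL_TABLE.items()}

    def to_ball(text):
        """Translate the computer's characters to strike codes for the loaded ball."""
        return [BALL_TABLE[c] for c in text]

    def from_ball(codes):
        """Translate keyboard strike codes back to the computer's character set."""
        return "".join(REVERSE[tr] for tr in codes)

    print(to_ball("AB1"))               # [(0, 1), (0, 2), (1, 1)]
    print(from_ball([(0, 3), (1, 2)]))  # C2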
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: SLAC and CERN Date: 28 Jul, 2025 Blog: Facebook
Stanford SLAC was CERN's "sister" institution.
HTML done at CERN (GML invented at CSC in 1969, a decade later morphs into ISO SGML, and after another decade morphs into HTML at CERN)
Co-worker responsible for the science center CP67 wide-area network
(non-SNA), account by one of the 1969 GML inventors at science center:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
The CSC CP67-based wide-area network then grows into the corporate internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s when the internal network was forced to convert to SNA) and the technology was also used for the corporate-sponsored univ. BITNET
The first webserver in the states (i.e. outside of Europe) was at Stanford SLAC on a VM370 system (descendant of CSC CP67)
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
SLAC/CERN, initially 168E & then 3081E ... sufficient 370 instructions
implemented to run fortran programs to do initial data reduction
along the accelerator line.
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3069.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3680.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3753.pdf
SLAC also hosted the monthly BAYBUNCH VM370 user group meetings.
CSC co-worker responsible for CSC wide-area network, Edson (passed aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
posts mentioning slac/cern 168e/3081e
https://www.garlic.com/~lynn/2024g.html#38 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024d.html#77 Other Silicon Valley
https://www.garlic.com/~lynn/2024b.html#116 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2023d.html#73 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#34 IBM Mainframe Emulation
https://www.garlic.com/~lynn/2023b.html#92 IRS and legacy COBOL
https://www.garlic.com/~lynn/2022g.html#54 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2021b.html#50 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2020.html#40 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2017k.html#47 When did the home computer die?
https://www.garlic.com/~lynn/2017j.html#82 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017j.html#81 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017d.html#78 Mainframe operating systems?
https://www.garlic.com/~lynn/2017c.html#10 SC/MP (1977 microprocessor) architecture
https://www.garlic.com/~lynn/2016e.html#24 Is it a lost cause?
https://www.garlic.com/~lynn/2016b.html#78 Microcode
https://www.garlic.com/~lynn/2015c.html#52 The Stack Depth
https://www.garlic.com/~lynn/2015b.html#28 The joy of simplicity?
https://www.garlic.com/~lynn/2015.html#87 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2015.html#79 Ancient computers in use today
https://www.garlic.com/~lynn/2015.html#69 Remembrance of things past
https://www.garlic.com/~lynn/2012l.html#72 zEC12, and previous generations, "why?" type question - GPU computing
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: SLAC and CERN Date: 28 Jul, 2025 Blog: Facebook
re:
note: 1974, CERN did an analysis comparing VM370/CMS and MVS/TSO, paper and presentation given at SHARE. Within IBM, copies of the paper were classified "IBM Confidential - Restricted" (2nd highest security classification, required "Need To Know"). While freely available outside IBM, IBM wanted to restrict internal IBMers' access. Within 2yrs, the head of POK managed to convince corporate to kill the VM370 product, shut down the development group and transfer all the people to POK for MVS/XA. Eventually, Endicott managed to save the VM370/CMS product mission (for the midrange), but had to recreate a development group from scratch.
Plans were to not inform the VM370 group until the very last minute, to minimize the numbers escaping into the local Boston/Cambridge area (it was in the days of DEC VAX/VMS infancy and the joke was that the head of POK was a major contributor to DEC VMS). The shutdown managed to leak early and there was a hunt for the leak source (fortunately for me, nobody gave up the source).
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts mentioning CERN 1974 SHARE paper
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2022h.html#69 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022g.html#56 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2014l.html#13 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2010q.html#34 VMSHARE Archives
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM ES/9000 Date: 28 Jul, 2025 Blog: Facebook
ES/9000, well ... Amdahl won the battle to make ACS 360-compatible ... then it was canceled (and Amdahl departs IBM). Folklore: concern that ACS/360 would advance the state of the art too fast, and IBM would lose control of the market ... ACS/360 end ... including things that show up more than 20yrs later with ES/9000
1988, got HA/6000, originally for NYTimes to move their newspaper
system (ATEX) off DEC VAXCluster to RS/6000 (run out of Los Gatos lab,
bldg29). I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when we start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (that have VAXCluster support in the same source base with
UNIX ... Oracle, Sybase, Ingres, Informix).
Early Jan1992, in a meeting with Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid-92 and 128-system clusters ye-92. Mid Jan1992, presentations with FSD convince them to use HA/CMP cluster scale-up for gov. supercomputer bids. Late Jan1992, cluster scale-up is transferred to be announced as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work with anything that has more than 4-systems (we leave IBM a few months later).
Some concern that cluster scale-up would eat the mainframe .... 1993
MIPS benchmark (industry standard, number of program iterations
compared to reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
The executive we had reported to for HA/CMP goes over to head up Somerset/AIM (Apple, IBM, Motorola), to do single-chip Power/PC with Motorola cache/bus enabling tightly-coupled, shared-memory multiprocessor (SMP) configurations.
i86 chip makers then do a hardware layer that translates i86 instructions
into RISC micro-ops for actual execution (largely negating the throughput
difference between RISC and i86); 1999 industry benchmark:
• IBM PowerPC 440: 1,000MIPS
• Pentium3: 2,054MIPS (twice PowerPC 440)
Dec2000, IBM ships 1st 16-processor mainframe (industry benchmark):
• z900, 16 processors 2.5BIPS (156MIPS/processor)
mid-80s, the communication group was fighting the announce of mainframe
TCP/IP; when they lost, they changed strategy: since they had corporate
strategic ownership of everything that crossed datacenter walls, it
had to ship through them; what shipped got aggregate 44kbytes/sec
using nearly a whole 3090 processor. I then add RFC1044 support and in
some tuning tests at Cray Research between a Cray and a 4341, get
sustained 4341 channel throughput using only a modest amount of 4341 CPU
(something like 500 times improvement in bytes moved per instruction
executed).
RFC1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044
posts mentioning 70s 16-cpu multiprocessor project
https://www.garlic.com/~lynn/2025c.html#111 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#92 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#57 IBM Future System And Follow-on Mainframes
https://www.garlic.com/~lynn/2025c.html#49 IBM And Amdahl Mainframe
https://www.garlic.com/~lynn/2025b.html#118 IBM 168 And Other History
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081
https://www.garlic.com/~lynn/2025b.html#73 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#69 Amdahl Trivia
https://www.garlic.com/~lynn/2025b.html#58 IBM Downturn, Downfall, Breakup
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#35 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#32 IBM 3090
https://www.garlic.com/~lynn/2024g.html#89 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#56 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024g.html#37 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#107 NSFnet
https://www.garlic.com/~lynn/2024f.html#90 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024f.html#46 IBM TCM
https://www.garlic.com/~lynn/2024f.html#37 IBM 370/168
https://www.garlic.com/~lynn/2024f.html#36 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#17 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024d.html#62 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#119 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024b.html#61 Vintage MVS
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#16 370/125 VM/370
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2013h.html#14 The cloud is killing traditional hardware and software
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM ES/9000 Date: 28 Jul, 2025 Blog: Facebook
re:
IBM AWD (workstation) had done their own cards for PC/RT (16bit, PC/AT bus) including 4mbit token-ring card. Then for RS/6000 (w/microchannel), they were told they could not do their own cards, but had to use the (communication group heavily performance kneecapped) PS2 cards (example PS2 16mbit T/R card had lower card throughput than the PC/RT 4mbit T/R card). New Almaden Research bldg was heavily provisioned with IBM CAT wiring, supposedly for 16mbit T/R, but found that running 10mbit ethernet (over same wiring) had higher aggregate throughput (8.5mbit/sec) and lower latency. Also that $69 10mbit ethernet cards had much higher card throughput (8.5mbit/sec) than the $800 PS2 16mbit T/R cards.
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
Late 80s, a senior disk engineer got a talk scheduled at the internal, annual, world-wide communication group conference, supposedly on 3174 performance. However he opened the talk with the comment that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing the mainframe to more distributed-computing-friendly platforms. They had come up with a number of solutions, but they were constantly being vetoed by the communication group (having a stranglehold on mainframe datacenters with their corporate ownership of everything that crossed datacenter walls). The disk division exec's partial countermeasure was investing in distributed computing startups using IBM disks, and we would periodically get asked to drop by the investments to see if we could offer any help.
Demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal
Wasn't just disks, and a couple of years later, IBM has one of the largest
losses in the history of US companies and was being reorged into the
13 baby blues in preparation for breaking up the company (take-off
on the "baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get a call from the bowels of (corp
hdqtrs) Armonk asking us to help with the corporate breakup. Before we
get started, the board brings in the former AMEX president as CEO to
try and save the company, who (somewhat) reverses the breakup (but it
wasn't long before the disk division was "divested").
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
20yrs before one of the largest losses in US company history, Learson
tried (and failed) to block the bureaucrats, careerists, and MBAs from
destroying the Watsons' culture & legacy; pg160-163, 30yrs of
management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
Oh, also 1988, IBM branch asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980; initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec). Then POK manages to get their stuff released as ESCON (when it is already obsolete, initially 10mbyte/sec, later upgraded to 17mbyte/sec). Then some POK engineers become involved with "FCS" and define a heavy-weight protocol that significantly reduces throughput, eventually ships as FICON. 2010, z196 "Peak I/O" benchmark gets 2M IOPS using 104 FICON (20K IOPS/FICON). Also 2010, FCS announced for E5-2600 server blades claiming over a million IOPS (two such FCS have higher throughput than 104 FICON). Note: IBM docs recommend SAPs (system assist processors that do actual I/O) be kept to 70% CPU, or about 1.5M IOPS. Also no CKD DASD has been made for decades, all being simulated on industry standard fixed-block devices.
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM ES/9000 Date: 29 Jul, 2025 Blog: Facebook
re:
Other trivia: Early 80s I was introduced to John Boyd and would
sponsor his briefings at IBM. In 1989/1990, the Marine Corps
Commandant leverages Boyd for a corps makeover (when IBM was desperately
in need of a makeover); some more:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
Also early 80s, I got the HSDT project, T1 and faster computer links
(both terrestrial and satellite) and lots of battles with the
communication group (60s, IBM had the 2701 controller that supported T1
links; with the 70s transition to SNA and its issues, it appeared
controllers were capped at 56kbits/sec). Was also supposed to get $20M
to interconnect the NSF Supercomputer datacenters ... then congress
cuts the budget, some other things happen and eventually an RFP was
released (in part based on what we already had running). NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.
John Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Mainframe Efficiency Date: 29 Jul, 2025 Blog: Facebook
Mainframes since the turn of the century:
A 2010 E5-2600 server blade benchmarked at 500BIPS (ten times a max-configured z196, and the 2010 E5-2600 is still twice a z17), and more recent server generations have at least maintained that ten-times ratio since 2010 (aka say 5TIPS, 5000BIPS).
The big cloud operators aggressively cut system costs, in part by doing their own assembly (claiming 1/3rd the price of brand-name servers, like IBM). Before IBM sold off its blade server business, it had a base list price of $1815 for an E5-2600 server blade (compared to $30M for a z196). Then the industry press had blade component makers shipping half their product directly to cloud megadatacenters (and IBM shortly sells off its server blade business).
A large cloud operator will have a score or more of megadatacenters around the world, each megadatacenter with half a million or more server blades (each blade ten times a max-configured mainframe) and enormous automation. They had so radically reduced system costs that power & cooling was increasingly becoming the major cost component. As a result, cloud operators have put enormous pressure on component vendors to increasingly optimize power per computation (sometimes a new, more energy-efficient generation has resulted in complete replacement of all systems).
Industry benchmarks were about total mips, then number of
transactions, then transactions per dollar, and more recently
transactions per watt. PUE (power usage effectiveness) was introduced
in 2006 and large cloud megadatacenters regularly quote their values
https://en.wikipedia.org/wiki/Power_usage_effectiveness
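For reference, PUE is just total facility power divided by the power delivered to the IT equipment (1.0 would be ideal; 1.1 means 10% overhead for cooling, power distribution, etc.); a trivial sketch:

    # PUE = total facility energy / IT equipment energy
    def pue(total_facility_kw, it_equipment_kw):
        return total_facility_kw / it_equipment_kw

    print(pue(11_000, 10_000))   # 1.10 -> 10% power/cooling overhead
    print(pue(20_000, 10_000))   # 2.00 -> as much overhead as IT load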
google
https://datacenters.google/efficiency/
google: Our data centers deliver over six times more computing power
per unit of electricity than they did just five years ago.
https://datacenters.google/operating-sustainably/
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 4341 Date: 30 Jul, 2025 Blog: Facebook
4341 ... like a chest freezer or credenza
when I transferred to San Jose Research, got to wander around IBM (& non-IBM) datacenters in Silicon Valley, including disk engineering/bldg14 and product test/bldg15 across the street. They had been running 7x24, prescheduled, stand-alone mainframe testing and mentioned that they had recently tried MVS, but it had 15min MTBF (in that environment), requiring manual reboot. I offer to rewrite the I/O supervisor to make it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing.
Then bldg15 gets the 1st engineering 3033 (outside POK processor engineering) for disk I/O testing. Testing was only taking a percent or two of CPU, so we scrounge up a 3830 controller and 3330 string and set up our own private online service.
Then 1978, get an engineering 4341 (introduced/announced 30jun1979) and in Jan1979, a branch office hears about it and cons me into doing a national lab benchmark looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). Later in the 80s, large corporations were ordering hundreds of vm/4341s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Inside IBM, departmental conference rooms became scarce, so many were converted to vm/4341 rooms.
trivia: earlier, after FS imploded and in the rush to get stuff back into
the 370 product pipelines, Endicott cons me into helping with ECPS for
138/148 ... which was then also available on 4331/4341. Initial
analysis done for doing ECPS ... old archived post from three decades
ago:
https://www.garlic.com/~lynn/94.html#21
... Endicott then convinces me to take a trip around the world with them, presenting the 138/148 & ECPS business case to various planning organizations
mid-80s, the communication group was trying to block the announce of mainframe TCP/IP and when they lost, they changed tactics. Since they had corporate ownership of everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly a whole 3090 CPU. I then add RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput, using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).
note, also in the wake of the FS implosion, the head of POK managed to convince corporate to kill the VM370 product, shut down the development group and transfer all the people to POK for MVS/XA. Endicott eventually manages to save the VM370 product mission, but had to recreate a development group from scratch.
FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370/168 Date: 06 Aug, 2025 Blog: Facebook
As an undergraduate, I was hired into a (very) small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit) ... at the time I thought the Renton datacenter was the largest in the world (when I graduate, I join the IBM science center instead of staying with the CFO).
One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters (and the online sales and marketing support HONE systems were one of the 1st and long-time customers). In the decision to add virtual memory to all 370s and morph CP67 into VM370, lots of features were simplified or dropped (including multiprocessor support).
US HONE consolidates all its datacenters in silicon valley with a bunch of 168s (trivia: when facebook 1st moves into silicon valley, it was into a new bldg built next door to the former consolidated US HONE datacenter). I then add multiprocessor/SMP support into my VM370R3-based CSC/VM, initially for HONE (so they can upgrade all their 168s to multiprocessor/SMP).
370/165 averaged 2.1 machine cycles per 370 instruction. The move to 168 optimized the microcode to avg 1.6 cycles per 370 instruction, and new memory was 4-5 times faster (getting about 2.5MIPS). 168-3 doubled the processor cache size, getting to about 3MIPS.
168-3 used the 2k page-size bit to index the additional cache entries ... as a result, 2k page mode (vs1, dos/vs) only ran with half the cache (same size as 168-1). VM/370 ran in 4k mode, except when running a 2k virtual operating system (vs1, dos/vs), and could run much slower because of the constant switching between 2k & 4k modes, when the hardware had to flush the cache.
First half 70s, IBM had the Future System effort,
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
which was going to completely replace 370 (and was completely different from 370; internal politics was shutting down 370 efforts and the lack of new 370s is credited with giving the 370 system clone makers their market foothold). When FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 efforts in parallel.
3033 starts out remapping 168 logic to 20% faster chips. They then further optimize the 168 microcode to get it down to an avg. of one machine cycle per 370 instruction (getting about 1.5 times the 168-3 MIPS rate).
The 303x channel director is a 158 engine with just the integrated channel microcode. A 3031 is two 158 engines, one w/just the integrated channel microcode, the other w/370 microcode. A 3032 is a 168-3 using the channel director (slower than the 168-3 external channels).
After FS implodes, there is also a new effort to do a 370 16-cpu multiprocessor (SMP) that I got roped into helping with (in part because my HONE 2-cpu implementation was getting twice the throughput of a single cpu) and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system ("MVS") had (effective) 16-cpu SMP support (MVT/MVS documentation had their 2-cpu support only getting 1.2-1.5 times the throughput of a single processor; note: POK doesn't ship a 16-cpu SMP until after the turn of the century).
The head of POK then directs some of us to never visit POK again and directs the 3033 processor engineers: heads down and no distractions. Contributing was that the head of POK was in the process of convincing corporate to kill the VM370 product, shut down the product group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission for the midrange, but has to recreate a development group from scratch).
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts mentioning CP67L, CSC/VM, and/or SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's 32 vs 64 bits, was VAX Newsgroups: comp.arch, alt.folklore.computers Date: Thu, 07 Aug 2025 07:32:35 -1000
John Levine <johnl@taugh.com> writes:
original 360 I/O had only 24bit addressing; adding virtual memory (to all 370s) added IDALs. CCWs were still 24bit but were now being built by applications running in virtual memory ... and (effectively) assumed any large storage area consisted of one contiguous region. Moving to virtual memory, a large "contiguous" I/O area was now broken into page-size chunks in non-contiguous real storage. Translating a "virtual" I/O program, the original virtual CCW ... would be converted to a CCW with real addresses and flagged as IDAL ... where the CCW pointed to an IDAL list of real addresses ... that were 32-bit words ... (31 bits specifying the real address) for each (possibly non-contiguous) real page involved.
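A rough sketch of that translation step (illustrative only, not the actual CP/VM CCW-translation code; virt_to_real is a hypothetical stand-in for the page-table lookup): the virtual buffer is split on page boundaries and each piece's real address becomes one word-size entry in the IDAL that the rewritten CCW points at:

    # Illustrative sketch of building an IDAL for one virtual CCW.
    PAGE = 4096

    def build_idal(virt_addr, length, virt_to_real):
        """Split [virt_addr, virt_addr+length) on page boundaries and return
        the list of real addresses (one per piece) for the IDAL."""
        idal = []
        addr, remaining = virt_addr, length
        while remaining > 0:
            in_page = min(remaining, PAGE - (addr % PAGE))  # bytes left in this page
            real_frame = virt_to_real(addr // PAGE)         # page-table lookup
            idal.append(real_frame * PAGE + (addr % PAGE))  # real address of piece
            addr += in_page
            remaining -= in_page
        return idal

    # toy page table: virtual page -> real frame (scattered, non-contiguous)
    page_table = {4: 9, 5: 2, 6: 17}
    print(build_idal(4 * PAGE + 100, 10000, page_table.__getitem__))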
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Tandem Non-Stop Date: 07 Aug, 2025 Blog: Facebook
A small SJR group (including Jim Gray, misc. others from south san jose, and periodically even a number of non-IBMers) would have Fridays after work at local watering holes (I had worked with Jim Gray and Vera Watson on the original sql/relational, System/R). Jim Gray then left SJR for Tandem fall 1980. I had been blamed for online computer conferencing on the IBM internal network late 70s and early 80s. It really took off the spring of 1981 when I distributed a "friday" trip report of a visit to see Jim at Tandem. From IBMJargon:
Folklore is that when the corporate executive committee was told, 5 of 6 wanted to fire me. Tandem study from Jim:
https://www.garlic.com/~lynn/grayft84.pdf
'85 paper
https://pages.cs.wisc.edu/~remzi/Classes/739/Fall2018/Papers/gray85-easy.pdf
https://web.archive.org/web/20080724051051/http://www.cs.berkeley.edu/~yelick/294-f00/papers/Gray85.txt
Original SQL/Relational posts
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: MVT/HASP Date: 07 Aug, 2025 Blog: Facebook
I took a two-credit-hr intro to fortran/computers. At the end of the semester, I was hired to rewrite 1401 MPIO for the 360/30. The univ was getting a 360/67 for tss/360, replacing the 709(tape->tape)/1401(709 front-end), and the 360/30 was a temporary replacement for the 1401 until the 360/67 arrived. The univ shutdown the datacenter on weekends and I would get the whole place to myself (although 48hrs w/o sleep made monday classes hard). I got a whole stack of hardware and software manuals and got to design my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. Then within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for OS/360 (TSS/360 hadn't come to fruition).
Student fortran had run under a second on the 709, but initially over a minute on the 360/67 (running as a 360/65). I install HASP and cut the time in half. I then start doing highly modified stage2 sysgens with MFT11, carefully placing datasets and PDS members to optimize arm seek & multi-track search; cutting another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install Univ of Waterloo WATFOR (ran at 20,000 "cards"/min on 360/65, i.e. 333/sec ... its own monitor handling multiple jobs in a single step; student fortran jobs tended to be 30-60 cards, and operations tended to do a tray of student fortran cards per run).
Then CSC comes out to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my dedicated weekends. It came with 1052 & 2741 terminal support, and the univ had some number of ascii tty 33s&35s, so I add ascii terminal support.
The first MVT sysgen I did was for release 15/16, and then for MVT18/HASP, I remove 2780 support (to reduce core footprint) and add terminal support with an editor that simulated CMS edit syntax, for a CRJE-like function for HASP.
Before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Boeing Renton datacenter was the largest in the world (joke was that Boeing was getting 360/65s like other companies got keypunches).
trivia: my (future) wife was in Crabtree's gburg JES group and one of the co-authors of the "JESUS" specification (all the features of JES2 & JES3 that the respective customers couldn't live without). For various reasons it never came to fruition.
ASP, HASP, JES3, JES2, NJE, NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
some recent univ 709/1401, MPIO, and Boeing CFO posts
https://www.garlic.com/~lynn/2025c.html#115 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025b.html#117 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#102 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#38 IBM Computers in the 60s
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#1 Large Datacenters
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Some VM370 History Date: 07 Aug, 2025 Blog: Facebook
well ... recent ref:
CSC comes out to install CP67 (3rd after CSC itself and MIT Lincoln Labs) and I mostly played with it during my dedicated weekend 48hrs. I start out rewriting pathlengths for OS/360 running in a virtual machine. The test stream ran 322secs on the real machine, initially 856secs in a virtual machine (CP67 CPU 534secs); after a couple months I had reduced CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, scheduler, and paging, adding ordered seek queuing (replacing FIFO) and multi-page transfer channel programs (also replacing FIFO, optimized for maximum transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a channel transfer peak of 270).
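For illustration, the ordered seek idea reduces to keeping the pending requests sorted by cylinder so the arm sweeps across the pack instead of hopping around in FIFO arrival order. A minimal sketch in C (not the CP67 code; the structure and function names are made up):

/* Minimal sketch of ordered seek queuing (illustrative only, not CP67 code).
   Pending disk requests are kept sorted by cylinder; the arm then services
   them in one sweep instead of hopping around in FIFO arrival order. */
#include <stdlib.h>

struct ioreq {
    int cyl;              /* target cylinder for this request  */
    struct ioreq *next;   /* next request in the ordered queue */
};

/* insert a request into the queue in ascending cylinder order */
void enqueue_ordered(struct ioreq **queue, struct ioreq *req)
{
    struct ioreq **pp = queue;
    while (*pp && (*pp)->cyl <= req->cyl)
        pp = &(*pp)->next;
    req->next = *pp;
    *pp = req;
}

/* dequeue the next request at or beyond the current arm position,
   wrapping to the start of the queue when the sweep reaches the end */
struct ioreq *dequeue_next(struct ioreq **queue, int arm_cyl)
{
    struct ioreq **pp = queue;
    while (*pp && (*pp)->cyl < arm_cyl)
        pp = &(*pp)->next;
    if (!*pp)
        pp = queue;           /* wrap around: start a new sweep */
    struct ioreq *req = *pp;
    if (req)
        *pp = req->next;
    return req;
}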
CP/67 came with 1052 & 2741 terminal support, including automagic terminal type identification (using the SAD CCW to switch the port's terminal-type scanner). Univ had some number of ASCII terminals (TTY 33&35) and I add TTY terminal support to CP67 (integrated with the automagic terminal type id). I then want to have a single dialup number ("hunt group") for all terminals. Didn't quite work; although the port scanner type could be switched, IBM had taken a short cut and hard-wired each port's line speed.
This kicks off a univ. project to build our own IBM terminal
controller: build a 360 channel interface card for an Interdata/3
programmed to emulate an IBM 360 controller, with the addition of
doing line auto-baud. The Interdata/3 is then upgraded to an
Interdata/4 for the channel interface with a cluster of Interdata/3s
for the port interfaces. Interdata (and later Perkin-Elmer) sells it
as a 360 clone controller, and four of us are written up responsible
for (some part of) the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
trivia: when the ASCII/TTY port scanner first arrived for the IBM controller, it came in a Heathkit box.
CSC was picking up much of my code and shipping it with CP67. Six months after installing CP67 at the univ, CSC had scheduled a CP67/CMS class on the west coast, and I'm scheduled to go. I arrive Sunday night and am asked to teach the CP67 class. It turns out the CSC people scheduled to teach had resigned that Friday to join NCSS (one of the early commercial CP67 startups). Later I join a small group in the Boeing CFO office, and after I graduate, I join CSC (instead of staying with the CFO). Almost immediately I'm asked to teach (more) classes.
With regard to various agencies that had been heavy CP67 users back to
the 60s:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml
Early 80s, 308x was supposed to be multiprocessor only (and was some warmed-over technology from the FS implosion). The 3081D 2-CPU had lower aggregate MIPS than the single processor Amdahl *and* some IBM production systems had no multiprocessor support (like ACP/TPF), and IBM was afraid that the whole market would move to Amdahl. There were a number of hacks done to VM370 multiprocessor support to try and improve ACP/TPF throughput running in a single virtual machine by increasing overlapped, asynchronous processing in an otherwise idle 2nd (3081) processor. However those "enhancements" degraded nearly all other VM370 customer multiprocessor throughput by 10-15+%. Then some VM370 tweaks were made to improve 3270 terminal response (attempting to mask the degradation).
There were some large customers back to the 60s that were using fast ASCII glass teletypes and didn't see any benefit from those VM370 3270 tweaks. I had earlier done something similar, but in the CMS code ... which worked for all terminal types (not just 3270), and was asked in to help this large, long-time customer; initially reduced Q1 drops from 65/sec to 43/sec for the same amount of CMS-intensive interactive throughput ... but I wasn't allowed to undo the VM370 ACP/TPF tweaks. I was allowed to put the VM370 DMKVIO code back to the original CP67 implementation ... which significantly reduced that part of VM370 overhead (somewhat offsetting the multiprocessor overhead tailored for running virtual ACP/TPF).
VM370 multiprocessor posting
https://www.garlic.com/~lynn/2025d.html#12 IBM 370/168
CSC postings
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller postings
https://www.garlic.com/~lynn/submain.html#360pcm
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM RSCS/VNET Date: 07 Aug, 2025 Blog: Facebook
Some of the MIT ctss/7094 (which had a msg function on the same machine) people went to the 5th flr for Multics. Others went to the IBM Science Center on the 4th flr and did virtual machines (virtual memory hardware mods for the 360/40 for CP40/CMS, which morphs into CP67/CMS when the 360/67 standard with virtual memory became available), the Science Center wide-area network (morphs into the VNET/RSCS internal corporate network, technology also used for the corporate sponsored univ BITNET), and lots of other stuff ... including messaging on the same machine.
IBM Pisa Science Center did "SPM" (sort of a superset of the later combination of IUCV, VMCF, and SMSG) for CP67, which was later ported to VM370. Original RSCS/VNET (before shipping to customers) had SPM support ... which supported forwarding messages to anywhere on the network.
co-worker was responsible for the CP67-based wide-area network; one of
the 1969 inventors of GML (a decade later it morphs into ISO SGML and
after another decade morphs into HTML at CERN)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
CSC CP67-based wide-area network then grows into the corporate internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s when the internal network was forced to convert to SNA).
There were problems with MVS/JES2 systems and they had to be tightly regulated ... the original HASP code had "TUCC" in cols68-71 and scavenged unused entries in the 255-entry pseudo device table (which tended to be 160-180 entries). JES2 would trash traffic where the origin or destination node wasn't in the local table ... when the internal network was well past 255 nodes (so JES2 had to be restricted to edge nodes with no or minimal passthrough traffic).
Also NJE fields were somewhat intermixed with job control fields and there was a tendency for traffic between JES2 systems at different release levels to crash the destination MVS. As a result the RSCS/VNET simulated NJE driver built up a large amount of code that would recognize differences between the MVS/JES2 origin and destination and adjust fields to correspond to the immediate destination MVS/JES2 (further restricting MVS systems to edge/boundary nodes, behind a protective VM370 RSCS/VNET system). There was an infamous case where changes in a San Jose MVS system were crashing MVS systems in Hursley (England) and the Hursley VM370/VNET was blamed (because they hadn't installed the updates to account for the San Jose JES2 field changes).
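The gateway logic being described boils down to two things: a JES2 node discards traffic for nodes missing from its (at most 255-entry) table, while the RSCS/VNET NJE driver rewrites header fields to whatever the immediate destination release understands. A rough sketch of the idea in C (the field names, release handling, and table layout here are invented for illustration, not the actual NJE formats):

/* Rough sketch of the RSCS/VNET NJE-driver idea: before forwarding a record
   to an MVS/JES2 node, rewrite header fields to what that node's release
   understands (field names and release numbers are invented; the real NJE
   headers are more involved). */
#include <string.h>

struct nje_hdr {
    char origin[8];      /* origin node name                         */
    char dest[8];        /* destination node name                    */
    int  release;        /* JES2 release level that built the record */
    int  new_opt_field;  /* field added in a later release           */
};

/* adjust a header built at 'hdr->release' for a destination at 'dest_rel';
   newer fields are zeroed so the older JES2 won't trip over them */
void adjust_for_dest(struct nje_hdr *hdr, int dest_rel)
{
    if (dest_rel < hdr->release) {
        /* destination is older: strip anything it doesn't know about */
        hdr->new_opt_field = 0;
    }
    hdr->release = dest_rel;   /* present the record as the destination's level */
}

/* a JES2-style node would simply discard traffic for unknown nodes;
   the VM370 RSCS/VNET gateway instead knows the full network topology */
int jes2_would_discard(const char *node, const char (*table)[8], int nentries)
{
    for (int i = 0; i < nentries; i++)
        if (memcmp(table[i], node, 8) == 0)
            return 0;   /* found in local table: accepted             */
    return 1;           /* not in the (at most 255-entry) table: trashed */
}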
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
ASP, HASP, JES3, JES2, NJE, NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
some RSCS, VNET, SPM, VMCF, IPCS, SMSG posts
https://www.garlic.com/~lynn/2025c.html#113 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025b.html#16 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025.html#116 CMS 3270 Multi-user SPACEWAR Game
https://www.garlic.com/~lynn/2025.html#114 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024g.html#97 CMS Computer Games
https://www.garlic.com/~lynn/2024d.html#43 Chat Rooms and Social Media
https://www.garlic.com/~lynn/2024b.html#82 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#44 Adventure Game
https://www.garlic.com/~lynn/2022f.html#94 Foreign Language
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#81 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2020.html#46 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2006k.html#51 other cp/cms history
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Some VM370 History Date: 08 Aug, 2025 Blog: Facebook
re:
Some gov agency was very active in the SHARE VM370 group and on TYMSHARE's VMSHARE. The SHARE installation code was a 3-letter code that usually represented the company ... in this case, they chose "CAD" (supposedly standing for "cloak and dagger").
The person was a regular at SHARE and the name & agency show up on
VMSHARE. Tymshare started providing their CMS-based online computer
conference for "free" to SHARE in Aug1976. After transfer from CSC to
SJR in the 2nd half of the 70s, I would regularly get to wander around
datacenters in silicon valley, with regular visits to Tymshare (and/or
seeing them at the monthly BAYBUNCH meetings hosted by Stanford
SLAC). I cut an early deal with Tymshare to get a monthly tape dump of
all VMSHARE files for putting up on internal systems and the network
(the biggest problem was lawyers that were concerned that IBM internal
employees would be exposed to unfiltered customer information). After
Tymshare was acquired by M/D in 1984, VMSHARE had to move to a
different platform.
http://vm.marist.edu/~vmshare/
random example: in 1974, CERN did a VM370/CMS comparison with MVS/TSO and presented the paper at SHARE. Copies inside IBM were marked confidential/restricted (2nd highest security, required "need to know") to limit internal employee exposure to unfiltered customer information (later, after the "Future System" implosion and mad rush to get stuff back into the 370 product pipelines, the head of POK convinced corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA; Endicott eventually manages to save the VM370 product mission, but had to recreate a development group from scratch).
In recent years, I was reading a couple of works about Lansdale and one mentions a 1973 incident where the VP goes across the river to give a talk in the agency auditorium. That week I'm teaching a class in the basement (some 30-40 people). In the middle of one afternoon, half the class gets up and quietly leaves. Then one of the people remaining tells me I can look at it in one of two ways: half the class leaves to go upstairs to listen to the VP in the auditorium, or half the class stays to listen to me. I can't remember for sure if he was also my host at that 73 class.
trivia: for the fun of it, search VMSHARE memo/note/prob/browse for
that last name; it turns up several (not all the same person). This
happens to be one also mentioning a silicon valley conference where I
was frequently the only IBM attendee:
http://vm.marist.edu/~vmshare/browse.cgi?fn=SUNDEVIL&ft=NOTE
some past posts mentioning Lansdale:
https://www.garlic.com/~lynn/2022g.html#60 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022e.html#98 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#30 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2021j.html#37 IBM Confidential
https://www.garlic.com/~lynn/2021d.html#84 Bizarre Career Events
https://www.garlic.com/~lynn/2019e.html#98 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#90 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019.html#87 LUsers
https://www.garlic.com/~lynn/2018e.html#9 Buying Victory: Money as a Weapon on the Battlefields of Today and Tomorrow
https://www.garlic.com/~lynn/2018d.html#101 The Persistent Myth of U.S. Precision Bombing
https://www.garlic.com/~lynn/2018d.html#0 The Road Not Taken: Edward Lansdale and the American Tragedy in Vietnam
https://www.garlic.com/~lynn/2018c.html#107 Post WW2 red hunt
https://www.garlic.com/~lynn/2013e.html#16 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013d.html#48 What Makes an Architecture Bizarre?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 370 Virtual Memory Date: 09 Aug, 2025 Blog: Facebook
IBM cambridge science center wanted a 360/50 to modify for virtual memory, but all the extras were going to FAA/ATC, so had to settle for a 360/40 and then did CP40/CMS. When the 360/67 standard with virtual memory became available, CP40 morphs into CP67 (at the time, the official commercial support was TSS/360, which had 1200 people ... when CSC had 12 people in the CP67/CMS group). There were two commercial, online spinoffs of CSC in the 60s ... and later in the 70s also commercial operations like BCS & TYMSHARE offering commercial online services. Of course by far the largest "commercial" CP67 offering was the internal branch office online sales and marketing support HONE systems.
Early last decade, a customer asked me to track down the IBM decision to add virtual memory to all 370s ... and I found a staff member to the executive making the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used, and a typical 1mbyte 370/165 would only run four concurrent regions, insufficient to keep the system busy and justified. Going to running MVT in a 16mbyte virtual memory would allow the number of regions to be increased by a factor of four (capped at 15 by the 4-bit storage protect key) with little or no paging (sort of like running MVT in a CP67 16mbyte virtual machine).
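The arithmetic behind that decision, using just the figures above, runs roughly like this (an illustrative back-of-the-envelope calculation, not anything from the actual study):

/* Back-of-the-envelope arithmetic from the paragraph above (illustrative). */
#include <stdio.h>

int main(void)
{
    int real_mem_kb  = 1024;  /* typical 370/165: 1mbyte real storage           */
    int regions_real = 4;     /* what MVT could run concurrently in 1mbyte      */
    int overspecify  = 4;     /* region sizes specified 4x larger than used     */

    /* running MVT in a 16mbyte virtual address space removes the 4x
       over-specification penalty, so roughly 4x the regions fit ...  */
    int regions_virtual = regions_real * overspecify;   /* = 16 */

    /* ... but the 4-bit storage protect key only distinguishes 15 regions
       (key 0 is reserved for the system), so the practical cap is 15   */
    int cap = 15;
    int regions = regions_virtual > cap ? cap : regions_virtual;

    printf("real: %d regions in %dKB; virtual: %d (capped at %d)\n",
           regions_real, real_mem_kb, regions_virtual, regions);
    return 0;
}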
Ludlow was doing the initial VS2 implementation on a 360/67 (pending 370 engineering models with virtual memory) ... and I would periodically drop in to visit. There was a little bit of code for building the tables, page replacement, and page I/O. The biggest issue was (EXCP/SVC0) making copies of channel programs, replacing virtual addresses with real (same as CP67), and he borrows CP67's CCWTRANS to craft into EXCP (this was VS2/SVS; to get around the 15-region limit of using the 4-bit storage protect key to keep regions separated, SVS then moved to VS2/MVS, giving each region its own virtual memory address space).
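For a rough idea of what borrowing CCWTRANS into EXCP involved: copy the caller's channel program and replace each CCW's virtual data address with the corresponding real address (after fixing the page in storage). A toy sketch in C, assuming 4k pages, ignoring data areas that cross page boundaries (the real code has to split those with data chaining), and using an invented translate_page() helper in place of the page-table walk and page-fix logic:

/* Toy sketch of channel-program translation a la CCWTRANS (illustrative only).
   Copies the channel program and swaps virtual data addresses for real ones.
   Assumes 4k pages and data areas that don't cross a page boundary; the real
   code must also pin pages and split boundary-crossing transfers with data
   chaining. */
#include <stdint.h>
#include <stdlib.h>

struct ccw {                /* simplified channel command word  */
    uint8_t  cmd;           /* channel command code             */
    uint32_t addr;          /* data address (24-bit in reality) */
    uint16_t flags;         /* chaining flags, etc.             */
    uint16_t count;         /* byte count                       */
};

/* hypothetical helper: page-fix the page and return its real address */
extern uint32_t translate_page(uint32_t virt_page);

struct ccw *ccwtrans(const struct ccw *vprog, int n)
{
    struct ccw *rprog = malloc(n * sizeof *rprog);
    if (!rprog)
        return NULL;
    for (int i = 0; i < n; i++) {
        rprog[i] = vprog[i];                              /* copy the CCW      */
        uint32_t vpage  = vprog[i].addr & ~0xFFFu;        /* 4k page address   */
        uint32_t offset = vprog[i].addr &  0xFFFu;
        rprog[i].addr   = translate_page(vpage) + offset; /* real data address */
    }
    return rprog;   /* the copy, with real addresses, is handed to the channel */
}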
Note in the 60s, Boeing had modified MVTR13 to run in virtual memory (sort of like the initial VS2/SVS), but w/o paging (to partially address the MVT storage management issues) ... more akin to Manchester ... aka a single virtual address space at a time (not lots of virtual address spaces).
I had done a lot of work on CP67 as an undergraduate before joining CSC (the univ I was at was the 3rd CP67 installation after CSC itself and MIT Lincoln Labs), work that CSC would ship in the product. In the decision to add virtual memory to all 370s, there was also a decision for CP67->VM370, and a lot of features were simplified or dropped. When I graduated and joined CSC, one of my hobbies was enhanced production operating systems for internal datacenters, and HONE was one of my first (and long-time) customers. The SHARE organization was submitting resolutions to IBM asking that lots of my CP67 enhancements incorporated into VM370 be released to customers. Some pieces dribbled out in VM370R3 & VM370R4.
Also in the early half of the 70s was the IBM FS effort (completely different than 370 and going to completely replace it), and internal politics was killing off 370 efforts; the lack of new 370s during the period is credited with giving the clone 370 makers (including Amdahl) their market foothold. Then when FS imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel.
The head of POK was also convincing corporate to kill the VM370
product, shutdown the development group and transfer all the people to
POK for MVS/XA (Endicott eventually manages to save the VM370 product
mission, but had to recreate a development group from scratch).
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
The final nail in the FS coffin was analysis by the IBM Houston Science Center that if 370/195 applications were redone for an FS machine made out of the fastest available technology, the throughput would be about that of a 370/145 (about a 30 times slowdown).
The original target for CP67->VM370 was the 370/145 ... and VM370 greatly simplified my (undergraduate) dynamic adaptive scheduling and resource management done for CP67 ... which Kingston Common really struggled with for higher-end machines. I spent much of 1974 moving lots of CP67 stuff into VM370R2 (including my dynamic adaptive code) for my internal CSC/VM. Then I moved CP67 multiprocessor support into VM370R3-based CSC/VM ... originally for HONE (the US HONE datacenters had been consolidated in silicon valley) so they could upgrade all the 168s to 2-CPU multiprocessors (getting twice the throughput of 1-CPU ... at a time when MVS docs claimed only 1.2-1.5 times the throughput of 1-CPU).
I had transferred from CSC to SJR on the west coast and got to wander a lot of IBM (and non-IBM) datacenters, including disk bldg14/engineering and bldg15/product test across the street. They were running prescheduled, 7x24, stand-alone mainframe testing and had mentioned they had tried MVS, but it had 15min MTBF (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor, making it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing (greatly improving productivity). A couple years later with 3380s about to ship, FE had a test of 57 simulated errors (errors they believed likely to occur); MVS was still failing in all 57 cases (and in 2/3rds of the cases, no indication of why).
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
online (virtual machine based) commercial offerings
https://www.garlic.com/~lynn/submain.html#online
dynamic adaptive scheduling and resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging, page replacement, working set, etc posts
https://www.garlic.com/~lynn/subtopic.html#clock
CP67L, CSC/VM, SJR/VM, etc posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
post about decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
Melinda's VM370 (and some CP67) history
https://www.leeandmelindavarian.com/Melinda#VMHist
some recent posts
https://www.garlic.com/~lynn/2025d.html#15 MVT/HASP
https://www.garlic.com/~lynn/2025d.html#16 Some VM370 History
https://www.garlic.com/~lynn/2025d.html#17 IBM RSCS/VNET
https://www.garlic.com/~lynn/2025d.html#18 Some VM370 History
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 370 Virtual Memory Date: 09 Aug, 2025 Blog: Facebook
re:
Other trivia: the 23jun1969 unbundling announcement included charging for (application) software (but made the case that kernel software should still be free). Then with the demise of FS and the mad rush to get stuff back into the 370 product pipelines (along with the associated rise of 370 clone makers), there was a transition to start charging for incremental kernel add-ons (eventually resulting in charging for all kernel software in the 80s) ... and a bunch of my internal stuff was chosen as guinea pig for (charged-for) release (I had to spend some amount of time with lawyers and business people about kernel software policies), aka became SEPP, prior to SP.
Unfortunately, I included the VM370 kernel reorganization for multiprocessor operation (but not actual multiprocessor support). The initial kernel charging policy was that hardware support was still (initially) free (and couldn't have a prereq of charged-for software). When the decision was made to release multiprocessor support ... that created a problem with its dependency on the corresponding (charged-for) kernel reorg. The eventual decision was to move all of that software into the "free" base (while not changing the price of the remaining kernel add-on).
23jun1969 unbundling
https://www.garlic.com/~lynn/submain.html#unbundle
future system
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: HA/CMP Date: 09 Aug, 2025 Blog: Facebook
Mid-80s, the communication group was fighting off release of IBM mainframe TCP/IP support; when they lost, they changed tactics and said that since they had corporate responsibility for everything that crossed the datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then did RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).
1988, IBM branch office asked if I could help LLNL standardize some serial stuff they were working with, which quickly becomes fibre-channel standard ("FCS", including some stuff I had done in 1980; initial 1gbit/sec, full-duplex, 200mbytes/sec). Then the IBM POK mainframe group finally releases some serial stuff with ES/9000 as ESCON (when it is already obsolete, initially 10mbyte/sec, later upgraded to 17mbyte/sec).
Also 1988, HA/6000 is approved, initially for NYTimes to move their
newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had VAXCluster support in same source base as Unix. Early Jan1992 in meeting with Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Mid-Jan92, I convince IBM FSD to use HA/CMP for gov supercomputer bids. Then late-Jan92, cluster scale-up is transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we were told we can't work on anything with more than four systems (we leave IBM a few months later).
There apparently was some concern that HA/CMP would eat the commercial mainframe (1993):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
Late 90s, I did some consulting for Steve Chen (at the time CTO of Sequent, before IBM bought it and shut it down).
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
trivia: sometime after leaving IBM, I'm brought in as consultant at a small client/server startup; two of the former Oracle employees (that were in the Jan92 Ellison/Hester meeting) are there, responsible for something called "commerce server", and they want to do payment transactions. The startup had also invented this technology called "SSL" they want to use. The result is now sometimes called "electronic commerce". I have responsibility for webservers to payment networks.
Payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 370 Virtual Memory Date: 09 Aug, 2025 Blog: Facebook
re:
Manchester & VS2/SVS used virtual memory for mapping a single address space ... Manchester because it needed more addressing than the available real memory; in the VS2/SVS case it was more to compensate for the poor MVT storage management, since there was little or no paging. Then the move to VS2/MVS was because separation/protection was needed for more than 15 concurrently executing regions (the limit provided by the 4-bit storage protection keys); each executing region got its own separate virtual address space.
Then the move to MVS/XA was because of the extensive OS/360 pointer-passing API. In VS2/SVS everything was in the same address space ... kernel calls (SVC) meant the supervisor directly addressing parameters pointed to by the caller's pointers ... so in MVS an 8mbyte kernel image occupied 8mbytes of every caller's virtual address space (cutting application space from 16mbytes to 8mbytes). Then, because subsystems were moved to their own separate address spaces ... for them to access calling parameters, the parameters had to be placed into the CSA ("common segment area") that was mapped into every application address space (leaving 7mbytes). Then, because CSA space requirements were somewhat proportional to the number of concurrent regions and the number of subsystems, CSA became the "common system area" ... and by 3033 it had exploded to 5-6mbytes (leaving 2-3mbytes for the application, and threatening to become 8mbytes, leaving zero).
370/xa introduced access registers and primary/secondary address spaces for subsystems ... parameters could stay in the caller's address space (not CSA) ... the system would switch the caller's address space to secondary and load the subsystem's address space into primary ... now subsystems can access everything in the caller's address space (including parameters) ... on return the process was reversed, moving the secondary address space back to primary. The 3033 issue was becoming so dire that a subset of access registers was retrofitted to the 3033 as "dual address space mode".
trivia: the person that retrofitted "dual address space mode" for 3033, in the early 80s left IBM for HP ... and later was one of the primary architects for Intel Itanium.
paging posts:
https://www.garlic.com/~lynn/subtopic.html#clock
some posts mentioning pointer passing API, MVS problems and CSA
(both segment and system)
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#83 Continuations
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#67 IBM Mainframe Addressing
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2019d.html#115 Assembler :- PC Instruction
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#40 Mainframe Family tree and chronology 2
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2015h.html#116 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2013m.html#71 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2013.html#22 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2012o.html#30 Regarding Time Sharing
https://www.garlic.com/~lynn/2012n.html#21 8-bit bytes and byte-addressed machines
https://www.garlic.com/~lynn/2011f.html#39 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#17 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2010p.html#21 Dataspaces or 64 bit storage
https://www.garlic.com/~lynn/2010c.html#41 Happy DEC-10 Day
https://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2002l.html#57 Handling variable page sizes?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 370 Virtual Memory Date: 10 Aug, 2025 Blog: Facebook
re:
other trivia: the following mentions customers weren't moving to
VS2/MVS (as fast as needed; I was at the SHARE meeting when it was 1st
played), see the "$4K" reference in the Glossary:
http://www.mxg.com/thebuttonman/boney.asp
with the FS implosion there was mad rush to get stuff back into the 370 product pipelines, kicking off quick&dirty 3033&3081 in parallel ... along with 370/xa ... referred to as "811" (for Nov78 publication of specification, design, architecture) ... nearly all done for MVS/XA (head of POK had already convinced corporate to kill vm370 product, shutdown the development group and transfer all the people to POK for MVS/XA; endicott managed to save vm370 product mission for the mid-range, but had to recreate development group from scratch).
Later, customers weren't migrating from MVS to MVS/XA (as required, and CSA was threatening to take over all the remainder of the 16mbyte address space). Amdahl was having more success because Amdahl machines had a microcode (virtual machine) hypervisor ("multiple domain") and could run MVS & MVS/XA concurrently (IBM wasn't able to respond with LPAR & PR/SM for nearly a decade). POK had done a simplified VMTOOL for MVS/XA development; it needed special microcode (to slip in & out of VM-mode, eventually named SIE) and the microcode had to be swapped in & out (sort of like overlays) because of limited 3081 microcode space (so it was never targeted for performance) ... eventually VMTOOL was made available to 3081 customers as VM/MA (migration aid) and VM/SF (system facility).
Part of the issue driving the need for an ever increasing number of concurrently executing regions as machines increased in power was a tome I wrote in the early 80s (I had started pointing it out in the mid-70s): disk relative system throughput had declined by an order of magnitude since 360 announce (in the 60s), i.e. disks got 3-5 times faster while systems got 40-50 times faster. A disk division executive took exception to the analysis and assigned the division performance group to refute the claim. However, after a couple weeks they came back and effectively said that I had slightly understated the problem. Their analysis was then respun for a presentation on how to configure disks and filesystems for better system throughput (16Aug1984, SHARE 63, B874).
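The "order of magnitude" figure falls directly out of the two growth numbers; a quick check using the midpoints quoted above:

/* Quick check of the "order of magnitude" claim using the figures above. */
#include <stdio.h>

int main(void)
{
    double system_speedup = 45.0;   /* systems got 40-50x faster since 360 announce */
    double disk_speedup   = 4.0;    /* disks got 3-5x faster over the same period   */

    /* relative disk throughput: how far disks fell behind the rest of the system */
    printf("relative decline: ~%.0fx\n", system_speedup / disk_speedup);  /* ~11x */
    return 0;
}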
recent past posts about MVS & MVS/XA migration
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#100 VM Mascot
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Yorktown Research Date: 10 Aug, 2025 Blog: Facebook
Transferred from CSC to SJR on the west coast ... and then for numerous transgressions (folklore: 5of6 of the corporate executive committee wanted to fire me), was transferred to YKT ... still lived in San Jose and had various IBM offices/labs in the area, but had to commute to YKT a couple times a month (SJ Monday, SFO->JFK redeye Monday night, bright and early Tuesday in YKT, Tues-Fri in YKT, and JFK->SFO Friday afternoon). Was told that they could never make me a fellow with 5of6 of the corporate executive committee wanting to fire me ... but if I kept my head down, they could route funding my way as if I were one.
I also had part of a wing and labs in the Los Gatos lab, and along the
way, funding for "HSDT", T1 and faster computer links ... and battles
with the communication group (IBM had 2701 controllers in the 60s w/T1
support, but the transition in the 70s to SNA/VTAM and various issues
capped controllers at 56kbit/sec). Initially had a T1 circuit over the
company T3 C-band TDMA satellite system, between LSG and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi E&S lab in Kingston that
had boat loads of Floating Point Systems boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems
Then got dedicated custom designed Ku-band TDMA system, initially three stations, LSG, YKT, and Austin (included allowing RIOS chip design team to use the EVE in San Jose)
Was also working with the NSF director and was supposed to get $20M
to interconnect the NSF Supercomputer Centers. Then congress cuts the
budget, some other things happen and eventually an RFP is released.
NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.
IBM branch asked if I could help LLNL standardize some serial stuff
they were working with, quickly became fibre-channel standard ("FCS",
including some stuff I had done in 1980, initial 1gbit/sec,
full-duplex, aggregate 200mbyte/sec). Later POK ships their fiber
stuff as ESCON (when it is already obsolete). Same year, got HA/6000
project, initially for NYTimes to move their newspaper system (ATEX)
off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXcluster
support in the same source base with Unix.
We had reported to the executive who goes over to head up (AIM) Somerset.
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid92 and 128-system cluster ye92. Mid-jan1992, convinced FSD to bid HA/CMP for gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*), told not allowed to work with anything more than 4-system clusters, then leave IBM a few months later.
A little later, I'm asked in as consultant to a small client/server startup. Two former Oracle employees (that were in the Ellison/Hester meeting) were there, responsible for something called "commerce server", and they wanted to do payment transactions. The startup had also invented some technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce"; I had responsibility for everything between webservers and payment networks. IETF/Internet RFC Editor Postel also let me help him with the periodically re-issued "STD1".
Designed a security chip and was working with a Siemens guy with an
office in the old ROLM facility. Siemens spins chips off as Infineon
and the guy I was working with became its president and rang the bell
at NYSE. Was then getting it fab'ed at the new security chip fab in
Dresden (already certified by US & German govs) and was required to do
an audit walk-through. TD to agency DDI was doing an assurance panel
in the Trusted Computing track at IDF ... ref gone 404, but lives on
at the wayback machine
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13
IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
some x959&AADS posts
https://www.garlic.com/~lynn/subpubkey.html#x959
x959&aads refs
https://www.garlic.com/~lynn/x959.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Management Date: 11 Aug, 2025 Blog: Facebook
1972, Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy, pg160-163, 30yrs of management briefings 1958-1988
F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
Future Systems posts
https://www.garlic.com/~lynn/submain.html#futuresys
Late 80s, AMEX and KKR were in competition for private-equity,
reverse-IPO(/LBO) buyout of RJR and KKR wins. Barbarians at the Gate
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away president of AMEX to help.
I was introduced to John Boyd in the early 80s and would sponsor his
briefings at IBM. In 89/90, the Marine Corps Commandant leverages Boyd
for a makeover of the corps (at a time when IBM was desperately in
need of a makeover). Then IBM has one of the largest losses in the
history of US companies and was being reorganized into the 13 "baby
blues" in preparation for breaking up the company (a take-off on the
"baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
IBM downturn/downfall/breakup posts:
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
and
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
from Learson 1972 Management Briefing:
Management Briefing
Number 1-72: January 18,1972
ZZ04-1312
TO ALL IBM MANAGERS:
Once again, I'm writing you a Management Briefing on the subject of
bureaucracy. Evidently the earlier ones haven't worked. So this time
I'm taking a further step: I'm going directly to the individual
employees in the company. You will be reading this poster and my
comment on it in the forthcoming issue of THINK magazine. But I wanted
each one of you to have an advance copy because rooting out
bureaucracy rests principally with the way each of us runs his own
shop.
We've got to make a dent in this problem. By the time the THINK piece
comes out, I want the correction process already to have begun. And
that job starts with you and with me.
Vin Learson
... snip ...
IBM wild duck poster 1973
https://collection.cooperhewitt.org/objects/18618011/
Before Research, I had joined the cambridge science center after graduation. I would attend user group meetings and drop into customer accounts; the director of one of IBM's largest (financial industry) datacenters especially liked me to drop in and talk technology. At one point, the local IBM branch manager horribly offended the customer and in retribution they ordered an Amdahl computer (it would be a lonely Amdahl in a vast sea of blue; Amdahl had been selling into the technical/scientific market and this would be the first for a true-blue, commercial customer). I was asked to go live on-site for 6-12 months (to help obfuscate the reason for the order). I talked it over with the customer and then refused the request. I was then told that the branch manager was a good sailing buddy of the IBM CEO, and if I refused, I could say goodbye to career, promotions, raises.
Amdahl leaves after ACS/360 is killed
https://people.computing.clemson.edu/~mark/acs_end.html
Later, after transferring to SJR on the west coast, I 1st tried to have a Boyd briefing done through San Jose plant site education. Initially they agreed ... but later, as I provided more info about the briefing and prevailing in adversarial situations, they told me IBM spends a great deal educating managers in handling employees and it wouldn't be in IBM's best interest to expose general employees to Boyd; I should limit the audience to senior members of competitive analysis departments. The first briefing was in the SJR auditorium (open to all). I did learn that a "cookie guard" was required for break refreshments ... otherwise the refreshments would have disappeared into the local population by break time. I was then admonished that the unspoken rule was that talks by important people had to be scheduled first in YKT before other research locations.
other Boyd:
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability_theory
https://en.wikipedia.org/wiki/OODA_loop
Boyd related posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 1655 Date: 11 Aug, 2025 Blog: Facebook
1982, started getting the "IBM 1655" (solid state disk from Intel) ... it could emulate four 2305s (48mbyte) on a 1.5mbyte/sec channel (7-8ms/page) ... but could also be configured in native mode with 3mbyte/sec data streaming (3ms/page). My "SYSPAG" was a way of specifying the DASD configuration for paging, w/o having explicitly coded device type rules. A decade earlier, I had released "page migration", checking for idle pages on "fast" paging devices and moving them to "slower" paging devices (page replacement across 3 levels, rather than just 2-level memory/DASD).
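Page migration is easy to caricature: periodically scan the pages resident on the fast paging device and push anything that hasn't been referenced recently down to the slower device, keeping the fast slots for active pages. A minimal sketch in C (not the actual CP67/VM370 code; the structure and the move_to_slow_device() helper are invented for illustration):

/* Minimal caricature of page migration between paging devices (illustrative;
   not the VM370 code).  Idle pages on the "fast" device (e.g. 2305/1655) are
   pushed down to "slow" DASD, keeping the fast slots for active pages. */
#include <time.h>

struct pageslot {
    int    in_use;         /* slot holds a page            */
    int    referenced;     /* page touched since last scan */
    time_t last_ref;       /* time of last reference       */
};

/* hypothetical mover: write the page to the slow device and free the fast slot */
extern void move_to_slow_device(struct pageslot *slot);

void migrate_idle_pages(struct pageslot *fast, int nslots, int idle_secs)
{
    time_t now = time(NULL);
    for (int i = 0; i < nslots; i++) {
        if (!fast[i].in_use)
            continue;
        if (fast[i].referenced) {          /* still active: leave it alone */
            fast[i].referenced = 0;        /* reset for the next scan pass */
            fast[i].last_ref   = now;
            continue;
        }
        if (now - fast[i].last_ref >= idle_secs)
            move_to_slow_device(&fast[i]); /* demote: 3rd level of the hierarchy */
    }
}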
paging, page replacement, page I/O:
https://www.garlic.com/~lynn/subtopic.html#clock
posts mentioning 1655 and SYSPAG
https://www.garlic.com/~lynn/2021j.html#28 Programming Languages in IBM
https://www.garlic.com/~lynn/2019b.html#4 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2011e.html#79 I'd forgotten what a 2305 looked like
https://www.garlic.com/~lynn/2011c.html#87 A History of VM Performance
https://www.garlic.com/~lynn/2007c.html#0 old discussion of disk controller chache
https://www.garlic.com/~lynn/2006y.html#9 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#18 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 1655 Date: 11 Aug, 2025 Blog: Facebook
re:
I got HSDT, T1 and faster computer links (both terrestrial and satellite) and lots of battles with the communication group (the 60s IBM 2701 controller supported T1, but the 70s move to SNA/VTAM and related issues appeared to cap controllers at 56kbits/sec).
Mid-80s, they generated an analysis that customers weren't looking for T1 support until sometime well into the 90s. They showed the number of "fat pipe" configurations (parallel 56kbit links treated as a single logical link) ... and found they dropped to zero by seven parallel links (what they didn't know, or didn't want to publicize, was that the typical telco tariff for five or six 56kbit links was about the same as a full T1). A trivial survey by HSDT found 200 customers with full T1s; they had just switched to non-IBM hardware and software.
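The tariff point is just arithmetic: a T1 carries 24 x 64kbit channels (1.536mbit/sec payload), while the tariff claim above puts the crossover at five or six 56kbit links. A quick check:

/* Arithmetic behind the "fat pipe" observation above (illustrative). */
#include <stdio.h>

int main(void)
{
    double t1_kbit   = 1536.0;  /* T1 payload: 24 x 64kbit channels            */
    double link_kbit = 56.0;    /* individual leased 56kbit link               */
    double crossover = 5.5;     /* tariff claim: 5-6 x 56kbit costs about a T1 */

    double fatpipe_kbit = crossover * link_kbit;          /* ~308 kbit/sec */
    printf("for the price of ~%.0f kbit/sec in 56kbit links,\n", fatpipe_kbit);
    printf("a full T1 delivers %.0f kbit/sec, i.e. ~%.0fx the bandwidth\n",
           t1_kbit, t1_kbit / fatpipe_kbit);              /* ~5x           */
    return 0;
}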
About the same time they were fighting off release of mainframe TCP/IP support. When they lost, they changed tactics and said that since they had corporate ownership of everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a full 3090 processor. I then added RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).
Some univ analysis claimed that LU6.2 VTAM pathlength was 160K instructions while equivalent unix workstation TCP pathlength was 5K instruction.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Univ, Boeing/Renton, IBM/HONE Date: 11 Aug, 2025 Blog: Facebook
Within a year of taking a two credit hr fortran/computer class, a 360/67 arrived (part of replacing 709/1401), originally for TSS/360 (which never came to production), and I was hired fulltime responsible for os/360. Then CSC comes out and installs CP67/CMS (3rd after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my dedicated weekend time (univ shutdown the datacenter on weekends, although 48hrs w/o sleep made monday classes hard). CP67 supported 1052&2741 terminals with automagic terminal type identification (switching the terminal-type port scanner as needed). Univ. had some number of ASCII TTY 33&35, so I add ascii terminal support (integrated with the automagic terminal type id; trivia: when the ASCII port scanner had been delivered to the univ, it came in a Heathkit box). I then wanted a single dial-up number ("hunt group") for all terminals. Didn't quite work; IBM had taken a short-cut and hardwired the line speed for each port. That kicks off a clone controller project: implement a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition that it supported auto line speed. It is then upgraded with an Interdata/4 for the channel interface and a cluster of Interdata/3s for port interfaces. Four of us are then written up for (some part of) the clone controller business ... sold by Interdata and later Perkin-Elmer.
Then before I graduate, I'm hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Boeing Renton datacenter was the largest in the world (360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room; joke that Boeing got 360/65s like other companies acquired keypunches). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the room and install a 360/67 for me to play with, when I'm not doing other stuff).
Then when I graduate, I join the IBM science center (instead of staying with the CFO). One of my hobbies after joining IBM was enhanced production systems for internal datacenters (the online sales&marketing support HONE systems were one of the first and a long-time customer, initially CP67, then VM370; HONE also had me go along for early non-US HONE installs). With the decision to add virtual memory to all 370s, a new group was formed to morph CP67 into VM370, but lots of CP67 stuff was greatly simplified and/or dropped. In 1974, I start moving lots of stuff into VM370R2 for my CSC/VM. HONE then consolidates their US 370 datacenters in Palo Alto (across the back parking lot from PASC; trivia: when FACEBOOK 1st moves into Silicon Valley, it was into a new bldg built next door to the former HONE datacenter). I then start putting multiprocessor support into VM370R3-based CSC/VM, initially for US HONE so they could upgrade all their 370/168s to 2-CPU systems.
trivia: after bay area earthquake in early 80s, HONE was 1st replicated in Dallas, and then a 3rd in Boulder.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller
https://www.garlic.com/~lynn/submain.html#360pcm
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Recent posts mentioning Boeing CFO, Renton, BCS ("boeing computer
services")
https://www.garlic.com/~lynn/2025d.html#15 MVT/HASP
https://www.garlic.com/~lynn/2025d.html#12 IBM 370/168
https://www.garlic.com/~lynn/2025c.html#115 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#103 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#100 When Big Blue Went to War
https://www.garlic.com/~lynn/2025c.html#83 IBM HONE
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025b.html#117 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#106 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#38 IBM Computers in the 60s
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#1 Large Datacenters
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#102 Large IBM Customers
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#70 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#22 IBM SE Asia
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#40 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#58 IBM SAA and Somers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM PS2 Date: 12 Aug, 2025 Blog: Facebook
Head of POK took over Boca & PCs. There was a joke that IBM lost $5 on every PS2 made, but IBM would make it up with volume. Boca then hires Dataquest (since bought by Gartner) to do a study of PC futures (including a video-taped round table of several silicon valley experts). I had known the person running the study for several years ... and I was asked to be one of the silicon valley experts; I cleared it with my local management and Dataquest would obfuscate my bio so Boca wouldn't recognize me as an IBM employee.
2010: a max-configured z196 (mainframe) benchmarked at 50BIPS (industry standard benchmark, number of program iterations compared to a reference platform) and went for $30M. At the same time, an E5-2600 server blade benchmarked at 500BIPS (program iterations compared to the same reference platform) and the IBM base list price was $1815 ... and large cloud operations (dozens or scores of megadatacenters around the world, each with half a million or more blade servers and enormous automation) were claiming they assembled their own blade servers at 1/3rd the price of brand name servers. Then the industry press had articles that server component vendors were shipping at least half their product directly to cloud megadatacenters, and IBM sells off its server blade business.
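Putting the two price/performance numbers side by side (using only the figures quoted above):

/* Price/performance comparison using the figures in the paragraph above. */
#include <stdio.h>

int main(void)
{
    double z196_bips  = 50.0,  z196_price  = 30000000.0;  /* max-configured z196 */
    double blade_bips = 500.0, blade_price = 1815.0;      /* E5-2600 base list   */

    double z196_per_bips  = z196_price  / z196_bips;    /* ~$600,000 per BIPS */
    double blade_per_bips = blade_price / blade_bips;   /* ~$3.63 per BIPS    */

    printf("z196: $%.0f/BIPS, blade: $%.2f/BIPS, ratio ~%.0fx\n",
           z196_per_bips, blade_per_bips, z196_per_bips / blade_per_bips);
    return 0;
}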
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
posts mentioning Dataquest, Gartner, PS2, Boca
https://www.garlic.com/~lynn/2024f.html#42 IBM/PC
https://www.garlic.com/~lynn/2024e.html#103 Rise and Fall IBM/PC
https://www.garlic.com/~lynn/2023g.html#59 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#13 IBM/PC
https://www.garlic.com/~lynn/2022h.html#109 terminals and servers, was How convergent was the general use of binary floating point?
https://www.garlic.com/~lynn/2022h.html#104 IBM 360
https://www.garlic.com/~lynn/2022h.html#38 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#107 IBM Downfall
https://www.garlic.com/~lynn/2021k.html#36 OS/2
https://www.garlic.com/~lynn/2021f.html#72 IBM OS/2
https://www.garlic.com/~lynn/2021.html#68 OS/2
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2017h.html#113 IBM PS2
https://www.garlic.com/~lynn/2017f.html#110 IBM downfall
https://www.garlic.com/~lynn/2017d.html#26 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2017b.html#23 IBM "Breakup"
https://www.garlic.com/~lynn/2014l.html#46 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2013i.html#4 IBM commitment to academia
https://www.garlic.com/~lynn/2012k.html#44 Slackware
https://www.garlic.com/~lynn/2010c.html#78 SLIGHTLY OT - Home Computer of the Future (not IBM)
https://www.garlic.com/~lynn/2008d.html#60 more on (the new 40+ yr old) virtualization
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 370 Virtual Memory Date: 12 Aug, 2025 Blog: Facebookre:
VM/370 CMS had 64kbytes of OS/360 simulation (the joke was that CMS's 64kbytes was more effective than MVS's 8mbytes). Circa 1980, the San Jose plant site had some large apps that required MVS because they wouldn't run on CMS. Then the Los Gatos lab added 12kbytes of further OS/360 simulation and got nearly all the rest ported from MVS to CMS.
At the time Burlington had a 7mbyte VLSI design Fortran app and specially generated MVS systems restricted to an 8mbyte kernel image and 1mbyte CSA ... creating a 7mbyte brick wall for the Fortran app (any time enhancements/changes were made, it ran solidly into that 7mbyte wall). Los Gatos offered to provide them the extra 12kbytes of OS/360 simulation ... CMS running in a 16mbyte virtual machine would use less than 192kbytes ... leaving the rest of the 16mbytes for the Burlington VLSI Fortran app (more than doubling the addressing available, compared to their specially created MVS systems). However Burlington was a heavily POK-influenced shop, and the head of POK had already gotten corporate to kill the VM370 product, shut down the development group, and transfer all the people to POK (for MVS/XA) ... having all the Burlington 370s move to VM370/CMS would be a great loss of face (Endicott had managed to save the VM370 product for the mid-range, but was still in the process of recreating a development group from scratch ... so much of the VM370/CMS work was being done by the internal community).
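The address-space arithmetic behind that claim, as a minimal sketch (Python) using the figures in the paragraph above (16mbyte 370 address space, 8mbyte MVS kernel image plus 1mbyte CSA, under-192kbyte CMS plus OS/360 simulation):

MB, KB = 1024 * 1024, 1024
addr_space = 16 * MB
mvs_app_max = addr_space - (8 * MB + 1 * MB)   # 7mbyte brick wall under the special MVS systems
cms_app_max = addr_space - 192 * KB            # ~15.8mbyte under CMS in a 16mbyte virtual machine
print(mvs_app_max // MB, cms_app_max / MB, cms_app_max / mvs_app_max)  # 7, ~15.8, ~2.3x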
some recent posts mentioning Los Gatos lab:
https://www.garlic.com/~lynn/2025d.html#24 IBM Yorktown Research
https://www.garlic.com/~lynn/2025d.html#7 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#1 Chip Design (LSM & EVE)
https://www.garlic.com/~lynn/2025c.html#116 Internet
https://www.garlic.com/~lynn/2025c.html#110 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#107 IBM San Jose Disk
https://www.garlic.com/~lynn/2025c.html#104 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#93 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000
https://www.garlic.com/~lynn/2025b.html#86 Packet network dean to retire
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081
https://www.garlic.com/~lynn/2025b.html#74 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#37 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2025b.html#21 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025.html#54 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#33 IBM ATM Protocol?
https://www.garlic.com/~lynn/2025.html#12 IBM APPN
https://www.garlic.com/~lynn/2025.html#2 IBM APPN
https://www.garlic.com/~lynn/2024g.html#103 John Boyd and Deming
https://www.garlic.com/~lynn/2024g.html#102 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#76 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#57 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2024g.html#6 IBM 5100
https://www.garlic.com/~lynn/2024f.html#82 IBM Registered Confidential and "811"
https://www.garlic.com/~lynn/2024f.html#45 IBM 5100 and Other History
https://www.garlic.com/~lynn/2024f.html#39 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024e.html#145 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#79 NSFNET
https://www.garlic.com/~lynn/2024e.html#63 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#58 IBM SAA and Somers
https://www.garlic.com/~lynn/2024e.html#40 Instruction Tracing
https://www.garlic.com/~lynn/2024e.html#28 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#85 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2024d.html#80 IBM ATM At San Jose Plant Site
https://www.garlic.com/~lynn/2024d.html#19 IBM Internal Network
https://www.garlic.com/~lynn/2024d.html#5 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#114 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#81 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#68 Berkeley 10M
https://www.garlic.com/~lynn/2024b.html#27 HA/CMP
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024b.html#15 IBM 5100
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2024.html#42 Los Gatos Lab, Calma, 3277GA
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#15 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2024.html#9 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#8 Niklaus Wirth 15feb1934 - 1jan2024
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Public Facebook Mainframe Group Date: 13 Aug, 2025 Blog: Facebookre:
... mostly repeat from post:
late 70s & early 80s, I was blamed for online computer conferencing on
the internal network (larger than arpanet/internet from the late 60s
to sometime mid/late 80s, about the time it was forced to convert to
SNA). It really took off the spring '81 when I distributed trip report
to visit Jim Gray at Tandem (he had left IBM SJR fall of 1980), only
about 300 actually participated but claims upwards of 25,000 were
reading. From IBMJargon:
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
... snip ...
Six copies of a 300-page extraction from the memos were put together in
Tandem 3-ring binders and sent to each member of the executive
committee, along with an executive summary and an executive summary of
the executive summary. A small bit is reproduced in this (linkedin) post:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
Some task forces were formed to study the phenomena and a researcher was hired to study how I communicated. The researcher sat in the back of my office for nine months, taking notes on conversations and phone calls, got copies of all my incoming and outgoing email, and logs of all instant messages. The results were IBM (internal) reports, conference talks & papers, books and a Stanford PhD (joint between language and computer AI; Winograd was advisor on the AI side). Eventually IBM forum software was created along with officially sanctioned, moderated FORUMs.
Also from
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
1972, Learson had tried (and failed) to block the bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy,
pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
then from Future System in 1st half of 70s, 1993 Computer Wars: The
Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
This leads to IBM having one of the largest losses in the history of US
companies, and it was being reorged into the 13 "baby blues" in
preparation for breaking up the company (a take-off on the "baby bell"
breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but got a call from the bowels of (corp
hdqtrs) Armonk asking us to help with the corporate breakup. Before we
get started, the board brings in the former AMEX president as CEO to
try and save the company, who (somewhat) reverses the breakup.
note: late 80s, a senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance. However, his opening was that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing mainframe datacenters to more distributed-computing-friendly platforms. The disk division had come up with a number of solutions, but they were all being vetoed by the communication group (with their corporate ownership of everything that crossed datacenter walls). The communication group stranglehold on mainframe datacenters wasn't just disks, and a couple yrs later IBM has one of the largest losses in the history of US companies. The disk division executive's partial countermeasure (to the communication group) was investing in distributed computing startups that would use IBM disks (and he would periodically call us in to visit his investments to see if we could provide any help).
... other trivia: mid-80s, the communication group was fighting off release of mainframe tcp/ip support; when they lost, they changed strategy. Since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then added RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times increase in the bytes moved per instruction executed).
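The "bytes moved per instruction executed" comparison is just throughput divided by the instructions actually consumed; a minimal sketch (Python) where the two throughput figures come from the paragraph above, and the CPU fractions and instruction rates are purely hypothetical placeholders chosen only to illustrate how a roughly 500x ratio can fall out:

def bytes_per_instruction(bytes_per_sec, cpu_busy_fraction, instr_per_sec):
    # bytes moved per instruction = throughput / instructions consumed doing it
    return bytes_per_sec / (cpu_busy_fraction * instr_per_sec)

base    = bytes_per_instruction(44_000,    1.00, 10_000_000)  # ~44kbytes/sec on nearly a whole CPU (placeholder rate)
rfc1044 = bytes_per_instruction(1_000_000, 0.45,  1_000_000)  # channel-rate on a 4341, modest CPU (placeholders)
print(round(rfc1044 / base))  # with these placeholders, roughly 500x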
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
communication group terminal emulation & strangle hold on datacenters
https://www.garlic.com/~lynn/subnetwork.html#emulation
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network Date: 13 Aug, 2025 Blog: FacebookMIT CTSS/7094 had a form of email.
Then some of the MIT CTSS/7094 people went to the 5th flr to do
MULTICS. Others went to the IBM Cambridge Science Center on the 4th
flr and did virtual machines (1st modified a 360/40 with virtual memory
and did CP40/CMS, which morphs into CP67/CMS when the 360/67 standard
with virtual memory becomes available, precursor to VM370), the science
center wide-area network that morphs into the IBM internal network
(larger than arpanet/internet from the beginning until sometime mid/late
80s, about the time it was forced to convert to SNA; the technology was
also used for the corporate-sponsored univ BITNET), and invented GML in
1969 (precursor to SGML and HTML, etc). From one of the GML inventors
about the science center wide-area network
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
The IBM Pisa Scientific Center had done SPM for CP67 (later ported to internal VM370), a superset of the combination of the (later VM370) VMCF, IUCV and SMSG. RMSG/VNET supported SPM (even the version sent to customers) ... which could be used for instant messaging on the internal network. SPM was used by a multi-user client/server space war game (and with RMSG/VNET SPM support, clients could be on any node on the internal network). A number of apps internally and on BITNET supported the instant messaging capability.
PROFS started out picking up internal apps and wrapping 3270 menus around them (for the less computer literate). They picked up a very early version of VMSG for the email client. When the VMSG author tried to offer them a much enhanced version of VMSG, the PROFS group tried to have him separated from the company. The whole thing quieted down when he demonstrated that every VMSG (and PROFS email) had his initials in a non-displayed field. After that he only shared his source with me and one other person. VMSG also contained an ITPS format option for email sent to the gateway between the internal network and ITPS.
The VMSG author also did Parasite/Story, a CMS application that used
3270 pseudo devices and its own HLLAPI-like language (before the IBM/PC)
... it could talk to CCDN via the PASSTHRU/CCDN gateway. Old archived
post with PARASITE/STORY information (a remarkable aspect was the code
was so efficient, it could run in less than 8kbytes).
https://www.garlic.com/~lynn/2001k.html#35
and (field engineering) RETAIN PUT Bucket Retriever "Story"
https://www.garlic.com/~lynn/2001k.html#36
Another system was the branch office online sales & marketing support HONE system. When I joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters and HONE was the 1st (and long time) customer, initially CP67/CMS systems and 2741 terminals, moving to VM370/CMS systems (all over the world) and 3270 terminals.
Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
posts mentioning vmsg, parasite, story
https://www.garlic.com/~lynn/2025b.html#60 IBM Retain and other online
https://www.garlic.com/~lynn/2025.html#90 Online Social Media
https://www.garlic.com/~lynn/2024f.html#91 IBM Email and PROFS
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023c.html#43 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2019d.html#108 IBM HONE
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018.html#20 IBM Profs
https://www.garlic.com/~lynn/2017k.html#27 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2017g.html#67 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1
https://www.garlic.com/~lynn/2015d.html#12 HONE Shutdown
https://www.garlic.com/~lynn/2014k.html#39 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014j.html#25 another question about TSO edit command
https://www.garlic.com/~lynn/2014h.html#71 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014e.html#49 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013d.html#66 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2012d.html#17 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2011o.html#30 Any candidates for best acronyms?
https://www.garlic.com/~lynn/2011m.html#44 CMS load module format
https://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009q.html#66 spool file data
https://www.garlic.com/~lynn/2009q.html#4 Arpanet
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network Date: 14 Aug, 2025 Blog: Facebookre:
The Cambridge Science Center had also ported APL\360 to CP67/CMS for CMS\APL (needing to rework storage management from 16kbyte workspace swapping to large demand-paged workspaces, and also add an API for using system services like file I/O) and most of the sales & marketing support apps were done in CMS\APL ... upgraded to APL\CMS in the move to VM370/CMS. In the morph of CP67->VM370, lots of stuff was simplified and/or dropped. 1974, with VM370R2, I started moving lots of stuff (feature, function, performance) into VM370 for my internal CSC/VM. Then for VM370R3 CSC/VM, I put multiprocessor support back in, originally for HONE so they could upgrade all their 168s to 2-CPU systems (each system getting twice the throughput of a single CPU). US HONE had consolidated all their datacenters in silicon valley (trivia: when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next door to the former US HONE datacenter). After the early 80s bay area earthquake, US HONE was 1st replicated in Dallas and then another in Boulder.
One of my 1st overseas trips, having recently graduated and joined IBM, was when HONE asked me to go along for an early non-US HONE install in Paris.
Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network Date: 14 Aug, 2025 Blog: Facebookre:
Also asked to go over for a HONE install in Tokyo ... Okura hotel, right across from the US compound; IBM was down the hill, then under the highway overpass on the other side (I think the yen was 330/$).
After transfer from CSC to Research on the west coast (CSC/VM turns into SJR/VM), got to wander around IBM (& non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test ... on the other side of the street. Bldg14&15 were running pre-scheduled, 7x24, stand-alone testing and mentioned that they had recently tried MVS, but it had 15min MTBF (requiring manual re-ipl) in that environment. I offer to rewrite the I/O supervisor, making it bullet proof and never fail (allowing any amount of on-demand concurrent testing, greatly improving productivity). Then bldg15 gets the 1st engineering 3033 outside POK 3033 processor engineering. Testing was only taking a percent or two of CPU, so we scrounge a 3830 & 3330 string for putting up our own private online service. I do an internal research report on all the I/O integrity work and happen to mention the MVS 15min MTBF ... bringing down the wrath of the MVS group on my head. A few years later, just before 3380s were about to ship, FE had 57 simulated hardware errors they considered likely to occur. MVS was still crashing in all 57 cases, and in 2/3rds of the cases there was no indication of what caused the crash (and I didn't feel sorry).
I would also stop by TYMSHARE and/or see them at monthly meetings
hosted by Stanford SLAC. They had made their CMS online computer
conferencing free to the SHARE organization in Aug1976 as VMSHARE;
archives here:
http://vm.marist.edu/~vmshare/
I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE
files for putting up on the internal network and systems (including
HONE); the biggest problem was lawyers concerned that internal employees
would be contaminated by access to "unfiltered" customer information.
Some of this dates back to CERN's analysis comparing MVS/TSO and VM370/CMS,
presented at SHARE in 1974. Internally, inside IBM, copies of the
report were stamped "IBM Confidential - Restricted" (2nd highest
classification, available on a need-to-know basis only). Then after FS
implodes
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
the head of POK managed to convince corporate to kill the VM370 product, shut down the VM370 development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 mission for the mid-range, but has to recreate a VM370 development group from scratch). POK executives were also strong-arming HONE, trying to force them to convert to MVS.
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
getting to play disk engineer in bldgs 14&15:
https://www.garlic.com/~lynn/subtopic.html#disk
commercial virtual machine offerings
https://www.garlic.com/~lynn/submain.html#online
TYMSHARE & VMSHARE posts
https://www.garlic.com/~lynn/2025d.html#18 Some VM370 History
https://www.garlic.com/~lynn/2025c.html#89 Open-Source Operating System
https://www.garlic.com/~lynn/2025.html#126 The Paging Game
https://www.garlic.com/~lynn/2024g.html#45 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#125 Adventure Game
https://www.garlic.com/~lynn/2023f.html#64 Online Computer Conferencing
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023e.html#6 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023d.html#37 Online Forums and Information
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2022f.html#37 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021d.html#42 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2019e.html#87 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019b.html#54 Misinformation: anti-vaccine bullshit
https://www.garlic.com/~lynn/2018f.html#77 Douglas Engelbart, the forgotten hero of modern computing
https://www.garlic.com/~lynn/2017j.html#26 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017i.html#4 EasyLink email ad
https://www.garlic.com/~lynn/2017.html#28 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2015g.html#91 IBM 4341, introduced in 1979, was 26 times faster than the 360/30
https://www.garlic.com/~lynn/2014g.html#98 After the Sun (Microsystems) Sets, the Real Stories Come Out
https://www.garlic.com/~lynn/2014d.html#44 [CM] Ten recollections about the early WWW and Internet
https://www.garlic.com/~lynn/2012p.html#22 What is a Mainframe?
https://www.garlic.com/~lynn/2012i.html#40 GNOSIS & KeyKOS
https://www.garlic.com/~lynn/2012i.html#39 Just a quick link to a video by the National Research Council of Canada made in 1971 on computer technology for filmmaking
https://www.garlic.com/~lynn/2012e.html#38 A bit of IBM System 360 nostalgia
https://www.garlic.com/~lynn/2011k.html#2 First Website Launched 20 Years Ago Today
https://www.garlic.com/~lynn/2011f.html#75 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
https://www.garlic.com/~lynn/2009s.html#12 user group meetings
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2008s.html#12 New machine code
https://www.garlic.com/~lynn/2006v.html#22 vmshare
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network Date: 15 Aug, 2025 Blog: Facebookre:
RAID history
https://en.wikipedia.org/wiki/RAID#History
In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was
subsequently named RAID 4.[5]
... snip ...
trivia: Ken worked in bldg14. I had transferred from CSC to SJR on the west coast the same year and got to wander around IBM (and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test, across the street. They were running pre-scheduled, 7x24, stand-alone mainframe testing and said that they had recently tried MVS, but it had 15min MTBF (in that environment) requiring manual re-ipl. I offer to rewrite the I/O supervisor, making it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing (greatly improving productivity). I do an (internal only) Research Report on the I/O integrity work and happen to mention the MVS 15min MTBF, bringing down the wrath of the MVS organization on my head. A couple years later with 3380s about to ship, FE had a test of 57 simulated errors (that they believed likely to occur); MVS was still failing in all 57 cases (and in 2/3rds of the cases, no indication of why).
Note: no IBM CKD DASD has been made for decades, all being emulated on industry standard fixed-block devices.
posts getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
trivia: some of the MIT CTSS/7094 people went to the 5th flr for Multics, others went to the IBM cambridge science center on the 4th floor, modified a 360/40 with virtual memory and did CP40/CMS, which morphs into CP67/CMS when the 360/67 standard with virtual memory becomes available ... also invented GML (letters from the inventors' last names) in 1969 (after a decade it morphs into ISO standard SGML and after another decade morphs into HTML at CERN), and a bunch of other stuff. In the early 70s, after the decision to add virtual memory to all 370s, some of CSC splits off and takes over the IBM Boston Programming Center (on the 3rd flr) for the VM370 development group.
FS was completely different from 370 and was going to completely replace it (during FS, internal politics was killing off 370 efforts; the lack of new 370 during FS is credited with giving the clone 370 system makers their market foothold). When FS implodes there is a mad rush to get stuff back into the product pipelines, including kicking off the quick&dirty 3033 & 3081 efforts in parallel.
The head of POK also manages to convince corporate to kill VM370, shut down the development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch). Later, customers weren't converting from MVS to MVS/XA as planned. Amdahl was having more success because they had a (purely) microcode hypervisor ("multiple domain") and were able to run MVS & MVS/XA concurrently (note IBM wasn't able to respond with LPAR & PR/SM on 3090 for nearly a decade). POK had done a primitive virtual machine ("VMTOOL") for MVS development, which also needed the SIE instruction to slip in & out of virtual machine mode ... part of the performance problem was the 3081 didn't have enough microcode space, so the SIE stuff had to be swapped in & out.
Melinda's history (including CP40 & CP67)
https://www.leeandmelindavarian.com/Melinda#VMHist
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
past posts mentioning "SIE", MVS/XA, VMTOOL, Amdahl
https://www.garlic.com/~lynn/2025d.html#23 370 Virtual Memory
https://www.garlic.com/~lynn/2025c.html#78 IBM 4341
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#27 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#20 Virtual Machine History
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#121 IBM VM/370 and VM/XA
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2014j.html#10 R.I.P. PDP-10?
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2013n.html#46 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2011p.html#114 Start Interpretive Execution
https://www.garlic.com/~lynn/2006j.html#27 virtual memory
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Univ, Boeing/Renton, IBM/HONE Date: 16 Aug, 2025 Blog: Facebookre:
23jun69 unbundling started charging for application software, SE services, maint, etc.
The charging for SE services pretty much put an end to SE support teams at customer sites ... where new SEs had learned the trade ... sort of as apprentices (since they couldn't figure out how not to charge for new, inexperienced SEs at customer sites). In reaction, HONE (Hands-On Network Environment) was set up ... a number of internal cp67 datacenters providing virtual machine access to SEs in the branch offices working with guest operating systems. The concept was that SEs could get hands-on operating experience via remote access, running in (CP67) virtual machines.
CSC had also ported apl\360 to CMS (for cms\apl) and a number of sales & marketing support applications were developed in CMS\APL and (also) deployed on HONE. Relatively quickly the sales&marketing applications came to dominate all HONE activity (personal computing, time-sharing) ... and the original objective of SE training (using guest operating systems in virtual machines) withered away.
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
recent related comments/replies
https://www.garlic.com/~lynn/2025d.html#32 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#33 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#34 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#35 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: TYMSHARE, VMSHARE, ADVENTURE Date: 16 Aug, 2025 Blog: FacebookAfter transferring from the science center to SJR Research on the west coast, got to wander around lots of (IBM & non-IBM) datacenters in silicon valley, including TYMSHARE ... also saw them at the monthly BAYBUNCH meetings hosted at Stanford SLAC. TYMSHARE started offering their CMS-based online computer conferencing free to the (mainframe user group) SHARE in Aug1976 as VMSHARE, archives:
http://vm.marist.edu/~vmshare/
I cut a deal with TYMSHARE to get a monthly tape dump/copy of the VMSHARE (and later PCSHARE) files for putting up on the internal network and systems (including the world-wide branch office online HONE systems). The biggest problem was concern that internal employees would be contaminated by exposure to unfiltered customer information. Some of this dated back to 1974, when CERN made a presentation at SHARE comparing VM370/CMS and MVS/TSO (inside IBM, copies were stamped "IBM Confidential - Restricted", 2nd highest classification, aka need-to-know only).
On one such TYMSHARE visit, they demonstrated ADVENTURE, which they had
found on the Stanford SAIL PDP10 and ported to VM370/CMS. I got a copy
for putting up on internal systems. I would send source to anybody that
proved they got all the points. Within a short time, versions with more
points as well as PLI versions appeared
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure
Most internal 3270 logon screens had "For Business Use Only"; however SJR 3270 logon screens had "For Management Approved Use" ... which came in handy when some people from corporate audit demanded that all the demo programs (like adventure) be removed.
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
misc posts mentioning TYMSHARE, VMSHARE, and Adventure:
https://www.garlic.com/~lynn/2025.html#126 The Paging Game
https://www.garlic.com/~lynn/2024g.html#97 CMS Computer Games
https://www.garlic.com/~lynn/2024g.html#45 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#125 Adventure Game
https://www.garlic.com/~lynn/2024f.html#11 TYMSHARE, Engelbart, Ann Hardy
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#139 RPG Game Master's Guide
https://www.garlic.com/~lynn/2024c.html#120 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#43 TYMSHARE, VMSHARE, ADVENTURE
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
https://www.garlic.com/~lynn/2018f.html#111 Online Timsharing
https://www.garlic.com/~lynn/2017j.html#26 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017h.html#11 The original Adventure / Adventureland game?
https://www.garlic.com/~lynn/2017f.html#67 Explore the groundbreaking Colossal Cave Adventure, 41 years on
https://www.garlic.com/~lynn/2017d.html#100 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016e.html#103 August 12, 1981, IBM Introduces Personal Computer
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011g.html#49 My first mainframe experience
https://www.garlic.com/~lynn/2011f.html#75 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#57 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2008s.html#12 New machine code
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
https://www.garlic.com/~lynn/2006n.html#3 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2005u.html#25 Fast action games on System/360+?
https://www.garlic.com/~lynn/2005k.html#18 Question about Dungeon game on the PDP
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Mosaic Date: 16 Aug, 2025 Blog: FacebookGot HSDT in early 80s, T1 and faster computer links (both satellite and terrestrial) ... and working with the NSF Director; was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.
1988, got HA/6000 project, initially for NYTimes to move their
newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXcluster
support in the same source base with Unix.
Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid92 and 128-system clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*), we are told we aren't allowed to work with anything that has more than 4-system clusters, and we leave IBM a few months later.
A little later, asked in as consultant to a small client/server startup. Two former Oracle employees (that were in the Ellison/Hester meeting) were there, responsible for something called "commerce server", and they wanted to do payment transactions. The startup had also done some technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce"; I had responsibility for everything between webservers and the payment networks. IETF/Internet RFC Editor Postel also let me help him with the periodically re-issued "STD1"; Postel also sponsored my talk at ISI on "Why Internet Isn't Business Critical Dataprocessing" (based on the software, procedures, and documentation I had to do for "electronic commerce").
NCSA was major recipient of "new technologies" funding
https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
Some of the NCSA people moved to silicon valley and formed Mosaic
Corp. NCSA complained about the use of "Mosaic" ... trivia: where did
they get the rights to the "Netscape" name?
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
payment network gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway
Posts mentioning Postel/ISI and "Why Internet Isn't Business Critical Dataprocessing"
https://www.garlic.com/~lynn/2025b.html#97 Open Networking with OSI
https://www.garlic.com/~lynn/2025b.html#41 AIM, Apple, IBM, Motorola
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024g.html#80 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#71 Netscape Ecommerce
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2017j.html#31 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM and non-IBM Date: 17 Aug, 2025 Blog: Facebooknot exactly; within a year of taking a two credit hr intro to fortran/computers, the univ hires me fulltime responsible for os/360 (a 360/67 arrived to replace the 709/1401 for tss/360 ... which didn't come to production, so it ran as a 360/65); the univ shut down the datacenter for weekends and I would have it dedicated, although 48hrs w/o sleep made monday classes hard. Then CSC came out to install CP67/CMS (3rd install after CSC itself and MIT Lincoln Labs) and mostly I played with it during my dedicated weekends.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67 supported 1052&2741 terminals with automagic terminal type
identification (switching the terminal-type port scanner as
needed). The univ. had some number of ASCII TTY 33&35, so I add ascii
terminal support (integrated with the automagic terminal type id;
trivia: when the ASCII port scanner had been delivered to the univ, it
came in a Heathkit box). I then wanted a single dial-up number ("hunt
group") for all terminals. That didn't quite work; IBM had taken a
short-cut and hardwired the line speed for each port. That kicks off a
clone controller project: implement a channel interface board for an
Interdata/3 programmed to emulate the IBM controller, with the addition
that it supported auto line speed. It is then upgraded with an
Interdata/4 for the channel interface and a cluster of Interdata/3s for
the port interfaces. Four of us are then written up for (some part of)
the IBM clone controller business ... sold by Interdata and later
Perkin-Elmer
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
trivia: Some of the MIT CTSS/7094 people went to the 5th flr and did
MULTICS and others went to the IBM science center on the 4th flr (among
many things, modified a 360/40 with virtual memory and did CP40/CMS,
which morphs into CP67/CMS when the 360/67 standard with virtual memory
becomes available). Folklore is that the (Multics) Bell people returned
home and did a simplified MULTICS as UNIX.
https://en.wikipedia.org/wiki/Multics#Unix
Then portable UNIX was 1st developed on Interdata.
https://en.wikipedia.org/wiki/Interdata_7/32_and_8/32#Operating_systems
360 plug-compatible (clone) controller
https://www.garlic.com/~lynn/submain.html#360pcm
other trivia: mid-80s, the communication group was fighting release of mainframe TCP/IP; when that failed they changed strategies. Since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then add RFC1044 support and in some tuning tests at Cray Research, between a Cray and a 4341, got sustained 4341 channel throughput, using only a modest amount of 4341 CPU (something like a 500 times increase in bytes moved per instruction executed).
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
then in 1988, an IBM Branch office asks if I can help LLNL (national lab) get some serial stuff they are working with standardized ... which quickly became the fibre-channel standard ("FCS", including some stuff I did in 1980; 1gbit/sec, full-duplex, aggregate 200mbytes/sec). Then POK finally gets their stuff released as ESCON (when it is already obsolete), initially 10mbytes/sec, later increased to 17mbytes/sec. Later some POK engineers become involved in FCS and define a heavy-weight protocol that significantly reduces throughput, eventually released as FICON. 2010, IBM releases a z196 "Peak I/O" benchmark getting 2M IOPS using 104 FICON (on 104 FCS, 20k IOPS/FICON). About the same time an FCS is released for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also, IBM pubs recommend SAP (system assist processors that do actual I/O) CPU be kept to 70% ... or about 1.5M IOPS. More recently they claim zHPF (more like what I did in 1980 and the original FCS) has got it up to 100K IOPS/FICON (five times the original and closer to one tenth of a 2010 FCS).
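The per-channel arithmetic behind those numbers, as a minimal sketch (Python) using only the figures quoted above:

z196_peak_iops, ficon_count = 2_000_000, 104
fcs_blade_iops = 1_000_000                       # single FCS on an E5-2600 blade, as quoted
per_ficon = z196_peak_iops / ficon_count         # ~19,230 IOPS per FICON (the "20k" figure)
print(round(per_ficon))
print(round(fcs_blade_iops / per_ficon))         # one FCS is roughly 52 FICON-equivalents
print(round(100_000 / per_ficon, 1), 100_000 / fcs_blade_iops)  # zHPF: ~5.2x original, 0.1 of a 2010 FCS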
(1980) channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: EMACS Date: 18 Aug, 2025 Blog: Facebookstarting with PC/RT 38yrs ago; the following year got the HA/6000 project, originally for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I then rename it HA/CMP
when I start doing technical/scientific cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (oracle, sybase, ingres, informix) that had VAXCluster support in the same source base with Unix. I do a distributed lock manager supporting VAXCluster semantics (and Oracle and Ingres especially had a lot of input on improving scale-up performance). trivia: previously I had worked on the original SQL/relational, System/R, with Jim Gray and Vera Watson.
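For flavor, a minimal sketch (Python) of the lock-mode compatibility that VAXCluster-style DLM semantics imply, assuming the classic six VMS lock modes; this is an illustration of the semantics, not the HA/CMP code:

# classic VMS DLM lock modes and their compatibility (assumed here for illustration)
COMPAT = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},  # null
    "CR": {"NL", "CR", "CW", "PR", "PW"},        # concurrent read
    "CW": {"NL", "CR", "CW"},                    # concurrent write
    "PR": {"NL", "CR", "PR"},                    # protected read
    "PW": {"NL", "CR"},                          # protected write
    "EX": {"NL"},                                # exclusive
}

def can_grant(requested, held_modes):
    # grant only if the requested mode is compatible with every lock already held
    return all(requested in COMPAT[held] for held in held_modes)

assert can_grant("PR", ["CR", "PR"])   # readers coexist
assert not can_grant("EX", ["PR"])     # exclusive must wait for the reader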
still using emacs daily for editing, shell, quite a bit of lisp programming, etc.
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
recent posts mentioning distributed lock manager
https://www.garlic.com/~lynn/2025c.html#69 Tandem Computers
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000
https://www.garlic.com/~lynn/2025c.html#48 IBM Technology
https://www.garlic.com/~lynn/2025c.html#40 IBM & DEC DBMS
https://www.garlic.com/~lynn/2025c.html#37 IBM Mainframe
https://www.garlic.com/~lynn/2025c.html#10 IBM System/R
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#104 IBM S/88
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#30 Some Career Highlights
https://www.garlic.com/~lynn/2025b.html#26 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025.html#125 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#119 Consumer and Commercial Computers
https://www.garlic.com/~lynn/2025.html#106 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#86 Big Iron Throughput
https://www.garlic.com/~lynn/2025.html#76 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2024f.html#109 NSFnet
https://www.garlic.com/~lynn/2024f.html#70 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#67 IBM "THINK"
https://www.garlic.com/~lynn/2024f.html#25 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024e.html#117 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#75 IBM San Jose
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#84 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2024d.html#52 Cray
https://www.garlic.com/~lynn/2024c.html#105 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS
https://www.garlic.com/~lynn/2024b.html#80 IBM DBMS/RDBMS
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#55 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#29 DB2
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#93 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#82 Benchmarks
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: DASD Date: 18 Aug, 2025 Blog: FacebookDASD originated with DRUMS, DISKS, Datacell ... possibly dating from when it wasn't obvious which would prevail.
Trivia: ECKD started off as the protocol for CKD disks on the 3880 "CALYPSO" speed-matching buffer ... allowing 3mbyte/sec 3380s to be used with 1.5mbyte/sec channels.
3370 FBA (fixed block architecture) ... however some operating systems were so intertwined with CKD ... that they couldn't be weaned. The next disk, 3380 (3mbyte/sec), was "CKD" ... but already moving to fixed-block (which can be seen in the records/track formulas, where record length has to be rounded up to a multiple of the fixed cell size). Now there isn't even that level of obfuscation ... for decades CKD disks have been simulated on industry standard fixed-block devices.
Trivia: expanded store originated with the 3090 ... it was obvious production throughput needed more memory than could be packaged within the 3090 memory access latency. The 3090 expanded store bus was a wide, high performance bus ... used by a synchronous instruction that moved a 4k page between expanded store and standard processor memory (a trivial fraction of the pathlength required for an I/O operation). The expanded store bus was also adapted for the 3090 vector market to be able to attach HIPPI devices (LANL standardization of the Cray 100mbyte/sec channel) ... using a PC "peek/poke" paradigm with "move" I/O commands to reserved expanded store addresses.
FCS trivia: IBM mainframe didn't get the equivalent until FICON. In 1988, an IBM Branch office asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec. Then IBM released their serial stuff as ESCON (when it is already obsolete), initially 10mbytes/sec, later upgraded to 17mbytes/sec. Then some IBM engineers become involved with FCS and define a heavy-weight protocol that radically reduces throughput, released as FICON. 2010, the z196 "Peak I/O" benchmark gets 2M IOPS using 104 FICON (20K IOPS/FICON). The same year, an FCS is announced for E5-2600 server blades getting over a million IOPS (two such FCS having higher throughput than 104 FICON). Note also, IBM pubs recommend SAPs (system assist processors that do actual I/O) be restricted to 70% CPU (about 1.5M IOPS).
70s & early 80s, getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, vtoc, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
some posts mentioning 3090 expanded store
https://www.garlic.com/~lynn/2025b.html#65 Supercomputer Datacenters
https://www.garlic.com/~lynn/2024g.html#29 Computer System Performance Work
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024.html#30 IBM Disks and Drums
https://www.garlic.com/~lynn/2023g.html#54 REX, REXX, and DUMPRX
https://www.garlic.com/~lynn/2023g.html#7 Vintage 3880-11 & 3880-13
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2021k.html#110 Network Systems
https://www.garlic.com/~lynn/2021e.html#25 rather far from Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2019e.html#120 maps on Cadillac Seville trip computer from 1978
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2019c.html#44 IBM 9020
https://www.garlic.com/~lynn/2019c.html#33 IBM Future System
https://www.garlic.com/~lynn/2019b.html#77 IBM downturn
https://www.garlic.com/~lynn/2019b.html#52 S/360
https://www.garlic.com/~lynn/2018e.html#71 PDP 11/40 system manual
https://www.garlic.com/~lynn/2018b.html#47 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2017k.html#11 thrashing, was Re: A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017h.html#50 System/360--detailed engineering description (AFIPS 1964)
https://www.garlic.com/~lynn/2017g.html#102 SEX
https://www.garlic.com/~lynn/2017g.html#61 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017d.html#63 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#4 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017b.html#69 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#71 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016f.html#5 More IBM DASD RAS discussion
https://www.garlic.com/~lynn/2016e.html#108 Some (IBM-related) History
https://www.garlic.com/~lynn/2016d.html#24 What was a 3314?
https://www.garlic.com/~lynn/2016b.html#111 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016b.html#23 IBM's 3033; "The Big One": IBM's 3033
https://www.garlic.com/~lynn/2015f.html#88 Formal definition of Speed Matching Buffer
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM OS/2 & M'soft Date: 19 Aug, 2025 Blog: FacebookNov1987, Boca OS2 sent email to Endicott asking for help with dispatch/scheduling (saying VM370 was considered much better than OS/2), Endicott forwards it to Kingston, Kingston forwards it to me (when I was undergraduate 20yrs earlier, I had done it originally for CP/67). After graduating and joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters and the online sales&marketing support HONE was one of the 1st (and long time) customers. trivia: late 70s, I do CMSBACK for several internal operations ... including HONE (later morphs into WDSF and ADSM).
posts mentioning CP67L, CSC/VM, and/or SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
other trivia: 1972, Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
FS was completely different from 370 and was going to completely
replace it (during FS, internal politics was killing off 370 efforts;
the resulting lack of new 370s is credited with giving 370 system
clone makers their market foothold). One of the final nails in the FS
coffin was analysis by the IBM Houston Science Center that if 370/195
apps were redone for an FS machine made out of the fastest available
hardware technology, they would have the throughput of a 370/145
(about a 30 times slowdown)
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
AMEX and KKR were in competition for a private equity take-over of
RJR (LBOs and junk bonds got such a bad reputation during the 80s S&L
crisis that they changed the name to private equity):
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
and KKR wins, then runs into trouble and hires away president of AMEX
to help.
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
20yrs after Learson failed to block destruction of Watson
culture/legacy, IBM has one of the largest losses in the history of US
corporations and was being reorganized into the 13 baby blues in
preparation for breaking up the company (baby blues take-off on the
"baby bell" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
Same year that IBM has its enormous loss, AMEX spins off much of its
mainframe datacenters along with financial transaction outsourcing
business, in the largest IPO up until that time (many of the
executives had previously reported to former AMEX president, the new
IBM CEO). Disclaimer: turn of the century, I'm hired as chief
scientist at the former AMEX operation; 2005 interview for IBM System
Magazine (although some history info is slightly garbled)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history
Also turn of the century, it was doing complete credit card
outsourcing for half of all cards in the US (plastic, transactions,
auths, settlement, statementing/billing, call centers, etc).
Same time, I was asked to spend time in the Seattle area to help with "electronic commerce". Background: after leaving IBM, I was brought in as a consultant to a small client/server startup; two former Oracle employees (that were in the Ellison/Hester meeting) are there responsible for something called "commerce server" and want to do payment transactions. The startup had also invented this technology they called "SSL" that they want to use. It is now frequently called "electronic commerce". I had responsibility for everything between webservers and the financial industry payment networks. Based on the procedures, software, and documentation I had to do for "electronic commerce", I do a talk "Why The Internet Isn't Business Critical Dataprocessing", that Postel (the Internet IETF RFC Standards Editor) sponsors at USC/ISI.
"electronic commerce" gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
The 80s former head of IBM POK mainframe and then head of Boca PS2/OS2, was then CEO for-hire at Seattle area security startup that had a contract with M'soft porting Kerberos to NT for active directory (tended to have monthly meetings with him). M'soft was also in a program with the former AMEX group to deploy online banking service. Numbers showed that NT didn't have the required performance and would require SUN servers and I was elected to explain it to M'soft CEO. Instead, a couple days before, the M'soft organization decided that online bank services would be limited to what NT could handle (increasing as NT throughput improves).
When he was at Boca, he had hired Dataquest (since bought by Gartner) to do study of the future of personal computing, including a multi-hour video tape round-table of silicon valley experts. For a number of years, I had known the Dataquest person running the study and was asked to be a silicon valley expert. I clear it with my local IBM management and Dataquest garbles my bio so Boca wouldn't recognize me as IBM employee.
Note: late 80s, senior disk engineer gets talk scheduled at annual, internal, world-wide communication group conference, supposedly on 3174 performance. However, the opening was that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing drop in disk sales with data fleeing mainframe datacenters to more distributed computing friendly platforms. The disk division had come up with a number of solutions, but they were constantly being vetoed by the communication group (with their corporate ownership of everything that crossed the datacenter walls) trying to protect their dumb terminal paradigm. The communication group stranglehold on mainframe datacenters wasn't just disk and a couple years later, IBM has one of the largest losses in the history of US companies.
A disk division executive (software, also responsible for ADSM) had a partial countermeasure (to the communication group): investing in distributed computing startups that would use IBM disks ... he would periodically ask us to visit his investments to see if we could provide any help.
communication group and dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM OS/2 & M'soft Date: 19 Aug, 2025 Blog: Facebookre:
MIT CTSS/7094 had a form of email.
https://multicians.org/thvv/mail-history.html
Then some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS. Others went to the IBM Science Center on the 4th flr and did virtual machines (1st modified a 360/40 w/virtual memory and did CP40/CMS, which morphs into CP67/CMS when the 360/67, standard with virtual memory, becomes available), the science center wide-area network (that grows into the corporate internal network, larger than arpanet/internet from the science-center beginning until sometime mid/late 80s; technology also used for the corporate sponsored univ BITNET), invented GML in 1969 (precursor to SGML and HTML), lots of performance tools, etc. Later, when the decision was made to add virtual memory to all 370s, there was a project that morphed CP67 into VM370 (although lots of stuff was initially simplified or dropped).
Account of science center wide-area network by one of the science
center inventors of GML
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
PROFS started out picking up internal apps and wrapping 3270 menus around (for the less computer literate). They picked up a very early version of VMSG for the email client. When the VMSG author tried to offer them a much enhanced version of VMSG, profs group tried to have him separated from the company. The whole thing quieted down when he demonstrated that every VMSG (and PROFS email) had his initials in a non-displayed field. After that he only shared his source with me and one other person.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
a couple recent posts mentioning VMSG, PROFS, CP-67-based Wide Area
Network:
https://www.garlic.com/~lynn/2025d.html#32 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025c.html#113 IBM VNET/RSCS
https://www.garlic.com/~lynn/2024f.html#44 PROFS & VMSG
https://www.garlic.com/~lynn/2024e.html#99 PROFS, SCRIPT, GML, Internal Network
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM OS/2 & M'soft Date: 19 Aug, 2025 Blog: Facebookre:
Early 80s, got HSDT, T1 and faster computer links (both terrestrial and satellite) and lots of battles with the communication group (60s, IBM had the 2701 controller that supported T1 computer links; the 70s transition to SNA/VTAM and associated issues capped computer links at 56kbits/sec). Was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happened and finally an RFP is released (in part based on what we already had running).
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
Around the same time, the communication group was fighting the release of mainframe TCP/IP support. When they lost, they changed their tactic: since they had corporate ownership of everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then add RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, the 4341 got sustained channel throughput using only a modest amount of the CPU (something like 500 times increase in bytes moved per instruction executed).
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Ellison/Hester meeting ref; last product done at IBM; approved 1988 as
HA/6000, originally for NYTimes to move their newspaper system (ATEX)
off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (oracle, sybase, ingres, informix) that had VAXCluster in same
source base with Unix. I do a distributed lock manager supporting
VAXCluster semantics (and especially Oracle and Ingres have a lot of
input on improving scale-up performance). trivia: previously worked on
original SQL/relational, System/R with Jim Gray and Vera Watson. S/88
Product Administrator started taking us around to their customers and
also had me write a section for the corporate continuous availability
document (it gets pulled when both AS400/Rochester and mainframe/POK
complain they couldn't meet requirements).
Original SQL/relational System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid92 and 128-system cluster ye92. Mid-jan1992, convinced FSD to bid HA/CMP for gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*), told not allowed to work on anything with more than 4-system clusters, then leave IBM a few months later.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
other trivia: before MS/DOS:
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, kildall worked on IBM CP/67 at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
Side-track: In the aftermath of the FS implosion, the 70s head of POK manages to convince corporate to kill the VM370 product (follow-on to CP/67), shut down the development group, and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 mission for the mid-range, but had to recreate a development group from scratch).
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Some VM370 History Date: 20 Aug, 2025 Blog: Facebookre:
... there was a study that found about the same number of lines of code (modifications and/or additions to VM370) on the Kingston CSL tape as on the (SHARE) Univ. of Waterloo tape.
transfer from CSC to SJR on west coast in '77 ... wandering around included disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, pre-scheduled, stand-alone mainframe testing and mentioned that they had recently tried MVS, but it had 15min MTBF (requiring manual re-ipl) in that environment. I offer to rewrite the I/O supervisor, making it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing ... greatly improving productivity. I then do an (internal only) research report on the I/O integrity work and happen to mention the MVS 15min MTBF ... bringing the wrath of the MVS organization down on my head. A couple yrs later, 3380 was about to ship and FE had a simulated test of 57 errors that were likely to occur; MVS was failing (requiring manual re-ipl) in all 57 ... and in 2/3rds of the cases, there was no indication of what was causing the failure.
Late 70s and early 80s, I was blamed for online computer conferencing on the internal network; it really took off spring '81 when I distribute trip report of visit to Jim Gray at Tandem (he had left SJR in fall of 80, I had worked with Jim and Vera Watson on original SQL/relational, System/R), only about 300 participated (but claims 25,000 were reading) ... folklore is when corporate executive committee was told, 5of6 wanted to fire me. For that and other transgressions, I was transferred to YKT ... but left in San Jose and having to commute to YKT a couple times a month.
getting to play disk engineer in bldgs 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
online computer conferencing
https://www.garlic.com/~lynn/subnetwork.html#cmc
original sql/relational, System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
a few other posts specifically mentioning FE 57 simulated and
MVS failure
https://www.garlic.com/~lynn/2023g.html#105 VM Mascot
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#35 Error Handling
https://www.garlic.com/~lynn/2019b.html#53 S/360
https://www.garlic.com/~lynn/2018f.html#57 DASD Development
https://www.garlic.com/~lynn/2015f.html#89 Formal definition of Speed Matching Buffer
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM OS/2 & M'soft Date: 20 Aug, 2025 Blog: Facebookre:
Part of the communication group fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm) was the severe performance kneecapping of microchannel cards.
AWD had done their own cards for the PC/RT (16bit PC/AT bus) ... including a 4mbit Token-Ring card. For the RS/6000 (w/microchannel, 32bit bus), AWD was told they couldn't do their own cards, but had to use the PS2 cards. It turns out the PS2 microchannel 16mbit Token-Ring card had lower throughput than the PC/RT 4mbit Token-Ring card (i.e. the joke was that a PC/RT server with 4mbit T/R would have higher throughput than an RS/6000 server with 16mbit T/R).
Almaden was extensively provisioned with IBM CAT wiring, assuming 16mbit T/R. However, they found $69 10mbit Ethernet cards (over the CAT wiring) had much higher throughput than $800 16mbit T/R cards ... and a 10mbit Ethernet LAN had lower latency and higher aggregate throughput than 16mbit T/R.
Also, for just the difference in card cost (300*$69=$20,700 vs 300*$800=$240,000), i.e. $219,300 ... could get a few high-performance TCP/IP routers, each with 16 high-performance Ethernet interfaces and an IBM channel interface (also had non-IBM mainframe channel interfaces, telco T1 & T3, and FDDI LAN options)
trivia: mid-80s, the communication group was also trying to block release of the mainframe TCP/IP product. When that failed, they changed strategy: since they had corporate ownership of everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then add RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, the 4341 got sustained channel throughput using only a modest amount of its CPU (something like 500 times improvement in bytes moved per instruction executed).
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
some recent posts mentioning AWD, microchannel, 4mbit & 16mbit T/R:
https://www.garlic.com/~lynn/2025d.html#8 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#2 Mainframe Networking and LANs
https://www.garlic.com/~lynn/2025c.html#114 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#88 IBM SNA
https://www.garlic.com/~lynn/2025c.html#74 IBM RS/6000
https://www.garlic.com/~lynn/2025c.html#56 IBM OS/2
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#52 IBM 370 Workstation
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000
https://www.garlic.com/~lynn/2025c.html#34 TCP/IP, Ethernet, Token-Ring
https://www.garlic.com/~lynn/2025b.html#28 IBM WatchPad
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025.html#106 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring
https://www.garlic.com/~lynn/2024g.html#101 IBM Token-Ring versus Ethernet
https://www.garlic.com/~lynn/2024g.html#18 PS2 Microchannel
https://www.garlic.com/~lynn/2024f.html#42 IBM/PC
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024e.html#102 Rise and Fall IBM/PC
https://www.garlic.com/~lynn/2024e.html#81 IBM/PC
https://www.garlic.com/~lynn/2024e.html#71 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#64 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024d.html#30 Future System and S/38
https://www.garlic.com/~lynn/2024d.html#14 801/RISC
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024b.html#52 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#47 OS2
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2024.html#97 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM HSDT and SNA/VTAM Date: 22 Aug, 2025 Blog: FacebookI got HSDT in the early 80s, T1 and faster computer links (terrestrial and satellite) and lots of battles with the communication group (60s, IBM had the 2701 supporting T1; the 70s transition to SNA/VTAM and associated issues appeared to be responsible for capping controllers at 56kbits/sec). Part of the HSDT funding requirement supposedly was showing some IBM content ... eventually found the FSD ZIRPEL card for S/1 ... supporting T1 ... for gov. customers that had ancient 2701s falling apart. I went to order several S/1s and found there was a year's backlog (with the recent ROLM purchase, which was a Data General house ... to show IBM, they ordered a huge number of S/1s). Turns out I knew the ROLM datacenter manager and, in return for helping ROLM with some issues, got several of their S/1s.
Then Boca and some IBM people on baby bell account (including
consulting SE), con me into working on turning out a VTAM/NCP
implementation done on distributed/cluster S/1s as TYPE-1 product
(with followup port to RIOS/RS6000). Old archive post with part of
analysis/comparison of the baby bell implementation compared to
Raleigh existing VTAM/3725 ... that I gave at fall 1986 SNA ARB
meeting in Raleigh.
https://www.garlic.com/~lynn/99.html#67
and part of baby bell presentation at spring COMMON user group meeting
https://www.garlic.com/~lynn/99.html#70
Then comes a barrage from the communication group that it was invalid; however the baby bell numbers were directly from production operation, and the 3725 numbers came from feeding those production numbers into the communication group's HONE 3725 configurators (if there was anything invalid, it would be in the 3725 configurators and would need correction) ... never any response on what specifically might be invalid (other than claims about being invalid). What happened then to kill the effort can only be called truth is stranger than fiction. Several people also questioned how I even had HONE access (they forgot that from when I 1st joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters and HONE was one of the 1st (and long time) customers).
other trivia: the communication group had also produced a study claiming customers weren't going to be interested in T1 support until well into the mid/late 90s. They showed the number of customer "fat pipe" installations (parallel 56kbit links treated as a single logical link) ... the counts for 2, 3, 4, etc. parallel links ... dropping to zero by seven. What they didn't know (or avoided using) was that, at the time, the typical telco tariff for a T1 was about the same as for five or six 56kbit/sec links (a trivial survey found 200 customers with full T1 links, but they had moved to non-IBM hardware and software).
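A quick sketch of the crossover the study ignored (the 56kbit and 1.544mbit line rates are standard; the tariff equivalence is the claim above):

T1_BPS = 1_544_000
LINK56_BPS = 56_000

for n in range(2, 8):
    agg = n * LINK56_BPS
    print(f"{n} x 56kbit = {agg//1000:4d} kbit/s aggregate; "
          f"a T1 is {T1_BPS/agg:4.1f}x that")
# so an installation needing more than ~5-6 parallel 56kbit links would
# simply order a T1 at about the same tariff (and, per the survey, had
# already moved to non-IBM hardware/software to drive it)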
The communication group eventually ships the 3737 for (short-haul, terrestrial) T1 links. The problem was that returning ACKs for transmitted traffic took longer than the time to transmit the full host VTAM window (at which point VTAM stopped transmitting until the ACKs appeared, unable to fully utilize even a trivial T1 operation). The 3737 had a boatload of Motorola 68k processors and a boatload of memory. It had a mini-VTAM simulating a CTCA to the local host processor, constantly and immediately sending ACKs (while doing the real transmission in the background to the remote 3737).
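A minimal sketch of the window arithmetic behind that; the window size and round-trip time used here are illustrative assumptions, only the T1 line rate is standard:

T1_BYTES_PER_SEC = 1_544_000 / 8

def link_utilization(window_bytes, rtt_sec):
    # a sender that must stop after one window per round trip can never
    # exceed window/rtt of the line rate
    time_to_send_window = window_bytes / T1_BYTES_PER_SEC
    return min(1.0, time_to_send_window / rtt_sec)

# e.g. an assumed 8KB effective window over an assumed 60ms round trip
print(link_utilization(8 * 1024, 0.060))   # ~0.71 of the T1
# the 3737's locally generated ACKs kept the host window "open" so the
# real link could stay busy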
HSDT was also working with the NSF director and was supposed to get
$20M to interconnect the NSF supercomputing centers. Then congress
cuts the budget, some other things happen and eventually an RFP is
released (in part based on what we already had running). NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.
In parallel, the communication group was trying hard to not let mainframe TCP/IP support be released. When they lost, they changed strategy and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then added RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like 500 times improvement in bytes moved per instruction executed).
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
other posts mentioning baby bell S/1 emulated VTAM/NCP
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2025c.html#70 Series/1 PU4/PU5 Support
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081
https://www.garlic.com/~lynn/2025.html#109 IBM Process Control Minicomputers
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024f.html#48 IBM Telecommunication Controllers
https://www.garlic.com/~lynn/2023g.html#21 Vintage X.25
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023c.html#57 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#3 IBM 370
https://www.garlic.com/~lynn/2021k.html#115 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2019d.html#114 IBM HONE
https://www.garlic.com/~lynn/2019.html#2 The rise and fall of IBM
https://www.garlic.com/~lynn/2018f.html#34 The rise and fall of IBM
https://www.garlic.com/~lynn/2018e.html#94 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2018e.html#2 Frank Heart Dies at 89
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1
https://www.garlic.com/~lynn/2016d.html#42 Old Computing
https://www.garlic.com/~lynn/2013d.html#57 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2010q.html#62 Is email dead? What do you think?
https://www.garlic.com/~lynn/2009l.html#66 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2003c.html#28 difference between itanium and alpha
https://www.garlic.com/~lynn/2002.html#48 Microcode?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Vietnam Date: 23 Aug, 2025 Blog: Facebooksomebody else's reply to one of my comments about John Boyd:
When Big Blue Went to War: A History of the Ibm Corporation's Mission
in Southeast Asia During the Vietnam War (1965-1975)
https://www.amazon.com/When-Big-Blue-Went-War-ebook/dp/B07923TFH5/
loc192-99:
We four marketing reps, Mike, Dave, Jeff and me, in Honolulu (1240 Ala
Moana Boulevard) qualified for IBM's prestigious 100 Percent Club
during this period but our attainment was carefully engineered by
mainland management so that we did not achieve much more than the
required 100% of assigned sales quota and did not receive much in
sales commissions. At the 1968 100 Percent Club recognition event at
the Fontainebleau Hotel in Miami Beach, the four of us Hawaiian Reps
sat in the audience and irritably watched as eight other "best of the
best" IBM commercial marketing representatives from all over the
United States receive recognition awards and big bonus money on
stage. The combined sales achievement of the eight winners was
considerably less than what we four had worked hard to achieve in the
one small Honolulu branch office. Clearly, IBM was not interested in
hearing accusations of war profiteering and they maintained that
posture throughout the years of the company's wartime involvement.
... snip ...
.... note: I had been introduced to John Boyd in the early 80s and
used to sponsor his briefings at IBM. He also had lots of stories, one
being that he was very vocal that the electronics across the trail
wouldn't work. Possibly as punishment, he was put in command of
"spook base" (about the same time I was at Boeing); claims are that it
had the largest air conditioned bldg in that part of the world. some refs:
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White
One of Boyd's biographies claims that spook base was a $2.5B windfall for IBM (ten times Boeing Renton, which at the time I thought was the largest in the world).
In 89/90, the Marine Corps Commandant leveraged Boyd for a corps
make-over (at a time when IBM was desperately in need of
make-over). Then the (former) commandant continued to sponsor Boyd
conferences at (Quantico) Marine Corps Univ. through much of this
century (Boyd passed spring 97, Gray passed spring 24)
https://en.wikipedia.org/wiki/Alfred_M._Gray_Jr%2E
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability_theory
https://en.wikipedia.org/wiki/OODA_loop
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
then IBM has one of the largest losses in the history of US companies
and was being re-orged into the 13 baby blues in preparation for
break-up (baby blues take-off on "baby bell" breakup a decade
earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get call from the bowels of (corp
hdqtrs) Armonk asking us to help with the corporate breakup. Before we
get started, the board brings in the former AMEX president as CEO to
try and save the company, who (somewhat) reverses the breakup.
trivia: both Boeing and IBM people told the story that on the day the 360 was announced, Boeing walks into the IBM marketing rep's office and gives him an order, making him the highest paid employee that year (straight commission, before quotas). The following year, IBM has switched to quotas ... in January Boeing makes another order ... making his quota for the year. His quota is then "adjusted" and he leaves IBM shortly after.
Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html
some recent posts mentioning Boyd, "spook base", Boeing Renton, quotas
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#28 IBM FSD
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota
https://www.garlic.com/~lynn/2021e.html#80 Amdahl
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021.html#48 IBM Quota
https://www.garlic.com/~lynn/2019d.html#60 IBM 360/67
https://www.garlic.com/~lynn/2019b.html#38 Reminder over in linkedin, IBM Mainframe announce 7April1964
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Destruction of Middle Class Date: 25 Aug, 2025 Blog: Facebookpast posts
mention various transgressions(?)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
folklore when the corporate executive committee was told about (from
IBMJargon):
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
... snip ...
5of6 wanted to fire me; only about 300 participated but claims that
25,000 were reading. At the time of the 89/90 (Marine Corps)
make-over, IBM was desperately in need of a make-over, and a couple
years later it had one
of the largest losses in the history of US companies ... and was in
process of being reorged into the 13 baby blues in preparation for
breaking up the company (baby blues take-off on the "baby bell"
breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get call from the bowels of (corp
hdqtrs) Armonk asking us to help with the corporate breakup. Before we
get started, the board brings in the former AMEX president as CEO to
try and save the company, who (somewhat) reverses the breakup.
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
Tandem Memos and online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Personal Computing Date: 25 Aug, 2025 Blog: Facebooktrivia: before MS/DOS:
... and "no internet" trivia, reference to the IBM science center
wide-area network ... by one of the 1969 CSC GML inventors
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
CSC CP67-based wide-area network morphs into the IBM internal network
(larger than arpanet/internet from just about the beginning until
sometime mid/late 80s, about the time it was forced to convert to
SNA/VTAM) ... technology also used for the corporate sponsored univ
BITNET.
https://en.wikipedia.org/wiki/BITNET
Co-worker at CSC responsible for CP67-based wide-area network
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
more trivia: early 80s, I got HSDT, T1 and faster computer links
(terrestrial and satellite) and lots of battles with the communication
group (60s, IBM had 2701 supporting T1, 70s transition to SNA/VTAM and
associated issues appeared to be responsible for capping controllers
at 56kbits/sec). Was also working with the NSF director and was
supposed to get $20M to interconnect the NSF supercomputing
centers. Then congress cuts the budget, some other things happen and
eventually an RFP is released (in part based on what we already had
running). NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.
OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems
Interconnection standards to become the global protocol for computer
networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture(Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."
... snip ...
The communication group tried to block my membership on Chessin's XTP TAB. There were several gov. participants, so we tried to take XTP to ISO (ANSI X3S3.3) as "HSP". X3S3.3 eventually tells us that ISO has a rule that they can only standardize protocols that conform to the OSI model (the joke was that ISO didn't have a requirement that a standard be implementable, while IETF had a requirement for at least two interoperable implementations before proceeding in the standards process).
30yrs of PC markets
https://arstechnica.com/features/2005/12/total-share/
by the late 80s, (consumer) clones were starting to dominate the
market ... aka whole consumer clones were less expensive than
microchannel cards (& $69 10mbit Ethernet cards had much higher
throughput than $800 microchannel 16mbit Token-Ring cards)
                  6/23/91  12/22/91   2/16/92    6/7/92   7/26/92
486/50mhz(eisa)                                   $1418     $1238
486/33mhz(eisa)     $2398     $1455     $1328     $1018      $917
486/33mhz           $1448      $785      $738      $630      $597
486/25mhz           $1178      $735      $688      $590      $558
486sx/20mhz                    $585      $568      $448      $357
386/40mhz            $898      $545      $508      $380      $353
386/33mhz            $698      $535      $488      $376      $347
286-20               $388      $292      $292      $232      $230
some PC price past refs:
https://www.garlic.com/~lynn/2017i.html#0 EasyLink email ad
https://www.garlic.com/~lynn/2001n.html#82 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001n.html#81 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
XTPHSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
some past posts mentioning PC market share
https://www.garlic.com/~lynn/2025b.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#88 Wang Terminals (Re: old pharts, Multics vs Unix)
https://www.garlic.com/~lynn/2024e.html#102 Rise and Fall IBM/PC
https://www.garlic.com/~lynn/2024e.html#16 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#2 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024c.html#36 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024b.html#47 OS2
https://www.garlic.com/~lynn/2024b.html#25 CTSS/7094, Multics, Unix, CP/67
https://www.garlic.com/~lynn/2024.html#4 IBM/PC History
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023c.html#22 IBM Downfall
https://www.garlic.com/~lynn/2022h.html#109 terminals and servers, was How convergent was the general use of binary floating point?
https://www.garlic.com/~lynn/2022h.html#41 Christmas 1989
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2019e.html#137 Half an operating system: The triumph and tragedy of OS/2
https://www.garlic.com/~lynn/2019e.html#28 XT/370
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2018f.html#49 PC Personal Computing Market
https://www.garlic.com/~lynn/2018b.html#103 Old word processors
https://www.garlic.com/~lynn/2017g.html#73 Mannix "computer in a briefcase"
https://www.garlic.com/~lynn/2017g.html#72 Mannix "computer in a briefcase"
https://www.garlic.com/~lynn/2014l.html#54 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2014f.html#28 upcoming TV show, "Halt & Catch Fire"
https://www.garlic.com/~lynn/2013n.html#80 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2011m.html#56 Steve Jobs passed away
https://www.garlic.com/~lynn/2010h.html#4 What is the protocal for GMT offset in SMTP (e-mail) header
https://www.garlic.com/~lynn/2009o.html#68 The Rise and Fall of Commodore
https://www.garlic.com/~lynn/2008r.html#5 What if the computers went back to the '70s too?
https://www.garlic.com/~lynn/2007v.html#76 Why Didn't Digital Catch the Wave?
https://www.garlic.com/~lynn/2007m.html#63 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computing Clusters Date: 26 Aug, 2025 Blog: FacebookGot approval for HA/6000 in 1988, originally for NYTimes to move their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I rename it HA/CMP
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid92 and 128-system cluster ye92. Mid-jan1992, convinced FSD to bid HA/CMP for gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than 4-system clusters; we then leave IBM a few months later.
Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to MIPS
reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
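The cluster figures are just the per-system benchmark scaled linearly by cluster size (values from the text; the linear scaling is the speculation at issue):

rs6000_990_mips = 126
print(16 * rs6000_990_mips)    # 2016 MIPS, i.e. ~2 BIPS for 16 systems
print(128 * rs6000_990_mips)   # 16128 MIPS, i.e. ~16 BIPS for 128 systems
# versus the 8-CPU ES/9000-982 at 408 MIPS aggregate (51 MIPS/CPU)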
The executive we reported to had gone over to head up Somerset/AIM
(Apple, IBM, Motorola) and does the single chip Power/PC ... using the
Motorola 88K bus&cache ... enabling multiprocessor configurations
... enabling beefing up clusters with multiprocessor systems.
virtual machine trivia: After graduating and joining IBM, one of my hobbies was CP67/CMS enhanced production (mainframe) systems for internal datacenters, and the online marketing and sales support HONE systems were among the 1st (and long time) customers. Then it was decided to make all 370s virtual memory (which started out as a counter to severe problems with MVT storage management, resulting in VS2/SVS and then VS2/MVS), along with a decision to morph CP67->VM370 (where they simplified and/or dropped lots of features, including multiprocessor support). Then in 1974, for the VM370R2 base, I start putting lots of stuff back in for my internal CSC/VM system (including kernel re-org for multiprocessor support, but not the actual multiprocessor support). Then for the VM370R3-based internal CSC/VM, I add multiprocessor support back in, initially for HONE so they could upgrade all their 168 systems to 2-CPU (getting twice the throughput of 1-CPU with the help of some cache affinity hacks).
Then with the implosion of FS (completely different from 370 and
supposed to completely replace it; during FS, internal politics was
killing off 370 projects, and the lack of new 370s during the period
is credited with giving the clone 370 makers their market foothold),
there was a mad rush to get stuff back into the 370 product pipelines
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
I get sucked into helping with a 370 16-CPU multiprocessor implementation and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK (high-end 370) that it could be decades before POK's favorite son operating system ("MVS") had (effective) 16-CPU support (documentation at the time claimed MVT/MVS 2-CPU systems only had 1.2-1.5 times the throughput of 1-CPU systems). The head of POK then invites some of us to never visit POK again and directs the 3033 processor engineers heads-down with no distractions. POK doesn't ship a 16-CPU system until after the turn of the century.
trivia: The people doing CP67->VM370 move out to the empty SBC bldg at Burlington Mall (on 128). Then with the implosion of FS, the head of POK managed to convince corporate to kill VM370 product, shutdown the development group, and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission, but has to recreate a development group from scratch).
They weren't planning on telling the group (of the move) until the very last minute (to minimize the number that might escape into the Boston/Cambridge area). It managed to leak and there were several escapees (joke was that head of POK was major contributor to VMS).
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
original SQL/Relational, System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Personal Computing Date: 26 Aug, 2025 Blog: Facebookre:
Amdahl wins the battle to make ACS 360-compatible. Then ACS/360 is killed
(folklore was concern that it would advance the state-of-the-art too fast)
and Amdahl then leaves IBM (before Future System); end of ACS/360:
https://people.computing.clemson.edu/~mark/acs_end.html
Future System was completely different from 370 and was going to
completely replace it; during FS, internal politics was killing 370
efforts and the claim is that the lack of new 370s during FS is what
gave the clone 370 makers (including Amdahl) their market foothold:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
Early 70s, Amdahl gave a talk in an MIT auditorium (and many of us at the science center attend). One of the questions for Amdahl was what justification he used with the financial investment people. He said that even if IBM were to completely walk away from 370, customers had already spent billions on 360&370 software that would keep him busy until the end of the century (sort of implying that he knew about FS, which he later consistently denied).
When FS imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081. Note 308x was originally going to be multiprocessor *only* and the initial 3081D had less aggregate MIPS than the single processor Amdahl. The processor caches were doubled for the 3081K, bringing aggregate MIPS up to about the same as the single processor Amdahl (although MVS documentation claimed its 2-CPU support only had 1.2-1.5 times the throughput of a single processor with half the MIPS, continuing to give Amdahl a distinct advantage).
trivia: I had been introduced to John Boyd in the early 80s and would
sponsor his briefings at IBM. In 89/90, the Marine Corps Commandant
leveraged Boyd for a corps make-over, at a time when IBM was
desperately in need of make-over. A couple years later IBM has one of
the largest losses in the history of US companies ... and was in
process of being reorged into the 13 baby blues in preparation for
breaking up the company (baby blues take-off on the "baby bell"
breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get call from the bowels of (corp
hdqtrs) Armonk asking us to help with the corporate breakup. Before we
get started, the board brings in the former AMEX president as CEO to
try and save the company, who (somewhat) reverses the breakup.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computing Clusters Date: 26 Aug, 2025 Blog: Facebookre:
trivia: old archived post with a decade of VAX ships, sliced&diced by
model, US, non-US, and year ... also % of VAX clustered in 85, 86, 87
https://www.garlic.com/~lynn/2002f.html#0
note: IBM 4300s sold in the same mid-range market and in similar numbers, except in the early 80s large corporations started ordering hundreds of vm/4341s at a time for placing out in departmental areas ... sort of the leading edge of the coming distributed computing tsunami.
also, I got access to an engineering 4341 in 1978; a branch office found out about it and in jan1979 cons me into doing a benchmark for a national lab looking at getting 70 for a compute farm ... sort of the leading edge of the coming cluster supercomputing tsunami.
recent posts mentioning cluster supercomputing and distributed
computing tsunamis:
https://www.garlic.com/~lynn/2024g.html#81 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024d.html#15 Mid-Range Market
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computing Clusters Date: 26 Aug, 2025 Blog: Facebookre:
trivia: 1988, an IBM branch office asks if I could help LLNL with standardization of some serial stuff they were working with, which quickly becomes the fibre-channel standard ("FCS", initially 1gbit/sec transfer, full-duplex, aggregate 200mbytes/sec). Then IBM (POK) mainframe releases some of their stuff as ESCON (when it is already obsolete), initially 10mbytes/sec, later upgraded to 17mbytes/sec. Then some POK engineers become involved with FCS and define a heavy-weight protocol that radically reduces throughput, eventually released as FICON. 2010, a z196 "Peak I/O" benchmark is released, getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the same time an FCS is announced for the E5-2600 server blade claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also, IBM pubs recommend that SAPs (system assist processors that actually do the I/O) be kept to 70% CPU (or about 1.5M IOPS). A max configured z196 (80 cores) benchmarked at 50BIPS and went for $30M. The E5-2600 server blade (16 cores) benchmarked at 500BIPS (ten times max configured z196) and IBM had a base list price of $1815.
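Rough price/performance from those figures (all values from the text):

z196_bips, z196_price = 50, 30_000_000
blade_bips, blade_price = 500, 1815
print(z196_price / z196_bips)    # ~$600,000 per BIPS, max configured z196
print(blade_price / blade_bips)  # ~$3.63 per BIPS, E5-2600 server blade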
IBM Mainframes this century, earlier benchmarks were actual industry
standard (number of program iterations compared to industry MIPs
reference platform), later numbers are from pubs giving % improvement
from previous model:
z900, 16 cores, 2.5BIPS (156MIPS/core), Dec2000
z990, 32 cores, 9BIPS (281MIPS/core), 2003
z9, 54 cores, 18BIPS (333MIPS/core), July2005
z10, 64 cores, 30BIPS (469MIPS/core), Feb2008
z196, 80 cores, 50BIPS (625MIPS/core), Jul2010
EC12, 101 cores, 75BIPS (743MIPS/core), Aug2012
z13, 140 cores, 100BIPS (710MIPS/core), Jan2015
z14, 170 cores, 150BIPS (862MIPS/core), Aug2017
z15, 190 cores, 190BIPS (1000MIPS/core), Sep2019
z16, 200 cores, 222BIPS (1111MIPS/core), Sep2022
z17, 208 cores, 260BIPS (1250MIPS/core), Jun2025
note: the most recent max configured z17, with 208 cores, is still only
about half the processing of the 2010 E5-2600 server blade.
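The per-core figures in the table reduce to simple division; a small sketch (Python, using only the BIPS and core counts listed above):

# recompute MIPS/core for a few entries in the table above
for name, bips, cores in [("z900", 2.5, 16), ("z196", 50, 80), ("z17", 260, 208)]:
    print(name, round(bips * 1000 / cores), "MIPS/core")   # 156, 625, 1250

e5_2600_bips = 500          # 2010 16-core E5-2600 blade figure from the post
print(260 / e5_2600_bips)   # z17 vs blade: 0.52, i.e. about half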
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Boeing Computer Services Date: 27 Aug, 2025 Blog: FacebookAs undergraduate in the 60s, univ. hires me fulltime responsible for os/360 (360/67 running as 360/65) ... univ. shutdown the datacenter on weekends and I would have the whole place to myself (although 48hrs w/o sleep made monday classes hard). Then some people from IBM Cambridge Science Center came out to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs, precursor to VM370) and I mostly got to play with it during my weekend dedicated time. Trivia: Six months after installing CP67 at the univ, CSC schedules a one week CP67 class in LA. I arrive on Sunday and am asked to teach the class, turns out the CSC people scheduled to teach the class had resigned that friday ... to join one of the commercial online CP67 services. The CSC spin-offs forming commercial online CP67 services would move up the value stream specializing in the financial industry.
Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of BCS (consolidate all data processing into an independent business unit). I think Renton Data Center was possibly the largest in the world. Lots of politics between the Renton director and the Boeing CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the room for a 360/67 for me to play with when I'm not doing other stuff). Note: When I graduate, I join CSC (instead of staying with the Boeing CFO). One of the CSC people involved in porting APL\360 to CMS for CMS\APL leaves and joins BCS in the DC area. One time I stopped by, he demo'ed some of the contract work they had with USPS to justify the increase in postal stamp price.
SAIC to acquire Boeing IT services group
https://www.nextgov.com/digital-government/1999/06/saic-to-acquire-boeing-it-services-group/237448/
https://boeing.mediaroom.com/1999-06-11-Boeing-Announces-SAIC-to-Purchase-Boeing-Information-Services
(computerworld) Boeing rolls out 'ultimate' network
https://books.google.com/books?id=a8FBzsfoBZEC&pg=PA69&dq=boeing+computer+services+computerworld&hl=en&sa=X&ved=0ahUKEwilm7W5kqPKAhUCyGMKHdwtBWIQ6AEILjAC#v=onepage&q=boeing%20computer%20services%20computerworld&f=false
(computerworld) Try to find software that solves your problem. Or call Boeing.
https://books.google.com/books?id=40ZfT7SWT64C&pg=PA50&dq=boeing+computer+services+computerworld&hl=en&sa=X&ved=0ahUKEwilm7W5kqPKAhUCyGMKHdwtBWIQ6AEIPjAG#v=onepage&q=boeing%20computer%20services%20computerworld&f=false
(computerworld) In automating an office, one must often choose between piece and harmony.
https://books.google.com/books?id=tBXQZbbSyeQC&pg=RA1-PA66&dq=boeing+computer+services+computerworld&hl=en&sa=X&ved=0ahUKEwilm7W5kqPKAhUCyGMKHdwtBWIQ6AEIRjAI#v=onepage&q=boeing%20computer%20services%20computerworld&f=false
TRIVIA: some analogy between cloud services and online CP67 services; CSC and the commercial CP67 services wanted to provide 7x24 availability ... even initially when there was little or no online dialin. One of the issues in the 60s was that IBM mainframes were still rented/leased, with charges based on the "system meter" that ran whenever any CPU and/or channel was busy. An early effort was a special terminal channel program that would allow the channel to go idle, but "instant on" whenever characters were arriving. Note: everything had to be idle for at least 400ms for the system meter to come to a halt (long after IBM moved to sales, MVT/MVS still had a timer event that went off every 400ms). There was also a lot of work allowing CP67 to run in a completely dark room with no people present.
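A toy model (Python; my own illustration, not anything IBM shipped) of the system-meter rule described above: the meter runs whenever any CPU or channel is busy and doesn't stop until everything has been continuously idle for 400ms, so even a 1ms terminal interrupt costs roughly 401ms of metered time, and a timer event every 400ms keeps the meter running forever.

IDLE_THRESHOLD_MS = 400

def metered_ms(busy_intervals, horizon_ms):
    """busy_intervals: (start_ms, end_ms) periods when any CPU/channel was busy."""
    metered, last_busy_end = 0, None
    for t in range(horizon_ms):
        busy = any(s <= t < e for s, e in busy_intervals)
        if busy:
            last_busy_end = t + 1
        # meter keeps running until 400ms of continuous idle have elapsed
        if busy or (last_busy_end is not None and t < last_busy_end + IDLE_THRESHOLD_MS):
            metered += 1
    return metered

print(metered_ms([(1000, 1001)], 2000))   # 1ms of activity -> 401ms metered
print(metered_ms([(i, i + 1) for i in range(0, 2000, 400)], 2000))   # 400ms timer pops -> 2000, meter never stops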
A large cloud service can have scores of megadatacenters around the world, each with half a million or more blade servers. They had so optimized system costs that power&cooling was an increasingly major cost; blades were optimized so power use drops to zero when idle ... but can be instantly on when needed.
Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 3081 Date: 28 Aug, 2025 Blog: FacebookAmdahl wins the battle to make IBM ACS 360-compatible. Then ACS/360 is killed (folklore was concern that it would advance the state-of-the-art too fast) and Amdahl leaves IBM (before Future System); end of ACS/360:
Early 70s, Amdahl had talk in MIT auditorium (and many of us at the science center attend). One of the questions for Amdahl was what justification did he use with the financial investment people. He said that even if IBM were to completely walk away from 370s, customers had already spent billions on their 360&370 software, and that would keep him in business until end of century (sort of implied that he knew about FS, which he later consistently denied).
Future System, completely different from 370, was going to completely
replace it; during FS, internal politics was killing 370 efforts and
the claim is that the lack of new 370s during FS is what gave the clone 370
makers (including Amdahl) their market foothold:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
When FS imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 (3081 was a significant increase in circuits, needing TCMs in order to package the system in reasonable volume, see more in memo125). Note 308x were originally going to be multiprocessor *only* and the initial 2-CPU 3081D had less aggregate MIPS than an Amdahl single processor. The 3081 processor caches were doubled in size for the 3081K, bringing aggregate MIPS up to about the same as the single processor Amdahl (although, because of MVS multiprocessor overhead, MVS 2-CPU support only had 1.2-1.5 times the throughput of a single processor, continuing to give Amdahl a distinct throughput advantage: an MVS 2-CPU 3081K only got .6-.75 the throughput of an MVS Amdahl 1-CPU).
Then there was concern Amdahl was going to take the whole ACP/TPF market (since ACP/TPF didn't have multiprocessor support), and the decision was made to come out with the 3083 (a 3081 with the 2nd processor removed; however, the 2nd processor was in the middle of the box, which would have left the box top-heavy, so it required recabling the box to have the 1st processor in the middle).
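The throughput arithmetic above is worth making explicit; a small sketch (Python; the 1.2-1.5x MVS two-CPU factor and the roughly-equal aggregate MIPS are the post's own numbers):

# 3081K aggregate MIPS ~= Amdahl single processor, so each 3081K CPU ~= half an Amdahl CPU.
# MVS 2-CPU support only delivered 1.2-1.5x the throughput of one of its own CPUs.
for mvs_2cpu_factor in (1.2, 1.5):
    print(round(0.5 * mvs_2cpu_factor, 2))   # 0.6 .. 0.75 of an MVS Amdahl 1-CPU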
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 23Jun1969 Unbundling, HONE, and APL Date: 30 Aug, 2025 Blog: FacebookIBM 23jun69 unbundling started charging for application software, SE services, maint, etc.
The charging for SE services pretty much put an end to new SE trainees as part of SE support teams at customer sites ... sort of as apprentices (since they couldn't figure out how not to charge for new, inexperienced SEs at the customer site). In reaction, HONE (Hands-On Network Environment) was set up ... a number of internal CP67 (precursor to VM370) datacenters providing virtual machine access to SEs in the branch offices working with guest operating systems. The concept was that SEs could get hands-on operating experience via remote access running in (CP67) virtual machines. Note: one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters and HONE was one of the 1st (and long time) customers.
CSC had also ported apl\360 to CMS (for cms\apl); had to redo storage management from 16kbyte swapped workspaces to demand paged, large virtual memory workspaces; also did an API for system services for things like file I/O, enabling many real world applications. A number of sales & marketing support applications were developed in CMS\APL and (also) deployed on HONE. Relatively quickly the sales&marketing applications came to dominate all HONE activity (personal computing, time-sharing) ... and the original objective of SE training (using guest operating systems in virtual machines) withered away. HONE would ask me to go along for the early non-US installs (some of my first non-US business trips after graduation). HONE easily/early became the largest deployment of APL around the world.
In the morph of CP67->VM370, lots of features were simplified and/or dropped (including multiprocessor support). 1974, I started moving a lot of stuff (including the kernel reorg for multiprocessor support, but not the actual multiprocessor support) to a VM370R2 base for my internal CSC/VM ... lots of stuff for HONE. I also moved my CMS page-mapped filesystem with a bunch of fancy shared segment stuff (HONE was looking at redoing some of the most heavily used CPU-intensive APL apps in Fortran H and needed to dynamically switch in&out between half-mbyte shared memory APL and Fortran); a very small subset of this was released in VM370R3 as DCSS. US HONE consolidated all their datacenters in Palo Alto (trivia: when FACEBOOK 1st moves into silicon valley, it was into a new bldg built next door to the former US HONE consolidated datacenter). For VM370R3-based CSC/VM, I put multiprocessor support back in, initially for US HONE so they can upgrade all their 168s to two-processor systems.
Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM 23jun69 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
CMS page-mapped filesystem, dynamic shared segments, etc
https://www.garlic.com/~lynn/submain.html#mmap
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM DASD, CKD, FBA Date: 30 Aug, 2025 Blog: FacebookI transfer from the cambridge science center to bldg28/SJR in San Jose and get to wander around lots of IBM (and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test. They were running 7x24, prescheduled, stand-alone testing and had tried MVS (for concurrent test) ... but it had 15min MTBF (requiring re-ipl) in that environment. I offer to rewrite I/O supervisor making it bullet proof and never fail, allowing any amount of on-demand concurrent testing ... greatly improving testing productivity (i.e. after joining IBM one of my hobbies was enhanced production operating systems for internal datacenters).
Bldg15, then gets 1st engineering 3033 (outside POK processor engineering) and since testing only took a percent or two of CPU, we scrounge up a 3830 controller and a 3330 string, setting up our own private online service. Note with the implosion of Future System, there was mad rush to get stuff back into the 370 product pipelines, including kicking off 3033&3081 efforts in parallel.
3033 started out remapping 168 logic to 20% faster chips ... and taking a 158 engine with just the integrated channel microcode for the 303x channel director. A 3031 was two 158 engines, one with just the 370 microcode and one with just the integrated channel microcode. A 3032 was a 168-3 configured to use the (158) 303x channel director for external channels. A 3033 could have up to three 303x channel directors for up to 16 channels.
As part of rewriting the I/O supervisor, I added a missing interrupt handler ("MIH"). Then found that some of the controllers and channels could hang, requiring manual reset. Then found that some controllers as well as the 303x channel director could be forced to re-IMPL under program control. For the controllers, had to hit each controller subchannel with an HDV/CLRIO combination in a quick loop. For the early 303x channel directors that otherwise required manual reset, use the HCH/CLRCH combination quickly for each of its six channel addresses, and it would force it to IMPL.
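A rough outline of that recovery sequence (Python-style pseudocode, my reconstruction, not the actual CP I/O supervisor code; hdv_clrio() and hch_clrch() are hypothetical stand-ins for the HDV/CLRIO and HCH/CLRCH instruction pairs named above):

def force_controller_impl(controller_subchannels, hdv_clrio):
    # hit every subchannel address on the hung controller in a quick loop,
    # forcing the controller to re-IMPL under program control
    for subchannel in controller_subchannels:
        hdv_clrio(subchannel)          # HDV immediately followed by CLRIO

def force_channel_director_impl(director_channels, hch_clrch):
    # early 303x channel directors otherwise needed a manual reset; quickly
    # hitting all six of its channel addresses forces an IMPL
    for channel in director_channels:
        hch_clrch(channel)             # HCH immediately followed by CLRCH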
A few years later ... as 3380s were getting ready to ship ... FE had test of 57 simulated errors that they thought were likely to occur. For all 57, MVS was failing (requiring manual re-ipl), and in 2/3rds of the cases, no indication what was causing failure.
Along the way, air bearing simulation (part of thin film head design)
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/
was getting a couple turn arounds a month on SJR's (MVT) 370/195. We
then get it setup on the bldg15 3033 and they were getting multiple turn
arounds a day. First used for FBA 3370s and later for CKD 3380s
(although 3380s were already on the way to FBA, as can be seen in
records/track calculations where record lengths have to be rounded up
to a multiple of the fixed cell size).
I also tried to get (2305-like) multiple-exposure approved for 3350FH (fixed head, i.e. multiple subchannel addresses queuing multiple channel programs, so paging could be overlapped with arm seek) ... however there was a group in POK working on an electronic paging device, VULCAN, and they got it shot down ... worried it might impact their forecast. VULCAN eventually gets canceled (we were told that IBM was selling all the memory it made as processor memory, at higher markup than an electronic paging device) ... but it was too late to resurrect multiple-exposure for 3350FH.
Eventually got "IBM 1655" (actually vendor SSD) for internal use ... that otherwise simulated CKD 2305-2. It could also be configured as FBA 3mbyte/sec data streaming ... that internal VM370 could use (MVS never ships FBA support, no CKD has been made for decades, all being simulated on industry standard fixed-block devices).
IBM internal CP67l, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
some posts mentioning 3350FH, VULCAN, 1655
https://www.garlic.com/~lynn/2024g.html#70 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024c.html#61 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024.html#29 IBM Disks and Drums
https://www.garlic.com/~lynn/2023g.html#84 Vintage DASD
https://www.garlic.com/~lynn/2023f.html#49 IBM 3350FH, Vulcan, 1655
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2021j.html#65 IBM DASD
https://www.garlic.com/~lynn/2021f.html#75 Mainframe disks
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Example Programs Date: 31 Aug, 2025 Blog: Facebookat IBM San Jose Research, we had collected "user interface example programs" (aka games) and got into a battle during a corporate audit that demanded all such example programs be removed. At the time, most company 3270 logon screens said "For Business Purposes ONLY" ... but SJR screens said "For Management Approved Uses". Early 80s, we had modified the 6670 driver (sort of an IBM Copier 3 with computer link; laser printers placed around the bldg in departmental areas) to add random quotations to the colored/alternate separator page. In an after-hrs sweep looking for unsecured classified documents, the auditors found one output with the following (random) quotation:
and they complained bitterly to management that we had placed it there on purpose (for them to find) as part of ridiculing them.
recent post about transferring from cambridge science center to SJR
https://www.garlic.com/~lynn/2025d.html#58 IBM DASD, CKD, FBA
https://www.garlic.com/~lynn/2025d.html#45 Some VM370 History
and getting to wander around IBM (and non-IBM) datacenters. One was
TYMSHARE; I would drop by their datacenter and/or see them at the monthly
meetings hosted by Stanford SLAC. TYMSHARE had provided their CMS-based online
computer conferencing system, free to SHARE starting Aug1976
... archive here
http://vm.marist.edu/~vmshare
I cut a deal with TYMSHARE to get monthly tape dump of all files for
putting up on the internal network and systems ... including the
online sales&market support HONE systems
https://www.garlic.com/~lynn/2025d.html#57 IBM 23Jun1969 Unbundling, HONE, and APL
I got some push back, concern that internal employees might be contaminated by exposure to unfiltered customer information. Something like this showed up in 1974 when CERN presented a comparison of VM370/CMS and MVS/TSO at SHARE (even though the presentation was freely available outside, inside IBM copies had been stamped "IBM Confidential - Restricted", aka only available on need-to-know). Then a couple years later (after "Future System" implodes), the head of IBM POK (high-end mainframe) managed to convince corporate to kill the VM370/CMS product, shutdown the development group, and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission for the mid-range).
On one stop by TYMSHARE, they demonstrate a new game called ADVENTURE,
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure
they had found on Stanford SAIL PDP10 and ported to VM370/CMS ... I
got executable and source for making available on internal systems.
https://www.garlic.com/~lynn/2025d.html#37 TYMSHARE, VMSHARE, ADVENTURE
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Mainframe Unix Date: 31 Aug, 2025 Blog: FacebookDate: 04/03/80 18:57:33
A major issue (for both IBM and Amdahl) is that field support required mainframe EREP ... and to implement mainframe EREP directly in Unix was several times the effort of just porting Unix to the mainframe. As a result both IBM and Amdahl ran their Unix/Unix-like systems under VM370 (relying on VM370 EREP). There was an earlier IBM/Bell effort where TSS/370 would be stripped down to just low level hardware support (and EREP) as "SSUP" and Unix facilities were to be scaffolded on top. trivia: I would also see Amdahl people after the monthly meetings hosted at Stanford SLAC ... and they would interrogate me on the subject.
trivia: Late 80s, a senior disk engineer got talk scheduled at annual, internal, communication group world-wide conference, supposedly on 3174 performance. However, he opened the talk with statement that communication group would be responsible for the demise of the disk division. The disk division was seeing drop in disk sales with data fleeing mainframe datacenters to more distributed computing friendly platforms ... and had come up with a number of solutions. However the communication group was constantly vetoing the solutions (with their corporate ownership of everything that crossed datacenter walls) trying to preserve their dumb terminal paradigm and install base. Senior disk division exec partial countermeasures were 1) funding POSIX support in MVS and 2) investing in distributed computing startups that would use IBM disks ... and would periodically ask us to visit his investments to offer help.
It wasn't just disks and a couple years later, IBM has one of the largest
losses in the history of US companies and was being reorged into the
13 baby blues in preparation for breaking up the company (take off
on the "baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get call from the bowels of (corp hdqtrs) Armonk asking us to help with the corporate breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup (but it isn't long before the disk division is gone).
Demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
some recent posts mentioning tss/370 ssup and unix
https://www.garlic.com/~lynn/2025.html#67 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024c.html#63 UNIX 370
https://www.garlic.com/~lynn/2024c.html#50 third system syndrome, interactive use, The Design of Design
https://www.garlic.com/~lynn/2024c.html#45 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024b.html#85 IBM AIX
https://www.garlic.com/~lynn/2024.html#15 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2022c.html#42 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2021k.html#64 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021k.html#63 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021e.html#83 Amdahl
https://www.garlic.com/~lynn/2020.html#33 IBM TSS
https://www.garlic.com/~lynn/2019d.html#121 IBM Acronyms
https://www.garlic.com/~lynn/2017j.html#66 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017d.html#82 Mainframe operating systems?
https://www.garlic.com/~lynn/2017d.html#80 Mainframe operating systems?
https://www.garlic.com/~lynn/2017d.html#76 Mainframe operating systems?
https://www.garlic.com/~lynn/2017.html#20 {wtf} Tymshare SuperBasic Source Code
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Amdahl Leaves IBM Date: 01 Sept, 2025 Blog: FacebookAmdahl won the battle to make ACS 360-compatible. Then ACS/360 was killed (folklore was concern it would advance the state of the art too fast and IBM would lose control of the market) ... and Amdahl leaves IBM (this was before Future System).
Early 70s, Amdahl had talk in MIT auditorium (and many of us at the science center attend). One of the questions for Amdahl was what justification did he use with the financial investment people. He said that even if IBM were to completely walk away from 370s, customers had already spent billions on their 360&370 software, and that would keep him in business until end of century (sort of implied that he knew about FS, which he later consistently denied).
Future System, completely different from 370, was going to completely
replace it; during FS, internal politics was killing 370 efforts and
the claim is that the lack of new 370s during FS is what gave the clone 370
makers (including Amdahl) their market foothold:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
After joining IBM, I got to continue attending user group meetings and dropping by customers. A director of one of the largest financial datacenters liked me to drop by and talk technology. At some point the IBM branch manager horribly offended the customer and in retaliation, the customer was ordering an Amdahl system (a single Amdahl in a vast sea of blue). Up until that time, Amdahl had been selling into the academic/scientific/technical market, but this would be the first in the commercial market. I was asked to go onsite for a year (to help obfuscate why the customer was ordering Amdahl). I talk it over with the customer and then decline IBM's offer. I was then told the branch manager was a good sailing buddy of IBM's CEO, and if I didn't do this, I could forget career, promotions, raises (first time, among many, that somebody reminds me that in IBM "business ethics" is an oxymoron).
When FS imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 (3081 was a significant increase in circuits, needing TCMs in order to package the system in reasonable volume, see more in memo125). Note 308x were originally going to be multiprocessor *only* and the initial 2-CPU 3081D had less aggregate MIPS than an Amdahl single processor. The 3081 processor caches were doubled in size for the 3081K, bringing aggregate MIPS up to about the same as the single processor Amdahl (although, because of MVS multiprocessor overhead, MVS 2-CPU support only had 1.2-1.5 times the throughput of a single CPU, continuing to give Amdahl a distinct throughput advantage). Then there was concern Amdahl was going to take the whole ACP/TPF market (since ACP/TPF didn't have multiprocessor support), and the decision was made to come out with the 3083 (a 3081 with the 2nd processor removed; however, the 2nd processor was in the middle of the box, which would have left the box top-heavy, so it required recabling the box to have the 1st processor in the middle).
Note the 23Jun1969 unbundling announcement started to charge for
(application) software (managed to make the case that kernel software
was still free), SE services, maint, etc. Then after the FS implosion and
rise of clone 370 makers, the decision was made to transition to
charging for kernel software. One of my hobbies after joining IBM was
enhanced production operating systems for internal datacenters
.... starting with incremental add-ons. more here:
https://www.garlic.com/~lynn/2025d.html#56 IBM 3081
https://www.garlic.com/~lynn/2025d.html#57 IBM 23Jun1969 Unbundling, HONE, and APL
https://www.garlic.com/~lynn/2025d.html#58 IBM DASD, CKD, FBA
... from above, my HONE two-CPU systems were getting twice the throughput of the previous single CPU systems.
With the FS implosion and the rise of the clone 370 makers, there was a decision to charge for kernel software, starting with incremental add-ons, and some of my internal enhancements were selected as the guinea pig (I had to spend quite a bit of time with lawyers and business planners on kernel charging policies). Initial policy was that all hardware support would still be free ... and one of the last things the VM370 group wanted to do was ship tightly-coupled, multiprocessor support (i.e. after the FS implosion, the head of POK had convinced corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA; Endicott eventually was able to get the VM370 product mission for the mid-range, but had to recreate a development group from scratch). The problem was that I had included the kernel re-org for multiprocessor in my charged-for add-on. The eventual decision was to transfer that and a bunch of other stuff out of my charged-for kernel add-on into the free base.
Also after FS implosion, I got sucked into helping with ECPS for
138/148, a 370/125 5-CPU multiprocessor project, and a generalized
16-CPU multiprocessor project (and we con the 3033 processor engineers
into working on it in their spare time, a lot more interesting than
remapping 168 logic into 20% faster chips). Old archived post with
initial analysis for ECPS:
https://www.garlic.com/~lynn/94.html#21
Then Endicott complained that the 125/5-cpu would overlap 148 performance and in the escalation ... I was required to argue both sides (Endicott wins). For 16-CPU tightly-coupled, everybody thought it was great until somebody tells head of POK that it could be decades before POK favorite son operating system ("MVS") had (effective) 16-CPU support, i.e. existing MVS 2-CPU (overhead) only got 1.2-1.5 times throughput of single CPU (POK doesn't ship a 16-CPU system until after the turn of the century). Then some of us were told to never visit POK again and the 3033 processor engineers were directed to heads down and no distractions.
After I transfer to SJR, I got permission to give presentations on how ECPS was done. After some of the meetings, Amdahl people would interrogate me for additional details. They said that they had done MACROCODE (370-like instructions running in microcode mode) to trivially/quickly respond to a series of trivial 3033 microcode changes (each required to run MVS) ... and were in the process of doing HYPERVISOR ("multiple domain"). Note: IBM doesn't respond with LPAR/PRSM until nearly a decade later for 3090.
It was in about the same period that the transition was made to charging for the full kernel (not just kernel add-ons).
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management
https://www.garlic.com/~lynn/subtopic.html#fairshare
SMP, tightly-coupled, shared memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
23jun69 unbundling announce
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
370/125 five CPU posts
https://www.garlic.com/~lynn/submain.html#bounce
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
misc. posts mentioning business ethics is oxymoron
https://www.garlic.com/~lynn/2025c.html#35 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#31 IBM Downfall
https://www.garlic.com/~lynn/2025b.html#3 Clone 370 System Makers
https://www.garlic.com/~lynn/2023c.html#92 TCP/IP, Internet, Ethernet, 3Tier
https://www.garlic.com/~lynn/2023c.html#89 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#1 IBM Oxymorons
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2023b.html#35 When Computer Coding Was a 'Woman's' Job
https://www.garlic.com/~lynn/2022h.html#24 Inventing the Internet
https://www.garlic.com/~lynn/2022g.html#59 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#42 IBM Bureaucrats
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022e.html#59 IBM CEO: Only 60% of office workers will ever return full-time
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#125 IBM Clone Controllers
https://www.garlic.com/~lynn/2021k.html#18 IBM's social media policy
https://www.garlic.com/~lynn/2021j.html#39 IBM Registered Confidential
https://www.garlic.com/~lynn/2021i.html#82 IBM Downturn
https://www.garlic.com/~lynn/2021h.html#61 IBM Starting Salary
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021e.html#15 IBM Internal Network
https://www.garlic.com/~lynn/2021d.html#86 Bizarre Career Events
https://www.garlic.com/~lynn/2021c.html#42 IBM Suggestion Program
https://www.garlic.com/~lynn/2021c.html#41 Teaching IBM Class
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021b.html#12 IBM "811", 370/xa architecture
https://www.garlic.com/~lynn/2021.html#83 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2018f.html#96 IBM Career
https://www.garlic.com/~lynn/2018d.html#13 Workplace Advice I Wish I Had Known
https://www.garlic.com/~lynn/2017e.html#9 Terminology - Datasets
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2017.html#78 IBM Disk Engineering
https://www.garlic.com/~lynn/2016c.html#25 Globalization Worker Negotiation
https://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014h.html#52 EBFAS
https://www.garlic.com/~lynn/2014c.html#65 IBM layoffs strike first in India; workers describe cuts as 'slaughter' and 'massive'
https://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
https://www.garlic.com/~lynn/2012k.html#28 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2011b.html#59 Productivity And Bubbles
https://www.garlic.com/~lynn/2010g.html#44 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010g.html#0 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010f.html#20 Would you fight?
https://www.garlic.com/~lynn/2010b.html#38 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009r.html#50 "Portable" data centers
https://www.garlic.com/~lynn/2009p.html#87 IBM driving mainframe systems programmers into the ground
https://www.garlic.com/~lynn/2009p.html#60 MasPar compiler and simulator
https://www.garlic.com/~lynn/2009p.html#57 MasPar compiler and simulator
https://www.garlic.com/~lynn/2009o.html#57 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009o.html#52 Revisiting CHARACTER and BUSINESS ETHICS
https://www.garlic.com/~lynn/2009o.html#36 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2009e.html#37 How do you see ethics playing a role in your organizations current or past?
https://www.garlic.com/~lynn/2009.html#53 CROOKS and NANNIES: what would Boyd do?
https://www.garlic.com/~lynn/2007j.html#72 IBM Unionization
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Amdahl Leaves IBM Date: 02 Sept, 2025 Blog: Facebookre:
working w/3033 processor engineers ... once the 3033 was out the door, the 3033 processor engineers started on trout/3090.
The guy responsible for retrofitting the AR subset to 3033 was out on the west coast and I did some work with him. He then left for HP (the former head of YKT CS was there) and I started getting email asking if I would go too.
For 3033 with >16mbyte real storage, 370 CCW IDALs (w/31bits) could be used
to transfer data to/from >16mbyte. Somebody sent me a copy of how they were
going to deal with real addresses >16mbyte ... doing IDAL I/O. I sent them
back a hack using a pair of page table entries to do it with MVCL:
Date: 01/04/80 10:58:57
From: wheeler
came up with method for doing MVCL w/o real addressibility. Create new
subroutine in DMKPSA and allocated two page slots in CP's system
relocation table. Call the subroutine with two addresses and length;
'from' read, 'to' real, and length to move. Calculate the 'from' real
page address and set that into PTE slot 1, and add the displacement to
the CP virtual 'from' address. Calculate the 'to' real page address
and set that into PTE slot 2, and add the displacement to the CP
virtual 'to' address.
L R2,=a(sysvm) system vmblok
LCTL CR1,CR1,VMSEG-VMBLOK(r2) load system page table
STOSM =al1(4) enter translate mode
MVCL to,from move data
STNSM =al1(255-4) leave translate mode
IPTE FROM invalidate virt. 'FROM'
IPTE TO invalidate virt. 'TO'
return
data has now been moved into correct storage locate and return to
caller. All places in CP that move data in/out of virtual storage can
now call this subroutine. All places in CP that directly access
virtual storage will have to be rewritten (relatively few). subroutine
works w/o hardware changes but can be upgraded at any later date if
new hardware comes along.
... snip ... top of post, old email index
Date: 01/21/80 11:39:17
From: wheeler
To: somebody in Endicott (worked with on original ECPS)
I got a call at home from YKT telling me about POK processor plans are
slipping but they want to maintain revenue flow. Proposal is to offer
>16meg real storage/processor. Use of additional storage would only be
via use of current Must BE Zero bits in existing page table entries
(i.e. two additional address lines from the PTE would allow
addressibility thru 64meg., instruction decode/admissibility,
etc. remain unaltered and limited to 24bit addresses). POK/VM proposal
was to limit all of CP and control blocks to <16meg. Virtual pages
would only be allowed >16meg. Any CP instruction simulation
encountering page >16meg. would result in page being written out to
DASD and read back into storage <16meg. Flag would also be set in
SWPFLAG for that page indicating that in the future that page could
only be page into addresses <16meg (major problem they overlooked,
other than the obvious overhead to do the page out / page in, is that
after some period of time, most pages would get the <16meg flag turned
on.
I countered with subroutine in DMKPSA of about 25-50 instructions
which is supplied real address in CP control block (<16 meg), real
address in virutal page (possibly >16meg), and length. Subroutine
would 'insert' real addresses in two available PTEs in CP's virtual
address tables. It would then enter translate mode, supervisor state,
perform an MVCL and then revert to non-translate mode. -- No page
out/page in, and no creeping overhead problem where most pages
eventually get the >16meg flag turned on. Also if special case MVCL
was ever created to handle >16meg. addresses it would be a very small
hit to the subroutine only. -- It also has the attraction that access
to virtual machine storage is concentrated in one place. It makes it
much simpler to modify large sections of CP to run in relocate mode
all of the time. Movement of most CP code to 'psuedo' virtual machine/
virtual address space leaves something behind which is much more
nearly containable entirely in microcode.
... snip ... top of post, old email index
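A toy model (Python, purely illustrative, not CP code) of the two-reserved-PTE trick in the emails above: real storage can be larger than 16mbytes, but the copy routine only ever addresses two fixed virtual pages whose PTEs are repointed at the 'from' and 'to' real frames for the duration of the move, then invalidated.

PAGE = 4096
real_storage = bytearray(64 * 1024 * 1024)     # simulated 64mbyte real storage
FROM_SLOT, TO_SLOT = 0, 1                      # two reserved system page-table slots
page_table = {FROM_SLOT: None, TO_SLOT: None}  # PTE -> real frame number

def move_real(from_real, to_real, length):
    # simplified: the move must not cross a page boundary on either side
    assert from_real % PAGE + length <= PAGE and to_real % PAGE + length <= PAGE
    page_table[FROM_SLOT] = from_real // PAGE  # set 'from' real frame into PTE slot 1
    page_table[TO_SLOT] = to_real // PAGE      # set 'to' real frame into PTE slot 2
    # the MVCL through the two-page window (done in translate mode in the real code)
    src = page_table[FROM_SLOT] * PAGE + from_real % PAGE
    dst = page_table[TO_SLOT] * PAGE + to_real % PAGE
    real_storage[dst:dst + length] = real_storage[src:src + length]
    page_table[FROM_SLOT] = page_table[TO_SLOT] = None   # IPTE both entries

# copy a page sitting above the 16mbyte line down to a buffer below it
real_storage[40 * 1024 * 1024:40 * 1024 * 1024 + 4] = b"data"
move_real(40 * 1024 * 1024, 1 * 1024 * 1024, PAGE)
print(bytes(real_storage[1024 * 1024:1024 * 1024 + 4]))   # b'data'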
trivia: as undergraduate in the 60s (before joining IBM), I had modified CP67, making parts of the kernel pageable ... it wasn't delivered in the product CP67, but was picked up for VM370 (another use of the "system" address space, more than a decade later).
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Amdahl Leaves IBM Date: 02 Sept, 2025 Blog: Facebookre:
Note: the VM370 people moved to POK for MVS/XA did do a simplified XA virtual machine facility (VMTOOL) supporting MVS/XA development; it required a new facility, "SIE", to move in/out of virtual machine mode (which wasn't needed in 360 & 370 mode). It wasn't intended for production throughput, in part because the 3081 didn't have sufficient microcode space ... and so required "paging" the microcode when entering&exiting "SIE".
Then customers weren't migrating from MVS to MVS/XA (on 3081) as planned ... but Amdahl was having more success because they could run MVS & MVS/XA concurrently with its HYPERVISOR (multiple domain). Then it was decided to release VMTOOL as VM/MA (migration aid) then VM/SF (system facility).
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Amdahl Leaves IBM Date: 02 Sept, 2025 Blog: Facebookre:
other trivia: 1972, Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
Late 80s, a senior disk engineer got talk scheduled at annual, internal, communication group world-wide conference, supposedly on 3174 performance. However, he opened the talk with statement that communication group would be responsible for the demise of the disk division. The disk division was seeing drop in disk sales with data fleeing mainframe datacenters to more distributed computing friendly platforms ... and had come up with a number of solutions. However the communication group was constantly vetoing the solutions (with their corporate ownership of everything that crossed datacenter walls) trying to preserve their dumb terminal paradigm and install base. Senior disk division exec partial countermeasures were 1) funding POSIX support in MVS and 2) investing in distributed computing startups that would use IBM disks ... and would periodically ask us to visit his investments to offer help.
The communication group stranglehold on mainframe datacenters wasn't
just disks and a couple yrs later, IBM has one of the largest losses
in the history of US companies and was being reorged into the 13 baby blues
in preparation for breaking up the company (take off on the
"baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get call from the bowels of (corp hdqtrs) Armonk asking us to help with the corporate breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup (but it isn't long before the disk division is gone).
FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
Demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM San Jose, Cottle Rd Date: 02 Sept, 2025 Blog: Facebook1972, Learson tried (and failed) to block bureaucrats, careerists, and MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of management briefings 1958-1988
Late 80s, a senior disk engineer got talk scheduled at annual, internal, communication group world-wide conference, supposedly on 3174 performance. However, he opened the talk with statement that communication group would be responsible for the demise of the disk division. The disk division was seeing drop in disk sales with data fleeing mainframe datacenters to more distributed computing friendly platforms ... and had come up with a number of solutions. However the communication group was constantly vetoing the solutions (with their corporate ownership of everything that crossed datacenter walls) trying to preserve their dumb terminal paradigm and install base. Senior disk division exec partial countermeasures were 1) funding POSIX support in MVS and 2) investing in distributed computing startups that would use IBM disks ... and would periodically ask us to visit his investments to offer help.
The communication group stranglehold on mainframe datacenters wasn't
just disks and a couple yrs later, IBM has one of the largest losses
in the history of US companies and was being reorged into the 13 baby blues
in preparation for breaking up the company (take off on the
"baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get call from the bowels of (corp
hdqtrs) Armonk asking us to help with the corporate breakup. Before we
get started, the board brings in the former AMEX president as CEO to
try and save the company, who (somewhat) reverses the breakup (but it
isn't long before the disk division is gone).
Demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Multics and Science Center Date: 02 Sept, 2025 Blog: FacebookSome of the CTSS/7094 people went to the 5th flr for Multics. Others went to the IBM Science Center on 4th flr and did virtual machines (wanted 360/50 to add virtual memory hardware, but all the spare 360/50s were going to FAA/ATC and so had to settle for 360/40 and did CP/40; it morphs into CP67 when 360/67 standard with virtual memory becomes available; later morphs into vm370), internal network, lots of interactive apps, invented GML in 1969, etc, etc. Little friendly rivalry between 4th & 5th flrs.
When I graduated and joined the science center, one of my hobbies was
enhanced production operating systems for internal IBM datacenters,
initially CP67L and then my CSC/VM (built on VM370). To better place
the two efforts on a somewhat level playing field during the mid & late
70s ... one of the comparisons was the total number of all
installations that ever ran Multics ... some listed here
https://www.multicians.org/sites.html
was about the same number as peak number of internal installations
running my csc/vm
I had transferred out to SJR on the west coast and was doing some work
with Jim Gray and Vera Watson on the original SQL/relational, System/R
(all work done on VM370) ... then the technology was transferred to
Endicott ("under the radar" while the company was preoccupied with
"EAGLE") for SQL/DS (later "EAGLE" implodes and there is a request for how
fast the port could be made to MVS, which eventually ships as DB2). Note
Multics ships RDBMS (w/o SQL) well before IBM
https://en.wikipedia.org/wiki/Multics_Relational_Data_Store
https://www.mcjones.org/System_R/mrds.html
One of Multics showcase installations
https://www.multicians.org/site-afdsc.html
Spring 1979 AFDSC wanted to come by and talk about getting 20 VM/4341
systems, by the time they drop by in the fall, it had grown to 210.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Original SQL/Relational posts
https://www.garlic.com/~lynn/submain.html#systemr
posts mentioning CP67L, CSC/VM, and/or SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm
a few past posts referencing number of Multics and CSC/VM sites
https://www.garlic.com/~lynn/2023f.html#66 Vintage TSS/360
https://www.garlic.com/~lynn/2022.html#17 Mainframe I/O
https://www.garlic.com/~lynn/2017.html#28 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2014g.html#74 Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2013m.html#38 Quote on Slashdot.org
https://www.garlic.com/~lynn/2013i.html#11 EBCDIC and the P-Bit
https://www.garlic.com/~lynn/2010q.html#41 Old EMAIL Index
https://www.garlic.com/~lynn/2010l.html#11 Titles for the Class of 1978
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: SNA & TCP/IP Date: 03 Sept, 2025 Blog: FacebookAbout the same time as early SNA in the 70s, my wife was co-author of AWP39; they had to qualify the title as "peer-to-peer networking" ... since SNA had misused "network" (SNA wasn't a system, wasn't a network and wasn't an architecture).
Early 80s, got HSDT, T1 and faster computer links (both terrestrial and satellite) and battles with the communication group (in the 60s, IBM had the 2701 that supported T1; going into SNA, its various issues capped links at 56kbits/sec). For a time, I reported to the same executive as the author of AWP164 (which turns into APPN). I badgered him to come over and work on real networking since the SNA group would never appreciate him. When APPN was to be announced, SNA non-concurred and it took some time to rewrite the APPN announcement letter to not imply any relationship between SNA and APPN.
Co-worker at CSC responsible for CP67-based wide-area network
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
It morphs into the corporate internal network, larger than the
arpanet/internet from just about the beginning until sometime mid/late
80s (about the time the internal network was forced to convert to
SNA). His technology was also used for the corporate sponsored
univ. BITNET. Trivia from one of the members at the science center
responsible for invention of GML in 1969 (precursor to SGML & HTML):
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
other trivia: HSDT was working with the NSF director and was supposed to
get $20M to interconnect the NSF supercomputing centers. Then congress
cuts the budget, some other things happen and eventually an RFP is
released (in part based on what we already had running). NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.
OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems
Interconnection standards to become the global protocol for computer
networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture(Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."
... snip ...
Note: The communication group tried (& failed) to block me from being a member of Chessin's XTP Technical Advisory Board. There were some gov. organizations participating and so it was taken to the (ISO chartered) X3S3.3 standards body. Eventually they said that ISO requirements were that standards had to conform to the OSI Model. XTP didn't because it 1) supported an internetworking protocol (non-existent in the OSI Model), 2) skipped the layer 3/4 interface, and 3) went directly to the LAN MAC interface, which doesn't exist in the OSI Model (sitting somewhere in the middle of layer 3). Had a joke at the time that (internet) IETF standards required at least two interoperable implementations to proceed with standardization, while ISO didn't even require a standard be implementable.
Trivia: mid-80s, the communication group tried to block release of mainframe TCP/IP support. When they lost, they claimed that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly a whole 3090 CPU. I then added RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, I got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like 500 times improvement in bytes moved per instruction executed).
Late 80s, a univ. did an analysis comparing a workstation standard TCP implementation with a 5K instruction pathlength to a mainframe VTAM LU6.2 implementation with a 160K instruction pathlength.
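A small sketch of the arithmetic behind those comparisons (Python; the 5K/160K pathlengths and the 44kbytes/sec figure are from the post, the 30 MIPS figure is a hypothetical placeholder just to show the shape of the bytes-per-instruction calculation):

print(160_000 / 5_000)          # LU6.2 vs TCP pathlength: 32x more instructions

def bytes_per_instruction(bytes_per_sec, mips):
    # rough figure of merit: bytes moved per instruction executed
    return bytes_per_sec / (mips * 1_000_000)

base = bytes_per_instruction(44_000, 30)   # 44kbytes/sec consuming a (hypothetical) 30 MIPS 3090 CPU
print(base)                     # ~0.0015 bytes/instruction for the original stack
print(base * 500)               # the post's ~500x RFC1044 improvement -> ~0.73 bytes/instruction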
Early 90s, the communication group hired a silicon valley contractor to implement TCP/IP directly in VTAM. He initially demo'ed TCP much faster than LU6.2. He was then told that everybody knows that a "correct" TCP/IP implementation is much slower than LU6.2, and they would only be paying for a "correct" implementation.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
Internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
some posts mentioning AWP39 and AWP164
https://www.garlic.com/~lynn/2025.html#54 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#0 IBM APPN
https://www.garlic.com/~lynn/2024.html#84 SNA/VTAM
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2017d.html#29 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2015h.html#99 Systems thinking--still in short supply
https://www.garlic.com/~lynn/2014e.html#15 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014.html#99 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#26 SNA vs TCP/IP
https://www.garlic.com/~lynn/2013g.html#44 What Makes code storage management so cool?
https://www.garlic.com/~lynn/2012o.html#52 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012c.html#41 Where are all the old tech workers?
https://www.garlic.com/~lynn/2011l.html#26 computer bootlaces
https://www.garlic.com/~lynn/2010q.html#73 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010e.html#5 What is a Server?
https://www.garlic.com/~lynn/2010d.html#62 LPARs: More or Less?
https://www.garlic.com/~lynn/2009q.html#83 Small Server Mob Advantage
https://www.garlic.com/~lynn/2009l.html#3 VTAM security issue
https://www.garlic.com/~lynn/2009i.html#26 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009e.html#56 When did "client server" become part of the language?
https://www.garlic.com/~lynn/2008d.html#71 Interesting ibm about the myths of the Mainframe
https://www.garlic.com/~lynn/2007r.html#10 IBM System/3 & 3277-1
https://www.garlic.com/~lynn/2007q.html#46 Are there tasks that don't play by WLM's rules
https://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
https://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications
https://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
https://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: VM/CMS: Concepts and Facilities Date: 07 Sept, 2025 Blog: FacebookVM/CMS: Concepts and Facilities
When I joined IBM science center, one of my hobbies was enhanced production operating systems for internal datacenters and one of the first (and long time) customers was the internal online sales&marketing support HONE systems. It initially started out as several US HONE CP67 installations for branch office SEs to practice running guest operating systems in virtual machines. However, CSC had also ported APL\360 to CP67 as CMS\APL ... and HONE started offering sales&marketing support applications, which came to dominate all HONE activity (guest operating system use withered away).
In the decision to add virtual memory to all 370s, there was also a decision to do CP67->VM370 and some of the CSC people (on 4th flr) move to the 3rd flr, taking over the IBM Boston Programming Center for the VM370 development group. In the morph of CP67->VM370 lots of stuff was simplified and/or dropped (including shared memory multiprocessor support). In 1974, for VM370R2-base, I started adding a lot of stuff back in (including kernel reorg for SMP, but not the actual support) for my CSC/VM systems, and US HONE consolidates all their systems in Palo Alto (when FACEBOOK 1st moves into silicon valley it is into new bldg built next door to the former US HONE datacenter). HONE does load-balancing and fall-over recovery clustering for the eight-system complex. Then for VM370R3-base, I also add more stuff back in, including SMP support, initially for HONE so they can upgrade their 168s to 2-CPU systems.
When VM370 development outgrew the 3rd flr, they moved out to the empty (former IBM SBC) bldg at burlington mall (on 128). IBM was in its Future System period (completely different from 370 and going to completely replace it). Internal politics during FS was killing off 370 efforts, and the claim is that the lack of new 370s during the period is credited with giving clone 370 makers their market foothold. When FS implodes there is mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. The head of POK (high-end 370) also convinces corporate to kill the VM370 product, shutdown the development group and move everybody to POK for MVS/XA. They weren't planning on telling the group until just before the move (to minimize the number that could escape into the Boston area). The information leaked early and several managed to escape (this was in the infancy of VMS and joke was that head of POK was major contributor to VMS). Eventually Endicott manages to save the VM370 product mission (for the mid-range, 4300s that competed with DEC VAX).
In 2009, with some VM cluster announcements, I wrote a tome about not releasing any software (70s HONE VM clustering) before its time.
trivia: 1988, I got HA/6000 effort, initially for NYTimes to move
their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I rename
it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (oracle, sybase, ingres, informix) that had VAXCluster in same
source base with Unix. I do a distributed lock manager supporting
VAXCluster semantics (and especially Oracle and Ingres have a lot of
input on improving DLM scale-up performance; see the lock-mode sketch below). S/88 Product
Administrator started taking us around to their customers and also had
me write a section for the corporate continuous availability document
(it gets pulled when both AS400/Rochester and mainframe/POK complain
they couldn't meet requirements). trivia: previously worked on
original SQL/relational, System/R with Jim Gray and Vera Watson.
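To illustrate the VAXCluster-style lock semantics the DLM supported (a minimal C sketch under commonly documented VMS/DLM conventions, not the HA/CMP code; all names and structures here are made up): a resource can be held in one of six modes (null, concurrent read, concurrent write, protected read, protected write, exclusive), and a new request is only granted if it is compatible with every mode currently held.

```c
/* Minimal sketch of VAXCluster-style distributed lock manager (DLM)
 * lock modes and their compatibility check -- illustration only,
 * not the HA/CMP implementation. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { NL, CR, CW, PR, PW, EX } lock_mode;   /* null .. exclusive */

/* compat[held][requested]: can a new request be granted while another
 * holder already has the resource in mode 'held'? */
static const bool compat[6][6] = {
    /*            NL CR CW PR PW EX */
    /* NL */    { 1, 1, 1, 1, 1, 1 },
    /* CR */    { 1, 1, 1, 1, 1, 0 },
    /* CW */    { 1, 1, 1, 0, 0, 0 },
    /* PR */    { 1, 1, 0, 1, 0, 0 },
    /* PW */    { 1, 1, 0, 0, 0, 0 },
    /* EX */    { 1, 0, 0, 0, 0, 0 },
};

/* a request is grantable if it is compatible with every current holder */
static bool grantable(lock_mode request, const lock_mode *held, int nheld)
{
    for (int i = 0; i < nheld; i++)
        if (!compat[held[i]][request])
            return false;
    return true;
}

int main(void)
{
    lock_mode holders[] = { CR, PR };            /* two readers on a resource */
    printf("PR grantable: %d\n", grantable(PR, holders, 2));   /* prints 1 */
    printf("EX grantable: %d\n", grantable(EX, holders, 2));   /* prints 0 */
    return 0;
}
```

The grant check itself is this simple; the scale-up work the RDBMS vendors cared about is mostly in how the lock/resource state is distributed across the cluster and rebuilt after node failure, which the sketch makes no attempt to show.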
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid92 and 128-system clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than 4-system clusters; we leave IBM a few months later.
Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to MIPS
reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
The executive we reported to had gone over to head-up Somerset/AIM
(Apple, IBM, Motorola) and does single chip Power/PC ... using
Motorola 88K bus&cache ... enabling multiprocessor configurations
... enabling beefing up clusters with multiprocessor systems.
Later 70s, I had transferred to SJR on the west coast and get to wander around IBM (and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, pre-scheduled, stand-alone testing and mentioned that they had recently tried MVS, but it had 15min MTBF (in that environment) requiring manual re-ipl. I offer to rewrite I/O system to make it bullet proof and never fail, allowing any amount of on-demand testing, greatly improving productivity. Bldg15 then got 1st engineering 3033 (outside POK processor engineering) and since I/O testing only used a percent or two of CPU, we scrounge up a 3830 and a 3330 string for our own, private online service. At the time, air bearing simulation (for thin-film head design) was only getting a couple turn arounds a month on SJR 370/195. We set it up on bldg15 3033 (slightly less than half 195 MIPS) and they could get several turn arounds a day.
I write a research report on the work and happen to mention MVS 15min MTBF, bringing down the wrath of the MVS group on my head.
Also bldg15, late 70s gets engineering 4341 (a year before first product ship) and branch office finds out and cons me into doing benchmark for national lab that is looking to get 70 for compute farm (sort of the leading edge of the coming cluster supercomputing tsunami ... that I was working on a decade later).
Early 70s, my wife was in the gburg jes group and one of the catchers
for (loosely-coupled) ASP/JES3 ... then was conned to go to POK to be
responsible for "loosely-coupled" architecture (mainframe for cluster)
... where she was responsible for peer-coupled shared data
architecture. She didn't remain long because 1) repeated battles with
the communication group trying to force her into using SNA/VTAM for
loosely-coupled operation 2) little uptake (until 90s with sysplex and
parallel sysplex) except for IMS hot-standby. She has story about
asking Vern Watts who he would ask for permission, he says "nobody" ... he
would just tell them when it was all done.
https://www.vcwatts.org/ibm_story.html
In the early 80s, SJR was doing a VM/4341 cluster project
... cluster-wide coordination operations took less than a second. Then
the communication group told them if they wanted to release it, it had
to use SNA/VTAM (same thing that they were trying to force on my wife)
... tests showed that their cluster-wide coordination operations
increased to over 30secs. Post mentioning the vm/4341 project
https://www.garlic.com/~lynn/2010c.html#1 "The Naked Mainframe" (Forbes Security Article)
Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM (systems for internal datacenters) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
Sales&Marketing Support HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HASP, ASP, JES2, JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
Peer-coupled shared data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata
some posts mentioning announce no software before its time
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2021.html#0 IBM "Wild Ducks"
https://www.garlic.com/~lynn/2018.html#27 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2017d.html#42 What are mainframes
https://www.garlic.com/~lynn/2014l.html#11 360/85
https://www.garlic.com/~lynn/2011p.html#77 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2010q.html#35 VMSHARE Archives
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: VM/CMS: Concepts and Facilities Date: 07 Sept, 2025 Blog: Facebookre:
I took two credit hr intro to fortran/computers. At the end of semester, I was hired to reimplement 1401 MPIO for 360/30. The Univ was getting 360/67 for TSS/360, replacing 709/1401 and got a 360/30 temporarily (replacing 1401) pending 360/67s. When 360/67 comes in, I was hired fulltime responsible for OS/360 (tss/360 never came to fruition).
Some of the MIT CTSS/7094 people go to the 5th flr for multics. Others went to the IBM Cambridge Science Center ("CSC") on 4th flr and did virtual machines (wanted 360/50 to add virtual memory hardware, but all the spare 360/50s were going to FAA/ATC and so had to settle for 360/40 and add virtual memory hardware, doing virtual machine CP40; it morphs into CP67 when 360/67 standard with virtual memory becomes available; later morphs into vm370), internal network, lots of interactive apps, invented GML in 1969, etc, etc. Little friendly rivalry between 4th & 5th flrs.
CSC came out to univ for CP67 install (3rd after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my 48hr weekend dedicated time. I initially work on pathlengths for running OS/360 in virtual machine. Test stream ran 322secs on real machine, initially 856secs in virtual machine (CP67 CPU 534secs), after a couple months I have reduced CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, scheduler, paging, adding ordered seek queuing (from FIFO) and multi-page transfer channel programs (from FIFO and optimized for transfers/revolution, getting 2301 paging drum from 70-80 4k transfers/sec to channel transfer peak of 270). Six months after univ initial install, CSC was giving one week class in LA. I arrive on Sunday afternoon and am asked to teach the class; it turns out that the people that were going to teach it had resigned the Friday before to join one of the 60s CP67 commercial online spin-offs.
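To illustrate "ordered seek queuing (from FIFO)": instead of servicing disk requests in arrival order, keep the queue sorted by cylinder so the arm sweeps across the pack instead of thrashing back and forth. A minimal C sketch of the idea (made-up structures, not the CP67 code; a fuller version would also track current arm position and sweep direction):

```c
/* Minimal sketch of ordered seek queuing (queue kept in cylinder order)
 * versus plain FIFO -- illustration of the idea, not the CP67 code. */
#include <stdio.h>
#include <stdlib.h>

struct ioreq {
    int cylinder;              /* target cylinder of the seek */
    struct ioreq *next;
};

/* insert a request into a queue kept sorted by cylinder, so the arm
 * services requests in one sweep instead of FIFO arrival order */
static void ordered_insert(struct ioreq **queue, struct ioreq *req)
{
    while (*queue && (*queue)->cylinder <= req->cylinder)
        queue = &(*queue)->next;
    req->next = *queue;
    *queue = req;
}

int main(void)
{
    int cyls[] = { 180, 20, 95, 40, 200 };      /* arrival order */
    struct ioreq *queue = NULL;

    for (int i = 0; i < 5; i++) {
        struct ioreq *r = malloc(sizeof *r);
        r->cylinder = cyls[i];
        ordered_insert(&queue, r);
    }
    for (struct ioreq *r = queue; r; r = r->next)
        printf("%d ", r->cylinder);             /* 20 40 95 180 200 */
    printf("\n");
    return 0;
}
```

Fed arrival order 180, 20, 95, 40, 200, the queue is serviced 20, 40, 95, 180, 200 (one sweep instead of five scattered seeks).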
Before I graduate I'm hired fulltime into small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all data processing into independent business unit). I think Boeing Renton datacenter largest in the world. When I graduate, I join CSC (instead of staying with Boeing CFO).
Trivia: old archived post, a decade of VAX ships (sliced/diced by
year, model, us&non-us), with some VAXCluster:
https://www.garlic.com/~lynn/2002f.html#0
Early last decade, I was asked to track down decision to add virtual memory to all 370s, I found staff to executive making decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used, so a typical 1mbyte 370/165 only ran four regions concurrently, insufficient to keep system busy and justified. Going to 16mbyte virtual address space (VS2/SVS, sort of like running MVT in a CP67 16mbyte virtual machine) allows concurrent regions to increase by factor of four times (capped at 15, 4bit storage protect keys), with little or no paging. Ludlow was doing the initial implementation on 360/67 offshift in POK and I dropped by a few times. Initially a little bit of code for the virtual memory tables and some simple paging. Biggest effort was all channel programs passed to EXCP/SVC0 had virtual addresses, EXCP had to make copies replacing virtual addresses with real and Ludlow borrows CP67 CCWTRANS to craft into EXCP (EXCPVR was for special subsystems that could fix real storage and passed channel programs with real addresses).
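To illustrate the EXCP problem described above (a greatly simplified, hypothetical C sketch of the general idea, not CCWTRANS itself): the caller's channel program carries virtual data addresses, so a translated copy has to be built with real addresses before the I/O is started. Real CCWs are 8-byte doublewords, and a real translator also has to fix (pin) the pages and data-chain any transfer that crosses a page boundary.

```c
/* Greatly simplified sketch of channel program translation: copy each
 * CCW and replace its virtual data address with a real one.  All
 * structures here are hypothetical, for illustration only. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

struct ccw {                  /* simplified channel command word */
    uint8_t  opcode;
    uint32_t data_addr;       /* virtual address in the caller's copy */
    uint16_t count;
};

/* hypothetical page-table lookup: virtual page number -> real page frame.
 * Stubbed as a fixed offset purely so the sketch runs; a real translator
 * would look the frame up and fix (pin) the page in real storage. */
static uint32_t page_frame_for(uint32_t vpage)
{
    return vpage + 100;
}

/* build a translated copy of a channel program: same CCWs, but with the
 * virtual data addresses replaced by real addresses */
static void ccw_translate(const struct ccw *virt, struct ccw *real, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint32_t vaddr  = virt[i].data_addr;
        uint32_t offset = vaddr % PAGE_SIZE;

        real[i] = virt[i];
        real[i].data_addr = page_frame_for(vaddr / PAGE_SIZE) * PAGE_SIZE + offset;
        /* a real translator would also split any transfer that crosses a
         * page boundary into data-chained CCWs, one per page, since the
         * real page frames need not be contiguous */
    }
}

int main(void)
{
    struct ccw prog[1] = { { 0x06 /* read */, 0x00012345, 512 } };
    struct ccw xlat[1];

    ccw_translate(prog, xlat, 1);
    printf("virtual %08x -> real %08x\n",
           (unsigned)prog[0].data_addr, (unsigned)xlat[0].data_addr);
    return 0;
}
```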
Original SQL/relational was System/R done on VM/145 in the 70s at San Jose Research. I worked with Jim Gray and Vera Watson on it after transferring to SJR in 2nd half of 70s. Was able to do tech transfer to Endicott in the early 80s for SQL/DS ("under the radar", while the corporation was preoccupied with the next great DBMS, "EAGLE"). When "EAGLE" finally implodes, there was request for how fast could System/R be ported to MVS (eventually released as DB2, originally for decision-support only).
The first (non-SQL) relational product was done by Multics (5th flr,
flr above CSC on the 4th):
https://en.wikipedia.org/wiki/Multics_Relational_Data_Store
Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Original SQL/Relational, System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
posts mentioning tracking down decision to add virtual memory to all
370s
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory
misc posts mentioning Ludlow doing initial VS2/SVS implementation for
370 virtual memory
https://www.garlic.com/~lynn/2025d.html#19 370 Virtual Memory
https://www.garlic.com/~lynn/2025c.html#108 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#79 IBM System/360
https://www.garlic.com/~lynn/2025b.html#95 MVT to VS2/SVS
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#95 IBM Mainframe Channels
https://www.garlic.com/~lynn/2024g.html#86 Progenitors of OS/360 - BPS, BOS, TOS, DOS (Ways To Say How Old
https://www.garlic.com/~lynn/2024g.html#72 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024.html#27 HASP, ASP, JES2, JES3
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#43 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023b.html#103 2023 IBM Poughkeepsie, NY
https://www.garlic.com/~lynn/2022h.html#93 IBM 360
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022f.html#41 MVS
https://www.garlic.com/~lynn/2022e.html#91 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#58 Computer Security
https://www.garlic.com/~lynn/2022.html#10 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2021h.html#48 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2013l.html#18 A Brief History of Cloud Computing
https://www.garlic.com/~lynn/2013i.html#47 Making mainframe technology hip again
https://www.garlic.com/~lynn/2011o.html#92 Question regarding PSW correction after translation exceptions on old IBM hardware
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: OS/360 Console Output Date: 09 Sept, 2025 Blog: FacebookAfter Future System implodes, there is mad rush to get stuff back into the 370 product pipelines, including kicking off 303x&3081 efforts in parallel
aka, the 303x channel director was a 158 engine with just the integrated channel microcode; a 3031 was two 158 engines, one with just the 370 microcode and the other with just the integrated channel microcode. A 3032 was a 168-3 redone to use the 303x channel director, and the 3033 started out as 168-3 logic remapped to 20% faster chips.
Note: 158 channel microcode (even with dedicated 158 engine for channel director) was much slower than 168 channels (and even 4341 channels). I had done benchmarks for channel program processing performance on 145, 158, 168, 3033, 4341s, non-IBM clone 370 channels, IBM DASD controllers and non-IBM clone DASD controllers ... and 158 (and its channel director follow-on) were by far the slowest.
I took a two credit hour intro to fortran/computers and at the end of
the semester, was hired to rewrite 1401 MPIO in assembler for
360/30. The univ had 709/1401 (709 tape->tape with 1401 unit record
front end for 709)
https://en.wikipedia.org/wiki/IBM_1401
https://en.wikipedia.org/wiki/IBM_709
and was getting 360/67 for tss/360 (replacing 709/1401)
... temporarily getting 360/30 replacing 1401 pending 360/67
availability (360/30 had 1401 microcode compatibility, but 360/30 was
part of getting 360 experience). The univ. shutdown the datacenter on
weekends and I would have the whole place dedicated (although 48hrs
w/o sleep made monday classes hard). I was given a pile of hardware &
software manuals and got to design and implement my own monitor,
device drivers, interrupt handlers, error recovery, storage
management, etc and within a few weeks had a 2000 card assembler
program. This assembled under os/360 in 30mins, but ran as stand-alone
monitor; txt deck loaded/run with BPS loader. I then add OS/360 system
services, with an assembly option for either stand-alone (30mins) or
OS/360 get/put (60mins, each DCB macro taking 5-6mins).
https://en.wikipedia.org/wiki/IBM_System/360_Model_30
Within a year of taking intro class, 360/67 arrives and I was hired
fulltime responsible for OS/360 (tss/360 never came to production so
ran as 360/65).
https://en.wikipedia.org/wiki/IBM_System/360_Model_65
https://en.wikipedia.org/wiki/IBM_System/360_Model_67
709 (tape->tape) did student fortran in under a second. Initially with os/360 (360/67 as 360/65) student fortran ran over a minute (and console was 1052-7 selectric "golf-ball"). I add HASP to MFT-R9.5, which cuts the time in half. I then redo MFT-R11 STAGE2 SYSGEN to carefully place datasets and PDS members to optimize disk arm seek and (PDS directory) multi-track search, cutting another 2/3rds to 12.9secs (also redoing SYSGEN so it can run in production job stream w/HASP, radically cutting SYSGEN elapsed time). Student Fortran never got better than 709 until I installed Univ. Waterloo WATFOR (ran 20,000cards/min, 333cards/sec on 360/65).
also got a 1443 bar printer (at 150 lines/minute) added for console
output hard copy.
https://en.wikipedia.org/wiki/IBM_1443
Problems with 360&370 console hard copy had been around for a decade or two.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
some recent posts mentioning student fortran, 709, 360/65, watfor
https://www.garlic.com/~lynn/2025d.html#15 MVT/HASP
https://www.garlic.com/~lynn/2025c.html#118 Library Catalog
https://www.garlic.com/~lynn/2025c.html#115 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#108 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#87 The Rise And Fall Of Unix
https://www.garlic.com/~lynn/2025c.html#80 IBM CICS, 3-tier
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025b.html#121 MVT to VS2/SVS
https://www.garlic.com/~lynn/2025b.html#98 Heathkit
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online
https://www.garlic.com/~lynn/2025.html#103 Mainframe dumps and debugging
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2025.html#26 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#8 IBM OS/360 MFT HASP
https://www.garlic.com/~lynn/2024g.html#98 RSCS/VNET
https://www.garlic.com/~lynn/2024g.html#69 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024g.html#62 Progenitors of OS/360 - BPS, BOS, TOS, DOS (Ways To Say How Old
https://www.garlic.com/~lynn/2024g.html#54 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#29 Computer System Performance Work
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024f.html#15 CSC Virtual Machine Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024e.html#2 DASD CKD
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#117 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: OS/360 Console Output Date: 10 Sept, 2025 Blog: Facebookre:
... i had rewritten IOS to make it bullet proof and never fail so they could use it in bldg14 (disk engineering) and bldg15 (disk product test) for any amount of on-demand concurrent testing ... they had been running prescheduled 7x24 stand-alone testing (had tried MVS but it had 15min MTBF requiring manual re-ipl). bldg15 got engineering 4341 in 1978 (before FCS summer 1979). tweaked 4341 channel microcode to do 3mbyte/sec data streaming ... for 3380 testing
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Boeing, IBM, CATIA Date: 11 Sept, 2025 Blog: FacebookAs undergraduate in the 60s, Univ. hired me fulltime responsible for OS/360 (when 709/1401 was replaced with 360/67 for tss/360 ... but ran as 360/65 with os/360). Then before I graduate, was hired fulltime into small group in the Boeing CFO office to help with formation of Boeing Computer Services, consolidate all dataprocessing into independent business unit. I think Boeing Renton datacenter largest in the world, 360/65s arriving faster than they could be installed (joke that Boeing was getting 360/65s like other companies got keypunches).
Then when I graduate, I join IBM Cambridge Scientific Center; some of the MIT CTSS/7094 people went to multics on the 5th flr, others went to CSC on the 4th flr, did virtual machines (modified 360/40 with virtual memory hardware and did CP/40, morphs into CP/67 when 360/67 standard with virtual memory became available, did CP67-based science center wide-area network which morphs into the internal network, technology also used for corporate sponsored univ BITNET, lots of other stuff) ... instead of staying with Boeing CFO.
Kept in contact with various Boeing people, in the 80s they complained that they had to report CATIA problems to IBM (STL), who would then report them to CATIA (in France), answers returned to STL, which were eventually forwarded to Boeing. In mid-90s, after leaving IBM, was brought into office in Bellevue (just off i90) to discuss a CATIA problem ... CATIA and databases were proprietary. 7x7 were being built with lots of common parts and down the road, Boeing wanted to be able to quickly identify specific replacement part (in planes being used for spare parts). They wanted to be able to extract all the sub-assembly and part information for a database that was easily queried.
some posts mentioning Boeing and CATIA
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022g.html#63 IBM DPD
https://www.garlic.com/~lynn/2021h.html#32 IBM Graphical Workstation
https://www.garlic.com/~lynn/2021.html#41 CADAM & Catia
https://www.garlic.com/~lynn/2019e.html#110 ROMP & Displaywriter
https://www.garlic.com/~lynn/2016h.html#53 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016h.html#47 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2006e.html#28 MCTS
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Boeing, IBM, CATIA Date: 11 Sept, 2025 Blog: Facebookre:
Last product did at IBM was HA/6000, originally for NYTimes to move
their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename
it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (oracle, sybase, ingres, informix) that had VAXCluster in same
source base with Unix. I do a distributed lock manager supporting
VAXCluster semantics (and especially Oracle and Ingres have a lot of
input on improving scale-up performance). trivia: previously worked on
original SQL/relational, System/R with Jim Gray and Vera Watson. S/88
Product Administrator started taking us around to their customers and
also had me write a section for the corporate continuous availability
document (it gets pulled when both AS400/Rochester and mainframe/POK
complain they couldn't meet requirements).
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid92 and 128-system clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than 4-system clusters; we leave IBM a few months later.
Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to MIPS
reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
The former executive we had reported to goes over to head up Somerset/AIM
(Apple, IBM, Motorola), single chip RISC with M88k bus/cache enabling
multiprocessor systems.
Lots of the crowd had been trying to overload RS/6000 with MVS/VTAM features (even though that wasn't the market). Note also for AWD PC/RT, they did their own cards, including 4mbit token-ring. Then for RS/6000 w/microchannel, they were told they could NOT do their own cards, but had to use standard PS2 microchannel cards (that had been heavily performance kneecapped by the communication group). Turns out the PS2 16mbit token-ring microchannel card had lower throughput than the PC/RT 4mbit token-ring card. New Almaden research bldg had been heavily provisioned with IBM CAT wiring (assuming 16mbit T/R), but they found that 10mbit Ethernet (over CAT) LAN had lower latency and higher aggregate LAN throughput, besides $69 10mbit Ethernet cards having much higher throughput than $800 PS2 16mbit T/R cards. Also for 300 workstations, the difference in card cost (300*$69=$20,700, 300*$800=$240,000) was $219,300 ... which could get a few high-performance TCP/IP routers, with 16 high-performance Ethernet interfaces/router and an IBM channel interface (routers also had non-IBM mainframe channel interfaces, telco T1 & T3, and FDDI LAN options).
trivia: a few years earlier, communication group was trying to block release of mainframe TCP/IP support. When they lost, they claimed that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly whole 3090 CPU. I then added RFC1044 support and in some tuning tests at Cray Research between Cray and 4341, I got sustained 4341 channel throughput using only modest amount of 4341 CPU (something like 500 times improvement in bytes moved per instruction executed).
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
some recent posts mentioning IBM Almaden, Ethernet, Token-ring:
https://www.garlic.com/~lynn/2025d.html#46 IBM OS/2 & M'soft
https://www.garlic.com/~lynn/2025d.html#8 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#2 Mainframe Networking and LANs
https://www.garlic.com/~lynn/2025c.html#114 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#88 IBM SNA
https://www.garlic.com/~lynn/2025c.html#56 IBM OS/2
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#41 SNA & TCP/IP
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring
https://www.garlic.com/~lynn/2024g.html#101 IBM Token-Ring versus Ethernet
https://www.garlic.com/~lynn/2024f.html#39 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024e.html#64 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2024c.html#47 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2023g.html#76 Another IBM Downturn
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#83 IBM's Near Demise
https://www.garlic.com/~lynn/2023b.html#50 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#34 Online Terminals
https://www.garlic.com/~lynn/2023.html#77 IBM/PC and Microchannel
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#18 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#24 IBM "nine-net"
https://www.garlic.com/~lynn/2022b.html#84 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021d.html#15 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021b.html#45 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021.html#77 IBM Tokenring
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Multitasking Date: 11 Sept, 2025 Blog: FacebookSome of the MIT CTSS/7094 people went to the 5th flr to do Multics. Others went to the IBM science center on the 4th flr and did virtual machines, the science center wide-area network (morphs into the corporate internal network, larger than arpanet/internet from just about the beginning until sometime middle/late 80s when it was forced to convert to SNA, also used for the corporate sponsored univ. BITNET), lots of other stuff.
I had taken 2 credit hr intro to fortran/computers and at the end of semester was hired to rewrite 1401 MPIO for 360/30. Univ. was getting 360/67 for tss/360 replacing 709/1401 and temporarily pending 360/67s, got a 360/30 replacing 1401. Univ. datacenter shutdown on weekends (I had the datacenter dedicated, but 48hrs w/o sleep made monday classes hard) and I was given a bunch of hardware & software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. When the 360/67 arrived, I was hired fulltime responsible for os/360 (tss/360 didn't come to production and so ran as 360/65).
A couple yrs later, CSC came out to univ for CP67/CMS (before morph into VM370/CMS) install (3rd after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my 48hr weekend dedicated time. I initially work on pathlengths for running OS/360 in virtual machine. Test stream ran 322secs on real machine, initially 856secs in virtual machine (CP67 CPU 534secs), after a couple months I have reduced CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, (dynamic adaptive resource manager/default fair share policy) scheduler, paging, adding ordered seek queuing (from FIFO) and multi-page transfer channel programs (from FIFO and optimized for transfers/revolution, getting 2301 paging drum from 70-80 4k transfers/sec to channel transfer peak of 270). Six months after univ initial install, CSC was giving one week class in LA. I arrive on Sunday afternoon and am asked to teach the class; it turns out that the people that were going to teach it had resigned the Friday before to join one of the 60s CP67 commercial online spin-offs.
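For a flavor of what a fair-share policy decision looks like (a minimal sketch only, not the CP67/VM370 dynamic adaptive resource manager; the names and numbers are made up): give each user a share weight and dispatch the runnable user whose recent CPU consumption relative to its share is lowest.

```c
/* Minimal sketch of a fair-share dispatch decision -- illustration of
 * the general fair-share idea only, not the CP67/VM370 code. */
#include <stdio.h>

struct user {
    const char *name;
    double cpu_used;      /* recent CPU seconds consumed */
    double share;         /* target share weight, e.g. 1.0 = one "fair" share */
    int runnable;
};

/* lower consumed/share ratio => more deserving of the CPU next */
static struct user *pick_next(struct user *u, int n)
{
    struct user *best = NULL;
    for (int i = 0; i < n; i++) {
        if (!u[i].runnable)
            continue;
        if (!best ||
            u[i].cpu_used / u[i].share < best->cpu_used / best->share)
            best = &u[i];
    }
    return best;
}

int main(void)
{
    struct user users[] = {
        { "A", 12.0, 1.0, 1 },   /* has used a lot relative to its share */
        { "B",  3.0, 1.0, 1 },
        { "C", 10.0, 4.0, 1 },   /* big share, so 10s used is still "cheap" */
    };
    printf("dispatch next: %s\n", pick_next(users, 3)->name);   /* prints C */
    return 0;
}
```

Only the basic fairness comparison is shown; the dynamic/adaptive bookkeeping behind the consumption and share numbers isn't attempted here.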
More than a decade later, I'm involved in some work on UNIX, and find that the UNIX scheduler looks very much like the early CP67 scheduler (before I completely rewrote it; some conjecture that it, original CP67, and MULTICS all trace back to CTSS). I was also doing some work with Jim Gray and Vera Watson on the original SQL/Relational, System/R (originally all work done on VM370/CMS).
trivia: Charlie had invented CAS (compare-and-swap, chosen for his initials, C.A.S.) when he was working on CP67 fine-grain multiprocessor locking at CSC. Trying to get it adopted for 370 was initially rejected; the POK favorite son (batch) operating system people claimed that simple locking ("test-and-set") was sufficient. To get it justified would require something more than simple multiprocessor locking. Thus were born the examples for multi-threaded application use (like large DBMS), for both single processor as well as multiprocessor systems.
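A present-day C11-atomics analogue of the kind of multi-threaded application use that justified CAS (a sketch, obviously not the original 370 assembler examples): update a shared value without holding a lock by retrying the compare-and-swap until no other thread has changed the value in between.

```c
/* Sketch of the classic CAS retry loop: if another thread updated the
 * value between the load and the compare-and-swap, the compare fails,
 * 'expected' is refreshed with the current value, and we try again. */
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

static atomic_long shared_counter;

static void cas_add(long delta)
{
    long expected = atomic_load(&shared_counter);
    while (!atomic_compare_exchange_weak(&shared_counter, &expected,
                                         expected + delta))
        ;   /* lost the race, retry with the refreshed value */
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        cas_add(1);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("%ld\n", atomic_load(&shared_counter));   /* 400000 */
    return 0;
}
```

The same retry loop is safe on a single processor with preempted threads as well as on a multiprocessor, which was the point of the justification.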
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management/scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET
https://www.garlic.com/~lynn/subnetwork.html#bitnet
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
original sql/relational, System/R
https://www.garlic.com/~lynn/submain.html#systemr
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Virtual Memory Date: 12 Sept, 2025 Blog: FacebookI was asked to track down decision to add virtual memory to all 370s ... and found staff member to exec making decision. Basically MVT storage management was so bad that region sizes were specified four times larger than used, resulting in typical 1mbyte 370/165 only running four concurrent regions (insufficient to keep system busy and justified). Going to 16mbyte virtual memory (sort of like running MVT in CP67 16mbyte virtual machine) allowed number of concurrent regions to increase by a factor of four (capped at 15 because 4bit storage protect key) with little or no paging (as high-end got larger, they needed to increase past 15, so VS2/SVS morphs into VS2/MVS).
Early (classified) 370 virtual memory document managed to leak to industry press. The resulting search for leak didn't find the source, but resulted in all internal copiers being retrofitted with copier ID that appeared on every page copied.
Then for Future System effort:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
they go to specially modified VM370/CMS systems where all FS docs were kept softcopy, could only be viewed by special CMS IDs, from designated 3270 terminals. In the initial morph from CP67 to VM370, there was a lot of stuff greatly simplified or dropped. When I initially joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters (HONE was one of the 1st and long-time customers). Early 1974, for VM370R2-base, I started migrating lots of stuff to VM370. Cambridge Science Center had a special CP67 that could emulate 370 virtual memory virtual machines (in addition to 360/67) and I could do initial testing in CP67 virtual 370 machines. At one point I needed to start doing real 370 testing and get some weekend time at the VM370 development group. I drop by Friday afternoon to make sure everything is ready for my weekend time. While I'm there, they start bragging about the super security for classified Future System softcopy documents ... and even if I was left totally alone in the machine room for the weekend, I wouldn't be able to access the documents. After a while, I got tired of it. I ask them to disable access to the system (for terminals not in the machine room). From front panel, I flip a bit in computer memory and then everything typed as a password is accepted as valid.
trivia: one of the things dropped in CP67->VM370 was multiprocessor support. For VM370R2-base for my CSC/VM, I include kernel re-org for SMP (but not the actual SMP support). Then for VM370R3-base, I include the multiprocessor support, initially for online sales&marketing support HONE so they could upgrade all their 168s to 2-CPU (HONE had consolidated all their US datacenters in Palo Alto; trivia: when FACEBOOK 1st moves into silicon valley, it is into a new bldg built next door to the former US HONE datacenter).
other trivia: FS was totally different from 370 and was going to
completely replace it. During FS, internal politics was killing off
370 projects and lack of new 370 is credited with giving the clone 370
makers their market foothold. F/S implosion, from 1993 Computer
Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
I would periodically criticize what they were doing (drawing analogy with long playing cult film down at central sq) ... which wasn't exactly career enhancing.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
posts mentioning the move from MVT to VS2/SVS (storage management),
then VS2/SVS to VS2/MVS (more than 15), then VS2/MVS to MVS/XA (MVS
overhead on verge of taking over all 16mbytes in every application
address space):
https://www.garlic.com/~lynn/2025d.html#22 370 Virtual Memory
https://www.garlic.com/~lynn/2025c.html#108 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#79 IBM System/360
https://www.garlic.com/~lynn/2025c.html#59 Why I've Dropped In
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#11 what's a segment, 80286 protected mode
https://www.garlic.com/~lynn/2024f.html#31 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#83 Continuations
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024b.html#108 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#107 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#65 MVT/SVS/MVS/MVS.XA
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Virtual Memory Date: 12 Sept, 2025 Blog: Facebookre:
Amdahl wins battle to make ACS 360-compatible. Then ACS/360 is killed
(folklore was concern that it would advance state-of-the-art too fast)
and Amdahl then leaves IBM (before Future System); end ACS/360:
https://people.computing.clemson.edu/~mark/acs_end.html
Future System, completely different from 370 and going to completely
replace it; during FS, internal politics was killing 370 efforts and
claim is that lack of new 370s during FS is what gave clone 370
makers (including Amdahl) their market foothold:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
Early 70s, Amdahl had talk in MIT auditorium (and many of us at the science center attend). One of the questions for Amdahl was what justification did he use with the financial investment people. He said that even if IBM were to completely walk away from 370s, customers had already spent billions on 360&370 software that would keep him in business until the end of the century (sort of implied that he knew about FS, which he later consistently denied).
When FS imploded, there was mad rush to get stuff back into the 370 product pipelines, including kicking off quick&dirty 3033&3081. Note 308x were originally going to be multiprocessor *only* and the initial 3081D had less aggregate MIPS than the single processor Amdahl. The processor caches were doubled for the 3081K, bringing aggregate MIPS up to about the same as the single processor Amdahl (although, because of MVS multiprocessor overhead, its 2-CPU support only had 1.2-1.5 times the throughput of a single processor; with each 3081K CPU about half the Amdahl MIPS, that works out to .6-.75 the throughput of the Amdahl single processor).
Something akin to the MVT virtual machine usermods was available from IBM for VS1 ("handshaking") and saw something similar: 1) VS1 2k pages were for small system memory, memory sizes had increased and VM370 4k pages were more efficient, 2) it included reflecting virtual page faults (& page I/O complete) to VS1, enabling task switching, 3) my page replacement from 60s CP67 was back in the VM370 product, a much better algorithm than VS1 (& MVS), 4) my page I/O pathlength was about 1/5th that of VS1 (& 1/10th that of MVS).
Mid-70s, I was pointing out that systems were getting (bigger &) faster, faster than disks were getting faster. Early 80s, wrote a tome that disk relative system throughput had declined by order of magnitude since 360 announce (systems got 40-50 times faster, disks had gotten 3-5 times faster). Disk division executive took exception and assigned the division performance group to refute the claim. After a couple weeks they came back and essentially said that I had slightly understated the problem. The performance group then respun the analysis for SHARE presentation (16Aug1984, SHARE 63, B874) on how to configure disks for improved system throughput.
In the FS implosion, there was mad rush to get stuff back into the 370 product pipelines, including kicking off quick&dirty 3033&3081 efforts in parallel. The head of POK was also in the process of convincing corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA. They weren't planning on telling the people until the very last minute (minimizing the number that might escape into the Boston area). It managed to leak early and several managed to escape (DEC VMS was in its infancy and joke was that head of POK was major contributor to VMS). Endicott managed to save the VM370 product mission (for mid-range) but had to recreate a development group from scratch.
Endicott also cons me into helping with the 138/148 ECPS microcode
assist ... old archived post with initial analysis
https://www.garlic.com/~lynn/94.html#21
as well as presenting business case to US regions and WTC planners. Then Endicott wanted to pre-install VM370 on every 138/148 (and later 4331/4341) shipped (sort of like late 80s 3090 LPAR&PR/SM) ... but POK managed to convince corporate to veto it.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
page replacement and page i/o posts
https://www.garlic.com/~lynn/subtopic.html#clock
some recent posts mentioning SHARE B874 talk:
https://www.garlic.com/~lynn/2025d.html#23 370 Virtual Memory
https://www.garlic.com/~lynn/2025c.html#108 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#79 IBM System/360
https://www.garlic.com/~lynn/2025c.html#20 Is Parallel Programming Hard, And, If So, What Can You Do About It?
https://www.garlic.com/~lynn/2025b.html#33 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#107 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#65 Where did CKD disks come from?
https://www.garlic.com/~lynn/2024f.html#9 Emulating vintage computers
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#109 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#92 IBM DASD 3380
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#6 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#36 360/85
https://www.garlic.com/~lynn/2022g.html#84 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#0 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Virtual Memory Date: 12 Sept, 2025 Blog: Facebookre:
related FS ("computer wars") trivia: 1972, Learson tried (and failed)
to block bureaucrats, careerists, and MBAs from destroying Watson
culture/legacy, pg160-163, 30yrs of IBM management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
late 80s, senior disk engineer gets talk scheduled at annual,
internal, world-wide communication group conference, supposedly on
3174 performance. However, the opening was that the communication
group was going to be responsible for the demise of the disk
division. The disk division was seeing drop in disk sales with data
fleeing mainframe datacenters to more distributed computing friendly
platforms. The disk division had come up with a number of solutions,
but they were constantly being vetoed by the communication group (with
their corporate ownership of everything that crossed the datacenter
walls) trying to protect their dumb terminal paradigm. The
communication group stranglehold on mainframe datacenters wasn't just
disk and a couple years later (and 20yrs after Learson's failure), IBM
has one of the largest losses in the history of US companies. IBM was
being reorganized into the 13 baby blues in preparation for breaking
up the company (take off on "baby bells" breakup decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
communication group and dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Virtual Memory Date: 13 Sept, 2025 Blog: Facebookre:
Transferred to San Jose Research in 2nd half of 70s and got to wander around IBM (and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, pre-scheduled, stand-alone testing and mentioned that they had recently tried MVS, but it had 15min MTBF (in that environment) requiring manual re-ipl. I offer to rewrite I/O system to make it bullet proof and never fail, allowing any amount of on-demand testing, greatly improving productivity. Bldg15 then got 1st engineering 3033 (1st outside POK processor engineering) and since I/O testing only used a percent or two of CPU, we scrounge up a 3830 and a 3330 string for our own, private online service. I wrote an internal research report on all the I/O integrity work and happen to mention the MVS 15min MTBF, bringing down the wrath of the MVS group on my head.
At the time, air bearing simulation (for thin-film head design) was only getting a couple turn arounds a month on SJR 370/195. We set it up on bldg15 3033 (slightly less than half 195 MIPS) and they could get several turn arounds a day. Bldg15 also gets engineering 4341 in '78 (nearly year before shipping to customers), Jan1979, branch office hears about it and cons me into doing benchmark for national lab looking at getting 70 for compute farm (sort of leading edge of the coming cluster supercomputing tsunami).
About the same time, I started being blamed for online computer conferencing on the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s, about the time it was forced to convert to SNA/VTAM). It really took off spring '81 when I distributed trip report of visit to see Jim Gray at Tandem. Only about 300 participated but claims were that 25,000 were reading (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me). For that and some other transgressions I was transferred to Yorktown Research, but left in San Jose, having to commute to YKT a couple times a month (redeye monday night SFO to JFK, bright and early in YKT, return Friday afternoon). Was told that with the executive committee wanting to fire me, I would never be made an IBM Fellow, but if I kept my head low, funds would be diverted so I could operate as one.
I got HSDT, T1 and faster computer links (terrestrial and satellite) and lots of battles with communication group (60s, IBM had 2701 that supported T1; the move into SNA and its various issues capped links at 56kbits/sec). For a time, I reported to same executive as author of AWP164 (which turns into APPN). I badgered him to come over and work on real networking since the SNA group would never appreciate him. When APPN was to be announced, SNA non-concurred and it took some time to rewrite the APPN announcement letter to not imply any relationship between SNA and APPN.
Was working with NSF director and was suppose to get $20m to
interconnect the NSF Supercomputing Centers. Then congress cuts the
budget, some other things happen and eventually an RFP is released (in
part based on what we already had running). NSF 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid), as regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to modern internet.
Also got a custom designed Ku-band TDMA satellite system, initially with 3 nodes: Los Gatos lab, Yorktown Research, and Austin. Bldg14 was undergoing seismic retrofit and disk engineering temporarily relocated to a two-story bldg just south of the main plant site ... which had an EVE (hardware VLSI design verification box, tens of thousands of times faster than running software verification on a 3033). San Jose had a T3 collins digital radio microwave system, and put in a T1 tail circuit between the Los Gatos lab and the bldg 86 EVE, so Austin could use the EVE for RIOS (RS/6000) chip design verification.
other trivia: For some reason (possibly to cut my criticism of their activities), the IBM NSF group asked me to be the "red team" for the NSFNET T3 upgrade (and a couple dozen people from half a dozen IBM labs were the "blue team"). At final review, I presented 1st, then 5mins into the "blue team" presentation, the executive running the review pounded on table and said he would lay down in front of garbage truck before he let anything but the "blue team" proposal go forward (I and few others get up and leave).
... also the communication group was fighting off release of mainframe TCP/IP support. When they lost, they changed strategy and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly a full 3090 CPU. I then added RFC1044 support and in some tuning tests at Cray Research, between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).
online computer conferencing trivia:
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
... snip ...
Some of the results were IBM task forces, official conferencing software, and officially sanctioned and moderated forums. Also a researcher was paid to study how I communicated, sitting in the back of my (mostly Los Gatos) office (rather than SJR) taking notes on face-to-face & telephone communication. They also got copies of all incoming & outgoing email and logs of all instant messages. The result was (IBM) research reports, conference papers and talks, books, and a Stanford PhD (joint with language and computer AI, Winograd was advisor on the AI side).
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFnet/NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
misc. posts mentioning LSM & EVE
https://www.garlic.com/~lynn/2025d.html#1 Chip Design (LSM & EVE)
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023f.html#59 Vintage IBM Power/PC
https://www.garlic.com/~lynn/2023f.html#16 Internet
https://www.garlic.com/~lynn/2023c.html#75 IBM Los Gatos Lab
https://www.garlic.com/~lynn/2023b.html#57 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2021i.html#67 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021e.html#28 IBM Cottle Plant Site
https://www.garlic.com/~lynn/2021e.html#14 IBM Internal Network
https://www.garlic.com/~lynn/2021c.html#53 IBM CEO
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#62 Mainframe IPL
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2021.html#5 LSM - Los Gatos State Machine
https://www.garlic.com/~lynn/2018b.html#84 HSDT, LSM, and EVE
https://www.garlic.com/~lynn/2014b.html#67 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014b.html#5 IBM Plans Big Spending for the Cloud ($1.2B)
https://www.garlic.com/~lynn/2014b.html#4 IBM Plans Big Spending for the Cloud ($1.2B)
https://www.garlic.com/~lynn/2010m.html#52 Basic question about CPU instructions
https://www.garlic.com/~lynn/2007o.html#67 1401 simulator for OS/360
https://www.garlic.com/~lynn/2007l.html#53 Drums: Memory or Peripheral?
https://www.garlic.com/~lynn/2007h.html#61 Fast and Safe C Strings: User friendly C macros to Declare and use C Strings
https://www.garlic.com/~lynn/2007f.html#73 Is computer history taught now?
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006.html#29 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2005q.html#17 Ethernet, Aloha and CSMA/CD -
https://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004o.html#65 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2003o.html#38 When nerds were nerds
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2002j.html#26 LSM, YSE, & EVE
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Virtual Memory Date: 14 Sept, 2025 Blog: Facebookre:
IBM 23Jun69 unbundling announcement started to charge for (application) software (they made the case that kernel software should still be free), SE services, maint, etc. Mainstream organizations were having a hard time adapting to the change. Example was JES2 NJE; original code was from HASP (it had "TUCC" in cols68-71). Requirement was that monthly price covered original development and ongoing development, support & maint. Business process did forecasts at low, medium, and high price ... but there was no NJE price where forecasted revenue met requirements ($300/$600/$1200 per month).
VM370 RSCS was somewhat easier, originally done by co-worker at the science center for the CP67-based science center wide-area network, which morphs into the internal network (and technology also used for the corporate sponsored univ. BITNET). Basic forecast for RSCS product met the requirement at $30/month. However, FS was imploding and head of POK was in the process of convincing corporate to kill VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA ... and he was never going to concur with announcing the "new" VM370 RSCS product. Now VNET/RSCS had nice clean layered architecture and to enable HASP/JES2 machines to connect to the internal network, a VNET/RSCS driver was done that emulated NJE.
Note NJE had some implementation issues which created difficulties, resulting in restricting them to internal network boundary nodes: 1) in the original HASP implementation (still in JES2 NJE), network nodes were defined in spare entries in the 255-entry pseudo device table (usually around 160-180), but the internal network was already past 255 nodes ... and NJE would trash traffic if either the origin OR destination nodes weren't in the local table, 2) NJE fields were somewhat intermixed with job control fields ... and JES2 NJE systems at different release levels had a habit of the destination JES2 crashing the MVS system (a body of internal VNET/RSCS NJE driver code grew up that attempted to recognize the difference between origin and immediate destination ... and adjust fields, as a countermeasure to crashing the destination MVS system).
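A conceptual sketch of the contrast (hypothetical names, nothing like actual RSCS or NJE code): NJE trashing traffic when either origin or destination is missing from its limited local node table, versus a store-and-forward node simply routing unknown destinations onward ... which is why the JES2/NJE systems were kept at boundary nodes:

/* Conceptual illustration only -- hypothetical names, not RSCS/NJE code.
 * NJE discarded traffic when EITHER origin or destination was missing
 * from its (roughly 255-entry) local node table; a store-and-forward
 * VNET/RSCS node just routed unknown destinations toward a next hop.
 */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define NJE_MAX_NODES 255

struct nodetab { const char *name[NJE_MAX_NODES]; size_t n; };

static bool known(const struct nodetab *t, const char *node)
{
    for (size_t i = 0; i < t->n; i++)
        if (strcmp(t->name[i], node) == 0)
            return true;
    return false;
}

/* NJE-style handling: unknown origin OR destination => discard traffic */
bool nje_accept(const struct nodetab *t, const char *origin,
                const char *dest)
{
    return known(t, origin) && known(t, dest);
}

/* store-and-forward-style handling: forward unknown destinations onward */
const char *rscs_route(const struct nodetab *t, const char *dest,
                       const char *next_hop_default)
{
    return known(t, dest) ? dest : next_hop_default;
}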
In any case, the gburg JES group cut a deal to announce RSCS as a joint NJE/RSCS product at $600/month (getting around the POK veto on releasing RSCS) ... effectively nearly all the RSCS revenue underwriting NJE.
Later in the 80s, the rules were further relaxed ... the only thing necessary was for the different products to be in the same development organization (folklore was that was why the VM370 performance products were moved into the same group as MVS ISPF).
trivia: with the rise of clone 370 makers during FS, the decision was also made to start charging for kernel software (initially kernel add-ons, eventually transitioning to all kernel software in the 80s; which was then followed by OCO). One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, and a bunch of my stuff was selected to be the initial guinea pig for kernel charging (and I had to spend a bunch of time with planners and lawyers on kernel software charging policies).
network trivia; internal network (starting with science center
wide-area network) was larger than arpanet/internet from just about
the beginning until sometime mid/late 80s (about the time it was
forced to convert to SNA/VTAM). At the great cutover of
arpanet/internet from IMP/HOST to internetworking on 1/1/1983, there
were approx. 100 IMPs and 255 HOSTs ... at a time when the internal
network was rapidly approaching 1000 nodes ... which it passed early
that summer. Old archived post with
selection of 1983 weekly updates as well as list of all corporate
locations that added one or more nodes during 1983:
https://www.garlic.com/~lynn/2006k.html#8
IBM 23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
ASP, HASP, JES3, JES2, NJE, NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Virtual Memory Date: 14 Sept, 2025 Blog: Facebookre:
HA/6000 was approved in 1988, also in 1988 branch office asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes "fibre-channel standard" ("FCS", including some stuff I did in 1980, initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec). Then POK gets their serial stuff shipped (when it is already obsolete) as ESCON (initially 10mbytes/sec, later upgraded to 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy weight protocol that radically reduces throughput, eventually released as FICON.
2010, a z196 "Peak I/O" benchmark released, getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the same time a FCS is announced for E5-2600 server blade claiming over million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs recommend that SAPs (system assist processors that actually do I/O) be kept to 70% CPU (or 1.5M IOPS). Max z196 (80 cores) benchmarked at 50BIPS and went for $30M. The E5-2600 server blade (16 cores) benchmarked at 500BIPS (ten times max configured z196) and IBM had base list price of $1815 (benchmark was industry standard number of program iterations/sec compared to industry benchmark reference platform).
fibre-channel standard and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Token-Ring Date: 14 Sept, 2025 Blog: Facebook
My wife was co-author of an early "token" patent that was used for the Series/1 chat-ring. Later Evans asked my wife to audit 8100, and not long after, 8100 was canceled.
AWD did their own cards for PC/RT (16bit PC/AT bus) ... including 4mbit Token-Ring card. For the RS/6000 (w/microchannel, 32bit bus), AWD was told they couldn't do their own cards, but had to use the (heavily performance kneecapped by the communication group) PS2 microchannel cards. It turns out the PS2 microchannel 16mbit Token-Ring card had lower throughput than the PC/RT 4mbit Token-Ring card (i.e. joke that PC/RT server with 4mbit T/R would have higher throughput than RS/6000 server with 16mbit T/R)
The new Almaden research bldg had been heavily provisioned with IBM CAT wiring (assuming 16mbit T/R), but they found that 10mbit Ethernet (over CAT) LAN had lower latency and higher aggregate LAN throughput, besides $69 10mbit Ethernet cards having much higher throughput than $800 PS2 16mbit T/R cards. Also for 300 workstations, the difference in card cost (300*69=$20,700, 300*800=$240,000), $219,300 ... could get a few high-performance TCP/IP routers, each with 16 high-performance Ethernet interfaces and an IBM channel interface (also had non-IBM mainframe channel interfaces, telco T1 & T3, and FDDI LAN options). A 1988 ACM SIGOPS paper had a 10mbit Ethernet study: standard aggregate Ethernet sustained 8.5mbit and Ethernet cards also 8.5mbit (aka an Ethernet server providing 8.5mbit service). They then did a test with 30 PCs, where all PCs ran a low-level device-driver loop constantly transmitting minimum sized packets, and effective aggregate throughput only dropped off to 8mbit/sec.
trivia: the communication group was trying to block release of mainframe TCP/IP. When they lost, they changed their tactics, saying that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then added RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times increase in bytes moved per instruction executed).
RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
some recent posts mentioning AWD, PC/RT, RS/6000, token-ring, ethernet
https://www.garlic.com/~lynn/2025d.html#73 Boeing, IBM, CATIA
https://www.garlic.com/~lynn/2025d.html#46 IBM OS/2 & M'soft
https://www.garlic.com/~lynn/2025d.html#8 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#2 Mainframe Networking and LANs
https://www.garlic.com/~lynn/2025c.html#114 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#88 IBM SNA
https://www.garlic.com/~lynn/2025c.html#74 IBM RS/6000
https://www.garlic.com/~lynn/2025c.html#56 IBM OS/2
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000
https://www.garlic.com/~lynn/2025.html#106 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring
https://www.garlic.com/~lynn/2024g.html#101 IBM Token-Ring versus Ethernet
https://www.garlic.com/~lynn/2024g.html#18 PS2 Microchannel
https://www.garlic.com/~lynn/2024f.html#42 IBM/PC
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024e.html#102 Rise and Fall IBM/PC
https://www.garlic.com/~lynn/2024e.html#81 IBM/PC
https://www.garlic.com/~lynn/2024e.html#71 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#64 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#47 OS2
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#76 Another IBM Downturn
https://www.garlic.com/~lynn/2023e.html#30 Apple Versus IBM
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023d.html#27 IBM 3278
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#83 IBM's Near Demise
https://www.garlic.com/~lynn/2023b.html#50 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#13 IBM/PC
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2023.html#77 IBM/PC and Microchannel
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 360/67 Virtual Memory Date: 15 Sept, 2025 Blog: Facebook
Lots of places got 360/67s for tss/360, but most places just used them as 360/65s for OS/360. CSC had done CP40, modifying a 360/40 with hardware virtual memory; it morphs into CP67 when the 360/67, standard with virtual memory, became available, and a lot of places started using 360/67s for CP67/CMS. I was an undergraduate but hired fulltime responsible for OS/360 when CSC came out to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs). UofMichigan did their own virtual memory system (MTS) for the 360/67 (later ported to MTS/370).
Paging system trivia: as an undergraduate, part of my (CP67) rewrite included a Global LRU page replacement algorithm ... at a time when there were ACM articles about Local LRU. I had transferred to San Jose Research and worked with Jim Gray and Vera Watson on (the original SQL/relational) System/R. Then Jim leaves for Tandem, fall of 1980 (palming some stuff off on me). At the Dec81 ACM SIGOPS meeting, Jim asks me if I can help a Tandem co-worker get their Stanford PhD. It involved global LRU page replacement, and the forces from the late 60s ACM local LRU work were lobbying to block giving a PhD for anything involving global LRU. I had lots of data on my undergraduate Global LRU work and from CSC, which ran a 768kbyte 360/67 (104 pageable pages after fixed requirement) with 75-80 users. I also had lots of data from the IBM Grenoble Science Center, which had modified CP67 to conform to the 60s ACM local LRU literature (1mbyte 360/67, 155 pageable pages after fixed requirement). CSC with 75-80 users had better response and throughput (104 pages) than Grenoble running 35 users (similar workloads and 155 pages).
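Purely as illustration (hypothetical names and structure, not the actual CP67 code), a minimal sketch of the clock-style approximation of global LRU: a single hand sweeps all pageable frames regardless of which virtual machine owns them, clearing reference bits and stealing the first unreferenced frame it finds; a local LRU scheme would instead confine the sweep to the faulting user's own frames:

/* Minimal sketch of clock-style global LRU page replacement.
 * Illustrative only -- hypothetical names, not CP67 code.  One global
 * "hand" sweeps all pageable frames (ignoring which virtual machine
 * owns them), clearing reference bits and stealing the first
 * unreferenced frame.
 */
#include <stddef.h>

struct frame {
    int referenced;       /* reference bit                          */
    void *owner;          /* owning virtual machine / address space
                             -- ignored by the global sweep          */
    unsigned long vpage;  /* virtual page currently in this frame   */
};

static struct frame frames[104];     /* e.g. 104 pageable frames    */
static size_t nframes = 104;
static size_t hand;                  /* global clock hand           */

/* select a frame to steal: global clock sweep */
size_t select_victim(void)
{
    for (;;) {
        struct frame *f = &frames[hand];
        size_t victim = hand;
        hand = (hand + 1) % nframes;         /* advance the hand    */
        if (f->referenced)
            f->referenced = 0;               /* give a second chance */
        else
            return victim;                   /* unreferenced: steal  */
    }
}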
Early last decade, I was asked to track down the decision to add virtual memory to all 370s; I found a staff member to the executive making the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used, so a typical 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the system busy and justified. Going to a 16mbyte virtual address space (VS2/SVS, sort of like running MVT in a CP67 16mbyte virtual machine) allows concurrent regions to increase by a factor of four (capped at 15, 4bit storage protect keys), with little or no paging. Ludlow was doing the initial implementation on a 360/67 offshift in POK and I dropped by a few times. Initially it was a little bit of code for the virtual memory tables and some simple paging. The biggest effort was that all channel programs passed to EXCP/SVC0 had virtual addresses; EXCP had to make copies replacing virtual addresses with real, and Ludlow borrows CP67 CCWTRANS to craft into EXCP (EXCPVR was for special subsystems that could fix real storage and passed channel programs with real addresses).
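A rough sketch of the kind of copy/translate pass EXCP needed once callers passed virtual-address channel programs (and of what CP67's CCWTRANS did for virtual machines): walk the CCW chain, copy each CCW, and replace the virtual data address with the real address from the page tables. Hypothetical types and names, ignoring data chaining details, IDALs, page-crossing buffers and page fixing:

/* Sketch of copying a channel program and translating virtual data
 * addresses to real.  Hypothetical types/names for illustration only.
 */
#include <stdint.h>
#include <stddef.h>

#define CCW_CD 0x80u   /* chain data     */
#define CCW_CC 0x40u   /* chain command  */

struct ccw {            /* simplified format-0 CCW */
    uint8_t  cmd;
    uint32_t dataaddr;  /* 24-bit data address */
    uint8_t  flags;
    uint16_t count;
};

/* page-table lookup: virtual -> real, supplied by the caller */
typedef uint32_t (*xlate_fn)(uint32_t vaddr);

/* Copy the virtual channel program into 'real_prog' with data
 * addresses translated; returns number of CCWs copied. */
size_t translate_channel_program(const struct ccw *virt_prog,
                                 struct ccw *real_prog,
                                 size_t max, xlate_fn v2r)
{
    size_t i = 0;
    for (; i < max; i++) {
        real_prog[i] = virt_prog[i];
        real_prog[i].dataaddr = v2r(virt_prog[i].dataaddr);
        if (!(virt_prog[i].flags & (CCW_CC | CCW_CD)))
            break;      /* no chaining flags: end of chain */
    }
    return (i < max) ? i + 1 : max;
}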
paging posts
https://www.garlic.com/~lynn/subtopic.html#clock
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
old email about helping with PHD
https://www.garlic.com/~lynn/2006w.html#email821019
in this archived post
https://www.garlic.com/~lynn/2006w.html#46
other posts mention 821019 communication
https://www.garlic.com/~lynn/htm/2025.html#50 The Paging Game
https://www.garlic.com/~lynn/2023g.html#105 VM Mascot
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2022h.html#56 Tandem Memos
https://www.garlic.com/~lynn/2022f.html#119 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#45 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021j.html#18 Windows 11 is now available
https://www.garlic.com/~lynn/2021i.html#82 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#62 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021c.html#38 Some CP67, Future System and other history
https://www.garlic.com/~lynn/2019b.html#5 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2018f.html#63 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#62 LRU ... "global" vs "local"
https://www.garlic.com/~lynn/2018c.html#95 Tandem Memos
https://www.garlic.com/~lynn/2017j.html#78 thrashing, was Re: A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017d.html#66 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#52 Some IBM Research RJ reports
https://www.garlic.com/~lynn/2017b.html#26 Virtualization's Past Helps Explain Its Current Importance
https://www.garlic.com/~lynn/2017b.html#24 Disorder
https://www.garlic.com/~lynn/2016g.html#40 Floating point registers or general purpose registers
https://www.garlic.com/~lynn/2016e.html#2 S/360 stacks, was self-modifying code, Is it a lost cause?
https://www.garlic.com/~lynn/2015g.html#90 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2015c.html#39 Virtual Memory Management
https://www.garlic.com/~lynn/2014m.html#138 How hyper threading works? (Intel)
https://www.garlic.com/~lynn/2014l.html#22 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2014i.html#98 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2014g.html#97 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2014e.html#14 23Jun1969 Unbundling Announcement
https://www.garlic.com/~lynn/2013k.html#70 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013i.html#30 By Any Other Name
https://www.garlic.com/~lynn/2013f.html#42 True LRU With 8-Way Associativity Is Implementable
https://www.garlic.com/~lynn/2013d.html#7 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013c.html#49 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013c.html#17 I do not understand S0C6 on CDSG
https://www.garlic.com/~lynn/2012m.html#18 interactive, dispatching, etc
https://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012g.html#21 Closure in Disappearance of Computer Scientist
https://www.garlic.com/~lynn/2012c.html#17 5 Byte Device Addresses?
https://www.garlic.com/~lynn/2011p.html#53 Odd variant on clock replacement algorithm
https://www.garlic.com/~lynn/2011f.html#73 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
https://www.garlic.com/~lynn/2011d.html#82 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011c.html#88 Hillgang -- VM Performance
https://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)
https://www.garlic.com/~lynn/2011.html#70 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2010n.html#41 Central vs. expanded storage
https://www.garlic.com/~lynn/2010m.html#5 Memory v. Storage: What's in a Name?
https://www.garlic.com/~lynn/2010l.html#23 OS idling
https://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by dataset
https://www.garlic.com/~lynn/2010g.html#44 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010g.html#0 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010f.html#85 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2008m.html#7 Future architectures
https://www.garlic.com/~lynn/2008k.html#32 squirrels
https://www.garlic.com/~lynn/2008j.html#6 What is "timesharing" (Re: OS X Finder windows vs terminal window weirdness)
https://www.garlic.com/~lynn/2008h.html#79 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#70 New test attempt
https://www.garlic.com/~lynn/2008f.html#3 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008e.html#16 Kernels
https://www.garlic.com/~lynn/2008c.html#65 No Glory for the PDP-15
https://www.garlic.com/~lynn/2007f.html#18 What to do with extra storage on new z9
https://www.garlic.com/~lynn/2007c.html#56 SVCs
https://www.garlic.com/~lynn/2007c.html#47 SVCs
---
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Card Deck Stripe Date: 16 Sept, 2025 Blog: Facebook
I was an undergraduate when the Univ. got a 360/67 for TSS/360, replacing 709/1401, and I was hired fulltime responsible for OS/360 (TSS/360 never came to production fruition). Then CSC came out to install CP67 (3rd install after CSC itself and MIT Lincoln Labs).
At the time, all the CP67 source was kept on OS/360 and assembled there, with all the TXT decks placed in a card tray with the BPS loader on the front; the card deck was IPL'ed to write a core image to disk for system IPL. Each module's TXT deck had a diagonal stripe across the top with the name of the module. When updating and assembling an individual module, the stripe made it easy to identify the module TXT deck to be replaced in the card tray.
A couple months later, all the source was on CMS with the OS/360 assembler running there; to update CP67, an EXEC would punch the virtual card tray of TXT decks to the virtual reader, which was then IPL'ed virtually to write the new system to the CP67 IPL disk.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Water Cooled Systems Date: 17 Sept, 2025 Blog: Facebook
Future System was completely different from 370 and was going to completely replace it; during FS, internal politics was killing 370 efforts and the claim is that the lack of new 370s during FS is what gave the clone 370 makers (including Amdahl) their market foothold:
Amdahl had won the battle to make ACS 360-compatible. Then ACS/360
was killed (and Amdahl leaves IBM); folklore was concern that it
would advance the state-of-the-art too fast and IBM would lose
control of the market (note the following has some ACS/360 features
that show up more than 20yrs later with ES/9000):
https://people.computing.clemson.edu/~mark/acs_end.html
also could claim that TCMs were required for 3081 in order to package huge number of circuits into reasonable volume.
When FS imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081. Note 308x were originally going to be multiprocessor *only* and the initial 3081D had less aggregate MIPS than the single processor Amdahl. The processor cache sizes were doubled for the 3081K, bringing aggregate MIPS up to about the same as the single processor Amdahl (although MVS multiprocessor overhead claimed its 2-CPU support only had 1.2-1.5 times the throughput of a single processor, aka .6-.75 the throughput of the Amdahl single processor, even with the same aggregate MIPS).
Note: they took a 158 engine with just the integrated channel microcode for the 303x channel director. A 3031 was two 158 engines, one with just the 370 microcode and one with just the channel microcode. A 3032 was a 168 reworked to use the 303x channel director for external channels. A 3033 started out as 168 logic remapped to 20% faster chips. Trivia: 165 370 microcode averaged 2.1 machine cycles per 370 instruction; this was improved for the 168 to 1.6 machine cycles per 370 instruction, and to an average of a single machine cycle per 370 instruction for the 3033.
other trivia: also after FS imploded, I got con'ed into helping with 370 16-CPU multiprocessor (and con the 3033 processor engineers into working on it in their spare time, a lot more interesting than remapping 168 logic into 20% faster chips) that everybody thought was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system ("MVS") has ("effective") 16-CPU support. The head of POK then invites some of us to never visit POK again, and directs the 3033 processor engineers, heads down and no distractions (POK doesn't ship 16-CPU systems until after the turn of the century). Note: once 3033 was out the door, the 3033 processor engineers start work on trout/3090.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Water Cooled Systems Date: 19 Sept, 2025 Blog: Facebookre:
refs:
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
a big part of it was "single level store" ... like tss/360 (original
system for 360/67)
http://www.bitsavers.org/pdf/ibm/360/tss/
and like MULTICS
http://www.bitsavers.org/pdf/honeywell/large_systems/multics/
https://multicians.org/
Lots of installations that got 360/67 for tss/360 ... just used it as
360/65 for os/360. I was an undergraduate and had been hired
fulltime responsible for os/360 at one such univ. When CP/67 was 1st
installed at the Univ., there was still a tss/360 SE around, IBM
hoping that tss/360 would come to fruition. Before I started
rewriting a lot of CP67,
https://www.garlic.com/~lynn/2025d.html#74 Multitasking
we did simulated interactive fortran, edit, compile and execute for CP67/CMS (precursor to VM/370) and TSS/360. CP67 with 35 emulated users got better throughput and response than TSS/360 got with four emulated users. Later I would rewrite the CMS filesystem for page-mapped operation (getting something like four times the throughput, with much better scale-up than the standard CMS filesystem), claiming I learned what not to do from tss/360 ... also contributing to periodically ridiculing what FS was doing.
Note, one of the last nails in the FS coffin was study by the IBM Houston Science Center; if 370/195 applications were redone for FS machine made out of the fastest available hardware technology, they would have throughput of 370/145 (about factor of 30 times slowdown).
F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
trivia: also after FS imploded, the head of POK was convincing corporate to kill the VM370/CMS product, shutdown the development group and transfer all the people to POK for MVS/XA. They weren't planning on telling the people until the very last minute, but it managed to leak early and several people managed to escape into the Boston area (DEC VAX/VMS was in its infancy and joke was that head of POK was major contributor to VMS). Endicott eventually managed to save the VM370/CMS product mission for the mid-range, but had to recreate a development group from scratch.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Cray Supercomputer Date: 19 Sept, 2025 Blog: Facebook
IBM communication group was fighting off release of mainframe TCP/IP support. When they lost, they changed strategy and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly a full 3090 CPU. I then added RFC1044 support and in some tuning tests at Cray Research, between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).
Then 1988, IBM branch office asks me if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes fibre-channel standard ("FCS", including some stuff I had done in 1980 ... FCS: initially 1gbit transfer, full-duplex, aggregate 200mbyte/sec). Then IBM POK gets their serial stuff shipped (when it is already obsolete) as ESCON (initially 10mbytes/sec, later upgraded to 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy weight protocol that radically reduces throughput, eventually released as FICON. 2010, a z196 "Peak I/O" benchmark released, getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the same time a FCS is announced for E5-2600 server blade claiming over million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs recommend that SAPs (system assist processors that actually do I/O) be kept to 70% CPU (or 1.5M IOPS). Max z196 (80 cores) benchmarked at 50BIPS and went for $30M.
Also 1988 HA/6000 was approved, originally for NYTimes to move their
newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (oracle, sybase, ingres, informix) that had VAXCluster in same
source base with Unix. I do a distributed lock manager supporting
VAXCluster semantics (and especially Oracle and Ingres have a lot of
input on improving scale-up performance). trivia: previously worked on
original SQL/relational, System/R with Jim Gray and Vera Watson. S/88
Product Administrator started taking us around to their customers and
also had me write a section for the corporate continuous availability
document (it gets pulled when both AS400/Rochester and mainframe/POK
complain they couldn't meet requirements).
Also get LLNL UNICOS "LINCS" ported to HA/CMP and get NCAR's filesystem spin-off "Mesa Archival" work done on HA/CMP.
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid92 and 128-system
clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer ("SP1", for technical/scientific *ONLY*), and
couple weeks later Computerworld news 17feb1992 ... IBM establishes
laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
We were also told we couldn't work on anything with more than 4-system clusters; we then leave IBM a few months later.
Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to MIPS
reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
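(The cluster figures are presumably simple aggregates: 16 x 126MIPS = 2,016MIPS, i.e. about 2BIPS; 128 x 126MIPS = 16,128MIPS, about 16BIPS.)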
Former executive we had reported to, goes over to head up Somerset/AIM
(Apple, IBM, Motorola), single chip RISC with M88k bus/cache enabling
multiprocessor systems.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
original SQL/relational, System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370/158 (& 4341) Channels Date: 20 Sept, 2025 Blog: Facebook
aka, 303x used a 158 engine with just the integrated channel microcode for the channel director & a 3031 was two 158 engines, one with just the 370 microcode and the other with just the integrated channel microcode. A 3032 was a 168-3 redone with the 303x channel director and the 3033 started out as 168-3 logic remapped for 20% faster chips. Note: 158 channel microcode (even with a dedicated 158 engine for the channel director) was much slower than 168 channels (and even 4341 channels). I had done benchmarks of channel program processing performance on 145, 158, 168, 3033, 4341s, non-IBM clone 370 channels, IBM DASD controllers and non-IBM clone DASD controllers ... and the 158 (and its channel director follow-on) were by far the slowest.
.. I had rewritten IOS to make it bullet proof and never fail so they could use it in bldg14 (disk engineering) and bldg15 (disk product test) for any amount of on-demand concurrent testing .. they had been running prescheduled 7x24 stand-alone testing (had tried MVS but it had 15min MTBF requiring manual re-ipl). bldg15 got an engineering 4341 in 1978 (before first customer ship, summer 1979?) and tweaked the 4341 channel microcode to do 3mbyte/sec data streaming ... for 3380 testing
posts mentioning getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
some posts mentioning channel performance benchmark/tests
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2015f.html#88 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2010m.html#15 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2009p.html#12 Secret Service plans IT reboot
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370/158 (& 4341) Channels Date: 20 Sept, 2025 Blog: Facebookre:
Industry MIPS benchmark became number of program iterations compared to industry standard MIPS platform .... aka effectively normalized benchmark MIPS to same base (regardless of machine instruction set, CISC, RISC, etc).
trivia: 1988, IBM branch office asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes fibre-channel standard, "FCS" (not First Customer Ship), initially 1gbit/sec transfer, full-duplex, aggregate 200mbyte/sec (including some stuff I had done in 1980). Then POK gets some of their serial stuff released with ES/9000 as ESCON (when it was already obsolete, initially 10mbytes/sec, later increased to 17mbytes/sec).
Then some POK engineers become involved with FCS and define a heavy-weight protocol that drastically cuts throughput (eventually released as FICON). 2010, a z196 "Peak I/O" benchmark released, getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the same time a FCS is announced for E5-2600 server blade claiming over million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs recommend that SAPs (system assist processors that actually do I/O) be kept to 70% CPU (or 1.5M IOPS).
Max z196 (80 cores) benchmarked at 50BIPS (625MIPS/core) and went for $30M. The E5-2600 server blade (16 cores) benchmarked at 500BIPS (ten times max configured z196 & 31BIPS/core) and IBM had base list price of $1815 (benchmark was industry standard number of program iterations/sec compared to industry benchmark reference platform).
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Internet Date: 20 Sept, 2025 Blog: Facebook
ARPAnet cutover from IMPs/HOST to internetworking was 1/1/1983. Earlier in the 80s, got the HSDT project, T1 and faster computer links (both sat. & terrestrial) and battles with the IBM communication group (in the 60s, IBM had the 2701 product supporting T1, but with the 70s transition to SNA and its issues, controllers were capped at 56kbits/sec). Was also working w/NSF director and was supposed to get $20M to interconnect the NSF supercomputing centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to modern internet
The RFP called for T1 links, but the PC/RTs just had 440kbit links ... they got T1 trunks with telco multiplexors running multiple 440kbit links ... if you squinted, it sort of looked like T1 (I also ridiculed them in other ways). I was then asked to be "red team" for the T3 upgrade (possibly they were thinking it would shut down the ridicule) and a couple dozen people from half a dozen labs around the world were the blue team. At the final review, I presented 1st and then 5mins into the "blue team" presentation, the executive pounded on the table and said he would lay down in front of a garbage truck before he let anything but the "blue team" presentation go forward (some of us then walk out).
other trivia: ibm communication group was fighting to block release of mainframe tcp/ip support, when that failed, they changed tactics and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got 44kbytes/sec aggregate using nearly whole 3090 processor. I then add RFC1044 support and in some tuning tests at Cray Research between Cray and 4341, got sustained 4341 channel media throughput using only modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).
In 1988, IBM branch office asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes "fibre-channel standard" ("FCS", including some stuff I did in 1980; initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec). Then IBM POK gets their serial stuff shipped (when it is already obsolete) as ESCON with ES/9000 (initially 10mbytes/sec, later upgraded to 17mbytes/sec).
Also 1988, got HA/6000 project, initially for NYTimes to move their
newspaper system (ATEX) off DEC VAXCluster to RS/6000. I then rename
it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (oracle, sybase, ingres, informix) that had VAXCluster in same
source base with Unix. I do a distributed lock manager supporting
VAXCluster semantics (and especially Oracle and Ingres have a lot of
input on improving scale-up performance); also plans for 1gbit "FCS"
with HA/CMP. trivia: previously worked on original SQL/relational,
System/R with Jim Gray and Vera Watson. S/88 Product Administrator
started taking us around to their customers and also had me write a
section for the corporate continuous availability document (it gets
pulled when both AS400/Rochester and mainframe/POK complain they
couldn't meet requirements).
Got LLNL UNICOS "LINCS" ported to HA/CMP and get NCAR's filesystem spin-off "Mesa Archival" work done on HA/CMP.
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid92 and 128-system
clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer ("SP1", for technical/scientific *ONLY*), and
couple weeks later Computerworld news 17feb1992 ... IBM establishes
laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
We were also told we couldn't work on anything with more than 4-system clusters; we then leave IBM a few months later.
Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to MIPS
reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
Some claim that donations of tax-deductible/free bandwidth and
equipment were a major issue for the non-commercial use policies;
part of a file from an archive long ago and far away:
INDEX FOR NSFNET Policies and Procedures
3 Jun 93
This directory contains information about the policies and procedures
established by the National Science Foundation Network (NSFNET) and
its associated networks. These documents were collected by the NSF
Network Service Center (NNSC). With thanks to the NNSC and Bolt
Berenek and Newman, Inc., they are now available by anonymous FTP from
InterNIC Directory and Database Services on ds.internic.net.
... snip ...
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Internet Date: 21 Sept, 2025 Blog: Facebookre:
Internet trivia: not long after leaving IBM, was brought in as
consultant to small client/server company, two of the former Oracle
people (that had been in the jan92 Hester/Ellison meeting) were there
responsible for something called "commerce server" and they wanted to
do payment transactions. The startup had also invented this
technology they called "SSL" that they wanted to use; the result is
now sometimes called "electronic commerce". I had responsibility for
everything between
servers and the payment networks. I then do a talk on "Why Internet
Isn't Business Critical Dataprocessing" (based on documentation,
procedures, software had to do for "electronic commerce") that (RFC
Editor) Postel sponsored at USC/ISI.
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
Payment Network gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
some recent posts mentioning "Why Internet Isn't Business Critical
Dataprocessing" talk
https://www.garlic.com/~lynn/2025d.html#42 IBM OS/2 & M'soft
https://www.garlic.com/~lynn/2025b.html#97 Open Networking with OSI
https://www.garlic.com/~lynn/2025b.html#41 AIM, Apple, IBM, Motorola
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024g.html#80 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#71 Netscape Ecommerce
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024f.html#47 Postel, RFC Editor, Internet
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM VM370 And Pascal Date: 21 Sept, 2025 Blog: Facebook
Some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS and others went to the IBM Cambridge Science Center ("CSC") on the 4th flr and did virtual machines, the science center wide-area network (which also morphs into the IBM internal network, and technology used for the corporate sponsored univ BITNET), invented GML in 1969 (morphs into ISO SGML a decade later and HTML at CERN after another decade), and other online apps. Originally CSC wanted a 360/50 to modify with hardware virtual memory, but all the extra 360/50s were going to FAA/ATC and so they had to settle for a 360/40. They add hardware virtual memory to the 360/40 and do CP40/CMS. Then when the 360/67, standard with virtual memory, becomes available, CP40 morphs into CP67. lots more info:
Lots of univs and technology companies were getting 360/67s for tss/360, but tss/360 had difficulty coming to production ... and so lots of installations ran them as 360/65s with os/360. I was an undergraduate at a univ and took a 2-credit-hr intro to fortran/computers; it was getting a 360/67 for tss/360, replacing a 709/1401 ... the 360/67 came in within a year of my taking the intro class and I was hired fulltime with responsibility for OS/360 (the univ. datacenter shut down on weekends and I would have it dedicated, although 48hrs w/o sleep made monday classes hard). Student Fortran ran under a second on the 709, but well over a minute on OS360R9.5 (360/67 as 360/65). I install HASP, cutting the time in half. I then start redoing OS360R11 STAGE2 SYSGEN, carefully placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student Fortran never got better than the 709 until I install UofWaterloo WATFOR (20,000 cards/min on 360/65, 333/sec).
Then CSC comes out to install CP67 (3rd install after CSC itself and MIT Lincoln Labs; at the time there were 1100 people in the IBM TSS/360 organization and 11 in the CP67/CMS group) and I mostly play with it during my dedicated weekend time ... rewriting CP67 for running OS/360 in virtual machines. The test stream ran 322secs on the real machine, initially 856secs in a virtual machine (CP67 CPU 534secs); after a couple months I had reduced CP67 CPU from 534secs to 113secs.
I then start rewriting the dispatcher, scheduler, and paging, adding ordered seek queuing (replacing FIFO) and multi-page transfer channel programs (also replacing FIFO, optimized for transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a channel transfer peak of 270). Six months after the univ initial install, CSC was giving a one week class in LA. I arrive on Sunday afternoon and am asked to teach the class; it turns out that the people that were going to teach it had resigned the Friday before to join one of the 60s CP67 commercial online spin-offs. Before I graduate I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all data processing into an independent business unit). I think the Boeing Renton datacenter was possibly the largest in the world. When I graduate, I join CSC (instead of staying with the Boeing CFO).
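Back to the ordered seek queuing mentioned above: a minimal, purely illustrative sketch (hypothetical names, not the CP67 implementation) of the basic idea of replacing FIFO with insertion of each disk I/O request in arm-position order, so the arm services requests as it sweeps rather than thrashing back and forth:

/* Illustrative sketch only -- hypothetical names, not CP67 code.
 * Ordered seek queuing: instead of FIFO, insert each I/O request
 * into the device queue in arm-position (cylinder) order.
 */
struct ioreq {
    unsigned cyl;          /* target cylinder (seek address) */
    struct ioreq *next;
};

/* insert 'req' into a queue kept sorted by cylinder */
void ordered_seek_enqueue(struct ioreq **queue, struct ioreq *req)
{
    struct ioreq **pp = queue;
    while (*pp && (*pp)->cyl <= req->cyl)
        pp = &(*pp)->next;      /* find position in cylinder order */
    req->next = *pp;
    *pp = req;
}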
One of my hobbies at CSC was enhanced production operating systems for internal datacenters and the online sales&marketing support HONE systems were one of my first (and long time) customers.
Early last decade, I was asked to track down the decision to add virtual memory to all 370s and I find the staff member to the executive making the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used. As a result, a typical 1mbyte 370/165 would only run four concurrent regions, insufficient to keep the system busy and justified. Going to MVT running in a 16mbyte virtual address space allowed the number of regions to be increased by a factor of four (capped at 15 because of 4bit storage protect keys) with little or no paging (sort of like running MVT in a CP67 16mbyte virtual machine) ... aka VS2/SVS. Ludlow was doing the initial implementation on 360/67s and I would drop in periodically. He needed some code for the virtual memory tables and to do simple paging. The biggest issue was that EXCP/SVC0 was being passed channel programs with virtual addresses (and channels required real addresses), so channel program copies needed to be made, replacing virtual addresses with real (and Ludlow borrows CP67 CCWTRANS for crafting into EXCP/SVC0).
archived post about decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
Once 370 virtual memory architecture specification was available, CSC
does a CP67 "H" modification to also emulate 370 virtual memory
architecture and the new 370 instructions. Then a further CP67 "I"
modification is made for running on 370 architecture. In part because
CSC had online users from Boston/Cambridge institutions (staff,
professors, students):
my CP67L ran on the real 360/67
... CP67H ran in a CP67L 360/67 virtual machine
...... CP67I ran in a CP67H 370 virtual machine
CP67I was in general use a full year before the first virtual memory
engineering 370(/145) was ready to IPL. Then the 370/165 engineers
were complaining if they had to implement the full 370 virtual memory
architecture, it would result in six month slip in
announce/delivery. As a result, everything had to retrench to the
370/165 subset. Three engineers come out from San Jose to CSC and add
3330 & 2305 device support to CP67I for CP67SJ ... which was in
wide-spread use internally, even well after VM370 was available.
The decision to add virtual memory to all 370s had resulted in doing VM370/CMS and some of the CSC people go to the 3rd floor, taking over the IBM Boston Programming Center for the VM370 development group. In the morph of CP67->VM370, lots of stuff was simplified or dropped (like multiprocessor support). When the VM370 group outgrows the 3rd flr, they move out to the empty (former IBM SBC) bldg at Burlington Mall (on rt128). In 1974, I start adding lots of stuff back into a VM370R2-base for my internal CSC/VM (including kernel re-org for SMP, but not the actual multiprocessor support). Then for the VM370R3-base CSC/VM, I add multiprocessor back in, initially for HONE (US HONE had consolidated all its datacenters in Palo Alto; trivia: when FACEBOOK 1st moves into silicon valley, it is into a new bldg built next door to the former US HONE datacenter), so they can upgrade all their 168s to 2-CPU 168s (where, with some sleight of hand, was able to get twice the throughput of 1-CPU machines).
During Future System (early/mid 70s), internal politics was killing
off 370 efforts and the lack of new 370s is credited with giving the
clone 370 makers their market foothold. Then when FS implodes
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
there is mad rush to get stuff back into the 370 product pipelines, including kicking off quick&dirty 3033&3081 efforts in parallel. I get asked to help with doing a 370 16-CPU multiprocessor and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells head of POK that it could be decades before POK's favorite son operating system ("MVS") has ("effective") 16-CPU support; MVS documentation had 2-CPU support only getting 1.2-1.5 times the throughput of 1-CPU systems (because of MVS SMP overhead; note POK doesn't ship a 16-CPU machine until after turn of century). Head of POK then asks some of us to never visit POK again and directs the 3033 processor engineers, heads down and no distractions.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM VM370 And Pascal Date: 21 Sept, 2025 Blog: Facebook
re:
After leaving IBM in the early 90s, IBM was going through a lot of troubles and offloading all sorts of stuff (real estate, tools, software, people, organizations, etc) ... including lots of VLSI design tools to the major industry tools vendor. The Los Gatos VLSI lab had done a lot of Pascal VLSI tools (they had started using Metaware's TWS for various purposes, including doing mainframe Pascal) and they were all being offloaded. However, the major industry VLSI platform was SUN and so all those tools had to first be ported to SUN. I got a contract from LSG to port a 50K statement Pascal physical layout application (in retrospect it would have been easier to have ported it to SUN "C"; it wasn't clear that SUN Pascal had ever been used for anything other than educational purposes). It was easy to drop into SUN hdqtrs, but aggravating the situation was that SUN had outsourced Pascal to an organization on the opposite side of the planet (responsible for space station).
LSG was primarily a VM370 shop and in the late 70s & early 80s had enhanced CMS OS/360 simulation ... making it easier to migrate MVS apps to VM370. Major IBM VLSI shops on the east coast were running into problems with the MVS 7mbyte brick wall (requiring special MVS systems with CSA capped at 1mbyte for large fortran apps, any changes constantly banging against the 7mbyte limit). LSG had demonstrated they could move the east coast apps to VM370/CMS with nearly the full 16mbytes (minus 192k). The problem was that after Future System imploded, the head of IBM POK (high end mainframe) had convinced corporate to kill the VM370 product, shutdown the development center and transfer all the people to POK for MVS/XA (Endicott manages to save the VM370 product mission but had to recreate a development group from scratch). It would be a major loss of face for POK if the majority of IBM east coast VLSI MVS machines moved to VM370.
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
posts mentioning moving pascal 50K statement VLSI app to SUN:
https://www.garlic.com/~lynn/2015g.html#51 [Poll] Computing favorities
https://www.garlic.com/~lynn/2013g.html#87 Old data storage or data base
https://www.garlic.com/~lynn/2013c.html#53 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012m.html#21 The simplest High Level Language
https://www.garlic.com/~lynn/2011m.html#27 "Best" versus "worst" programming language you've used?
https://www.garlic.com/~lynn/2010n.html#54 PL/I vs. Pascal
https://www.garlic.com/~lynn/2009l.html#36 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2008j.html#77 CLIs and GUIs
https://www.garlic.com/~lynn/2005o.html#11 ISA-independent programming language
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM VM370 And Pascal Date: 21 Sept, 2025 Blog: Facebook
re:
Co-worker at science center responsible for the science center
wide-area network (that morphs into IBM internal network and
technology used for corporate sponsored univ BITNET) ... from one of
the GML inventors:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
Co-worker
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
Early 80s, got HSDT project, T1 and faster computer links (both satellite & terrestrial) and battles with the IBM communication group (in the 60s, IBM had the 2701 product supporting T1, but with the 70s transition to SNA and its issues, controllers were capped at 56kbits/sec).
Mid-80s, the IBM communication group was fighting to block release of mainframe tcp/ip support; when that failed, they changed tactics and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then add RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel media throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
HSDT was also working w/NSF director and was supposed to get $20M to
interconnect the NSF supercomputing centers. Then congress cuts the
budget, some other things happen and eventually an RFP is released (in
part based on what we already had running). NSF 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to modern internet.
1988, got HA/6000 project, initially for NYTimes to move their
newspaper system (ATEX) off DEC VAXCluster to RS/6000. I then rename
it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (oracle, sybase, ingres, informix) that had VAXCluster in same
source base with Unix. I do a distributed lock manager supporting
VAXCluster semantics (and especially Oracle and Ingres have a lot of
input on improving scale-up performance); also plans for 1gbit "FCS"
with HA/CMP. trivia: previously worked on original SQL/relational,
System/R with Jim Gray and Vera Watson. S/88 Product Administrator
started taking us around to their customers and also had me write a
section for the corporate continuous availability document (it gets
pulled when both AS400/Rochester and mainframe/POK complain they
couldn't meet requirements).
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid92 and 128-system
clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer ("SP1", for technical/scientific *ONLY*), and
couple weeks later Computerworld news 17feb1992 ... IBM establishes
laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
we are also told we couldn't work on anything with more than four
processors (and we leave IBM a few months later). Some speculation
that it would eat the mainframe in the commercial market. 1993
benchmarks (number of program iterations compared to MIPS reference
platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
Internet trivia: not long after leaving IBM, was brought in as
consultant to small client/server company, two of the former Oracle
people (that had been in the jan92 Hester/Ellison meeting) were there
responsible for something called "commerce server" and they wanted to
do payment transactions. The startup had also invented this technology
they called "SSL" they wanted to use, it is now sometimes called
"electronic commerce". I had responsibility for everything between
servers and the payment networks. I then do a talk on "Why Internet
Isn't Business Critical Dataprocessing" (based on documentation,
procedures, software had to do for "electronic commerce") that
(Internet Standards RFC Editor) Postel sponsored at USC/ISI.
Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
Original SQL/Relational posts
https://www.garlic.com/~lynn/submain.html#systemr
payment network gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM VM370 And Pascal Date: 22 Sept, 2025 Blog: Facebook
re:
In the mid-70s, I was characterizing that systems were getting faster, faster than disks were getting faster. In the early 80s, I wrote a tome that since 360 days, relative system disk throughput had declined by an order of magnitude (systems got 40-50 times faster while disks got 3-5 times faster). A disk division executive took exception and assigned the division performance group to refute it. After a couple weeks, they came back and essentially said I had slightly understated the problem. They then respun the analysis into a SHARE presentation on how to configure disks to improve system throughput (16Aug1984, SHARE 63, B874).
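The "order of magnitude" figure is just the ratio of the two speedups quoted above; a back-of-the-envelope check (numbers from the text, not new measurements):

# relative system/disk throughput since 360 days (figures from the text above)
cpu_speedup  = 45     # "systems got 40-50 times faster" -- midpoint
disk_speedup = 4      # "disks got 3-5 times faster"     -- midpoint
print(cpu_speedup / disk_speedup)   # ~11x, i.e. roughly an order of magnitude
                                    # decline in disk throughput relative to CPU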
Then as systems were getting larger, they needed to exceed the (VS2/SVS) cap of 15 concurrent regions ... resulting in VS2/SVS morphing into MVS where each region is given its own 16mbyte address space (using virtual memory address spaces to isolate the different regions). However, OS/360 (and descendants) have a heavily pointer-passing API ... so they map an image of the MVS kernel into 8mbytes of every virtual address space (leaving 8mbytes). Then because subsystems were (also) moved into their own virtual address spaces, a 1mbyte "common segment area" ("CSA") was needed for passing information (leaving 7mbytes). Then as systems increased, the CSA requirement grew somewhat proportional to the number of concurrently executing regions and subsystems, and by 3033, "CSA" (renamed "common system area") was typically 5-6mbytes (leaving only 2-3mbytes) and threatening to become 8mbytes (leaving zero).
This was part of the mad rush to MVS/XA (as countermeasure to MVS being left with zero space for applications).
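A quick sketch of the squeeze just described, using the sizes from the paragraphs above (the CSA values stepped through are illustrative):

# MVS 16mbyte virtual address space squeeze (sizes from the text above)
TOTAL  = 16   # mbytes in each virtual address space
KERNEL = 8    # MVS kernel image mapped into every address space

for csa in (1, 3, 5, 6, 8):          # common segment/system area growth over time
    app = TOTAL - KERNEL - csa
    print(f"CSA {csa}mb -> {app}mb left for the application region")
# at CSA=8mb nothing is left for applications ... hence the mad rush to MVS/XA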
posts mentioning the need to move from MVS to MVS/XA
https://www.garlic.com/~lynn/2025d.html#22 370 Virtual Memory
https://www.garlic.com/~lynn/2025c.html#108 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#79 IBM System/360
https://www.garlic.com/~lynn/2025c.html#59 Why I've Dropped In
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#11 what's a segment, 80286 protected mode
https://www.garlic.com/~lynn/2024f.html#31 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#83 Continuations
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024c.html#67 IBM Mainframe Addressing
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#29 Another IBM Downturn
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2021i.html#17 Versatile Cache from IBM
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM VM370 And Pascal Date: 22 Sept, 2025 Blog: Facebook
re:
Before the decision to add virtual memory to all 370s because of problems with MVT storage management ... Boeing Huntsville had been one of the customers that got a (2-CPU) 360/67 for tss/360 (& lots of 2250s for cad/cam work). TSS/360 wasn't coming to production so they ran it as two MVT systems supporting the CAD/CAM 2250 work. However, MVT's storage management problems were already recognized and Boeing Huntsville modified MVTR13 to run in virtual memory mode (a subset of the later VS2/SVS); it didn't actually do any paging, but used virtual memory mode as partial compensation for MVT's storage management problem. While I was in the Boeing CFO office (as an undergraduate) helping with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit), they transferred the Boeing Huntsville 360/67 up to Seattle. There were also lots of politics between the Renton datacenter (possibly largest in the world) director and the CFO, who only had a 360/30 up at Boeing Field (for payroll; although they enlarge the machine room to install a single-CPU 360/67 for me to play with when I wasn't doing other stuff).
posts mentioning being in Boeing CFO office helping with formation
of "BCS" ... and Huntsville modifying MVT to run in virtual memory
mode:
https://www.garlic.com/~lynn/2025b.html#117 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#95 MVT to VS2/SVS
https://www.garlic.com/~lynn/2025b.html#33 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2021g.html#39 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2018f.html#35 OT: Postal Service seeks record price hikes to bolster falling revenues
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM VM370 And Pascal Date: 22 Sept, 2025 Blog: Facebook
re:
Trivia: with the 23Jun1969 unbundling announce, IBM started charging for (application) software (made the case that kernel software should still be free), SE services, maint, etc .... there was an issue with trainee SEs. SE training used to be part of a large group at the customer site, but they couldn't figure out how NOT to charge for those trainee SEs (at the customer). This led to lots of internal "HONE" CP67/CMS datacenters, where branch office SEs could logon and practice with guest operating systems running in virtual machines. When I graduate and join CSC, one of my hobbies was enhanced production operating systems for internal datacenters and HONE was one of my first (and long time) customers.
CSC also ported APL\360 to CMS for CMS\APL (had to redo storage management, changing from 16kbyte swapped workspaces to large virtual memory, demand paged workspaces ... and also added an API to invoke system services ... like file I/O ... enabling a lot of real-world applications). HONE then started offering lots of CMS\APL-based sales & marketing support applications ... which came to dominate all HONE activity (guest operating system practice withering away). For VM370, PASC added support for the 370/145 APL microcode assist ... becoming APL\CMS. Along with virtual memory being added to all 370s, an APL was needed that could run on both VM370 and MVS ... eventually APL\SV and then VS\APL.
US HONE had consolidated all its datacenters in Palo Alto (trivia: when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next door to the former consolidated US HONE datacenter) ... at the same time as other HONE datacenters were sprouting up all over the world (HONE easily became the largest user of APL).
IBM 23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Downturn Date: 22 Sept, 2025 Blog: Facebook
1972, Learson tried (and failed) to block bureaucrats, careerists, and MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of management briefings 1958-1988
First half of 70s, was IBM's Future System; FS was totally different
from 370 and was going to completely replace it. During FS, internal
politics was killing off 370 projects and lack of new 370 is credited
with giving the clone 370 makers, their market foothold.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
Note: late 80s, a senior disk engineer gets a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance. However, the opening was that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing mainframe datacenters to more distributed computing friendly platforms. The disk division had come up with a number of solutions, but they were constantly being vetoed by the communication group (with their corporate ownership of everything that crossed the datacenter walls) trying to protect their dumb terminal paradigm. The senior disk division software executive's partial countermeasure was investing in distributed computing startups that would use IBM disks (he would periodically ask us to drop in on his investments to see if we could offer any assistance).
The communication group stranglehold on mainframe datacenters wasn't just disks and a couple years later, IBM has one of the largest losses in the history of US corporations and was being reorganized into the 13 "baby blues" in preparation for breaking up the company ("baby blues" a take-off on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal
Former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
Pension posts
https://www.garlic.com/~lynn/submisc.html#pension
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Supercomputer Date: 24 Sept, 2025 Blog: Facebook
During Future System (1st half of 70s)
internal politics was killing off 370 efforts and the lack of new 370s is credited with giving the clone 370 makers their market foothold. When FS implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. I get asked to help with doing a 370 16-CPU multiprocessor and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system ("MVS") has ("effective") 16-CPU support; MVS documentation had 2-CPU support only getting 1.2-1.5 times the throughput of 1-CPU systems (because of MVS SMP overhead; note POK doesn't ship a 16-CPU machine until after the turn of the century). The head of POK then asks some of us to never visit POK again and directs the 3033 processor engineers, heads down and no distractions (once 3033 is out the door, they start on trout/3090).
And I transfer out to SJR on the west coast and get to wander around IBM (and non-IBM) datacenters in silicon valley, including disk (bldg14) engineering and (bldg15) product test across the street. They were running 7x24, pre-scheduled, stand-alone testing and mention they had recently tried MVS, but it had 15min MTBF (requiring manual re-ipl) in that environment. I offer to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of on-demand, concurrent testing ... greatly improving productivity. Bldg15 gets the 1st engineering 3033 (outside POK processor engineering) and then the 1st engineering 4341 (outside Endicott). In Jan1979, a branch office hears about the bldg15 4341 and cons me into doing a benchmark for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). I then do an internal research report on the I/O integrity work and happen to mention the MVS MTBF, bringing down the wrath of the MVS organization on my head.
Decade later, 1988 IBM branch office asks me if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes fibre-channel standard ("FCS", including some stuff I had done in 1980 ... FCS: initially 1gbit transfer, full-duplex, aggregate 200mbyte/sec). Then IBM POK gets their serial stuff shipped (when it is already obsolete) as ESCON (initially 10mbytes/sec, later upgraded to 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy weight protocol that radically reduces throughput, eventually released as FICON.
2010, a z196 "Peak I/O" benchmark released, getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the same time a FCS is announced for E5-2600 server blade claiming over million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs recommend that SAPs (system assist processors that actually do I/O) be kept to 70% CPU (or 1.5M IOPS).
Note a max configured z196, 80 cores, industry benchmark 50BIPS (625MIPS/core), went for $30M. An E5-2600 server blade, 16 cores, industry benchmark 500BIPS (10 times max configured z196 & 31BIPS/core), had an IBM base list price of $1815. Shortly later, the industry press had server hardware makers saying they were shipping half their products (cpu, memory, etc) directly to large cloud datacenters (that claim they assemble their own systems at 1/3rd the cost of brand name servers) ... and IBM sells off its server business.
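Working the per-unit numbers from the two paragraphs above (benchmark and price figures as quoted; the derived ratios are just arithmetic, not additional data):

# z196 "Peak I/O" benchmark
print(2_000_000 / 104)        # ~19,230 IOPS per FICON (the "20K IOPS/FICON")
# max-configured z196 vs E5-2600 blade (industry benchmark figures as quoted)
print(50_000 / 80)            # z196: 50BIPS / 80 cores   = 625 MIPS per core
print(500_000 / 16)           # blade: 500BIPS / 16 cores = 31,250 MIPS (31BIPS) per core
print(500_000 / 50_000)       # blade is 10x a max-configured z196
# price per MIPS
print(30_000_000 / 50_000)    # z196:  $30M  / 50,000 MIPS  = $600/MIPS
print(1_815 / 500_000)        # blade: $1815 / 500,000 MIPS ~ $0.0036/MIPS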
Also 1988, HA/6000 was approved, originally for NYTimes to move their
newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (oracle, sybase, ingres, informix) that had VAXCluster in same
source base with Unix. I do a distributed lock manager supporting
VAXCluster semantics (and especially Oracle and Ingres have a lot of
input on improving scale-up performance). trivia: previously worked on
original SQL/relational, System/R with Jim Gray and Vera Watson. S/88
Product Administrator started taking us around to their customers and
also had me write a section for the corporate continuous availability
document (it gets pulled when both AS400/Rochester and mainframe/POK
complain they couldn't meet requirements).
Also get LLNL UNICOS "LINCS" ported to HA/CMP and get NCAR's filesystem spin-off "Mesa Archival" work done on HA/CMP ... was planning on using FCS.
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid92 and 128-system
clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer ("SP1", for technical/scientific *ONLY*), and
couple weeks later Computerworld news 17feb1992 ... IBM establishes
laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
We were also told we couldn't work on anything with more than 4-system clusters, and we leave IBM a few months later.
Some speculation that it would eat the mainframe in the commercial
market. 1993 benchmarks (number of program iterations compared to
industry MIPS reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
Former executive we had reported to, goes over to head up Somerset/AIM
(Apple, IBM, Motorola), single chip RISC with M88k bus/cache enabling
multiprocessor systems.
After leaving IBM, I was brought in as consultant to a small client/server company; two of the former Oracle people (that had been in the jan92 Hester/Ellison meeting) were there responsible for something called "commerce server" and they wanted to do payment transactions. The startup had also invented this technology they called "SSL" that they wanted to use; the result is now sometimes called "electronic commerce". I had responsibility for everything between the servers and the payment networks. I then do a talk on "Why Internet Isn't Business Critical Dataprocessing" (based on the documentation, procedures, and software I had to do for "electronic commerce") that (Internet Standards RFC Editor) Postel sponsored at USC/ISI.
i86 chip makers then do hardware layer that translate i86 instructions
into RISC micro-ops for actual execution (largely negating throughput
difference between RISC and i86); 1999 industry benchmark:
IBM PowerPC 440: 1,000MIPS
Pentium3: 2,054MIPS (twice PowerPC 440)
Dec2000, IBM ships 1st 16-processor mainframe (industry benchmark):
z900, 16 processors 2.5BIPS (156MIPS/processor)
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
original sql/relational posts
https://www.garlic.com/~lynn/submain.html#systemr
electronic commerce webservers to payment network posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
a few past posts mentioning i86 hardware translation to RISC micro-ops
https://www.garlic.com/~lynn/2024f.html#36 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024e.html#105 IBM 801/RISC
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#62 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2022g.html#82 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022b.html#64 Mainframes
https://www.garlic.com/~lynn/2021d.html#55 Cloud Computing
https://www.garlic.com/~lynn/2021b.html#66 where did RISC come from, Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021.html#1 How an obscure British PC maker invented ARM and changed the world
https://www.garlic.com/~lynn/2015c.html#110 IBM System/32, System/34 implementation technology?
https://www.garlic.com/~lynn/2015.html#44 z13 "new"(?) characteristics from RedBook
https://www.garlic.com/~lynn/2014h.html#68 Over in the Mainframe Experts Network LinkedIn group
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Fortran Date: 27 Sept, 2025 Blog: Facebook
I took a two credit hr intro to fortran/computers. At the end of the semester, I was hired to rewrite 1401 MPIO for the 360/30. The univ was getting a 360/67 (for tss/360) replacing 709/1401 and got a 360/30 temporarily replacing the 1401 (unit record front end for 709 tape->tape) pending 360/67 availability. The univ shutdown the datacenter on weekends and I would get the whole place dedicated (although 48hrs w/o sleep made monday classes hard). I was given a large stack of hardware & software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... and within a few weeks had a 2000 card assembler program. Within a yr of taking the intro class, the 360/67 arrives and I was hired fulltime responsible for os/360 (running as 360/65, tss/360 wasn't production). The 709 ran student fortran in under a second (tape->tape); initially w/MFTR9.5 it ran in over a minute. I install HASP, cutting the time in half. I then start redoing (MFTR11) STAGE2 SYSGEN (also running production w/HASP), carefully placing datasets and PDS members (optimizing arm seek and multi-track search), cutting another 2/3rds to 12.9secs. OS/360 student fortran never got better than the 709 until I install UofWaterloo WATFOR (360/65 w/HASP, 20,000 cards/min, 333 cards/sec ... typical student fortran 30-60 cards; operations would tend to accumulate a tray of student fortran cards before making a WATFOR run).
Then CSC comes out to install CP67 (precursor to VM/370, 3rd install
after CSC itself and MIT Lincoln Labs; at the time there were 1100
people in IBM TSS/360 organization and 11 in CP67/CMS group) and I
mostly play with it during my dedicated weekend time ... rewriting
CP67 for running OS/360 in virtual machines. Test stream ran 322secs
on real machine, initially 856secs in virtual machine (CP67 CPU
534secs), after a couple months I have reduced CP67 CPU from 534secs
to 113secs. I then start rewriting the dispatcher, scheduler, paging, adding ordered seek queuing (replacing FIFO) and multi-page transfer channel programs (also replacing FIFO, optimized for transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a channel transfer peak of 270). Six months after the CP67 install at the univ, CSC was giving a one week class in LA. I arrive on Sunday afternoon and am asked to
teach the class, it turns out that the people that were going to teach
it had resigned the Friday before to join one of the 60s CP67
commercial online spin-offs. More history
https://www.leeandmelindavarian.com/Melinda#VMHist
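As a side note on the ordered seek queuing mentioned above, the idea is simply to service outstanding requests in cylinder order along the direction of arm travel instead of first-in/first-out; a minimal sketch (illustrative only, not the CP67 code):

def elevator_order(pending, arm_cyl):
    # one sweep outward from the current arm position, then the stragglers
    # on the way back -- versus servicing requests in arrival (FIFO) order
    up   = sorted(c for c in pending if c >= arm_cyl)
    down = sorted((c for c in pending if c < arm_cyl), reverse=True)
    return up + down

# arm at cylinder 50, requests arrive as [90, 10, 85, 12, 88]:
#   FIFO order     : 40+80+75+73+76 = 344 cylinders of arm motion
#   elevator order : [85, 88, 90, 12, 10] -> 35+3+2+78+2 = 120 cylinders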
CP67 arrived (at univ) with 1052 & 2741 terminal support with automagic terminal identification, using the SAD CCW to switch the port terminal type scanner. The univ had some number of TTY33&TTY35 terminals and I add TTY ASCII terminal support integrated with the automagic terminal ID. I then wanted to have a single dial-in number ("hunt group") for all terminals. It didn't quite work, IBM had taken a short cut and hard-wired the line speed for each port. This kicks off a univ effort to do our own clone controller: build a channel interface board for an Interdata/3 programmed to emulate an IBM controller, with the addition that it could do auto line-speed (dynamic auto-baud). It was later upgraded to an Interdata/4 for the channel interface with a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin-Elmer) sold it as a clone controller and four of us get written up as responsible for (some part of) the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world (360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room; joke that Boeing was getting 360/65s like other companies got keypunch machines). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I wasn't doing other stuff). Renton did have a (lonely) 360/75 (among all the 360/65s) that was used for classified work (black rope around the area, heavy black felt draped over the console lights & 1403s, with guards at the perimeter when running classified). Enormous amounts of Fortran.
Boeing Huntsville had gotten a 2-CPU 360/67 with lots of 2250s for CAD/CAM work, however it was being run as two OS/360 MVT systems. They had run into the MVT storage management problem and had modified MVTR13 to run in virtual memory mode (w/o paging) to partially compensate for the problems (sort of a subset of the later adding of virtual memory to all 370s and the initial VS2/SVS, akin to running MVT in a CP67 16mbyte virtual machine).
When I graduate, I join IBM Cambridge Science Center (instead of
staying with Boeing CFO) and one of my hobbies was enhanced production
operating systems for internal datacenters. One of my first (and long
time) customers was the internal online sales&marketing support HONE
systems. I also got to continue to go to user group meetings (SHARE)
and drop in on customers. Director of one of the largest financial
industry datacenters liked me to stop by and talk technology. At some
point the IBM branch manager had horribly offended the customer and in
retaliation, they ordered an Amdahl system (single one in a large sea
of true blue machines). Note Amdahl had left IBM after winning the battle to make ACS 360-compatible, and then ACS/360 gets killed
https://people.computing.clemson.edu/~mark/acs_end.html
Up until then Amdahl had been selling into technical/scientific and
univ market and this would be the 1st for true blue, commercial
account. IBM asked me to go live on-site for 6-12 months (apparently
to help obfuscate motivation for the order). I talk it over with the
customer and then decline IBM's offer. I was then told that the branch
manager was a good sailing buddy of IBM's CEO and if I didn't do it, I
could forget career, promotions, raises. Trivia: this wasn't long
after Learson tried (and failed) to block the bureaucrats, careerists,
and MBAs from destroying Watson culture and legacy. pg160-163, 30yrs
of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
Note: 20yrs after Learson's failure, IBM has one of the largest losses
in the history of US companies and was being re-orged into the 13
"baby blues" in preparation for breaking up the company (a take-off on
the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR.
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
terminal clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
a few recent posts mentioning watfor, boeing cfo, renton, huntsville
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Stanford WYLBUR, ORVYL, MILTON Date: 02 Oct, 2025 Blog: Facebook
No, but lots of places got 360/67s for tss/360; most places just used them as 360/65s for OS/360. CSC had done CP40, modifying a 360/40 with hardware virtual memory; it morphs into CP67 when the 360/67 (standard with virtual memory) became available, and a lot of places started using 360/67s for CP67/CMS.
I was an undergraduate but hired fulltime responsible for OS/360 when CSC
came out to install CP67 (3rd installation after CSC itself and MIT
Lincoln Labs). UofMichigan did their own virtual memory system (MTS)
for 360/67 (later ported to MTS/370).
https://en.wikipedia.org/wiki/Michigan_Terminal_System
Stanford did their own virtual memory system for 360/67 which included
WYLBUR .... which was later ported to MVS.
https://www.slac.stanford.edu/spires/explain/manuals/ORVMAN.HTML
ORVYL was first designed by Roger Fajman and, under the direction of
Rod Fredrickson, implemented at the Campus Facility of the Stanford
Computation Center in 1968. John Borgelt developed the original ORVYL
file system. In 1971, a means for batch access to the ORVYL file
system (VAM) was designed by Richard Levitt and later enhanced by
Carol Lennox. Throughout the period 1968-71, ORVYL also benefitted
from important contributions made by Richard Carr, Don Gold, and James
Moore.
... snip ...
Paging system trivia: as an undergraduate, part of my (CP67) rewrite included a Global LRU page replacement algorithm ... at a time when the ACM articles were about Local LRU. After graduation, I joined CSC and then transferred to San Jose Research and worked with Jim Gray and Vera Watson on (the original SQL/relational) System/R. Then Jim leaves for Tandem, fall of 1980 (palming some stuff off on me). At the Dec81 ACM SIGOPS meeting, Jim asks me if I can help a Tandem co-worker get their Stanford Phd (one of the people that had worked on ORVYL). It involved global LRU page replacement and the forces from the late-60s ACM local LRU work were lobbying to block giving a Phd for anything involving global LRU. I had lots of data on my undergraduate Global LRU work at CSC, which ran a 768kbyte 360/67 (104 pageable pages after fixed requirement) with 75-80 users. I also had lots of data from the IBM Grenoble Science Center that had modified CP67 to conform to the 60s ACM local LRU literature (1mbyte 360/67, 155 pageable pages after fixed requirement). CSC with 75-80 users had better response and throughput (104 pageable pages) than Grenoble running 35 users (similar workloads and 155 pages).
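For readers who haven't bumped into the distinction: local LRU partitions the page frames per task and replaces only within the faulting task's own partition, while global LRU picks the replacement victim across all resident pages system-wide, usually via a clock/reference-bit approximation. A minimal clock-style sketch (illustrative only, not the CP67 implementation; the names are made up):

class Frame:
    # one real page frame; "referenced" models the hardware reference bit,
    # set whenever the page is touched and reset by the replacement scan
    def __init__(self, owner, vpage):
        self.owner, self.vpage = owner, vpage
        self.referenced = False

class ClockReplacer:
    def __init__(self, frames):
        self.frames = frames   # ALL pageable frames, regardless of owning task
        self.hand = 0

    def select_victim(self):
        # circular scan: referenced frames get a second chance (bit reset);
        # the first unreferenced frame found is the global replacement victim
        while True:
            f = self.frames[self.hand]
            self.hand = (self.hand + 1) % len(self.frames)
            if f.referenced:
                f.referenced = False
            else:
                return f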
Early last decade, I was asked to track down decision to add virtual
memory to all 370s, I found staff to executive making
decision. Basically MVT storage management was so bad that region
sizes had to be specified four times larger than used, so a typical
1mbyte 370/165 only ran four regions concurrently, insufficient to
keep system busy and justified. Going to 16mbyte virtual address space
(VS2/SVS, sort of like running MVT in a CP67 16mbyte virtual
machine) allows concurrent regions to increase by factor of four
times (capped at 15, 4bit storage protect keys), with little or no
paging. Ludlow was doing the initial implementation on 360/67 offshift
in POK and I dropped by a few times. Initially a little bit of code
for the virtual memory tables and some simple paging. Biggest effort
was all channel programs passed to EXCP/SVC0 had virtual addresses,
EXCP had to make copies replacing virtual addresses with real and
Ludlow borrows CP67 CCWTRANS to craft into EXCP (EXCPVR was for
special subsystems that could fix real storage and passed channel
programs with real addresses). Archived post with some email exchange
about adding virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
page replacement posts
https://www.garlic.com/~lynn/subtopic.html#clock
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Stanford WYLBUR, ORVYL, MILTON Date: 02 Oct, 2025 Blog: Facebook
re:
other trivia: before I graduated, I was hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world (360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room; joke that Boeing was getting 360/65s like other companies got keypunches). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the room to install a 360/67 for me to play with when I wasn't doing other stuff). Boeing Huntsville had gotten a 2-CPU 360/67 and lots of 2250 displays for CAD/CAM. TSS/360 never came to production so when it arrived, it was configured as two MVT systems. Boeing had already run into the MVT storage management problem and modified MVTR13 to run in virtual memory mode (w/o paging), sort of an early subset of VS2/SVS. When I graduate, I join IBM CSC (instead of staying with the Boeing CFO).
global LRU trivia: in late 70s and early 80s, I had been blamed for
online computer conferencing on the internal network (larger than
arpanet/internet from just about the beginning until sometime mid/late
80s, about the time it was forced to convert to SNA/VTAM). It really
took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem; only about 300 actively participated but claims were that 25,000 were reading (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me). One of the results was
officially supported software and sanctioned "forums", also a
researcher was paid to sit in back of my office for nine months
studying how I communicated. Results were IBM research reports,
conference papers&talks, books and Stanford Phd (joint with Language
and Computer AI, Winograd was advisor on Computer AI side). From
IBMJargon:
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
Late '81, when I went to send the response to Jim's global LRU
request, it was blocked by IBM executives and it wasn't until nearly a
year later that I was permitted to respond (I hoped it was punishment for the computer conferencing rather than them meddling in an academic dispute).
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock
posts mentioning Boeing CFO, Renton, Huntsville,
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025d.html#95 IBM VM370 And Pascal
https://www.garlic.com/~lynn/2025b.html#117 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2021g.html#39 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#35 OT: Postal Service seeks record price hikes to bolster falling revenues
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Rapid Response Date: 02 Oct, 2025 Blog: Facebook
I had originally modified CP67 with dynamic adaptive resource management as an undergraduate in the 60s. When I graduate, I join IBM CSC and one of my hobbies was enhanced production operating systems for internal datacenters (the internal sales&marketing support HONE systems were one of my first, and long time, customers). Later with the decision to add virtual memory to all 370s and the morphing of CP67->VM370, lots of stuff was simplified and/or dropped (including multiprocessor support). In 1974, for a VM370R2 base for my CSC/VM, I started putting stuff back in (including kernel re-org for multiprocessor, but not the actual support). Then for the VM370R3 base, I put multiprocessor support back in, initially for HONE so they can upgrade all their 168s to two CPU systems (and with some sleight of hand and cache-affinity, the 2-CPU systems were getting twice the throughput of the 1-CPU systems). US HONE had consolidated all their datacenters in Palo Alto (trivia: when FACEBOOK 1st moves into silicon valley, it is to a new bldg built next door to the former US HONE datacenter) ... and other HONE systems were starting to sprout up all over the world.
Early 80s, lots of studies showed that .25sec trivial response improved productivity, and some number of internal datacenters (w/o my CSC/VM, later SJR/VM) were claiming .25sec trivial system response. The 3272 channel attach controller hardware took .086sec, and I had some large San Jose systems with .11sec trivial system response (for a total of .196sec seen at the terminal). For the new 3278 (& 3274 controller), a lot of the electronics was moved back into the controller (reducing 3278 manufacturing costs), driving up coax protocol chatter and 3274 hardware response to .3-.5secs (depending on the amount of data), making .25sec impossible. Letters to the 3278 product administrator complaining got a response that the 3278 wasn't for interactive computing, but data entry (MVS/TSO never noticed since they rarely saw even 1sec system response; later IBM/PC 3277 emulation boards had 4-5 times the throughput of 3278 emulation boards). Also with 3270 terminals being half-duplex (not interactive computing), the keyboard would lock if a key was pressed at the same moment as a write to the screen (which then required stopping and reset). YKT did a 3277 FIFO box: unplug the keyboard from the head, plug the FIFO box into the head, plug the keyboard into the FIFO box (it would delay any keystrokes while the screen was being written).
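The response budget above is simple addition (figures from the text; a minimal check, not new measurements):

# trivial-response budget, figures from the paragraph above
system = 0.110                          # large San Jose VM system response
print(system + 0.086)                   # + 3272/3277 hardware = 0.196s, under the .25sec target
print(system + 0.300, system + 0.500)   # + 3274/3278 hardware = 0.41-0.61s, over .25sec
# even an instantaneous system response couldn't meet .25sec with 3274/3278 hardware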
trivia: when Future System imploded there was mad rush to get stuff
back into the 370 product pipelines, including quick&dirty 3033&3081
efforts in parallel.
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
I get asked to help with a 16-CPU 370 effort and we con the 3033 processor engineers (3033 started out remapping 168 logic to 20% faster chips) into working on it in their spare time (a lot more interesting than the 168 logic remapping). Everybody thought it was great until somebody tells the head of POK (IBM high-end 370) that it could be decades before POK's favorite son operating system ("MVS") had ("effective") 16-CPU support (MVS docs had 2-CPU throughput only 1.2-1.5 times the throughput of a single CPU because of high-overhead multiprocessor support; POK doesn't ship a 16-CPU system until after the turn of the century). The head of POK then invites some of us to never visit POK again and directs the 3033 processor engineers, heads down and no distractions.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
some interactive response, 3277, 3278 posts
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015g.html#58 [Poll] Computing favorities
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009q.html#72 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009q.html#53 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009e.html#19 Architectural Diversity
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Rapid Response Date: 02 Oct, 2025 Blog: Facebook
re:
Very early 70s ... there was an IBM internal symposium at the old Marriott motel on the Arlington side of the river. FSD (Harlan Mills) gave a talk about the super programmer and YKT had a talk by a human factors group that studied peoples' perception threshold ... it varied from .09sec to .2+sec depending on the person ... a decade later there was an article about how fast signals propagate in the brain ... with similar person-to-person variation.
trivia: in addition to the YKT FIFO box for the 3277 ... there was a soldering hack for the 3277 keyboard that adjusted the key repeat delay and repeat rate ... at the fastest setting the screen update lagged behind ... because the screen update would continue after the key was released (it required a little practice to get the screen update to stop at exactly the desired cursor position).
some past symposium posts
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2010q.html#14 Compressing the OODA-Loop - Removing the D (and mayby even an O)
https://www.garlic.com/~lynn/2005r.html#19 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2002i.html#49 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2001h.html#48 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Rapid Response Date: 02 Oct, 2025 Blog: Facebook
re:
other trivia: in 1974, CERN gave a SHARE presentation comparing MVS/TSO and VM370/CMS ... inside IBM, copies were stamped "IBM Confidential - Restricted" (2nd highest security classification), available on a need-to-know basis only ... not wanting internal employees exposed to the information.
Later, after FS implodes, the head of POK manages to convince corporate to kill the VM370 product, shut down the development group and transfer all the people to POK for MVS/XA (Endicott manages to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch).
They weren't going to tell the people until the very last minute, to minimize the number that might escape into the Boston area. The shutdown managed to leak early; it was about the time of the infancy of DEC VAX/VMS, and the joke was that the head of POK was a major contributor to VMS.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
some recent posts mentioning head of POK getting VM370 canceled
and transferring all the people to POK
https://www.garlic.com/~lynn/2025d.html#92 IBM VM370 And Pascal
https://www.garlic.com/~lynn/2025d.html#85 IBM Water Cooled Systems
https://www.garlic.com/~lynn/2025d.html#79 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#76 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#68 VM/CMS: Concepts and Facilities
https://www.garlic.com/~lynn/2025d.html#61 Amdahl Leaves IBM
https://www.garlic.com/~lynn/2025d.html#59 IBM Example Programs
https://www.garlic.com/~lynn/2025d.html#51 Computing Clusters
https://www.garlic.com/~lynn/2025d.html#44 IBM OS/2 & M'soft
https://www.garlic.com/~lynn/2025d.html#35 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#34 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#30 370 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#23 370 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#19 370 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#18 Some VM370 History
https://www.garlic.com/~lynn/2025d.html#12 IBM 370/168
https://www.garlic.com/~lynn/2025d.html#11 IBM 4341
https://www.garlic.com/~lynn/2025d.html#6 SLAC and CERN
https://www.garlic.com/~lynn/2025c.html#111 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#110 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#63 mainframe vs mini, old and slow base and bounds, Why I've Dropped In
https://www.garlic.com/~lynn/2025c.html#12 IBM 4341
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025b.html#123 VM370/CMS and MVS/TSO
https://www.garlic.com/~lynn/2025b.html#118 IBM 168 And Other History
https://www.garlic.com/~lynn/2025b.html#84 IBM 3081
https://www.garlic.com/~lynn/2025b.html#69 Amdahl Trivia
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#27 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025.html#130 Online Social Media
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2025.html#94 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2025.html#70 VM370/CMS, VMFPLC
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#20 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#10 IBM 37x5
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Rapid Response Date: 02 Oct, 2025 Blog: Facebook
re:
SJR got 168/MVS and 158/VM370 systems to replace a 195/MVT. All the 3330 strings were dual-channel to both systems, but with a strict rule about never mounting MVS packs on VM370-designated 3830s & 3330 strings. One morning operations happened to mount an MVS pack on a "VM370"-designated string and within five minutes was getting irate calls from all over the bldg complaining about VM370 response. It turns out the MVS pack had 2-3 cylinder PDS directories and was doing a large number of full-cylinder multi-track searches, locking up the (channel, controller, string, drive) for 19 revolutions @60rev/sec ... or 19/60 secs each (making it impossible to meet .25sec response).
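The arithmetic behind that, as a minimal sketch (Python; all figures from the paragraph above):

revs_per_search = 19        # full-cylinder multi-track search = 19 tracks/revolutions
rev_per_sec = 60            # 3330 spins at 60 rev/sec

busy_per_search = revs_per_search / rev_per_sec
print(busy_per_search)      # ~0.317 sec with channel/controller/string/drive locked up
print(1 / busy_per_search)  # only ~3 such searches/sec possible on that path,
                            # and each one alone already blows the .25sec goal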
The demand that the MVS pack be moved was met with operations saying they would get around to it 2nd shift. We then put up a highly VM370-optimized VS1 pack on a 168/MVS string ... and even though VS1 was running in a virtual machine on a heavily loaded 158 VM370 system, it was able to bring the 168 to its knees ... and operations agreed to immediately move the MVS pack to an MVS string (if we moved the VS1 pack).
Another situation was a national grocery retailer with a large multi-CEC loosely-coupled system ... hundreds of stores with store-controller workload distributed across the systems ... the problem was that all the systems shared a single large store-controller application dataset ... which topped out at 7 I/Os per second (because of PDS directory multi-track search), or two store-controller app loads/sec (for hundreds of stores). All the usual IBM MVS experts had been brought through ... but couldn't recognize the problem because each individual system averaged only one or two I/Os per second for that drive ... it wasn't until all the systems' activity was aggregated and the 7 I/Os/sec correlated with the peak degradation that it was immediately obvious.
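A small sketch of why the per-system view hid it (Python; the per-CEC rates below are made-up illustrative numbers, and the ~3.5 I/Os per app load is only implied by the 7 I/Os/sec vs two loads/sec figures in the text):

per_system_io = [2, 1, 2, 1, 1]        # hypothetical I/Os/sec seen on each CEC
aggregate = sum(per_system_io)         # 7 I/Os/sec hitting the shared pack
print(aggregate)

ios_per_app_load = 3.5                 # implied by 7 I/Os/sec ~= 2 loads/sec
print(aggregate / ios_per_app_load)    # ~2 store-controller app loads/sec,
                                       # for hundreds of stores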
posts mentioning DASD, CKD, FBA, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd
some past posts reference SJR, 195/mvt, 168/mvs, 158/vm370 "problem"
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025.html#85 Stress-testing of Mainframes (the HASP story)
https://www.garlic.com/~lynn/2025.html#75 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2024.html#75 Slow MVS/TSO
https://www.garlic.com/~lynn/2022c.html#101 IBM 4300, VS1, VM370
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2019b.html#15 Tandem Memo
https://www.garlic.com/~lynn/2018.html#93 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2018.html#46 VSE timeline [was: RE: VSAM usage for ancient disk models]
https://www.garlic.com/~lynn/2013i.html#36 The Subroutine Call
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2011b.html#76 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011.html#36 CKD DASD
https://www.garlic.com/~lynn/2007f.html#20 Historical curiosity question
https://www.garlic.com/~lynn/2005u.html#44 POWER6 on zSeries?
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Rapid Response Date: 03 Oct, 2025 Blog: Facebook
re:
note: besides my CSC/VM for internal datacenters ... there was also the "Common System" (CSL) from the Kingston lab ... early on it had a schedule/dispatching delayed-response hack (similar to the MVS hack) ... until my CSC/VM dynamic adaptive resource manager was shown to improve productivity.
Part of the MVS culture was fixed manual tuning objectives (rather than dynamic adaptive). NOTE: the 23Jun1969 unbundling started to charge for (application) software (though IBM managed to make the case that kernel software was still free), SE services, maint, etc. With the rise of the clone 370 makers and the FS implosion, there was a decision to transition to kernel software charging, and some of my CSC/VM was selected as the initial guinea pig. Some (MVS culture) expert from corporate reviewed it and said he wouldn't sign off because it didn't have any manual tuning knobs (and everybody knew that manual tuning knobs were state-of-the-art). So I added some manual tuning knobs, with full source and detailed description ... with an embedded "joke" (from operations research: dynamic adaptive had more degrees of freedom and could compensate for any manual tuning setting). Dynamic adaptive was in the (DMK)STP module (from the tv advertisements) and the manual tuning knobs were in (DMK)SRM (as part of ridiculing MVS). By the early 1980s, the full kernel was being charged for ... followed by the OCO-wars (object code only).
Other trivia: in the transition of CP67->VM370, lots of stuff was simplified and/or dropped (including shared-memory multiprocessor support). For the initial 1974, VM370R2-based CSC/VM, I put in the kernel reorg needed for multiprocessor operation, but not the actual multiprocessor support ... and this was the base for my charged-for kernel software guinea pig. Then for the VM370R3-based CSC/VM, I included multiprocessor support, originally so the US HONE (online sales&marketing application support) could upgrade their 168 systems from 1-CPU to 2-CPU systems.
Initially, charging for kernel software came with a policy that new hardware/device support had to be "free". They then wanted to release multiprocessor support in VM370R4 ... but it required the kernel reorg that was part of my charged-for product (which would have violated the policy that new hardware/device support was still free). Eventually the decision was made to move 70-80 percent of the code lines from my charged-for product into the free VM370R4 base.
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
IBM 23jun1969 unbundling
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource manager posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Rapid Response Date: 03 Oct, 2025 Blog: Facebook
re:
other trivia: before MS/DOS:
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle Computer Products
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle Computer Products, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP/67 (VM370 precursor) at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
... and ... after transferring from CSC to SJR (on the west coast), I got to wander around IBM (and non-IBM) datacenters in Silicon Valley, including disk bldg14 (engineering) and bldg15 (product test) across the street.
They were doing 7x24, prescheduled, stand-alone mainframe testing and said that they had recently tried MVS but it had 15min MTBF (requiring manual re-ipl, in that environment). I offer to rewrite the I/O supervisor, making it bullet proof and never fail, so they could have any amount of on-demand, concurrent testing, greatly improving productivity. Bldg15 tended to get very early engineering systems (for I/O testing) and got the first engineering 3033 (outside POK processor engineering). Testing took very little CPU, so we scrounge up a 3830 and a 3330 string and put up our own private online service.
At the time "air bearing" simulation (part of thin-film head design) was getting a couple turn-arounds a month on SJR 370/195 MVT system. So we sent them up on the bldg15 3033 and they could get multiple turn-arounds/day.
I then do an (internal IBM) research report on I/O integrity work and happen to mention the MVS 15min MTBF ... bringing down the wrath of the MVS organization on my head.
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: SASOS and virtually tagged caches Newsgroups: comp.arch Date: Sat, 04 Oct 2025 14:17:32 -1000
John Levine <johnl@taugh.com> writes:
The problem was that as systems got larger/faster, they needed to move past 15 concurrent regions ... which resulted in giving each concurrently executing region/program its own 16mbyte virtual address space (VS2/MVS). However, OS/360 & descendants had heavily pointer-passing APIs (creating a different problem), and so an 8mbyte image of the MVS kernel was mapped into every 16mbyte virtual address space (leaving 8mbytes). Then, because each subsystem was moved into its own separate 16mbyte virtual address space, a 1mbyte "Common Segment Area" (CSA) was mapped into every virtual address space for passing arguments/data back and forth between applications and subsystems (leaving 7mbytes).
Then, because the space requirements for passing arguments/data back and forth were somewhat proportional to the number of subsystems and concurrently running regions/applications, the CSA started to explode, becoming the Common System Area (still CSA), running 5-6mbytes (leaving 2-3mbytes for regions/applications) and threatening to become 8mbytes (leaving zero for regions/applications). At the same time, the aggregate space requirements of concurrently running applications were exceeding 16mbytes of real addressing ... and in the 2nd half of the 70s, 3033s were retrofitted for 64mbytes of real addressing by taking two unused bits in the page table entry and prefixing them to the 12bit (4k-page) real page number, for 14bits or 64mbytes (instructions were still limited to 16mbyte addresses, but virtual pages could be loaded and run "above the 16mbyte line").
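The address-space and real-addressing arithmetic from the two paragraphs above, as a minimal sketch (Python; all figures from the text):

AS, kernel = 16, 8                 # mbytes: per-region virtual address space, MVS kernel image
for csa in (1, 5, 6, 8):           # CSA growth: 1mbyte -> 5-6mbytes -> threatening 8mbytes
    print(csa, AS - kernel - csa)  # what's left for the region/application: 7, 3, 2, 0

PAGE = 4 * 1024
print((2**12) * PAGE)              # 12-bit real page number: 16mbytes of real storage
print((2**14) * PAGE)              # +2 borrowed PTE bits = 14 bits: 64mbytes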
Then part of the 370/xa "access registers" design was retrofitted to the 3033 as dual address space mode. On a call to a subsystem, the caller's address space pointer could be moved into the secondary address space register and the subsystem's address space pointer moved into primary. The subsystem could then access the caller's (secondary) virtual address space w/o needing data to be passed back&forth in CSA. For 370/xa, program call/return instructions could perform the primary/secondary address space switches entirely in hardware.
I had also started pontificating that a lot of OS/360 had heavily leveraged the I/O system to compensate for limited real storage (and descendants had inherited it). In the early 80s, I wrote a tome that relative system disk I/O throughput had declined by an order of magnitude: disk throughput got 3-5 times faster while systems got 40-50 times faster (a major motivation for constantly needing an increasing number of concurrently executing programs). A disk division executive took exception and directed the division performance organization to refute my claims. After a couple weeks, they came back and basically said that I had slightly understated the problem. They then respun the analysis for a SHARE (user group) presentation on how to configure/manage disks for improved system throughput (16Aug1984, SHARE 63, B874).
3033 above the "16mbyte" line hack: There were problems with parts of system that required virtual pages below the "16mbyte line". Introduced with 370 was I/O channel program IDALs that were full-word addresses. Somebody came up with idea to use IDALs to write a virtual page (above 16mbyte) to disk and then read it back into address <16mbyte. I gave them a hack using virtual address space table that filled in page table entries with the >16mbyte page number and <16mbyte page number and use MVCL instruction to copy the virtual page from above 16mbyte line to below the line.
some recent posts mentioning MVT, SVS, MVS, CSA, B874
https://www.garlic.com/~lynn/2025d.html#94 IBM VM370 And Pascal
https://www.garlic.com/~lynn/2025c.html#108 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#79 IBM System/360
https://www.garlic.com/~lynn/2025b.html#33 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2019b.html#94 MVS Boney Fingers
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Internal Network, Profs and VMSG Date: 04 Oct, 2025 Blog: Facebook
MIT CTSS/7094 had a form of email.
Then some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS. Others went to the IBM Science Center on the 4th flr and did virtual machines (first modifying a 360/40 with virtual memory and doing CP40/CMS, which morphs into CP67/CMS when the 360/67, standard with virtual memory, becomes available) and the science center wide-area network ... which morphs into the corporate internal network, larger than the arpanet/internet from the science-center beginning until sometime mid/late 80s (about the time it was forced to convert to SNA/VTAM); the technology was also used for the corporate-sponsored univ BITNET. The science center also invented GML in 1969 (precursor to SGML and HTML), lots of performance tools, etc. Later, when the decision was made to add virtual memory to all 370s, there was a project that morphed CP67 into VM370 (although lots of stuff was initially simplified or dropped).
Account of science center wide-area network by one of the science
center inventors of GML
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
PROFS started out picking up internal apps and wrapping 3270 menus around them. They picked up a very early version of VMSG for the email client. When the VMSG author tried to offer them a much enhanced version of VMSG, the PROFS group tried to have him separated from the company. The whole thing quieted down when he demonstrated that every VMSG (and PROFS email) had his initials in a non-displayed field. After that he only shared his source with me and one other person.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
some recent posts mentioning PROFS & VMSG:
https://www.garlic.com/~lynn/2025d.html#43 IBM OS/2 & M'soft
https://www.garlic.com/~lynn/2025d.html#32 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025c.html#113 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025b.html#60 IBM Retain and other online
https://www.garlic.com/~lynn/2025.html#90 Online Social Media
https://www.garlic.com/~lynn/2024f.html#91 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#44 PROFS & VMSG
https://www.garlic.com/~lynn/2024e.html#99 PROFS, SCRIPT, GML, Internal Network
https://www.garlic.com/~lynn/2024e.html#48 PROFS
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP
https://www.garlic.com/~lynn/2024b.html#109 IBM->SMTP/822 conversion
https://www.garlic.com/~lynn/2024b.html#69 3270s For Management
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023f.html#71 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023c.html#78 IBM TLA
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#32 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023c.html#5 IBM Downfall
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2022f.html#64 Trump received subpoena before FBI search of Mar-a-lago home
https://www.garlic.com/~lynn/2022b.html#29 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#89 IBM PROFs
https://www.garlic.com/~lynn/2021j.html#83 Happy 50th Birthday, EMAIL!
https://www.garlic.com/~lynn/2021j.html#23 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#86 IBM EMAIL
https://www.garlic.com/~lynn/2021i.html#68 IBM ITPS
https://www.garlic.com/~lynn/2021h.html#50 PROFS
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2021e.html#30 Departure Email
https://www.garlic.com/~lynn/2021d.html#48 Cloud Computing
https://www.garlic.com/~lynn/2021c.html#65 IBM Computer Literacy
https://www.garlic.com/~lynn/2021b.html#37 HA/CMP Marketing
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM System Meter Date: 05 Oct, 2025 Blog: Facebook
In the 60s, with rented/leased systems, the system meter ran whenever any processor and/or channel was busy, and all processors and channels had to be idle for at least 400ms before the system meter would stop. Cambridge Science Center and the commercial online spin-offs did a lot of work on CP/67 (precursor to VM370) for leaving the system up 7x24 while letting the system meter stop when there was no activity (including channel programs that would go idle when no data was arriving, but come immediately active when characters started arriving), as well as for (off-shift) dark room operation. trivia: long after systems were converted from lease to sale, MVS still had a timer event that woke up every 400ms.
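A minimal sketch of that meter rule (Python; the one-millisecond timer-pop cost and the 10-second horizon are illustrative assumptions): the meter only stops after 400ms of continuous idle, so a timer event popping every 400ms never lets it stop.

METER_STOP_IDLE = 0.400        # seconds of continuous idle needed for the meter to stop

def meter_ever_stops(busy_intervals, horizon):
    # busy_intervals: time-ordered (start, end) busy periods, in seconds
    last_busy_end = 0.0
    for start, end in busy_intervals:
        if start - last_busy_end >= METER_STOP_IDLE:
            return True                        # a long enough idle gap appeared
        last_busy_end = max(last_busy_end, end)
    return horizon - last_busy_end >= METER_STOP_IDLE

timer_pops = [(i * 0.4, i * 0.4 + 0.001) for i in range(25)]   # 400ms wakeups
print(meter_ever_stops(timer_pops, horizon=10.0))              # False: meter never stops
print(meter_ever_stops([], horizon=10.0))                      # True: truly idle system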
In the wake of IBM's Future System implosion there was a mad rush to get
stuff back into the 370 product pipelines (during FS, internal
politics had been killing off 370 efforts, and the lack of new 370s was
credited with giving the clone 370 makers their market foothold)
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
Endicott cons me into helping with (VM370 microcode assist) ECPS for
138&148 (later 4300s) ... archived post with initial analysis for
selecting ECPS, running microcode ten times faster than native 370
instructions:
https://www.garlic.com/~lynn/94.html#21
Then Endicott tried to get corporate permission to pre-install VM370 on every 138&148 shipped (something like PRSM/LPAR), but head of POK was in the process of convincing corporate to kill the VM370 product, shutdown the development group and transfer everybody to POK for MVS/XA (Endicott did manage to acquire the VM370 product mission, but had to recreate a development group from scratch). In any case, corporate wouldn't give permission to allow VM370 pre-install on every 138&148 shipped.
In the early 80s, I got permission to give user group presentations on how ECPS was done. After some of the meetings, Amdahl people would corner me asking for additional details. They said that they had created MACROCODE (370-like instructions running in microcode-mode) originally to be able to quickly respond to the plethora of trivial IBM 3033 microcode changes that MVS was requiring in order to IPL ... and Amdahl was then using it to implement HYPERVISOR ("multiple domain") ... note IBM doesn't respond until nearly a decade later with PRSM/LPAR for the 3090.
Note: POK then was finding that customers weren't converting to MVS/XA
as planned ... however, Amdahl was having much greater success with
being able to run MVS and MVS/XA concurrently on the same machine
... something like this account of customers not converting to
original MVS (as planned):
http://www.mxg.com/thebuttonman/boney.asp
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
posts mentioning virtual machine online commercial services
https://www.garlic.com/~lynn/submain.html#online
some recent posts mentioning system meter
https://www.garlic.com/~lynn/2025d.html#55 Boeing Computer Services
https://www.garlic.com/~lynn/2025c.html#75 MVS Capture Ratio
https://www.garlic.com/~lynn/2025b.html#83 Mainfame System Meter
https://www.garlic.com/~lynn/2024g.html#100 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#99 Terminals
https://www.garlic.com/~lynn/2024g.html#94 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#59 Cloud Megadatacenters
https://www.garlic.com/~lynn/2024g.html#55 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024g.html#23 IBM Move From Leased To Sales
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#61 IBM Mainframe System Meter
https://www.garlic.com/~lynn/2024c.html#116 IBM Mainframe System Meter
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2023d.html#78 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: ARPANET, NSFNET, Internet Date: 05 Oct, 2025 Blog: Facebook
A co-worker at the science center was responsible for the science center wide-area network; newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
CSC wide-area network morphs into corporate internal network (larger
than arpanet/internet from just about the beginning until sometime
mid/late 80s, about the same time internal network forced to convert
to SNA/VTAM). Account by one of the inventors of "GML" (in 1969, it
then morphs into SGML&HTML) at the science center:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
... also technology used for the corporate sponsored univ BITNET:
https://en.wikipedia.org/wiki/BITNET
At the 1Jan1983 IMP/host cut-over to internetworking, there were approx 100
IMPs and 255 hosts, while the internal network was rapidly approaching
1000 nodes all over the world (one of the problems was that corp. required
all links to be encrypted, and there was gov. resistance, especially when
links crossed national boundaries). Archived post of corporate locations
that added one or more nodes during 1983:
https://www.garlic.com/~lynn/2006k.html#8
Early 80s, got HSDT, T1 and faster computer links (both terrestrial
and satellite) and lots of battles with communication group (60s, IBM
had 2701 controller that supported T1 computer links, 70s transition
to SNA/VTAM and issues, capped computer links at 56kbytes/sec). Was
also working with the NSF director and was supposed to get $20M to
interconnect the NSF supercomputer centers. Then congress cuts the
budget, some other things happened and finally a RFP is released (in
part based on what we already had running). NSF 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
Around the same time, the communication group was fighting the release of mainframe TCP/IP support. When they lost, they changed their tactic: since they had corporate ownership of everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then add RFC1044 support, and in some tuning tests at Cray Research between a Cray and a 4341, the 4341 got sustained channel throughput using only a modest amount of its CPU (something like a 500 times increase in bytes moved per instruction executed).
IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.
Early 90s, left IBM and was brought in as a consultant to a small client/server startup by two former Oracle employees (whom I had worked with on RDBMS) who were there responsible for something called "commerce server"; they wanted to do payment transactions on the server. The startup had also invented this technology called SSL they wanted to use; the result is now sometimes called "electronic commerce". I had responsibility for everything between webservers and payment networks. Based on the procedures, documentation and software I had to do for electronic commerce, I did a talk on "Why Internet Wasn't Business Critical Dataprocessing" that Postel sponsored at ISI/USC.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
payment gateway and electronic commerce posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
some recent posts mentioning "Business Critical Dataprocessing"
https://www.garlic.com/~lynn/2025d.html#98 IBM Supercomputer
https://www.garlic.com/~lynn/2025d.html#93 IBM VM370 And Pascal
https://www.garlic.com/~lynn/2025d.html#90 Internet
https://www.garlic.com/~lynn/2025d.html#42 IBM OS/2 & M'soft
https://www.garlic.com/~lynn/2025d.html#38 Mosaic
https://www.garlic.com/~lynn/2025b.html#97 Open Networking with OSI
https://www.garlic.com/~lynn/2025b.html#41 AIM, Apple, IBM, Motorola
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024g.html#80 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#71 Netscape Ecommerce
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024f.html#47 Postel, RFC Editor, Internet
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Mainframe and Cloud Date: 05 Oct, 2025 Blog: Facebook
I took a 2 credit hr intro to fortran/computers and at the end of the semester, the univ hired me to rewrite 1401 MPIO for the 360/30. The univ was getting a 360/67 for tss/360 to replace a 709/1401 and got the 360/30 temporarily (replacing the 1401, mostly to start gaining 360 experience) pending 360/67 availability. The univ shut down the datacenter on weekends, and I would have the whole place dedicated (although 48hrs w/o sleep made Monday classes hard). I was given a large pile of hardware&software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc., and within a few weeks had a 2000-card assembler program. The 360/67 arrives within a year of my taking the intro class and I was hired fulltime, responsible for os/360 (tss/360 never came to production), and continued to have my dedicated weekend time. Student Fortran jobs ran in under a second on the 709, but over a minute on the 360/67 (w/os360, running as a 360/65, MFT9.5). I install HASP and it cuts the time in half. I then start redoing STAGE2 SYSGEN (MFT11) to carefully place datasets and PDS members to optimize arm seek and multi-track search, which cuts the time by another 2/3rds to 12.9secs. Student Fortran never got better than the 709 until I install UofWaterloo WATFOR.
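Working the quoted job-time numbers backwards (Python; everything except the reconstructed starting point comes from the paragraph above):

final = 12.9                  # secs after the STAGE2 SYSGEN placement work
after_hasp = final * 3        # placement work had cut the time by ~2/3rds
original = after_hasp * 2     # HASP had cut the original time in half
print(after_hasp, original)   # ~38.7 secs, ~77 secs -- consistent with "over a minute"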
CSC came out to the univ for the CP67/CMS (precursor to VM370/CMS) install (3rd install after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my 48hr weekend dedicated time. I initially work on pathlengths for running OS/360 in a virtual machine. The test stream ran 322secs on the real machine, initially 856secs in a virtual machine (CP67 CPU 534secs); after a couple months I had reduced CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, the scheduler (dynamic adaptive resource manager/default fair share policy), and paging, adding ordered seek queuing (from FIFO) and multi-page transfer channel programs (from FIFO, optimized for transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a channel transfer peak of 270). Six months after the univ's initial install, CSC was giving a one-week class in LA. I arrive on Sunday afternoon and am asked to teach the class; it turns out the people that were going to teach it had resigned the Friday before to join one of the 60s CP67 commercial online spin-offs.
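The figures above, as a minimal sketch (Python; all numbers from the paragraph, with 75/sec used as a midpoint of the quoted 70-80):

bare_metal = 322                          # secs, OS/360 test stream on the real machine
cp67_cpu_before, cp67_cpu_after = 534, 113
print(bare_metal + cp67_cpu_before)       # 856 secs, the initial virtual-machine run
print(bare_metal + cp67_cpu_after)        # ~435 secs after the pathlength work
print(cp67_cpu_before / cp67_cpu_after)   # ~4.7x reduction in CP67 CPU

print(270 / 75)   # ~3.6x more 4k transfers/sec from ordered, multi-page-per-revolution
                  # channel programs on the 2301 paging drum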
The univ library gets an ONR grant to do an online catalog, and some of the money goes for a 2321 datacell. The library was also selected by IBM as a betatest site for the original CICS product, and supporting CICS was added to my tasks.
Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit, including offering services to non-Boeing entities). I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room (although there was one, lonely 360/75 used for classified work) ... the joke was that Boeing was getting 360/65s like other companies got keypunches ... sort of an early cloud megadatacenter. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll, although they enlarge the machine room to install a 360/67 for me to play with when I wasn't doing other stuff (747-3 was flying the skies of Seattle getting FAA certification). When I graduate, I join the IBM science center instead of staying with the Boeing CFO.
Now a large cloud operation will have a score or more megadatacenters around the world, each with half a million or more server blades, each server blade with ten times the processing power of a max-configured mainframe, along with enormous automation; a megadatacenter is operated with only 70-80 staff (upwards of 10,000 or more systems/staff).
recent mainframers post
https://www.garlic.com/~lynn/2025d.html#110 IBM System Meter
lots more from recent "internet" post (including GML->SGML->HTML):
https://www.garlic.com/~lynn/2025d.html#111 ARPANET, NSFNET, Internet
... and 1st HTML server in the US on the Stanford SLAC VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
... and from recent PROFS/VMSG post:
https://www.garlic.com/~lynn/2025d.html#109 Internal Network, Profs and VMSG
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CICS/BDAM posts
https://www.garlic.com/~lynn/submain.html#bdam
GML, SGML, HTML
https://www.garlic.com/~lynn/submain.html#sgml
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
recent posts mentioning WATFOR, Boeing CFO, BCS, Renton
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025c.html#115 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
--
virtualization experience starting Jan1968, online at home since Mar1970